2305.04705
Robust black-box quantum-state preparation via quantum signal processing
Black-box quantum-state preparation is a variant of quantum-state preparation where we want to construct an $n$-qubit state $|\psi_c\rangle \propto \sum_x c(x) |x\rangle$ with the amplitudes $c(x)$ given as a (quantum) oracle. This variant is particularly useful when the quantum state has a short and simple classical description. We use recent techniques, namely quantum signal processing (QSP) and quantum singular value transform (QSVT), to construct a new algorithm that prepares $|\psi_c\rangle$ without the need to carry out coherent arithmetic. We then compare our result with current state-of-the-art algorithms, showing that a QSVT-based approach achieves comparable results.
Lorenzo Laneve
2023-05-08T13:37:25Z
http://arxiv.org/abs/2305.04705v3
# Robust oracle quantum-state preparation via quantum signal processing ###### Abstract Oracle quantum state preparation is a variant of quantum state preparation where we want to construct a state \(|\psi_{c}\rangle\propto\sum_{x}c(x)|x\rangle\) with the amplitudes \(c(x)\) given as a (quantum) oracle. This variant is particularly useful when the quantum state has a short and simple classical description. We use recent techniques, namely quantum signal processing (QSP) and quantum singular value transform (QSVT), to construct a new algorithm that uses a polynomial number of qubits and oracle calls to construct \(|\psi_{c}\rangle\). For a large class of states, this translates to an algorithm that is polynomial in the number of qubits, both in depth and width. ## I Introduction Quantum signal processing (QSP) is a novel technique for the design of quantum algorithms [1]. As an introductory example, consider a unitary \(W\): whenever we apply \(W\) twice, the resulting operation is \(W^{2}\), regardless of what \(W\) is. In other words, this construction applies the polynomial \(P(x)=x^{2}\) to \(W\). A natural question arises: which polynomials \(P(x)\) can we apply to \(W\)? Surprisingly, it turns out that, with a simple ansatz, we can apply any polynomial satisfying some mild constraints: the polynomial has to be bounded by 1 in absolute value (a natural constraint, as otherwise \(P(W)\) cannot be unitary) and of definite parity. The latter constraint can be lifted easily, as one can implement the even and odd parts separately and sum them up using a linear combination of unitaries (LCU) [2; 3; 4]. This yields a technique called _quantum eigenvalue transform_, which was extensively used to tackle the Hamiltonian simulation problem with surprising (and nearly optimal) complexity [5; 6], with a more recent construction requiring only a single copy of the initial state [7]. This idea was further developed by Gilyen et al. [8], where the polynomial is applied not to the eigenvalues of the unitary, but rather to the _singular values_ of a matrix embedded in the unitary, namely its top-left block, without even requiring this block to be square. This new technique, called _quantum singular value transform_ (QSVT), gives a surprising unification and re-formalization of a wide spectrum of already-known quantum algorithms [9], from Grover's search [10; 11] and amplitude amplification [12; 13; 14] to Shor's factoring [15], from quantum phase estimation [16; 17] to the HHL algorithm for solving quantum linear systems [18]. Quantum-state preparation is a central problem in quantum computation: given complex numbers \(c_{1},\ldots,c_{2^{n}}\), we want to construct a circuit that transforms the state \(|0\rangle^{\otimes n}\) into the state \(|\psi_{c}\rangle=\sum_{x}c_{x}|x\rangle\), essentially 'initializing' our quantum register for further computation. This has applications, for example, in machine learning [19; 20] and Hamiltonian simulation [6]; even techniques such as LCU itself need to prepare particular quantum states in order to achieve non-trivial linear combinations [2; 3; 4]. Many constructions were devised to prepare an arbitrary state [21; 22; 23; 24; 25], requiring no ancilla qubits but exponential depth. In particular, Sun et al. [26] found a circuit with depth \(\mathcal{O}(2^{n}/n)\) (\(n\) being the number of qubits), which matches the lower bound. 
If we allow ancillary qubits, we obtain depths as low as \(\mathcal{O}(n)\), although this requires an exponential number of ancillae [26; 27; 28]. Moreover, Zhang et al. [29] improved the complexity under the assumption of sparse states. Using QSP, a similar problem called _ground-state preparation_ has been tackled [30; 31], where one prepares the ground state of a given Hamiltonian. Moreover, QSVT techniques allow one to easily prepare Gibbs states on a quantum computer [32]. In this work we consider the _oracle_ quantum-state preparation problem, where the amplitudes \(\{c_{x}\}_{x}\) are not given as a list, but as an algorithm \(c(x)\), with which we can construct a quantum oracle. This idea is nicely applicable if we consider states whose amplitudes are computable (e.g., the purification of a Gibbs state, or some probability distribution with an analytical expression). We show that, if \(c(x)\) is computable in time \(\mathcal{O}(T(n))\), then the \(N=2^{n}\)-dimensional quantum state \(|\psi_{c}\rangle=\frac{1}{\sqrt{N\gamma}}\sum_{x}c(x)|x\rangle\) can be prepared within error \(\epsilon\) in time \(\mathcal{O}(\frac{1}{\sqrt{\gamma}}T(n)\log(1/\epsilon))\) using \(\mathcal{O}(\log(1/\epsilon))\) ancilla qubits, where \[\gamma=\frac{1}{2^{n}}\sum_{x}|c(x)|^{2}\] is the average squared amplitude, so that \(\sqrt{N\gamma}\) is the normalization factor. This gives a polynomial-time algorithm for the preparation of a large class of quantum states, namely the ones for which the quantum circuit for \(c(x)\) is computable in polynomial time and \(\sqrt{\gamma}\) is at least an inverse polynomial in \(n\), a dramatic improvement over a previous work [33]. In Section II we give a brief overview of QSP and QSVT, with the necessary elements that we are going to use in the rest of the work. In Section III we show how to extract the 'logarithm' of a unitary using QSVT, a construction taken from [5]. In Section IV we use an ideal implementation of the unitary logarithm to prepare a quantum state, and Section V gives a full error analysis when the unitary logarithm is implemented via quantum signal processing. ## II Review of quantum signal processing In this section we briefly describe the quantum signal processing and quantum singular value transform techniques. **Theorem 1** (Quantum signal processing with reflections [1; 8]).: _Given the reflection unitary_ \[R(x)=\begin{bmatrix}x&\sqrt{1-x^{2}}\\ \sqrt{1-x^{2}}&-x\end{bmatrix}\] _and a \(d\)-degree polynomial \(P(x)\) such that:_ 1. \(P\) _has parity_ \((d\bmod 2)\)_;_ 2. _for any_ \(x\in[-1,1]\)_,_ \(|P(x)|\leq 1\)_;_ 3. _for any_ \(x\in(-\infty,-1]\cup[1,\infty)\)_,_ \(|P(x)|\geq 1\)_;_ 4. _when_ \(d\) _is even, for any_ \(x\in\mathbb{R}\)_,_ \(P(ix)P^{*}(ix)\geq 1\)_._ _Then the following holds for some \(\Phi=(\phi_{1},\ldots,\phi_{d})\in\mathbb{R}^{d}\):_ \[\Pi_{j=1}^{d}\left(e^{i\phi_{j}Z}R(x)\right)=\begin{bmatrix}P(x)&\cdot\\ \cdot&\cdot\end{bmatrix}\] This result says that we can construct a unitary containing any polynomial \(P(x)\) in the top-left corner, provided it satisfies conditions (i)-(iv). We can remove the above unintuitive constraints: **Theorem 2** (Real quantum signal processing).: _Given a polynomial \(P_{R}\in\mathbb{R}[x]\) satisfying conditions (i)-(ii) of Theorem 1, there exists a polynomial \(P\in\mathbb{C}[x]\) with real part \(P_{R}\) satisfying (i)-(iv)._ Thus, for any real polynomial of definite parity and with absolute value bounded by \(1\) in our region of interest, the polynomial completed with a suitable imaginary part can be implemented by quantum signal processing. 
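To build intuition for Theorem 1, the following sketch numerically checks the QSP ansatz: for an arbitrary choice of phases \(\Phi\), the top-left entry of \(\Pi_{j=1}^{d}(e^{i\phi_{j}Z}R(x))\) is a degree-\(d\) polynomial in \(x\) of parity \(d\bmod 2\). This is an illustrative check only; the function names and the polynomial-fitting step are ours, not part of the paper.

```python
import numpy as np

def qsp_top_left(phases, x):
    """Top-left entry of prod_j exp(i*phi_j*Z) R(x) for the given QSP phases."""
    R = np.array([[x, np.sqrt(1 - x**2)],
                  [np.sqrt(1 - x**2), -x]], dtype=complex)
    M = np.eye(2, dtype=complex)
    for phi in phases:
        M = M @ np.diag(np.exp(1j * phi * np.array([1, -1]))) @ R  # e^{i phi Z} R(x)
    return M[0, 0]

d = 5
phases = np.random.default_rng(0).uniform(-np.pi, np.pi, d)
xs = np.linspace(-1, 1, 101)
vals = np.array([qsp_top_left(phases, x) for x in xs])

# Fit a degree-d polynomial: the fit is exact, and for odd d the even-degree
# coefficients vanish, confirming the definite parity claimed in Theorem 1.
coeffs = np.polynomial.polynomial.polyfit(xs, vals, d)
print(np.round(coeffs, 8))
print(np.allclose(vals, np.polynomial.polynomial.polyval(xs, coeffs)))  # True
```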
The idea will be to implement both \(P,P^{*}\) (note that the phases \(-\Phi\) generate \(P^{*}\)), and then implement \(P_{R}=(P+P^{*})/2\) via a linear combination of unitaries. It is important to remark that the coefficients of \(P\), as well as the phase factors \(\Phi\), are computable in polynomial time with a numerically stable algorithm [8; 34; 35]. **Definition 1** (Block-encoding [8]).: _Let \(A\) be an \(s\)-qubit matrix and \(U\) a \((s+a)\)-qubit unitary. We say that \(A\) is \((a,\epsilon)\)-block-encoded in \(U\) if_ \[\left\|A-(\langle 0|^{\otimes a}\otimes 1)U(|0\rangle^{\otimes a}\otimes 1)\right\|\leq\epsilon\] This means that, if \(\epsilon=0\), \(U\) would be of the form \[U=\begin{bmatrix}A&\cdot\\ \cdot&\cdot\end{bmatrix}\] In general, the top-left block of the matrix is \(\epsilon\)-close to \(A\). Notice that, by unitarity of \(U\), \(\left\|A\right\|\leq 1+\epsilon\). If we need a matrix with norm \(\alpha\) we simply block-encode \(A/\alpha\). Gilyen et al. [8] provide a series of constructions which enable different operations on these block-encodings. **Definition 2** (Singular value transformation [8]).: _Let \(A\) be a matrix with singular value decomposition_ \[A=\sum_{i}\sigma_{i}|\tilde{\psi}_{i}\rangle\langle\psi_{i}|\] _Given a polynomial \(P(x)\) of definite parity, the singular value transformation of \(A\) using \(P\) is defined as_ \[P^{(SV)}(A)=\begin{cases}\sum_{i}P(\sigma_{i})|\tilde{\psi}_{i}\rangle\langle \psi_{i}|&P\text{ odd}\\ \sum_{i}P(\sigma_{i})|\psi_{i}\rangle\langle\psi_{i}|&P\text{ even}\end{cases}\] It is important to remark that here we could even take the 'singular values' to be negative: by negating the corresponding left singular vector we obtain another singular value decomposition, and one can check that Definition 2 remains consistent for any choice of the signs of the singular values. An important question is the following: given a definite-parity polynomial \(P(x)\) and a block-encoded matrix \(A\), can we obtain a block-encoding of \(P^{(SV)}(A)\)? The answer is positive. **Theorem 3** (Quantum singular value transform [8; 9]).: _Let \(P(x)\) be a polynomial of degree \(d\) satisfying (i)-(iv) of Theorem 1 and let \(\Phi\in\mathbb{R}^{d}\) be the corresponding phase factors. Moreover, let \(U\) be a unitary that block-encodes \(A\) as follows_ \[A=\tilde{\Pi}U\Pi\] _where \(\tilde{\Pi},\Pi\) are projectors. The following unitary produces a block-encoding of \(P^{(SV)}(A)\):_ \[U_{\Phi}=\begin{cases}e^{i\phi_{1}(2\tilde{\Pi}-1)}U\prod_{k=1}^{(d-1)/2} \left(e^{i\phi_{2k}(2\Pi-1)}U^{\dagger}e^{i\phi_{2k+1}(2\tilde{\Pi}-1)}U \right)&P\text{ odd}\\ \prod_{k=1}^{d/2}\left(e^{i\phi_{2k-1}(2\Pi-1)}U^{\dagger}e^{i\phi_{2k}(2 \tilde{\Pi}-1)}U\right)&P\text{ even}\end{cases}\] _i.e., \(\tilde{\Pi}U_{\Phi}\Pi=P^{(SV)}(A)\)._ Roughly speaking, the proof shows that, on the subspaces spanned by the \(i\)-th singular vectors, \(U\) acts as \(R(\sigma_{i})\), while the phase operators act as \(Z\)-rotations, and we can apply Theorem 1 on each of these subspaces. This enables us to carry out a singular value transform using any polynomial constructible with quantum signal processing. This transformation is also robust, in the sense that, if \(A\) is \((a,\epsilon)\)-block-encoded in \(U\), then \(P^{(SV)}(A)\) is \((a+1,4d\sqrt{\epsilon})\)-block-encoded in \(U_{\Phi}\)[8]. Most of the time we will focus on the case where \(A\) is Hermitian. 
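Before specializing to Hermitian matrices, a small numerical illustration of Definitions 1 and 2 may help. The unitary dilation below is a standard construction and an illustrative choice of ours, not a circuit from the paper.

```python
import numpy as np

def psd_sqrt(S):
    """Symmetric square root of a positive semidefinite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
A /= 2 * np.linalg.norm(A, 2)                         # enforce ||A|| <= 1/2
I = np.eye(3)
# Unitary dilation: A sits in the top-left block, as in Definition 1 (eps = 0).
U = np.block([[A,                     psd_sqrt(I - A @ A.T)],
              [psd_sqrt(I - A.T @ A), -A.T]])
print(np.allclose(U @ U.T, np.eye(6)))                # U is unitary (real case)
print(np.allclose(U[:3, :3], A))                      # (1,0)-block-encoding of A

# Singular value transformation with the odd polynomial P(x) = x^3 (Definition 2):
u, s, vt = np.linalg.svd(A)
P_sv = u @ np.diag(s**3) @ vt
print(np.allclose(P_sv, A @ A.T @ A))                 # P^(SV)(A) = A A^T A
```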
When \(A\) is Hermitian, the singular value and eigenvalue transformations coincide (recall that here we can take the singular values to be negative as well), i.e., \(P^{(SV)}(A)=P(A)\). ## III The logarithm of a unitary Keeping in mind what we presented in the last section, consider the following problem. **Problem 3** (Unitary logarithm).: _Let \(\mathcal{H}\) be a Hermitian matrix satisfying \(\|\mathcal{H}\|<1\), and denote with \(U=e^{i\pi\mathcal{H}}\) the corresponding unitary. Given controlled versions of \(U,U^{\dagger}\), implement a block-encoding \(C\) of \(\mathcal{H}\), i.e.,_ \[C=\begin{bmatrix}\mathcal{H}&\cdot\\ \cdot&\cdot\end{bmatrix}\] We now show a simple way introduced in [5] to solve Problem 3. This is used in [8] to make fractional queries to \(U\), i.e., to implement \(U^{t}\) for a not necessarily integer \(t\), namely by extracting the Hamiltonian, multiplying it by a constant using block-encoding arithmetic, and then exponentiating it back with a polynomial that approximates the complex exponential function. Another example in the same work, which we are going to generalize, was done for Gibbs sampling. The idea is as follows: by doing some simple calculations one can check that \[(\langle 0|H\otimes 1)\,cU^{\dagger}(Y\otimes 1)\,cU\,(H|0\rangle\otimes 1)=\sin(\pi\mathcal{H}) \tag{1}\] i.e., this simple circuit (as shown in Figure 1) is a \((1,0)\)-block-encoding of \(\sin(\pi\mathcal{H})\), where \(H\) is the Hadamard gate, \(cU\) is the controlled-\(U\) gate, and \(Y\) is the Pauli matrix. In order to obtain a block-encoding of \(\mathcal{H}\), we need to invert the sine function. **Theorem 4** ([8]).: _An \(\epsilon\)-polynomial approximation of \(f(x)=\frac{1}{\pi}\arcsin(x)\) in the interval \((-1+\delta,1-\delta)\) has degree \(d=\mathcal{O}\big{(}\frac{1}{\delta}\log\frac{1}{\epsilon}\big{)}\)._ Proof.: The Taylor series of \(f(x)\) is \[f(x)=\sum_{k=0}^{\infty}\frac{1}{\pi}\binom{2k}{k}\frac{2^{-2k}}{2k+1}x^{2k+1}\] and the truncation up to the first \(d\) terms is \(\epsilon\)-close to \(f\) in the interval \((-1+\delta,1-\delta)\) [8, Theorem 68]. Therefore, assuming \(\|\mathcal{H}\|<1-\delta\), we can construct a \((2,\epsilon)\)-block-encoding of \(\mathcal{H}\) with only \(\mathcal{O}(\frac{1}{\delta}\log\frac{1}{\epsilon})\) calls to \(cU,cU^{\dagger}\). ## IV Quantum-state preparation The quantum-state preparation problem can be stated without loss of generality as follows: **Problem 4**.: _Let \(N=2^{n}\). Given amplitudes \(c=(c_{0},\cdots,c_{N-1})\in[0,1]^{N}\), construct the state_ \[|\psi_{c}\rangle=\sum_{x}c_{x}|x\rangle\] _from the state \(|0\rangle^{\otimes n}\) up to error \(\epsilon\). More formally, construct a quantum circuit \(C\) such that_ \[\big{\|}C|0\rangle^{\otimes n}-|\psi_{c}\rangle\big{\|}\leq\epsilon\] It will be clear later why we do not need the \(c_{i}\) to be complex. In this work, we consider the _oracle_ quantum-state preparation problem, where we assume the amplitudes are computed by an algorithm \(c(x)\in[0,1]\), and this algorithm is given as a quantum oracle \[\mathcal{O}_{c}|x\rangle|0\rangle^{\otimes m}=|x\rangle|c(x)\rangle\] which computes the \(m\) bits after the decimal point. Moreover, we will not need to assume that the \(c^{2}(x)\) are normalized, but we will assume the algorithm has access to the average \(\gamma=\frac{1}{N}\sum_{x}c^{2}(x)\). 
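As an aside, the identity in Eq. (1) from Section III is easy to verify numerically. The following sketch (our own check, with a randomly drawn Hermitian \(\mathcal{H}\)) builds the one-ancilla circuit as a matrix and compares its top-left block with \(\sin(\pi\mathcal{H})\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hm = (B + B.conj().T) / 2
Hm /= 2 * np.linalg.norm(Hm, 2)                      # ||H|| <= 1/2 < 1
U = expm(1j * np.pi * Hm)

I = np.eye(n)
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard on the control
Y = np.array([[0, -1j], [1j, 0]])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
cU = np.kron(P0, I) + np.kron(P1, U)                 # controlled-U
cUdg = np.kron(P0, I) + np.kron(P1, U.conj().T)      # controlled-U^dagger

circuit = np.kron(Hd, I) @ cUdg @ np.kron(Y, I) @ cU @ np.kron(Hd, I)
top_left = circuit[:n, :n]                           # the <0|...|0> block

evals, evecs = np.linalg.eigh(Hm)
sin_pi_H = evecs @ np.diag(np.sin(np.pi * evals)) @ evecs.conj().T
print(np.allclose(top_left, sin_pi_H))               # True: Eq. (1) holds
```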
In the end, the target state will be \[|\psi_{c}\rangle=\frac{1}{\sqrt{N\gamma}}\sum_{x}c(x)|x\rangle\] The idea is quite simple: consider the unitary \(U_{c}\) acting as follows \[U_{c}|x\rangle=e^{i\pi c(x)/2}|x\rangle\] thus \(U_{c}=\text{diag}(e^{i\pi c(x)/2})_{x}=e^{i\pi H_{c}}\) where \[H_{c}=\text{diag}(c(x)/2)_{x}.\] This unitary is efficiently implementable using one copy each of \(\mathcal{O}_{c}\) and \(\mathcal{O}_{c}^{\dagger}\), via a standard construction (Figure 2). This is the reason why we only care about positive real amplitudes: applying relative phases is always efficiently realizable with a similar transformation. Extracting the logarithm of this unitary using the construction of Section III yields a \((2,\epsilon)\)-block-encoding of \(H_{c}\) using only \(\mathcal{O}(\log\frac{1}{\epsilon})\) calls to \(\mathcal{O}_{c}\) (notice that \(\|H_{c}\|\leq\frac{1}{2}\)). We denote the unitary of this block-encoding by \(C\). For ease of exposition we assume that the arcsin approximation is perfect, and we postpone the error analysis to the next section. If we now apply this operator to the equal superposition \(|+\rangle:=|+\rangle^{\otimes n}\) we obtain \[C|00\rangle|+\rangle=|00\rangle H_{c}|+\rangle+|\Phi\rangle \tag{2}\] where \(|00\rangle\) is the initial state of the two control qubits, and \(|\Phi\rangle\) is the garbage state we obtain if we fail, i.e., if we pick the wrong block of the block-encoding and the two control qubits return a value \(\neq 00\) (which means \(\langle 00|\Phi\rangle=0\)). The state associated with the \(|00\rangle\) component is \[H_{c}|+\rangle=\frac{1}{\sqrt{N}}\sum_{x}H_{c}|x\rangle=\frac{1}{2\sqrt{N}} \sum_{x}c(x)|x\rangle=\frac{\sqrt{\gamma}}{2}|\psi_{c}\rangle\] Thus, by substituting this into Eq. (2) we obtain \[C|00\rangle|+\rangle=\frac{\sqrt{\gamma}}{2}|00\rangle|\psi_{c}\rangle+|\Phi\rangle\] This means that, if we measure the control qubits, we post-select the correct block and get our state with probability \(\gamma/4\). In order to amplify the success probability we employ a new variant of oblivious amplitude amplification, proving its correctness with the aid of Theorem 3. **Lemma 5** (Non-unitary oblivious fixed-point amplitude amplification).: _Let \(|\Psi\rangle=|00\rangle|\psi_{a}\rangle=|00\rangle S|0^{n}\rangle\) be our initial state and let \(|w\rangle=|00\rangle|\psi_{c}\rangle\) be the target state. Suppose we are given a unitary \(C\) acting as_ \[C|\Psi\rangle=\sigma|w\rangle+|\Phi\rangle\] _where \(\langle 00|\Phi\rangle=0\), i.e., \(|\Phi\rangle\) is our 'garbage' state, given by the other block of the encoding. Then it is possible to obtain \(|w\rangle\) with probability \(1-\delta\) using \(\mathcal{O}(\frac{1}{\sigma}\log\frac{1}{\delta})\) copies of \(C\) and \(S\)._ The crucial difference between this construction and the already-known oblivious amplitude amplification procedures [13; 14; 36] is that here we do not need the desired transformation of our initial state to be unitary (or close to a unitary). However, we also require many copies of the unitary \(S\) that constructs the initial state, i.e., the initial state is fixed. In our application, the initial state is \(|+\rangle\), so \(S=H^{\otimes n}\) is the \(n\)-fold Hadamard gate, which is easy to construct. 
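Before the proof of Lemma 5, a toy numeric check of the post-selection step above may be useful. Assuming an ideal block-encoding of \(H_{c}\), the success probability is exactly \(\gamma/4\) and the post-selected state is exactly \(|\psi_{c}\rangle\); the snippet and its variable names are ours.

```python
import numpy as np

n = 3
N = 2**n
c = np.random.default_rng(1).uniform(0, 1, N)        # amplitudes c(x) in [0, 1]
gamma = np.mean(c**2)                                # gamma = (1/N) sum_x c(x)^2

plus = np.full(N, 1 / np.sqrt(N))                    # equal superposition |+>
unnormalized = (c / 2) * plus                        # H_c |+>, H_c = diag(c(x)/2)
p_success = np.linalg.norm(unnormalized) ** 2
print(np.isclose(p_success, gamma / 4))              # True

psi_c = c / np.sqrt(N * gamma)                       # target state |psi_c>
print(np.allclose(unnormalized / np.linalg.norm(unnormalized), psi_c))  # True
```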
Proof.: Denote the starting state by \(|\Psi\rangle=|00\rangle|+\rangle\) and consider the two projectors \[\tilde{\Pi} =|00\rangle\langle 00|\otimes 1\] \[\Pi =|\Psi\rangle\langle\Psi|=|00\rangle\langle 00|\otimes S|0^{n} \rangle\langle 0^{n}|S^{\dagger}\] Figure 2: Construction of \(U_{c}\) using one copy each of \(\mathcal{O}_{c}\) and \(\mathcal{O}_{c}^{\dagger}\). The controlled phase rotations give a contribution for each bit of the output, so that the total phase obtained is \(\pi c(x)/2\). In order to obtain an \(\epsilon\)-approximation of \(c(x)\) it is sufficient to take \(m=\mathcal{O}(\log(1/\epsilon))\) ancilla qubits. Then \[\tilde{\Pi}C\Pi=\sigma|w\rangle\langle\Psi|\] i.e., \(C\) block-encodes a rank-one matrix with singular value \(\sigma\). All we need to do is design a polynomial \(P(x)\) that satisfies \(|P(\sigma)|\geq 1-\delta/2\). In this way, we apply Theorem 3 to transform this singular value, so our success probability will be \(\geq(1-\delta/2)^{2}\geq 1-\delta\). A polynomial approximation \(P\) to the sign function can achieve this (Caro and Vidal, 2015, Corollary 6), and has degree \(\mathcal{O}(\frac{1}{\Delta}\log\frac{1}{\delta})\): \[|P(x)|\geq 1-\delta/2\,\text{ for }|x|\geq\Delta\] and we plug in \(\Delta=\sigma\) (see Figure 3). Therefore, as \(\sigma=\sqrt{\gamma}/2\), by applying the construction of Lemma 5, we obtain **Lemma 6**.: _Starting from the state \(|0\rangle^{\otimes n}\), we can construct the state \(|\psi_{c}\rangle\) with probability \(1-\delta\) using_ \[\mathcal{O}\left(\frac{1}{\sqrt{\gamma}}\log\frac{1}{\delta}\right)\] _copies of \(C\)._ ## V Error Analysis In the previous section we considered \(C\) to be a perfect block-encoding of \(H_{c}=\frac{1}{\pi}\arcsin(\sin(\pi H_{c}))\) and we proved that, under this assumption, our algorithm delivers exactly \(|\psi_{c}\rangle\) with probability \(1-\delta\). We now replace the unitary \(C\) with some unitary \(\tilde{C}\) such that \[\tilde{C}|00\rangle|+\rangle=|00\rangle\tilde{H}_{c}|+\rangle+|\tilde{\Phi}\rangle\] where \[\|\tilde{H}_{c}-H_{c}\|\leq\epsilon. \tag{3}\] Notice that such a \(\tilde{C}\) can actually be implemented using \(\mathcal{O}(\log\frac{1}{\epsilon})\) calls to \(\mathcal{O}_{c}\), as per Theorem 4. Thus, the eigenvalues \(\frac{1}{2}\tilde{c}(x)\) of \(\tilde{H}_{c}\) satisfy \(|\frac{1}{2}\tilde{c}(x)-\frac{1}{2}c(x)|\leq\epsilon\), and \(\tilde{H}_{c}|+\rangle\) will give us the sub-normalized state \[\tilde{H}_{c}|+\rangle=\frac{1}{2\sqrt{N}}\sum_{x}\tilde{c}(x)|x\rangle=: \frac{\sqrt{\tilde{\gamma}}}{2}|\tilde{\psi}_{c}\rangle\] Figure 3: Approximation of the sign function using an odd polynomial. We use a polynomial of degree \(\mathcal{O}(\frac{1}{\Delta}\log\frac{1}{\delta})\) to obtain a \(\delta/2\)-approximation of the sign function when \(|x|\geq\Delta\). By increasing the degree we can increase both the accuracy of the approximation and the range. In amplitude amplification settings, the singular value we want to amplify usually sits close to \(0\), and we want to transform it as close as possible to \(1\). Here \(|\tilde{\psi}_{c}\rangle\) and \(\tilde{\gamma}\) are defined analogously to \(|\psi_{c}\rangle\) and \(\gamma\). In particular, the former is the final state returned by our algorithm, after the amplification procedure. Thus our task is to bound \(\||\tilde{\psi}_{c}\rangle-|\psi_{c}\rangle\|\). The first observation is that, by using Eq. 
(3) \[\left\|\frac{\sqrt{\tilde{\gamma}}}{2}|\tilde{\psi}_{c}\rangle-\frac{\sqrt{\gamma}}{2} |\psi_{c}\rangle\right\|=\left\|(\tilde{H}_{c}-H_{c})|+\rangle\right\|\leq \left\|\tilde{H}_{c}-H_{c}\right\|\leq\epsilon \tag{4}\] A second observation is that \(\gamma,\tilde{\gamma}\) are close: \[|\tilde{\gamma}-\gamma|\leq\frac{1}{N}\sum_{x}|\tilde{c}^{2}(x)-c^{2}(x)|= \frac{1}{N}\sum_{x}|\tilde{c}(x)-c(x)|\cdot|\tilde{c}(x)+c(x)|\leq 2\epsilon\] Thus, the distance between the two square roots is \[|\sqrt{\gamma}-\sqrt{\tilde{\gamma}}|=\frac{|\tilde{\gamma}-\gamma|}{|\sqrt{ \gamma}+\sqrt{\tilde{\gamma}}|}\leq\frac{\epsilon}{2\min\{\sqrt{\gamma},\sqrt {\tilde{\gamma}}\}}\] If we assume that \(\epsilon\leq\frac{\gamma}{4}\), then \(\tilde{\gamma}\geq\gamma/2\) and the bound becomes \[|\sqrt{\gamma}-\sqrt{\tilde{\gamma}}|\leq\frac{\epsilon}{\sqrt{2}\sqrt{\gamma}} \tag{5}\] Thus, the total error will be \[\left\||\tilde{\psi}_{c}\rangle-|\psi_{c}\rangle\right\| =\frac{2}{\sqrt{\gamma}}\left\|\frac{\sqrt{\gamma}}{2}|\tilde{ \psi}_{c}\rangle-\frac{\sqrt{\gamma}}{2}|\psi_{c}\rangle\right\|\] \[=\frac{2}{\sqrt{\gamma}}\left\|\frac{\sqrt{\gamma}-\sqrt{\tilde{\gamma}}} {2}|\tilde{\psi}_{c}\rangle+\frac{\sqrt{\tilde{\gamma}}}{2}|\tilde{\psi}_{c}\rangle- \frac{\sqrt{\gamma}}{2}|\psi_{c}\rangle\right\|\] \[\leq\frac{\epsilon}{\sqrt{2}\,\gamma}+\frac{2\epsilon}{\sqrt{\gamma }}\leq\frac{3\epsilon}{\gamma}\] where we used Eqs. (4)-(5) to bound the two terms at the end. The amplitude amplification procedure of Lemma 5 has to be run with \(\sigma=\sqrt{\tilde{\gamma}}/2=\Omega(\sqrt{\gamma})\), so \(\mathcal{O}(\frac{1}{\sqrt{\gamma}}\log\frac{1}{\delta})\) copies of \(C\) still suffice. We proved that we get an error bound of \(3\epsilon/\gamma\) with probability \(1-\delta\) using the procedure described in Section IV, and it requires a total of \[\mathcal{O}\!\left(\frac{1}{\sqrt{\gamma}}\log\!\left(\frac{1}{\delta}\right) \log\!\left(\frac{1}{\epsilon}\right)\right)\] calls to the oracle for \(c(x)\). Since we want an error bound of \(\epsilon\), we simply replace \(\epsilon\leftarrow\frac{\epsilon\gamma}{3}\) in the above argument. This change of variable does not alter the asymptotic complexity, as we assumed \(\epsilon\leq\gamma/4\). **Theorem 7** (Robust oracle quantum-state preparation).: _Let \(c:[2^{n}]\rightarrow[0,1]\) be a function implemented by a quantum circuit_ \[\mathcal{O}_{c}|x\rangle|0\rangle^{\otimes m}=|x\rangle|c(x)\rangle\] _It is possible to construct the normalized state \(|\psi_{c}\rangle=\frac{1}{\sqrt{N\gamma}}\sum_{x}c(x)|x\rangle\) using \(\mathcal{O}(\frac{1}{\sqrt{\gamma}}\log\frac{1}{\delta}\log\frac{1}{\epsilon})\) copies of \(\mathcal{O}_{c}\) and a \(\mathrm{poly}(n)\) number of single- and two-qubit gates._ We highlight here that the error bound of Eq. (3) captures not only the approximation error of the quantum signal processing polynomial for the arcsin. Noise of other kinds can arise as well, for example from imperfect gates in Eq. (1), or from the fact that \(c(x)\) can be an arbitrary real number of which the oracle only computes an \(m\)-bit representation. All these noise sources are treated by the analysis of this section with little effort. ## VI Discussion In this paper we devised a new algorithm for quantum state preparation, where the amplitudes are given as quantum oracles \(c(x)\), like in Grover's search [10; 11]. 
Speaking of Grover's search, the unstructured search problem is a special case of oracle quantum-state preparation, where \(c(x)\) is 1 for exactly one value \(x_{0}\), and 0 for the others. In this case \(\gamma=1/N\) and our algorithm takes \(\mathcal{O}(\sqrt{N}\log\frac{1}{\delta}\log\frac{1}{\epsilon})\) queries to prepare \(|x_{0}\rangle\), as in Grover's algorithm (\(\delta\) and \(\epsilon\) can be unified into a single probability of failure upon measurement of the state). It is important to remark, however, that we bound the query complexity for the oracle \(c\) in our analysis, but it can be a challenging problem to construct a polynomial-time quantum circuit for \(c(x)\) if the amplitudes \(c_{1},\ldots,c_{N}\) are truly random. Perhaps quantum algorithmic information theory can be of help in understanding how much we can 'compress' a given list of amplitudes into a polynomial-time algorithm [37]. For the easier-to-handle case where the state to prepare has a nice analytical expression, however, the time complexity of the construction is governed only by the normalization factor \(\gamma\). We highlight here again the fact that the above algorithm only constructs states with positive real amplitudes, but complementing this algorithm with a second oracle \(\phi(x)\) constructed as in Figure 2 allows one to obtain amplitudes of the form \(c(x)e^{i\pi\phi(x)}\). ## Acknowledgements I would like to thank William Schober, Stefan Wolf and Charles Bédard for insightful feedback and discussions. This work was supported by the Swiss National Science Foundation (SNF), grant No. 200020_182452.
2303.11938
3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion
We tackle the task of text-to-3D creation with pre-trained latent-based NeRFs (NeRFs that generate 3D objects given an input latent code). Recent works such as DreamFusion and Magic3D have shown great success in generating 3D content using NeRFs and text prompts, but the current approach of optimizing a NeRF for every text prompt is 1) extremely time-consuming and 2) often leads to low-resolution outputs. To address these challenges, we propose a novel method named 3D-CLFusion which leverages pre-trained latent-based NeRFs and performs fast 3D content creation in less than a minute. In particular, we introduce a latent diffusion prior network for learning the w latent from the input CLIP text/image embeddings. This pipeline allows us to produce the w latent without further optimization during inference, and the pre-trained NeRF is able to perform multi-view high-resolution 3D synthesis based on the latent. We note that the novelty of our model lies in introducing contrastive learning while training the diffusion prior, which enables the generation of valid view-invariant latent codes. We demonstrate through experiments the effectiveness of our proposed view-invariant diffusion process for fast text-to-3D creation, e.g., 100 times faster than DreamFusion. We note that our model can serve as a plug-and-play tool for text-to-3D with pre-trained NeRFs.
Yu-Jhe Li, Tao Xu, Ji Hou, Bichen Wu, Xiaoliang Dai, Albert Pumarola, Peizhao Zhang, Peter Vajda, Kris Kitani
2023-03-21T15:38:26Z
http://arxiv.org/abs/2303.11938v2
# 3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion ###### Abstract We tackle the task of text-to-3D creation with pre-trained latent-based NeRFs (NeRFs that generate 3D objects given an input latent code). Recent works such as DreamFusion and Magic3D have shown great success in generating 3D content using NeRFs and text prompts, but the current approach of optimizing a NeRF for every text prompt is 1) extremely time-consuming and 2) often leads to low-resolution outputs. To address these challenges, we propose a novel method named 3D-CLFusion which leverages pre-trained latent-based NeRFs and performs fast 3D content creation in less than a minute. In particular, we introduce a latent diffusion prior network for learning the \(w\) latent from the input CLIP text/image embeddings. This pipeline allows us to produce the \(w\) latent without further optimization during inference, and the pre-trained NeRF is able to perform multi-view high-resolution 3D synthesis based on the latent. We note that the novelty of our model lies in introducing contrastive learning while training the diffusion prior, which enables the generation of valid view-invariant latent codes. We demonstrate through experiments the effectiveness of our proposed view-invariant diffusion process for fast text-to-3D creation, e.g., 100 times faster than DreamFusion. We note that our model can serve as a plug-and-play tool for text-to-3D with pre-trained NeRFs. ## 1 Introduction We aim to tackle the task of text-to-3D domain-specific content creation. 3D content can be represented with a neural radiance field (NeRF) [19] in a photorealistic way. Text-to-3D with NeRFs has been explored in DreamField [11], DreamFusion [26], and Magic3D [17]. By leveraging pre-trained models from CLIP [27] or diffusion priors [30] as the learning objective, these works are capable of producing 3D content given an input text prompt by training a NeRF from scratch. However, training one individual model for each text prompt causes the first issue: slow inference (_i.e.,_ around one hour) for the above models. With no real-image guidance during updating, the models also suffer from a second issue: low-resolution multi-view rendering. Recently, 3D latent-based models using radiance fields (_i.e._, NeRFs [19]), such as EG3D [2] or StyleNeRF [8], have been proposed for unsupervised generation of multi-view consistent images. Similar to StyleGANs [13, 14], these NeRFs learn a controllable \(w\in\mathcal{W}\) space and enable explicit 3D camera control, using only single-view training images. To achieve fast text-to-3D creation, one straightforward way is to produce the \(w\) latent from the input text prompt without further updating the NeRF model. This idea has been explored in clip2latent [25] for text-to-2D creation, which takes less than one minute to generate high-resolution realistic images from the input text prompt with a trained diffusion prior. Figure 1: Multi-view images generated from text prompts by 3D-CLFusion. However, there are two main challenges in directly applying clip2latent to 3D latent-based NeRFs. First, the latent space \(w\in\mathcal{W}\) is assumed to be view-invariant for 3D NeRFs, which is different from 2D StyleGANs. 
That is, if only single-view prompts (image CLIP embeddings) are used to train the diffusion prior, the produced latent code may only be valid when generating images of the same view (camera pose); this issue is also noted in NeRF-3DE [16]. To address it, we can use multi-view CLIP prompts from the same latent \(w\) to train the diffusion prior. However, this leads to our second challenge: how to ensure that the diffusion process produces a view-invariant latent from multi-view CLIP embeddings, as shown in Figure 2. To properly achieve fast and realistic text-to-3D creation with pre-trained NeRFs, we propose a framework named 3D-CLFusion to produce the view-invariant latent code from an input text prompt. Our 3D-CLFusion is composed of a diffusion prior network, a pre-trained CLIP model, and a pre-trained generator (_i.e.,_ a NeRF). First, in order to produce the latent code \(w\) for the generator to render from the input text prompt, we introduce the diffusion prior, which takes the CLIP text embedding as a condition and produces the \(w\) latent in each diffusion step. Since we do not have labeled text embeddings, we leverage the image CLIP embeddings for training the diffusion prior. This training strategy is inspired by clip2latent [25], where we believe CLIP text and image embeddings share a close latent space. Second, in order to learn the view-invariant latent in \(\mathcal{W}\) space, we leverage the multi-view images generated by the model itself and contrastive learning to ensure that the latent codes produced in \(\mathcal{W}\) are the same given CLIP embeddings from different views. Later in the experiments, we show the significance of our introduced contrastive learning in the diffusion process. We have demonstrated the effectiveness of the proposed framework for fast text-to-3D using StyleNeRF [8] and EG3D [2] as the pre-trained generators in Figure 1 and the experiments. Compared with DreamFusion [26] and Magic3D [17], which take around one hour for inference with NeRF, our model only takes less than 30 seconds for each 3D object. The contributions of this paper are summarized as follows: * We demonstrate the challenges of the task of text-to-3D directly using latent-based NeRFs and the limitations of current models for this task. * We propose a framework named 3D-CLFusion, which aims to produce view-invariant latents for rendering 3D objects with NeRFs from the input text prompt. * Compared with the existing models leveraging diffusion priors for text-to-2D with pre-trained StyleGANs, our designed model achieves more effective text-to-3D with pre-trained NeRFs. * Though the 3D objects created by our model are limited to the class of the pre-trained model, the inference time is at least 100 times faster than existing text-to-3D creation from neural rendering. ## 2 Related Works Text-to-image generation. There are several text-to-image generative models that have been proposed in recent months. The majority of these models are trained using large amounts of text-image paired data, where image generation is conditioned on the text input. To achieve high-quality and accurate image generation from text, several of the existing approaches make use of pre-trained CLIP to produce text embeddings. 
One line of research leverages diffusion models [28, 29, 30], where the models directly learn the mapping between text and image with diffusion priors (_i.e.,_ Dalle 2 [28] and Glide [20]) or sample from a low-resolution latent space and decode the latent into high-resolution images (_i.e.,_ Stable Diffusion [29]). Another line of work [24, 7, 1, 15] performs text-guided generation or editing relying on CLIP and pre-trained StyleGANs [13, 14]. StyleGANs have achieved high image quality and support different levels of semantic manipulation in the \(w\in\mathcal{W}\) latent space. Recently, clip2latent [25] has been proposed to produce the \(w\) latent in StyleGAN from the input text prompt with a diffusion model. However, most of these works focus on the rendering of 2D images and are not capable of manipulating camera poses as easily as NeRF [19]. Figure 2: **Comparison of text to content creation with pre-trained GANs. Compared with existing effective text-to-2D with StyleGANs [25], it is more challenging to perform text-to-3D with latent-based NeRFs via a diffusion process, since the denoised latent is assumed to be view-invariant.** 3D image synthesis with NeRFs. Methods built on implicit functions, e.g., NeRF [19], have been proposed in [3, 31, 23, 21]. To generate high-resolution images conditioned on an input style latent code, EG3D [2], StyleNeRF [8], VolumeGAN [33], StyleSDF [22], and GMPI [35] have been developed. In addition, some works such as Sofgan [4] and Sem2NeRF [5] are able to perform multi-view synthesis with NeRF by taking in multi-view or single-view semantic masks. However, most of these works are not capable of generating 3D objects given purely input text. We demonstrate the effectiveness of our proposed diffusion prior serving as a text-to-3D plug-and-play tool for EG3D [2] and StyleNeRF [8] in this work. Text-to-3D generation with NeRFs. In recent years, several models [11, 26, 17] for the task of text-to-3D generation have been proposed using NeRFs [19]. Dream Fields [11] leverages the pre-trained image-text encoder (_i.e.,_ CLIP [27]) as image guidance to optimize neural implicit scene representations (_i.e.,_ NeRFs) through online training. Since pre-trained CLIP models may not be effective image-level generation objectives, some works such as DreamFusion [26] and Magic3D [17] instead train the NeRF using a pre-trained diffusion prior [30]. Though DreamFusion [26] and Magic3D [17] are capable of performing satisfactory 3D content creation and rendering with an open-vocabulary input text prompt, inference takes 1.5 hours and 40 minutes, respectively, to train a NeRF from scratch for each prompt. In this work, we aim to resolve this issue by leveraging pre-trained NeRFs and a diffusion prior, producing the latent in less than a minute. ## 3 The Proposed Method ### Problem Formulation and Overview Given the input text prompt, our goal is to generate the conditioned multi-view images with the pre-trained latent-based NeRF generator as our text-to-3D task. Specifically, given the input text embedding with CLIP, denoted as \(e_{t}\), we aim to produce the corresponding \(w\) latent1 as output. The pre-trained NeRF generator, denoted as \(G\), is able to synthesize images \(x\) given different camera poses \(p\): \(x=G(w,p)\). 
Footnote 1: latent \(w\in\mathcal{W}\)[32] or the extended latent \(w\in\mathcal{W}^{+}\)[14] In order to achieve text-to-3D creation by producing the \(w\) latent for fast NeRF rendering, we propose a generative framework named 3D-CLFusion, which is presented in Figure 3. First, in order to produce the latent \(w\) for the generator to render from the input text prompt, we introduce the diffusion prior network (\(f_{\theta}\), where \(\theta\) denotes the network parameters), which takes the CLIP text embedding \(e_{txt}\) as an input condition and produces the \(w_{0}\) latent in the last diffusion step. More details of the diffusion/denoising process will be presented later. Since we do not have labeled text embeddings for the ground-truth latent \(w_{0}\), we leverage CLIP image embeddings for training the diffusion prior. We assume CLIP text and image embeddings share a close latent space. Second, in order to learn the view-invariant latent \(w\) in \(\mathcal{W}\) space, we leverage the multi-view CLIP image embeddings from images generated by the model and apply the contrastive loss to ensure that the latents produced in \(\mathcal{W}\) are view-invariant given CLIP embeddings from different views. After the training stage, we can obtain a text CLIP embedding from a given input text prompt and generate the corresponding 3D multi-view images with the pre-trained NeRF. ### Pre-trained Latent-based NeRF Latent-based Neural Implicit Representation. Following StyleGANs [13, 14], latent-based NeRFs [8] also introduce the mapping network \(f\), which maps noise vectors from a spherical Gaussian space \(\mathcal{Z}\) to the style space \(\mathcal{W}\). \(f\) consists of several MLP layers, and the input style vector \(w\in\mathcal{W}\) can be derived by \(w=f(z),z\in\mathcal{Z}\). Following the neural rendering mechanism in NeRF [19], our pre-trained latent-based NeRFs also take the position \(u\in\mathbb{R}^{3}\) and viewing direction \(d\in\mathbb{S}^{2}\) as inputs, and predict the density \(\sigma(u)\in\mathbb{R}\) and view-dependent color \(c(u,d)\in\mathbb{R}^{3}\). Volume Rendering with Radiance Fields. Once we have the color and density for each coordinate and view direction, we render the color \(C(r)\) for each pixel along the camera ray \(r(t)=o+td\) passing through the camera center \(o\) with volume rendering [12]: \[C_{w}(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma_{w}(r(t))c_{w}(r(t),d)dt, \tag{1}\] \[\text{where}\quad T(t)=\exp(-\int_{t_{n}}^{t}\sigma_{w}(r(s))ds).\] The function \(T(t)\) denotes the accumulated transmittance along the ray from \(t_{n}\) to \(t\). In practice, the continuous integration is discretized by accumulating sampled points along the ray. More details can be found in [19, 8, 2]. ### Latent Diffusion with CLIP embedding In order to produce the corresponding \(w\) from the input text prompt \(e_{txt}\) or image prompt \(e_{img}\), we employ a latent diffusion process to learn the mapping, given the success of latent diffusion models in [29, 25, 28]. We now present an overview of the latent diffusion model, which contains forward (diffusion) and backward (denoising) processes. As stated in DDPM [9] and Latent Diffusion [29], we can formulate the diffusion process of the \(w\) latent for our task as: \[\begin{split} q(w_{1:T}|w_{0})&=\prod_{t=1}^{T}q(w_{t}|w _{t-1}),\\ q(w_{t}|w_{t-1})&=\mathcal{N}(w_{t};\sqrt{1-\beta_{t }}w_{t-1},\beta_{t}\mathbf{I}),\end{split} \tag{2}\] where \(w_{1}\)... 
\(w_{T}\) are latents of the same dimensionality as \(w_{0}\) and \(\beta_{1}<...<\beta_{T}\) is the variance schedule. The reverse process can also be formulated as: \[\begin{split} p_{\theta}(w_{0:T})&=p(w_{T})\prod_{t =1}^{T}p_{\theta}(w_{t-1}|w_{t}),\\ p_{\theta}(w_{t-1}|w_{t})&=\mathcal{N}(w_{t-1}; \mu_{\theta}(w_{t},t),\Sigma_{\theta}(w_{t},t)),\end{split} \tag{3}\] where \(\theta\) are the learnable parameters. If we set \(\Sigma_{\theta}(w_{t},t)=\sigma_{t}^{2}\mathbf{I}\) as a fixed variance, we only need to learn \(\mu_{\theta}(w_{t},t)\). Denoting \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), we can estimate the forward process posteriors conditioned on \(w_{0}\) as: \[\begin{split} q(w_{t-1}|w_{t},w_{0})&=\mathcal{N} (w_{t-1};\tilde{\mu}_{t}(w_{t},w_{0}),\tilde{\beta}_{t}\mathbf{I}),\\ \tilde{\mu}_{t}(w_{t},w_{0})&=\frac{\sqrt{\bar{ \alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}w_{0}+\frac{\sqrt{\alpha_{t}}(1- \bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}w_{t},\\ \tilde{\beta}_{t}&=\frac{1-\bar{\alpha}_{t-1}}{1- \bar{\alpha}_{t}}\beta_{t}.\end{split} \tag{4}\] As stated in DDPM [9], we can choose to learn the \(\mu_{\theta}(w_{t},t)\) in Equ. 3 and reparameterize it as: \[\begin{split}\mu_{\theta}(w_{t},t)&=\tilde{\mu}_{t }(w_{t},w_{0})=\frac{1}{\sqrt{\alpha_{t}}}\left(w_{t}-\frac{\beta_{t}}{\sqrt{1- \bar{\alpha}_{t}}}\epsilon_{\theta}(w_{t},t,e)\right),\\ \text{with}\quad w_{0}&=\frac{1}{\sqrt{\bar{\alpha }_{t}}}\left(w_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{\theta}(w_{t},t,e)\right)\end{split} \tag{5}\] where \(\epsilon_{\theta}\) is a function approximator intended to predict the noise \(\epsilon\) from \(w_{t}\), the timestep embedding \(t\), and the CLIP embedding \(e_{txt}\) or \(e_{img}\). Alternatively, we can reparameterize \(\mu_{\theta}(w_{t},t)\) as: \[\begin{split}&\mu_{\theta}(w_{t},t)=\tilde{\mu}_{t}(w_{t},w_{0})\\ &=\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}f_{ \theta}(w_{t},t,e)+\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{ \alpha}_{t}}w_{t},\end{split} \tag{6}\] where \(f_{\theta}\) is a function approximator intended to predict \(w_{0}\) from \(w_{t}\), the timestep embedding \(t\), and the CLIP embedding \(e_{txt}\) or \(e_{img}\). Later we will show that learning \(f_{\theta}\) in Equ. 6 is preferable for contrastive learning compared with learning \(\epsilon_{\theta}\) in Equ. 5. The diffusion prior loss can be defined as: \[\begin{split}&\mathcal{L}_{diff}=\mathbb{E}[\|\epsilon-\epsilon_{ \theta}(w_{t},t,e)\|^{2}]\\ \text{or}&\quad\mathcal{L}_{diff}=\mathbb{E}[\|w_{0}-f_{ \theta}(w_{t},t,e)\|^{2}]\end{split} \tag{7}\] ### View-invariant Latent Diffusion For 3D latent-based NeRF diffusion, the latent \(w\) not only needs to match the input CLIP embedding \(e\) but is also assumed to be view-invariant. As stated in NeRF-3DE [16], only a latent \(w\) inside the valid manifold of the latent space \(\mathcal{W}\) for a latent-based NeRF is capable of producing reasonable view-invariant 3D objects, while a latent outside the manifold leads to severe 3D distortion. Since the CLIP image embeddings are generated from the same \(w\) under different camera poses \(p_{i}\), the embeddings \(e_{img}^{i}\) from different views are also different. Hence, the \(w_{t}\) produced using Equ. 3 would not be view-invariant. Figure 3: **Overview of our proposed 3D-CLFusion. It consists of three modules: CLIP text/image encoders, contrastive latent diffusion prior, and pre-trained latent-based NeRF. More details are given in Section 3.**
We need to ensure that the latent produced by the diffusion prior in each step is view-invariant: \[\begin{split}f_{\theta}(w_{t},t,E^{img}_{clip}(x_{i}))& =f_{\theta}(w_{t},t,E^{img}_{clip}(x_{j}))\\ x_{i}&=G(w_{0},p_{i}),\quad i\neq j\end{split} \tag{8}\] where \(p_{i}\), \(i=0,1,..,n\), represent camera poses. Minimizing this objective allows the model to learn a view-invariant latent code \(\hat{w}\), since it maps multi-view embeddings \(e_{i}=E^{img}_{clip}(x_{i})\) (controlled by the pose \(p_{i}\)) to the same latent code \(w\) for each training sample. During inference, a single-view CLIP embedding (either a text or an image embedding) is mapped to the latent code \(\hat{w}\), which can produce multi-view images of the same identity by changing the poses. To ensure this, we can use contrastive learning to train the diffusion prior by applying constraints on \(\tilde{w}_{0}=f_{\theta}(w_{t},t,e)\) in Equ. 6. Specifically, we perform contrastive learning with an L2 loss \(\mathcal{L}_{2}\) and a triplet loss \(\mathcal{L}_{tri}\) on the produced \(w_{0}\) in each diffusion step. We can formulate the \(\mathcal{L}_{2}\) loss as: \[\mathcal{L}_{2}=\|f_{\theta}(w_{t},t,e^{i})-f_{\theta}(w_{t},t,e^{j})\|^{2}, \tag{9}\] where \(e^{i}\) and \(e^{j}\) are produced by the same ground-truth \(w\). To maximize the inter-class discrepancy while minimizing intra-class distinctness, we introduce \(\mathcal{L}_{tri}\). Specifically, for each input image embedding \(e\), we sample a positive image embedding \(e_{\text{pos}}\) with the same identity label and a negative image embedding \(e_{\text{neg}}\) with a different identity label to form a triplet tuple. Then, the following equations compute the distances between \(e\) and \(e_{\text{pos}}\)/\(e_{\text{neg}}\): \[\begin{split} d_{\text{pos}}&=\|f_{\theta}(w_{t},t,e )-f_{\theta}(w_{t},t,e^{\text{pos}})\|_{2},\\ d_{\text{neg}}&=\|f_{\theta}(w_{t},t,e)-f_{\theta} (w_{t},t,e^{\text{neg}})\|_{2},\end{split} \tag{10}\] With the above definitions, we have the triplet loss \(\mathcal{L}_{tri}\) defined as: \[\mathcal{L}_{tri}\ =\max(0,m+d_{\text{pos}}-d_{\text{neg}}), \tag{11}\] where \(m>0\) is the margin used to define the distance difference between the positive image pair \(d_{\text{pos}}\) and the negative image pair \(d_{\text{neg}}\). The contrastive loss can be summed up as: \[\mathcal{L}_{contrast}=\mathcal{L}_{2}+\mathcal{L}_{tri} \tag{12}\] We would like to note that the contrastive loss can still be applied using Equ. 5 on the predicted \(\tilde{w}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(w_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{\theta}(w_{t},t,e))\). However, applying constraints on this \(\tilde{w}_{0}\) would not be stable, since it depends on both the predicted \(\epsilon\) and \(w_{t}\) in each step, where \(w_{t}\) varies and is sampled from a Gaussian in each step. The total loss \(\mathcal{L}\) for training our proposed 3D-CLFusion is summarized as follows: \[\mathcal{L}_{total}=\lambda_{diff}\cdot\mathcal{L}_{diff}+\lambda_{contrast} \cdot\mathcal{L}_{contrast}, \tag{13}\] where \(\lambda_{diff}\) and \(\lambda_{contrast}\) are the hyper-parameters used to control the weighting of the corresponding losses. ## 4 Experiment ### Datasets Since our pipeline relies on pre-trained latent-based NeRFs, we train the latent-based NeRFs on several datasets, including FFHQ [13], AFHQ [6], and CompCars [34]. **FFHQ** [13] is a face dataset which contains 70,000 face images. We assume the captured faces are mostly in the center. 
Though the size of the images provided in the dataset is 1024 \(\times\) 1024, all of the images are resized to 256 \(\times\) 256 for training the checkpoints. **AFHQ** [6] is an animal face dataset that contains 15,000 high-quality images at 512\(\times\)512 resolution and includes three categories of animals: cats, dogs, and wildlife. Each category has about 5,000 images. For each category, the dataset splits off around 500 images as a test set and provides all remaining images as a training set. Only the training images are used in the experiments. **CompCars** [34] is a car dataset that contains 136,726 images capturing different vehicles in various styles. The original dataset contains images with different aspect ratios; all of the images are center-cropped and resized to 256 \(\times\) 256. ### Experimental Settings Pre-trained latent-based NeRFs. To train the diffusion prior in our 3D-CLFusion, we use pre-trained StyleNeRF [8] and EG3D [2] models as the generators trained on FFHQ, AFHQ, and CompCars. For a fair comparison, we use the pre-trained models provided online by the original papers. Specifically, StyleNeRF [8] provides checkpoints on FFHQ at 256, 512, and 1024 resolutions. EG3D [2] provides models trained on FFHQ and AFHQ (cat category only). Since StyleNeRF and EG3D do not provide checkpoints for realistic cars such as the images in CompCars, we additionally train a car generator checkpoint using StyleNeRF for evaluation. Baselines. Since our 3D-CLFusion is the first diffusion prior leveraging latent-based NeRFs for text-to-3D, we compare it with the most similar baseline: clip2latent [25]. clip2latent is also a framework for \(w\) latent diffusion; the main difference is that it is designed for 2D StyleGANs and has no constraints enforcing view-invariant learning for NeRFs. To further compare with direct optimization, we also compare our model with latent vector optimization in \(\mathcal{W}\) [14]. Evaluation settings. For qualitative comparison, we compare the input text prompt and the multi-view images generated from its latent code. We would like to note that the generators on FFHQ and AFHQ are trained with frontal head yaw angles roughly ranging between \(-45^{\circ}\) and \(45^{\circ}\) for StyleNeRF and EG3D; only the checkpoints on CompCars can render 360-degree images. For quantitative results, we use the CLIP similarity score following clip2latent [25] and only measure the frontal view of the latent-based NeRFs generated by the models trained on FFHQ. ### Implementation Details To train our diffusion priors (either \(\epsilon_{\theta}\) or \(f_{\theta}\)), we leverage images generated by the generators to create the ground-truth paired data. Specifically, we sample a latent \(w\in\mathbb{R}^{512}\) from the generator and generate \(k=8\) views by manipulating camera poses for each \(w\). Each generated image is resized to \(224\times 224\) for the CLIP image encoder, _i.e.,_ ViT-B/32 in our experiments, to generate the CLIP embeddings \(e\in\mathbb{R}^{512}\). For a fair comparison with clip2latent [25], we choose the same architecture as the diffusion prior in DALLE-2 [28], where \(\epsilon_{\theta}\) or \(f_{\theta}\) is implemented with a causal transformer [18]. 
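The following PyTorch sketch makes one training step of the contrastive diffusion prior concrete. It is our reconstruction of Equ. 2, 6-7, and 9-13 under stated assumptions (the tensor shapes, the in-batch negative sampling, and the helper name `training_step` are ours), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def training_step(f_theta, w0, e_views, alphas_bar, margin=0.2):
    """One optimization step: w0 (B, 512) ground-truth latents, e_views (B, K, 512)
    multi-view CLIP embeddings rendered from those same latents."""
    B, K, _ = e_views.shape
    t = torch.randint(0, len(alphas_bar), (B,))
    a_bar = alphas_bar[t].unsqueeze(-1)
    noise = torch.randn_like(w0)
    w_t = a_bar.sqrt() * w0 + (1 - a_bar).sqrt() * noise  # forward process (Equ. 2)

    i = int(torch.randint(0, K, (1,)))                    # two distinct views of
    j = (i + int(torch.randint(1, K, (1,)))) % K          # the same identity
    w0_i = f_theta(w_t, t, e_views[:, i])                 # f_theta predicts w_0 (Equ. 6)
    w0_j = f_theta(w_t, t, e_views[:, j])

    loss_diff = F.mse_loss(w0_i, w0)                      # L_diff (Equ. 7, f_theta form)
    loss_l2 = F.mse_loss(w0_i, w0_j)                      # L_2 (Equ. 9)

    neg = w0_j[torch.randperm(B)]                         # negatives: other identities
    d_pos = (w0_i - w0_j).norm(dim=-1)
    d_neg = (w0_i - neg).norm(dim=-1)
    loss_tri = F.relu(margin + d_pos - d_neg).mean()      # L_tri (Equ. 10-11)

    return loss_diff + loss_l2 + loss_tri                 # Equ. 12-13, unit weights
```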
Following the training strategy in clip2latent, we also apply classifier-free guidance [10] and pseudo-text embeddings [36] to enhance the diversity of the generated images and prevent overfitting on the image embeddings. The number of timesteps for the diffusion process is set to 1000. All losses are equally weighted (\(\lambda_{diff}=1.0\) and \(\lambda_{contrast}=1.0\)) in all experiments. The batch size is set to \(512\), where we ensure there are 64 ground-truth \(w\) latents, each with 8 CLIP image embeddings from different views, for optimizing the loss. We optimize the network using the Adam optimizer with a learning rate of 0.0001 and train the model for 1,000,000 iterations. Each experiment is conducted on one Nvidia 3090 GPU (24 GB) and implemented in PyTorch. Figure 4: **Qualitative comparisons on text-to-3D with pre-trained latent-based NeRF: StyleNeRF [8]. All of the output images are rendered using the same camera poses and the checkpoints from the FFHQ and CompCars datasets.** Figure 5: **Qualitative comparisons on text-to-3D with pre-trained latent-based NeRF: EG3D [2]. All of the output images are rendered using the same camera poses and the checkpoints from the FFHQ and AFHQ (Cat class) datasets.** ### Results and Comparisons In this section, we compare our proposed model with the baseline approach clip2latent [25] and the online optimization method [14], qualitatively (in Figure 4 and Figure 5) and quantitatively (in Table 1). Qualitative results. As shown in Figure 4, we compare the quality of the generated latent \(w\) among all of the models on images generated by StyleNeRF. The left part displays the results on the FFHQ dataset while the right part displays the results on CompCars. Several phenomena can be summarized. First, although clip2latent is able to generate a latent code that produces plausible images, semantically the output images are not well matched. For example, the gender on FFHQ and the color of the vehicle on CompCars are not well matched with the input text prompts. This suggests that, without constraints ensuring the latent \(w\) to be view-invariant, the latent code \(w\) generated from the text CLIP embeddings may not share a close latent \(\mathcal{W}\) space.

| Method | CLIP score (StyleNeRF [8]) | CLIP score (EG3D [2]) | Time |
| --- | --- | --- | --- |
| clip2latent [25] | 0.282 | 0.245 | 18.1 s |
| Optimization [14] | **0.358** | **0.343** | 55.5 s |
| Ours (w/ \(f_{\theta}\)) | 0.337 | 0.291 | 18.4 s |
| Ours (w/ \(\epsilon_{\theta}\)) | 0.287 | 0.254 | 18.2 s |
| Ours w/o \(\mathcal{L}_{2}\) | 0.305 | 0.272 | 18.4 s |
| Ours w/o \(\mathcal{L}_{tri}\) | 0.311 | 0.282 | 18.4 s |

Table 1: **Quantitative results on CLIP score and inference time.** The CLIP score is measured only on the rendered frontal view for a fair comparison. The results are measured on the StyleNeRF and EG3D generators trained on FFHQ. The 64 test prompts are the ones provided by clip2latent [25]. The time is measured on one Nvidia 3090 GPU. Bold indicates the best result. Figure 6: **Ablation studies on the learning of \(\epsilon_{\theta}\) and the learning of \(f_{\theta}\) in the diffusion prior of 3D-CLFusion.** The results are produced from the generators of both StyleNeRF and EG3D, which are trained on the FFHQ dataset. 
Second, the optimization method is able to generate satisfactory results on FFHQ while failing to optimize the overall color of the vehicle. We attribute this to the fact that the 360-degree vehicle is more difficult to manipulate than the faces in FFHQ. Thus, directly optimizing the CLIP loss easily introduces artifacts and generates unnatural results (see the rightmost column of the blue SUV with green background). We can make similar observations on EG3D [2], shown in Figure 5. First, clip2latent fails to generate well-matched latents for EG3D to render (see the sunglasses in the second big column and the color of the cat in the third big column). The optimization method is also able to generate well-matched results, yet they are unnatural (see the yellow eyes in the rightmost column). Our proposed method exhibits results comparable to direct optimization. Quantitative results. As shown in the first three rows of Table 1, we compare the methods using the CLIP similarity score on the 64 text prompts provided by clip2latent [25]. To measure the inference time and have a fair comparison, we run the optimization [14] for 200 iterations for each text prompt. Our proposed model achieves a superior score compared with clip2latent at a similar inference time. On the other hand, although the optimization method shows artifacts in the qualitative results, it has the best CLIP score. This is essentially because the objective of the optimization is solely CLIP similarity; thus, online optimization is expected to have the best score. ### Ablation studies Learning objectives: \(\epsilon_{\theta}\) vs \(f_{\theta}\). To further support the claim in Section 3 that learning \(f_{\theta}\) is more stable than \(\epsilon_{\theta}\) when applying the contrastive loss, we compare the models learning the two different objectives qualitatively in Figure 6 and quantitatively in Table 1. As shown in Figure 6, it is obvious that learning \(\epsilon_{\theta}\) leads to inferior results. For example, learning \(\epsilon_{\theta}\) cannot generate a good latent when feeding the input text prompt "A vampire's face" in either StyleNeRF or EG3D. Table 1 also shows that learning \(\epsilon_{\theta}\) exhibits a worse CLIP score compared with \(f_{\theta}\). We attribute this to the fact that, when producing \(\tilde{w}_{0}\) from \(\epsilon_{\theta}\), the \(\tilde{w}_{0}\) is far from consistent, since it also depends on \(w_{t}\) in each step. View-invariant learning therefore does not work easily on the \(\epsilon_{\theta}\) network. Contrastive loss function: \(\mathcal{L}_{contrast}\). To further analyze the effectiveness of the essential contrastive losses of our proposed 3D-CLFusion, we conduct experiments with one of them excluded and present the results qualitatively in Figure 7 and quantitatively in Table 1 as well. First, without the L2 loss \(\mathcal{L}_{2}\), the diffusion model is not able to properly derive a view-invariant latent code in each diffusion step, while the remaining triplet loss can still ensure that the multi-view CLIP embeddings produce close \(w_{0}\). 
Second, we can observe that in the model with the triplet loss \(\mathcal{L}_{tri}\) of our \(\mathcal{L}_{contrast}\) excluded, the produced latent code \(w\) starts to deviate semantically from the input text prompt, which indicates that inter-class distance learning is also important for learning view-invariant latent codes in the \(\mathcal{W}\) latent space. Third, with the entire contrastive loss \(\mathcal{L}_{contrast}\) excluded, the model has no guidance for view-invariant learning and produces results similar to clip2latent [25].

## 5 Conclusion

We have unveiled the challenges of performing text-to-3D directly with latent-based NeRFs and the limitations of current models for this task. We propose a framework named 3D-CLFusion, which produces view-invariant latents for rendering 3D objects with NeRFs from the input text prompt. Compared with the existing baseline models in the experiments, our model achieves more effective text-to-3D with pre-trained NeRFs. Although the 3D objects created by our model are domain-specific to the pre-trained model, the inference time is at least 100 times faster than existing text-to-3D creation based on neural rendering. We note that our model will become broadly applicable once more pre-trained NeRFs covering a wider range of 3D object categories are available.

Figure 7: **Ablation studies on the proposed losses.** The results are from the StyleNeRF generator trained on FFHQ.
2308.02388
On a general concept of a Hausdorff-type operator
A unified approach to the concept of a Hausdorff operator is proposed in such a way that a number of classical and new operators fit into the given definition. Conditions are given for the boundedness of the operators under consideration in $L^p$ and in the atomic Hardy space $H^1$, and their regularity property is investigated. Examples are considered. The author hopes that this approach will allow one to unify the study of a lot of extensions and analogs of the classical Hausdorff operator.
A. R. Mirotin
2023-08-04T15:31:57Z
http://arxiv.org/abs/2308.02388v2
# On a General Concept of a Hausdorff-Type Operator

###### Abstract.

A unified approach to the concept of a Hausdorff operator is proposed in such a way that a number of classical and new operators fit into the given definition. Conditions are given for the boundedness of the operators under consideration in \(L^{p}\) and in the atomic Hardy space \(H^{1}\), and their regularity property is investigated. Examples are considered. The author hopes that this approach will allow one to unify the study of a lot of extensions and analogs of the classical Hausdorff operator.

2020 Mathematics Subject Classification: Primary 47B38; Secondary 47B15, 46E30. Key words and phrases: Hausdorff operator, topological group, discrete Hilbert transform, Hilbert transform, Cauchy transform, Hardy space, space of homogeneous type, regularity.

## 1. Introduction

In the past two decades different notions of a Hausdorff operator have been suggested (see, e.g., [18, 19, 5, 17, 24, 38, 39, 22, 27, 28] and the bibliography therein). In our opinion, the unified approach to this notion may be as follows. Let \(\mathfrak{S}\) be a set which is an object of some category. In particular, one can assume that \(\mathfrak{S}\) is endowed with some mathematical structure (algebraic, topological, analytical, algebraic-topological, order, measure, etc.) in the sense of N. Bourbaki. Let \(\mathrm{Aut}(\mathfrak{S})\) stand for the set of all automorphisms of \(\mathfrak{S}\) in this category, and let \((\Omega,\mu)\) denote some measure space. Finally, let \(A:\Omega\to\mathrm{Aut}(\mathfrak{S})\) be some measurable map (in a sense which will be specified in each concrete situation; see, e.g., Definitions 4.4, 5.7, and 5.11 below) defined a.e. \([\mu]\), and \(\Phi\) a \(\mu\)-measurable function.

**Definition 1.1**.: _A Hausdorff operator acts on functions \(f:\mathfrak{S}\to X\) (here \(X\) is some topological vector space) by the rule_

\[(\mathcal{H}_{\Phi,A}f)(x)=\int_{\Omega}\Phi(u)f(A(u)(x))d\mu(u) \tag{1.1}\]

_provided the integral converges in a suitable sense._

The author hopes that this approach will allow one to unify the study of a lot of extensions and analogs of the classical Hausdorff operator. The next examples show that a number of classical and new operators fit into Definition 1.1.

## 2. Special cases

**Example 1**.: _(Hausdorff operator over a matrix algebra.)_ Let \(\mathfrak{S}=\operatorname{Mat}_{n}(\mathfrak{k})\) be the algebra of square matrices \(M=(m_{ij})\) of order \(n\) over a field \(\mathfrak{k}\). Every \(M\in\operatorname{Mat}_{n}(\mathfrak{k})\) has the form \(M=(m_{1},\dots,m_{n})\), where \(m_{j}\) stands for the \(j\)th column of \(M\). For each permutation \(\sigma\in\mathbf{S}_{n}\) (\(\mathbf{S}_{n}\) denotes the symmetric group of degree \(n\)) we denote by \(A(\sigma)\) the bijection of \(\operatorname{Mat}_{n}(\mathfrak{k})\) (an automorphism in the category of sets) such that \(A(\sigma)(M)=(m_{\sigma(1)},\dots,m_{\sigma(n)})\). We equip the set \(\Omega=\mathbf{S}_{n}\) with the counting measure. Let \(\Phi(\sigma)=\operatorname{sgn}(\sigma)\), where \(\operatorname{sgn}(\sigma):=1\) if \(\sigma\) is even, and \(\operatorname{sgn}(\sigma):=-1\) otherwise.
Then a Hausdorff operator in the sense of Definition 1.1 acts on a function \(f:\operatorname{Mat}_{n}(\mathfrak{k})\to X\) as

\[(\mathcal{H}_{\Phi,A}f)(M)=\sum_{\sigma\in\mathbf{S}_{n}}\operatorname{sgn}(\sigma)f(A(\sigma)(M)).\]

In particular, if we take \(\mathfrak{k}=X=\mathbb{C}\), \(f_{0}(M)=\prod_{i=1}^{n}m_{ii}\), then

\[(\mathcal{H}_{\Phi,A}f_{0})(M) = \sum_{\sigma\in\mathbf{S}_{n}}\operatorname{sgn}(\sigma)f_{0}(A(\sigma)(M)) = \sum_{\sigma\in\mathbf{S}_{n}}\operatorname{sgn}(\sigma)f_{0}(A(\sigma)(m_{1},\dots,m_{n})).\]

Since the right-hand side here is an alternating multilinear form (as a function of the column vectors \(m_{1},\dots,m_{n}\)), we have

\[(\mathcal{H}_{\Phi,A}f_{0})(M)=\det(M)\]

(see also [1, p. 202]). One can also take as \(\mathfrak{S}\) any subset of \(\operatorname{Mat}_{n}(\mathfrak{k})\) which is invariant with respect to some family of automorphisms \((A(\sigma))_{\sigma\in\Sigma}\).

**Example 2**.: _(The Discrete Hilbert transform.)_ Let \(\mathfrak{S}=\mathbb{Z}\) be the ring of integers with its natural order, \(\Omega=\mathbb{Z}\) endowed with the discrete measure \(\mu(\{k\})=p_{k}\), and \(A(u)(k)=k-u\) (\(k,u\in\mathbb{Z}\)) the order-preserving bijections of \(\mathbb{Z}\) (automorphisms in the category of linearly ordered sets). Let

\[\Phi(u)=\begin{cases}\frac{2}{\pi u},&u\text{ odd},\\ 0,&u\text{ even}.\end{cases}\]

In this case, (1.1) takes the form

\[(Hf)(k)=\sum_{u\in\mathbb{Z}}\Phi(u)f(k-u)p_{u}=\begin{cases}\frac{2}{\pi}\sum\limits_{n\text{ odd}}\frac{f(n)}{k-n}p_{k-n},&k\text{ even},\\ \frac{2}{\pi}\sum\limits_{n\text{ even}}\frac{f(n)}{k-n}p_{k-n},&k\text{ odd}.\end{cases}\]

For \(p_{k}\equiv 1\) this is the discrete Hilbert transform of a function \(f:\mathbb{Z}\to X\) (for the case \(X=\mathbb{C}\) see [15]).

**Example 3**.: _(The Hilbert transform.) Let \(\mathfrak{S}=\Omega\) be the real line \(\mathbb{R}\) with the Euclidean metric and Lebesgue measure, \(A(u)(x)=x-u\) (\(x,u\in\mathbb{R}\)) the distance-preserving bijections of \(\mathbb{R}\), and \(\Phi(u)=\frac{1}{\pi u}\) the Cauchy kernel. In this case, (1.1) takes the form_

\[(\mathrm{H}f)(x)=\frac{1}{\pi}\text{p.v.}\int_{-\infty}^{\infty}\frac{f(x-u)}{u}du,\]

_the Hilbert transform of a measurable function \(f:\mathbb{R}\to X\)._

Calderón-Zygmund operators can be considered in a similar manner. The previous example can be generalized in the following way (see [37, 34, 35]).

**Example 4**.: _(The Hilbert transform along curves.) Let \(\mathfrak{S}=\mathbb{R}^{n}\) with the Euclidean metric, \(\Omega=\mathbb{R}\) with Lebesgue measure, and \(A(u)(x)=x-\gamma(u)\) (\(x\in\mathbb{R}^{n}\)) the distance-preserving bijections of \(\mathbb{R}^{n}\), where \(\gamma:\mathbb{R}\to\mathbb{R}^{n}\) is a suitable function (say, polynomial) satisfying \(\gamma(0)=0\). Then the singular integral operator_

\[(\mathrm{H}f)(x):=\int_{-\infty}^{\infty}\Phi(u)f(x-\gamma(u))du,\]

_where \(\Phi\) is a Calderón-Zygmund kernel, is of the form (1.1)._

From now on we shall assume that the integral in (1.1) exists in the sense of Lebesgue.
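Before passing to further examples, a small numerical sketch (ours, not part of the original text) may make Definition 1.1 concrete in the discrete setting of Example 2 with \(p_k\equiv 1\); the truncation of the sum is an arbitrary illustrative choice.

```python
# Example 2 as a Hausdorff operator: S = Omega = Z, A(u)(k) = k - u, p_k = 1,
# and Phi supported on the odd integers.
import numpy as np

def phi(u):
    return 2.0 / (np.pi * u) if u % 2 != 0 else 0.0

def discrete_hilbert(f, k, U=10_000):
    """(Hf)(k) = sum_u Phi(u) f(k - u), truncated to 0 < |u| <= U."""
    return sum(phi(u) * f(k - u) for u in range(-U, U + 1) if u != 0)

# Check: for f = indicator of {0}, (Hf)(k) = Phi(k) = 2/(pi*k) for odd k.
f = lambda n: 1.0 if n == 0 else 0.0
print(discrete_hilbert(f, 3))   # ~ 2/(3*pi) ~ 0.2122
```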
**Example 5**.: _(A Cauchy transform over a circular manifold.) Let \(\mathfrak{S}\) be a complex submanifold of \(\mathbb{C}^{n}\) with automorphisms (biholomorphic mappings) \(A(u)(z)=(u_{1}z_{1},\ldots,u_{n}z_{n})\), where \(u=(u_{1},\ldots,u_{n})\in\mathbb{T}^{n}\) (e.g., let \(\mathfrak{S}\) be a Reinhardt domain in \(\mathbb{C}^{n}\), or the torus \(\mathbb{T}^{n}\)). Let also \(\Omega=\mathbb{T}^{n}\) be endowed with the Lebesgue measure, and \(\Phi(u)=\frac{1}{(2\pi\imath)^{n}}\frac{1}{(u_{1}-1)\ldots(u_{n}-1)}\). In this case, (1.1) turns into the operator_

\[(\mathcal{C}f)(z)=\frac{1}{(2\pi\imath)^{n}}\int_{\mathbb{T}^{n}}\frac{f(u_{1}z_{1},\ldots,u_{n}z_{n})}{(u_{1}-1)\ldots(u_{n}-1)}du_{1}\ldots du_{n},\ z\in\mathfrak{S}.\]

One can call this operator a Cauchy transform of a measurable function \(f\) on \(\mathfrak{S}\). Indeed, if \(\mathfrak{S}=\mathbb{T}^{n}\), then putting \(u_{j}=\zeta_{j}/z_{j}\) for \(j=1,\ldots,n\) we get

\[(\mathcal{C}f)(z)=\frac{1}{(2\pi\imath)^{n}}\int_{\mathbb{T}^{n}}\frac{f(\zeta_{1},\ldots,\zeta_{n})}{(\zeta_{1}-z_{1})\ldots(\zeta_{n}-z_{n})}d\zeta_{1}\ldots d\zeta_{n},\ z\in\mathbb{T}^{n},\]

the Cauchy transform of a measurable function \(f\) on \(\mathbb{T}^{n}\).

**Example 6**.: _(A convolution with a measure.) Let \(\mathfrak{S}=G=\Omega\) be a multiplicative group equipped with some (left) invariant metric, \(\operatorname{Aut}(\mathfrak{S})=\operatorname{Iso}(G)\) the set of isometries of \(G\) (automorphisms in the category of metric spaces), \(A(u)(x)=u^{-1}x\) (\(u\in G\)), and \(\Phi(u)=1\). In this case, (1.1) turns into a convolution operator \(f\mapsto f\ast\mu\) on \(G\)._

**Example 7**.: _(Hausdorff operator over a topological group.) Let \(\mathfrak{S}=G\) be a topological group, \(\operatorname{Aut}(\mathfrak{S})=\operatorname{Aut}(G)\) the group of all topological automorphisms of \(G\). In this case, we obtain a general definition of a Hausdorff operator on topological groups. This example contains several known definitions of Hausdorff operators for classical groups (see [24], [25], and examples therein)._

**Example 8**.: _(Discrete Hausdorff operator over the Euclidean space.) Let \(\mathfrak{S}=\mathbb{R}^{d}\) be a \(d\)-dimensional Euclidean space considered as an additive topological group. Then the group \(\operatorname{Aut}(\mathbb{R}^{d})\) of all topological automorphisms of \(\mathbb{R}^{d}\) can be identified with the general linear group \(\operatorname{GL}(d,\mathbb{R})\). Let \(\Omega=\mathbb{Z}\) be endowed with the counting measure. In this case, (1.1) turns into a so-called discrete Hausdorff operator_

\[(\mathcal{H}_{\Phi,A}f)(x)=\sum_{k\in\mathbb{Z}}\Phi(k)f(A(k)x),\]

_where \(A(k)\in\operatorname{GL}(d,\mathbb{R})\) and \(x\in\mathbb{R}^{d}\) is a column vector. For the spectral theory of such operators see [32, 33]._

**Example 9**.: _(Hausdorff operator over a homogeneous space.) Let \(\mathfrak{S}=G/K\) be a homogeneous space of a locally compact group \(G\), \(K\) a compact subgroup of \(G\). In this case, \(\operatorname{Aut}(\mathfrak{S})\) can be identified with the group \(\operatorname{Aut}_{K}(G)\) of all topological automorphisms of \(G\) which map \(K\) onto itself (see [26], [27], and examples therein)._

**Example 10**.: _(Hausdorff operator over a double coset space of a topological group.) Let \(\mathfrak{S}=G//K\) be a double coset space of a locally compact group \(G\), \(K\) a compact subgroup of \(G\). In this case again \(\operatorname{Aut}(\mathfrak{S})\) can be identified with the group \(\operatorname{Aut}_{K}(G)\) of all topological automorphisms of \(G\) which map \(K\) onto itself (see [28] for details)._

**Example 11**.: _(Hausdorff operator over the unit disc.) Let \(\mathfrak{S}=\mathbb{D}\) be the unit disc in the complex plane with its natural analytic structure, \(\operatorname{Aut}(\mathfrak{S})=\operatorname{Aut}_{0}(\mathbb{D})\) the group of all involutive Möbius automorphisms of \(\mathbb{D}\), \(A(u)(z)=\frac{u-z}{1-\bar{u}z}\), and \(\Omega=\mathbb{D}\). In this case, (1.1) turns into a so-called Hausdorff-Zhu operator (see [2, 22, 23, 16, 8]). A similar construction works if \(\mathfrak{S}\) is the unit ball in \(\mathbb{C}^{n}\)._
**Example 12**.: _(Hausdorff operator over the upper half-plane.) Let \(\mathbb{C}^{+}\) be the upper half-plane with its natural analytic structure, \(\mathfrak{S}=(\mathbb{C}^{+})^{n}\), \(\Omega=(0,\infty)^{n}\), and \(A(u)(z)=(\frac{z_{1}}{u_{1}},\ldots,\frac{z_{n}}{u_{n}})\) (\(u\in\Omega\)) a biholomorphic map of \((\mathbb{C}^{+})^{n}\).
In the case \(n=1\), (1.1) turns into a so-called Hausdorff operator over the upper half-plane (see [39], [38], [2] and the bibliography therein)._

## 3. The regularity property of general Hausdorff operators

To examine the regularity property of the transformation \(\mathcal{H}_{\Phi,A}\) we need the following definition.

**Definition 3.2**.: _Let \(\mathcal{B}\) be a filter base on the set \(\mathfrak{S}\). We say that a family \((A(u))_{u\in\Omega}\) of automorphisms of \(\mathfrak{S}\) agrees with the filter base \(\mathcal{B}\) if \(A(u)^{-1}(B)\) belongs to \(\mathcal{B}\) for each \(B\in\mathcal{B}\) and every \(u\in\Omega\)._

The next proposition is a wide generalization of the classical result of Gerabedyan and Rogosinskii (see [9]).

**Proposition 3.3**.: _Suppose that the conditions of Definition 1.1 are fulfilled and a filter base \(\mathcal{B}\) on \(\mathfrak{S}\) is countable. Let a family \((A(u))_{u\in\Omega}\) of automorphisms of \(\mathfrak{S}\) agree with \(\mathcal{B}\). In order that the transformation \(\mathcal{H}_{\Phi,A}\) be regular, i.e. that for every bounded function \(f\) on \(\mathfrak{S}\) the equality \(\lim_{x,\mathcal{B}}f(x)=l\) should imply \(\lim_{x,\mathcal{B}}(\mathcal{H}_{\Phi,A}f)(x)=l\), it is necessary and sufficient that_

\[\int_{\Omega}\Phi(u)d\mu(u)=1. \tag{3.2}\]

Proof.: If \(f(x)=1\) then \((\mathcal{H}_{\Phi,A}f)(x)=\int_{\Omega}\Phi(u)d\mu(u).\) Thus, \(\int_{\Omega}\Phi(u)d\mu(u)=1\) is a necessary condition. To prove the sufficiency, let \(\int_{\Omega}\Phi(u)d\mu(u)=1\), and \(\lim_{x,\mathcal{B}}f(x)=l\). Then

\[\lim_{x,\mathcal{B}}f(A(u)(x))=l \tag{3.3}\]

for all \(u\in\Omega\). Indeed, for every \(\varepsilon>0\) there exists \(B_{\varepsilon}\in\mathcal{B}\) such that \(|f(y)-l|<\varepsilon\) for all \(y\in B_{\varepsilon}\). It follows that \(|f(A(u)(x))-l|<\varepsilon\) for all \(x\in A(u)^{-1}(B_{\varepsilon})\), as well. By Definition 3.2 we have \(A(u)^{-1}(B_{\varepsilon})\in\mathcal{B}\) for each \(u\in\Omega\), and (3.3) follows. Now by the Lebesgue theorem (which applies since the filter base \(\mathcal{B}\) is countable) one has

\[\lim_{x,\mathcal{B}}(\mathcal{H}_{\Phi,A}f)(x)=\int_{\Omega}\Phi(u)ld\mu(u)=l.\]

**Example 13**.: Let, in Example 7, \(G\) be sigma-compact and \(\mathcal{B}\) be the set of all complements \(G\setminus K\) of compact subsets \(K\subset G\). In this case, all the conditions of Definition 3.2 are fulfilled for every topological automorphism \(A\) of \(G\), and \(\lim_{x,\mathcal{B}}f(x)=\lim_{x\to\infty}f(x)\). Thus Proposition 3.3 implies that under the condition (3.2) one has \(\lim_{x\to\infty}(\mathcal{H}_{\Phi,A}f)(x)=l\) whenever \(\lim_{x\to\infty}f(x)=l\) for a bounded measurable function \(f\) on \(G\).

**Example 14**.: Let, in Example 5, \(\mathcal{B}\) be the set of all balls \(B_{k}:=\{|z|<1/k\}\), \(k\in\mathbb{N}\), in \(\mathbb{C}^{n}\) and \(\mathfrak{S}=B_{1}\). Then Proposition 3.3 implies that under the condition (3.2) one has \(\lim_{z\to 0}(\mathcal{C}f)(z)=l\) whenever \(\lim_{z\to 0}f(z)=l\) for a bounded measurable function \(f\) on \(B_{1}\).
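As a quick illustration of Proposition 3.3 (our own worked check, not from the original text, and glossing over the choice of a countable base), condition (3.2) recovers the classical regularity of Cesàro means:

```latex
% Take \mathfrak{S}=(0,\infty), \Omega=(0,1) with Lebesgue measure \mu,
% \Phi\equiv 1 and A(u)(x)=ux, so that
% (\mathcal{H}_{\Phi,A}f)(x)=\int_0^1 f(ux)\,du. Condition (3.2) holds:
\[
\int_{\Omega}\Phi(u)\,d\mu(u)=\int_{0}^{1}du=1 .
\]
% For a base of tails \mathcal{B}=\{(M,\infty)\} one has
% A(u)^{-1}((M,\infty))=(M/u,\infty)\in\mathcal{B}, so the family agrees with
% \mathcal{B}, and Proposition 3.3 gives the classical statement
% \lim_{x\to\infty}f(x)=l \Longrightarrow \lim_{x\to\infty}\int_0^1 f(ux)\,du=l.
```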
## 4. \(L^{p}\) boundedness of general Hausdorff-type operators

To formulate a result on the \(L^{p}\) boundedness of the operator (1.1) we need the following notion.

**Definition 4.4**.: _Let the set \(\mathfrak{S}\) be equipped with some sigma-finite positive measure \(\nu\). We say that the family \((A(u))_{u\in\Omega}\) of automorphisms of \(\mathfrak{S}\) agrees with the measure \(\nu\) if for each \(\nu\)-measurable set \(E\subset\mathfrak{S}\) of finite measure and for every \(u\in\Omega\) the set \(A(u)^{-1}(E)\) is \(\nu\)-measurable and_

\[\nu(A(u)^{-1}(E))=m(A(u))^{-1}\nu(E)\]

_for some positive \(\mu\)-measurable function \(u\mapsto m(A(u))\)._

**Proposition 4.5**.: _Let the set \(\mathfrak{S}\) be equipped with some sigma-finite positive measure \(\nu\), let the family \((A(u))_{u\in\Omega}\) agree with \(\nu\), and let \(1\leq p\leq\infty\). If_

\[\|\Phi\|_{A,p}:=\int_{\Omega}|\Phi(u)|m(A(u))^{-1/p}d\mu(u)<\infty\]

_(here \(\|\Phi\|_{A,\infty}:=\|\Phi\|_{L^{1}(\mu)}\)), then the operator \(\mathcal{H}_{\Phi,A}\) is bounded in \(L^{p}(\nu)\) and its norm does not exceed \(\|\Phi\|_{A,p}\)._

Proof.: Using the Minkowski integral inequality we have for \(1<p<\infty\) and \(f\in L^{p}(\nu)\)

\[\|\mathcal{H}_{\Phi,A}f\|_{L^{p}(\nu)} = \left(\int_{\mathfrak{S}}\left|\int_{\Omega}\Phi(u)f(A(u)(x))d\mu(u)\right|^{p}d\nu(x)\right)^{1/p}\] \[\leq \int_{\Omega}\left(\int_{\mathfrak{S}}|\Phi(u)|^{p}|f(A(u)(x))|^{p}d\nu(x)\right)^{1/p}d\mu(u)\] \[= \int_{\Omega}|\Phi(u)|\left(\int_{\mathfrak{S}}|f(A(u)(x))|^{p}d\nu(x)\right)^{1/p}d\mu(u).\]

Since by Definition 4.4

\[\int_{\mathfrak{S}}|f(A(u)(x))|^{p}d\nu(x)=m(A(u))^{-1}\int_{\mathfrak{S}}|f(x)|^{p}d\nu(x) \tag{4.4}\]

(it suffices to verify the last equality for \(f=\chi_{E}\), the indicator of a \(\nu\)-measurable set \(E\subset\mathfrak{S}\) of finite measure), we have

\[\|\mathcal{H}_{\Phi,A}f\|_{L^{p}(\nu)} \leq \int_{\Omega}|\Phi(u)|m(A(u))^{-1/p}d\mu(u)\left(\int_{\mathfrak{S}}|f(x)|^{p}d\nu(x)\right)^{1/p}\] \[= \|\Phi\|_{A,p}\|f\|_{L^{p}(\nu)}.\]

For \(p=1\) the statement of the proposition follows from the Fubini theorem, and for \(p=\infty\) it is obvious.

**Example 15**.: Let, in Example 7, \(G\) be locally compact, \(\nu\) the Haar measure of \(G\), and let the family \(\operatorname{Aut}(G)\) of all topological automorphisms of a locally compact group \(G\) be equipped with its natural topology (see, e.g., [11, (26.1)]). Assume that the map \(u\mapsto A(u)\) from \(\Omega\) to \(\operatorname{Aut}(G)\) is measurable with respect to the measure \(\mu\) in \(\Omega\) and the Borel structure in \(\operatorname{Aut}(G)\). In this case all the conditions of Definition 4.4 are fulfilled for \((A(u))_{u\in\Omega}\). Indeed, we have \(m(A(u))=\operatorname{mod}(A(u))\), the modulus of \(A(u)\), and the map \(A\mapsto\operatorname{mod}(A)\) from \(\operatorname{Aut}(G)\) to \((0,\infty)\) is continuous (see [11, (26.21)]). Thus, the family \((A(u))_{u\in\Omega}\) agrees with the measure \(\nu\) and Proposition 4.5 is applicable.
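For orientation, here is a short check (ours, not from the original text) that Proposition 4.5 recovers the known \(L^p\) bound for the classical one-dimensional Hausdorff operator:

```latex
% Take \mathfrak{S}=(0,\infty) with Lebesgue measure \nu, \Omega=(0,\infty),
% A(u)(x)=ux, so that (\mathcal{H}_{\Phi,A}f)(x)=\int_0^\infty\Phi(u)f(ux)\,du.
% Since A(u)^{-1}(E)=u^{-1}E and \nu(u^{-1}E)=u^{-1}\nu(E), Definition 4.4
% gives m(A(u))=u, and Proposition 4.5 yields the classical estimate
\[
\|\mathcal{H}_{\Phi,A}\|_{L^{p}(\nu)\to L^{p}(\nu)}
   \le\|\Phi\|_{A,p}=\int_{0}^{\infty}|\Phi(u)|\,u^{-1/p}\,du .
\]
```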
## 5. \(H^{1}\) boundedness of general Hausdorff-type operators

In this section we shall be working in the following setting. We assume that \(\mathfrak{S}\) is a quasi-metric space with quasi-metric \(\rho\) and positive regular Borel measure \(\nu\). Moreover, the following _doubling condition_ holds: there exists a constant \(C\) such that

\[\nu(B(x,2r))\leq C\nu(B(x,r))\]

for each \(x\in\mathfrak{S}\) and \(r>0\). (Here and below \(B(x,r)\) denotes a quasi-ball with respect to \(\rho\) with center \(x\) and radius \(r>0\).) In this case, the triple \((\mathfrak{S},\rho,\nu)\) is called _a quasi-metric measure space of homogeneous type_ [7]. The _doubling constant_ is the smallest constant \(C\geq 1\) for which the doubling inequality holds. We denote this constant by \(C_{\nu}\). Then for each \(x\in\mathfrak{S}\), \(k\geq 1\) and \(r>0\)

\[\nu(B(x,kr))\leq C_{\nu}k^{s}\nu(B(x,r)), \tag{5.5}\]

where \(s=\log_{2}C_{\nu}\) (see, e.g., [10, p. 76]). The number \(s\) sometimes plays the role of a "dimension" for a doubling quasi-metric measure space.

**Definition 5.6**.: _Let \((\Omega,\mu)\) be a measure space. We say that a family of automorphisms \((A(u))_{u\in\Omega}\) of a quasi-metric space \((\mathfrak{S},\rho)\) agrees with the quasi-metric if there exists a \(\mu\)-measurable function \(k(u)\), depending on \(u\in\Omega\) only, such that for every \(x\in\mathfrak{S}\), every \(u\in\Omega\), and every \(r>0\)_

\[A(u)^{-1}(B(x,r))\subseteq B(x^{\prime},k(u)r) \tag{5.6}\]

_for some point \(x^{\prime}=x^{\prime}(x,u,r)\in\mathfrak{S}\)1._

Footnote 1: In fact, \(k(u)\) depends on \(A(u)\).

**Remark 1.** Let \(\Omega\) be a \(\sigma\)-compact quasi-metric space with Radon measure \(\mu\). If \(\mathfrak{S}=G\) is a (finite-dimensional real or complex) connected Lie group with left invariant Riemannian metric \(\rho\), then every automorphism \(A\in\operatorname{Aut}(G)\) is Lipschitz and \(\operatorname{Aut}(G)\) agrees with \(\rho\) by [27, Lemma 2.6].

**Definition 5.7**.: _Let \((\Omega,\mu)\) be a measure space. We say that a family of automorphisms \((A(u))_{u\in\Omega}\) is \(\mu\)-\(\nu\) measurable if for every \(x\in\mathfrak{S}\) the map \(u\mapsto A(u)(x)\) from \((\Omega,\mu)\) to \((\mathfrak{S},\nu)\) is measurable._

Recall [7] that a \(\nu\)-measurable function \(a\) on \(\mathfrak{S}\) is a \((1,q)\)_-atom_ (\(q\in(1,\infty]\)) if

(i) the support of \(a\) is contained in a ball \(B(x,r)\);

(ii) \(\|a\|_{\infty}\leq\frac{1}{\nu(B(x,r))}\) if \(q=\infty\), and \(\|a\|_{q}\leq\nu(B(x,r))^{\frac{1}{q}-1}\) if \(q\in(1,\infty)\)2;

Footnote 2: \(\|\cdot\|_{q}\) denotes the \(L^{q}\) norm.

(iii) \(\int_{\mathfrak{S}}a(x)d\nu(x)=0\).

In case \(\nu(\mathfrak{S})<\infty\) we shall assume \(\nu(\mathfrak{S})=1\); in this case the constant function with value \(1\) is also considered to be an atom. From now on, by an atom we mean a \((1,q)\)-atom.

**Definition 5.8**.: [7, p. 592] _Let \(q\in(1,\infty]\). We define the Hardy space \(H^{1,q}(\mathfrak{S})\) as the space of functions \(f\) on \(\mathfrak{S}\) that admit an atomic decomposition of the form_

\[f=\sum_{j=1}^{\infty}\alpha_{j}a_{j}, \tag{5.7}\]

_where \(a_{j}\) are \((1,q)\)-atoms and \(\sum_{j=1}^{\infty}|\alpha_{j}|<\infty\) (the sums (5.7) are convergent in the \(L^{1}\) norm). 3 In this case,_

Footnote 3: It is known that \(H^{1,q}\) does not depend on \(q\in(1,\infty]\) [7, Theorem A, p. 592]. We write \(H^{1,q}\) instead of \(H^{1}\) in order to stress the fact that we use the norm \(\|\cdot\|_{H^{1,q}}\) described below.

\[\|f\|_{H^{1,q}(\mathfrak{S})}:=\inf\sum_{j=1}^{\infty}|\alpha_{j}|,\]

_and the infimum is taken over all such decompositions of \(f\)._

Since \(\|a\|_{L^{1}}\leq 1\), one has \(\|f\|_{L^{1}(\mathfrak{S})}\leq\|f\|_{H^{1,q}(\mathfrak{S})}\) for a function \(f\) in \(H^{1,q}(\mathfrak{S})\); in particular, \(H^{1,q}(\mathfrak{S})\subset L^{1}(\mathfrak{S})\). For the proof of the \(H^{1}\) boundedness of \(\mathcal{H}_{\Phi,A}\) we shall use the following lemma.
**Lemma 5.9**.: _[_24_, Lemma 2]_ _Let \((\mathfrak{S},\nu)\) be a measure space, \(\mathcal{F}(\mathfrak{S})\) some Banach space of \(\nu\)-measurable functions on \(\mathfrak{S}\), \((\Omega,\mu)\) a \(\sigma\)-compact quasi-metric space with positive Radon measure \(\mu\), and \(F(u,x)\) a function on \(\Omega\times\mathfrak{S}\). Assume that_

_(a) the convergence of a sequence in norm in \(\mathcal{F}(\mathfrak{S})\) yields the convergence of some subsequence to the same function for \(\nu\)-a. e. \(x\in\mathfrak{S}\);_

_(b) \(F(u,\cdot)\in\mathcal{F}(\mathfrak{S})\) for \(\mu\)-a. e. \(u\in\Omega\);_

_(c) the map \(u\mapsto F(u,\cdot):\Omega\to\mathcal{F}(\mathfrak{S})\) is Bochner integrable with respect to \(\mu\)._

_Then for \(\nu\)-a. e. \(x\in\mathfrak{S}\) one has_

\[\left((B)\int_{\Omega}F(u,\cdot)d\mu(u)\right)(x)=\int_{\Omega}F(u,x)d\mu(u).\]

In the following we put

\[N(\Phi,A,q)=C_{\nu}^{1-\frac{1}{q}}\int_{\Omega}|\Phi(u)|k(u)^{s\left(1-\frac{1}{q}\right)}m(A(u))^{-\frac{1}{q}}d\mu(u).\]

**Theorem 5.10**.: _Let \(\Omega\) be a \(\sigma\)-compact quasi-metric space with positive Radon measure \(\mu\) and let \((\mathfrak{S},\rho,\nu)\) be a quasi-metric measure space of homogeneous type such that the space \(H^{1,q}(\mathfrak{S})\) is separable (\(q\in(1,\infty]\)). If a \(\mu\)-\(\nu\)-measurable family of automorphisms \((A(u))_{u\in\Omega}\) of \(\mathfrak{S}\) agrees with the quasi-metric \(\rho\) and with the measure \(\nu\), and \(N(\Phi,A,q)<\infty\), then the Hausdorff operator \(\mathcal{H}_{\Phi,A}\) is bounded in \(H^{1,q}(\mathfrak{S})\) and its norm does not exceed \(N(\Phi,A,q)\)._

Proof.: We use the approach from [24]. First we are going to show that the conditions of Lemma 5.9 are fulfilled with \(\mathcal{F}(\mathfrak{S})=H^{1,q}(\mathfrak{S})\) and \(F(u,x)=\Phi(u)f(A(u)(x))\), where \(f\in H^{1,q}(\mathfrak{S})\). Let \(1<q<\infty\). Since for a function \(f\in H^{1,q}(\mathfrak{S})\) one has \(\|f\|_{L^{1}(\mathfrak{S})}\leq\|f\|_{H^{1,q}(\mathfrak{S})}\), condition (a) of the Lemma follows from the well-known theorem of F. Riesz. To verify conditions (b) and (c), consider a function \(f\in H^{1,q}(\mathfrak{S})\) with an atomic representation (5.7). Then

\[f\circ A(u)=\sum_{j=1}^{\infty}\alpha_{j}a_{j}\circ A(u), \tag{5.8}\]

for all \(u\in\Omega\). We claim that the function

\[a^{\prime}_{j,u}:=C_{\nu}^{\frac{1}{q}-1}k(u)^{s(\frac{1}{q}-1)}m(A(u))^{\frac{1}{q}}a_{j}\circ A(u)\]

is an atom as well. Indeed, if an atom \(a_{j}\) is supported in a ball \(B(x_{j},r_{j})\), then \(a^{\prime}_{j,u}\) is supported in \(A(u)^{-1}(B(x_{j},r_{j}))\subset B(x^{\prime}_{j},k(u)r_{j})\) by (5.6).
Next, since the property (ii) holds for \(a_{j}\), we have by (4.4)

\[\|a^{\prime}_{j,u}\|_{q} = C_{\nu}^{\frac{1}{q}-1}k(u)^{s(\frac{1}{q}-1)}m(A(u))^{\frac{1}{q}}\|a_{j}\circ A(u)\|_{q}\] \[= C_{\nu}^{\frac{1}{q}-1}k(u)^{s(\frac{1}{q}-1)}m(A(u))^{\frac{1}{q}}\left(\int_{\mathfrak{S}}|a_{j}(A(u)(x))|^{q}d\nu(x)\right)^{\frac{1}{q}}\] \[= C_{\nu}^{\frac{1}{q}-1}k(u)^{s(\frac{1}{q}-1)}\|a_{j}\|_{q}\] \[\leq C_{\nu}^{\frac{1}{q}-1}k(u)^{s(\frac{1}{q}-1)}\nu(B(x_{j},r_{j}))^{\frac{1}{q}-1}. \tag{5.9}\]

On the other hand, the doubling condition (5.5) yields

\[\nu(B(x^{\prime}_{j},k(u)r_{j}))\leq C_{\nu}k(u)^{s}\nu(B(x_{j},r_{j}))\]

and therefore

\[\nu(B(x_{j},r_{j}))^{\frac{1}{q}-1}\leq(C_{\nu}k(u)^{s})^{1-\frac{1}{q}}\nu(B(x^{\prime}_{j},k(u)r_{j}))^{\frac{1}{q}-1}.\]

Now (5.9) implies that

\[\|a^{\prime}_{j,u}\|_{q}\leq\nu(B(x^{\prime}_{j},k(u)r_{j}))^{\frac{1}{q}-1},\]

i.e., (ii) holds for \(a^{\prime}_{j,u}\). Finally, the cancellation condition (iii) for \(a^{\prime}_{j,u}\) follows from (4.4) and the corresponding condition for \(a_{j}\). Further, since for all \(u\in\Omega\)

\[a_{j}\circ A(u)=C_{\nu}^{1-\frac{1}{q}}k(u)^{s(1-\frac{1}{q})}m(A(u))^{-\frac{1}{q}}a^{\prime}_{j,u},\]

formula (5.8) can be rewritten as

\[f\circ A(u)=\sum_{j=1}^{\infty}\left(\alpha_{j}C_{\nu}^{1-\frac{1}{q}}k(u)^{s(1-\frac{1}{q})}m(A(u))^{-\frac{1}{q}}\right)a^{\prime}_{j,u}.\]

It follows that \(f\circ A(u)\in H^{1,q}(\mathfrak{S})\) (and therefore condition (b) holds) and

\[\|f\circ A(u)\|_{H^{1,q}}\leq\left(C_{\nu}^{1-\frac{1}{q}}k(u)^{s(1-\frac{1}{q})}m(A(u))^{-\frac{1}{q}}\right)\|f\|_{H^{1,q}}. \tag{5.10}\]

Condition (c) holds, too. Indeed, since \(H^{1,q}(\mathfrak{S})\) is separable, to verify that the \(H^{1,q}(\mathfrak{S})\)-valued function \(u\mapsto f\circ A(u)\) is strongly \(\mu\)-measurable it suffices to prove that it is weakly \(\mu\)-measurable. To this end, in view of (5.7), it suffices to consider the case where \(f=a\) is an atom. Let \(l^{*}\) be a linear continuous functional on \(H^{1,q}(\mathfrak{S})\). Then, by [7, Theorem B], there is a function \(l\in BMO(\mathfrak{S})\) such that

\[l^{*}(a\circ A(u))=\int_{\mathfrak{S}}l(x)a(A(u)(x))d\nu(x).\]

The map \(u\mapsto l^{*}(a\circ A(u))\) is \(\mu\)-measurable if the map \(\phi(u):=a(A(u)(x))\) is \(\mu\)-measurable for each \(x\). To verify the last property one can assume that \(a\) is real-valued. Let \(E_{c}=\{y\in\mathfrak{S}:a(y)<c\}\) (\(c\in\mathbb{R}\)). Then \(E_{c}\) is \(\nu\)-measurable and so the set \(\phi^{-1}((-\infty,c))=\{u\in\Omega:A(u)(x)\in E_{c}\}\) is \(\mu\)-measurable by Definition 5.7. Now the inequality (5.10) and the condition \(N(\Phi,A,q)<\infty\) imply that the function \(u\mapsto\|F(u,\cdot)\|_{H^{1,q}}\) is Lebesgue \(\mu\)-integrable and condition (c) of Lemma 5.9 holds. Thus, by Lemma 5.9,

\[\mathcal{H}_{\Phi,A}f=\int_{\Omega}\Phi(u)f\circ A(u)d\mu(u)\]

(the Bochner integral), and therefore

\[\|\mathcal{H}_{\Phi,A}f\|_{H^{1,q}} \leq \int_{\Omega}|\Phi(u)|\|f\circ A(u)\|_{H^{1,q}}d\mu(u)\] \[\leq N(\Phi,A,q)\|f\|_{H^{1,q}}.\]

The case \(q=\infty\) can be treated in a similar manner. The proof is complete.

**Remark 2.** The proof of Theorem 5.10 shows that the condition that the space \(H^{1,q}(\mathfrak{S})\) is separable can be replaced by the condition that for every fixed \(f\in H^{1,q}(\mathfrak{S})\) the range of the map \(u\mapsto f\circ A(u)\), \(\Omega\to H^{1,q}(\mathfrak{S})\), is almost separable.
Since \(\Omega\) is separable, \(f\circ A(\Omega)\) is separable if this map is measurable [13, Lemma 1.1.12]. If \(\Omega\) is countable, it is obvious that the subspace \(f\circ A(\Omega)\) of \(H^{1,q}(\mathfrak{S})\) is separable.

In the following we shall assume that the family \(\operatorname{Aut}(G)\) of all topological automorphisms of a locally compact group \(G\) is equipped with its natural (Braconnier) topology (see, e.g., [11, (26.1)], [12, Section III.3]).

**Definition 5.11**.: _Let \((\Omega,\mu)\) be a \(\sigma\)-compact quasi-metric space with positive Radon measure \(\mu\). A family of topological automorphisms \((A(u))_{u\in\Omega}\) of a locally compact group \(G\) is called measurable if the map \(u\mapsto A(u)\) is measurable with respect to the measure \(\mu\) and the Borel structure in \({\rm Aut}(G)\). 4_

Footnote 4: Here we concretize the notion of a measurable family of topological automorphisms of a locally compact group from [24, 27, 26, 28, 25].

**Corollary 5.12**.: _(cf. [24], [27]). Let \((\Omega,\mu)\) be a \(\sigma\)-compact quasi-metric space with positive Radon measure \(\mu\). Let \(\mathfrak{S}=G\) be a locally compact group with Haar measure \(\nu\) whose topology is generated by a quasi-metric \(\rho\). Assume that \((G,\rho,\nu)\) is a space of homogeneous type and the space \(H^{1,q}(G)\) is separable (\(q\in(1,\infty]\)). If a measurable family of topological automorphisms \((A(u))_{u\in\Omega}\) of \(G\) agrees with the quasi-metric \(\rho\) and \(N(\Phi,A,q)<\infty\), then the Hausdorff operator \(\mathcal{H}_{\Phi,A}\) is bounded in \(H^{1,q}(G)\) and its norm does not exceed \(N(\Phi,A,q)\)._

Proof.: The only conditions of Theorem 5.10 we need to verify are that the family \((A(u))_{u\in\Omega}\) agrees with the measure \(\nu\) and that it is \(\mu\)-\(\nu\)-measurable. For the proof of the first property note that in our case we have \(m(A(u))={\rm mod}(A(u))\), and the map \(u\mapsto{\rm mod}(A(u))\) is \(\mu\)-measurable, since the family \((A(u))_{u\in\Omega}\) is measurable and the map \(A\mapsto{\rm mod}A\) from \({\rm Aut}(G)\) to \((0,\infty)\) is continuous (see [11, (26.21)]). Finally, since for each \(x\in G\) the map \({\rm Aut}(G)\to G\) sending \(A\) to \(A(x)\) is continuous [12, Proposition III.3.1, p. 40], and the family \((A(u))_{u\in\Omega}\) is measurable, it is \(\mu\)-\(\nu\)-measurable.

**Remark 3.** As in Theorem 5.10, in Corollary 5.12 the condition that the space \(H^{1,q}(G)\) is separable can be replaced by the condition that for every fixed \(f\in H^{1,q}(G)\) the range of the map \(u\mapsto f\circ A(u)\), \(\Omega\to H^{1,q}(G)\), is almost separable. Since \(\Omega\) is separable and the map \(u\mapsto A(u)\) is measurable, Lemma 1.1.12 from [13] shows that \(f\circ A(\Omega)\) is separable if the map \(A\mapsto f\circ A\), \({\rm Aut}(G)\to H^{1,q}(G)\), is measurable. Also, it is obvious that \(f\circ A(\Omega)\) is separable if \(\Omega\) is countable.

**Conjecture.** Let \(G\) be a locally compact group with Haar measure \(\nu\) whose topology is generated by a quasi-metric \(\rho\). If \((G,\rho,\nu)\) is a space of homogeneous type, then the map \(A\mapsto f\circ A\), \({\rm Aut}(G)\to H^{1,q}(G)\), is continuous.
2306.12437
Evaluation of microseismic motion at the KAGRA site based on ocean wave data
The microseismic motion, ambient ground vibration caused by ocean waves, affects ground-based gravitational wave detectors. In this study, characteristics of the ocean waves, including seasonal variations and correlation coefficients, were investigated for the significant wave heights at 13 coasts in Japan. The relationship between the ocean waves and the microseismic motion at the KAGRA site was also evaluated. As a result, the microseismic motion at the KAGRA site was almost fully explained by the principal components of the ocean wave data. One possible application of this study is microseismic forecasting, an example of which is also presented.
S. Hoshino, Y. Fujikawa, M. Ohkawa, T. Washimi, T. Yokozawa
2023-06-16T05:43:33Z
http://arxiv.org/abs/2306.12437v3
# Evaluation of the microseismic motion at the KAGRA site based on the ocean wave data

###### Abstract

The microseismic motion, which is the ambient ground vibration caused by ocean waves, affects ground-based gravitational wave detectors. In this study, we characterized the properties of the microseismic motion at the KAGRA site and of the ocean waves at 13 coasts of Japan, such as their seasonal variation and the correlation between them. As a result, we almost succeeded in explaining the microseismic motion at the KAGRA site by the principal components of the ocean wave data. One possible application of this study is microseismic forecasting, an example of which is also shown.

## 1 Introduction

Gravitational waves are ripples of space-time distortion propagating at the speed of light, and their direct observation is a key probe in advanced astronomy. The first successful detection was performed in 2015 by the advanced Laser Interferometer Gravitational-Wave Observatory (LIGO, USA) [1, 2], and the first simultaneous detections by LIGO and Virgo (Italy) were performed in 2017 [3, 4, 5]. The Kamioka Gravitational Wave Detector (KAGRA) is a laser interferometric gravitational wave detector with 3 km arms in Japan [6]. Two solo observation runs in 2015 and 2018 [7, 8] and the first international joint observation run (O3GK) with GEO 600 in Germany, during April 7-21, 2020 [9, 10, 11], were performed. KAGRA has two unique features compared with other kilometer-scale detectors in the world: (1) it is constructed underground at Kamioka to reduce ground vibration noise, and (2) the test-mass mirrors are cooled down to reduce thermal noise. To attain and maintain the working point of the detector, all mirror positions, angles, and motions must be controlled. When external disturbances occur, it becomes difficult for the interferometer to maintain these controls, and the resonant state is broken. Consequently, the gravitational wave observations must be stopped. This state is called "lock loss", and a reduction in the lock-loss rate is important for performing meaningful observations. During the O3GK period, it was occasionally difficult to keep the KAGRA interferometer in the locked state when microseismic motions, which are ground vibrations in the frequency range of about 0.1-0.5 Hz induced by ocean waves, were large [11, 12, 13]. The mechanism of microseismic motion excited by ocean waves was derived by Longuet-Higgins [14] and approximated using a non-linear equation extended by Hasselmann [15]. This approximation was evaluated using the normal-mode equation derived by Tanimoto, with negligible errors for ground vibrations generated in the ocean down to a depth of approximately 1 km [16, 17]. Microseismic motion is related to the amplitude and period of the waves: the frequency of the ground motion is approximately twice that of the waves, and the magnitude of the motion is derived from the energy of the waves, as shown by Bromirski _et al._ [18].

## 2 Characterizations for the microseismic motion at the KAGRA site

In this section, information about the seismometers used in this study and the characteristics of their data is given. To monitor the environment around the KAGRA interferometer, several sensors are located at the experimental site and continuously logged using the KAGRA DAQ system [19]. Three seismometers were installed, one at each end station and one at the corner station, with their horizontal axes aligned with the orientations of the arms [20].
The seismometers are Trillium 120QA units from _Nanometrics Inc._, sensitive to the ground velocity in three directions from 0.01 Hz to 10 Hz. Figure 1 shows examples of the amplitude spectral density (ASD) of these seismometers at each location and in each direction. In this figure, two different days are shown to compare high- and low-noise conditions; the black dashed lines show Peterson's high/low seismic noise models [21]. The significant peak at 0.1-0.5 Hz corresponding to the microseismic motion is clearly seen in all ASDs, and its amplitude and structure are almost the same across stations and directions on both days. The spectrum below 0.05 Hz varies among the channels and is assumed to be due to atmospheric pressure [22, 23]. Based on these results, for simplicity, the vertical signal of the seismometer located at the corner station was used as a representative of the ground vibration in this study.

The band-limited root mean square (BLRMS) of the seismometer signal was used to evaluate the time dependence of the microseismic motion at the KAGRA site. The BLRMS is the root mean square, computed every 20 minutes, of the time-series data band-limited to 0.1-0.5 Hz with a bandpass filter (TimeSeries.bandpass in gwpy 2.1.4). Figure 2 shows an example from February 2020; all the data for 2020 are shown in Appendix A. In the O3GK run, the KAGRA interferometer was difficult to keep in the locked state when the BLRMS value was above 0.3 \(\mu\)m/s and could not be kept in the locked state at all when the BLRMS value exceeded 0.5 \(\mu\)m/s [13]. Figure 3 shows the ratio of the microseismic motion for every week in 2020 in three ranges: below 0.3 \(\mu\)m/s (green), between 0.3 \(\mu\)m/s and 0.5 \(\mu\)m/s (yellow), and above 0.5 \(\mu\)m/s (red). The microseismic motion increased from winter to the beginning of spring (December-March) and remained stable at small values in summer. It also shows large values at the beginning of autumn (September and October) owing to typhoons.

Figure 1: Amplitude spectral densities (ASDs) of the ground velocity, for each location (Corner, X-end, Y-end) and each direction (X, Y, Z) at the KAGRA site. The measurement time is 4096 seconds on February 18 (left) and June 10 (right) in 2020. Black dashed lines represent Peterson's high/low seismic noise model [21].

Figure 2: The root mean square of the microseismic motion (0.1-0.5 Hz) at the KAGRA site during February 2020. All data for 2020 are shown in Appendix A.

Figure 3: The ratio of the microseismic motion for every week in 2020 into the three ranges: below 0.3 \(\mu\)m/s (green), 0.3-0.5 \(\mu\)m/s (yellow), and above 0.5 \(\mu\)m/s (red). This classification is based on the locked state of KAGRA in the O3GK [13].
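The BLRMS defined above can be reproduced roughly as follows. This is a sketch, not the collaboration's actual pipeline; the file and channel names are placeholders.

```python
# BLRMS sketch: bandpass to the microseismic band, then RMS every 20 minutes.
from gwpy.timeseries import TimeSeries

data = TimeSeries.read("seismometer_corner.gwf",
                       "K1:PEM-SEIS_CORNER_GND_Z")   # hypothetical channel name
band = data.bandpass(0.1, 0.5)                       # 0.1-0.5 Hz, as in the text
blrms = band.rms(stride=20 * 60)                     # RMS every 20 minutes
blrms.plot()
```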
## 3 Characterizations for the ocean waves around Japan

Significant wave heights (SWH, \(H_{1/3}\)), the average of the highest 1/3 of the waves over a period of time, are widely used as indicators of the strength of ocean waves. Wave data are provided by _the Nationwide Ocean Wave information network for Ports and Harbors_ (NOWPHAS) operated by _the Port and Harbor Bureau, Ministry of Land, Infrastructure, Transport and Tourism_, Japan [24]. The NOWPHAS data are measured every 20 minutes using the zero-up-cross method. Seven sites on the Sea of Japan coast (Niigata, Naoetsu, Toyama, Wajima, Fukui, Tsuruga, and Shibayama) and six sites on the Pacific side (Soma, Onahama, Kashima, Shimoda, Shimizu, and Omaesaki), as shown in Fig. 4, are selected from the NOWPHAS data for use in this study, because these sites are relatively close to KAGRA and the microseismic Rayleigh waves attenuate in proportion to the square of the propagation distance.

Figure 5 shows the time series of the SWH at the 13 sites during February 2020 (all data for 2020 are shown in Appendix A). Figure 6 shows the cumulative ratio (subtracted from 1) of the SWH at four sites (Toyama, Wajima, Kashima, and Omaesaki), calculated every three months. The waves are relatively larger in the winter seasons at Wajima and Toyama, which is consistent with the fact that the wind over the Sea of Japan is strongly affected by the seasonal wind and becomes stronger in these seasons. Seasonal winds blow from the northwest during winter and from the southeast during summer in Japan. Toyama Bay is traditionally known as a "quiet bay", and its SWH values are smaller than those of Wajima, even though these sites are close to each other. In Kashima, wave activity seems to stay at the same level, except for the summer period. At Omaesaki, there was little change throughout the year at about the 90% probability level. Typhoons typically approach Japan between July and October. For example, Fig. 7 shows the SWH when a typhoon approached Japan in 2019; the SWH on the Pacific Ocean side can reach approximately 10 m. Figure 8 shows the correlation coefficients of the SWH at the 13 sites. It suggests that the ocean waves around KAGRA can be categorized into three areas: the Sea of Japan side, the Pacific side of the coast facing east (Pacific side east), and the Pacific side of the coast facing south (Pacific side south). As these three groups show little mutual correlation, their behaviors can be regarded as independent of each other.

## 4 Prediction of the microseismic motion from the ocean waves

Data analysis is performed using the seismometer data from KAGRA and the ocean wave data, both obtained in 2020. In addition, data for periods of earthquake occurrence or approaching typhoons are excluded for a more accurate analysis. To evaluate the relationship between the microseismic motion at KAGRA and the ocean waves, a correlation analysis between the seismometer signal and the SWH of each coast is performed. Figure 9 shows the correlation coefficients between the microseismic motion at the KAGRA site and the SWH data from NOWPHAS described in Sections 2 and 3. It shows a strong positive correlation with the Sea of Japan side and a weak positive correlation with the Pacific Ocean side.

Figure 4: Locations of the KAGRA (black marker) and the NOWPHAS observatories used in this study (color markers): Niigata, Naoetsu, Toyama, Wajima, Fukui, Tsuruga, and Shibayama on the Sea of Japan side, and Soma, Onahama, Kashima, Shimoda, Shimizu, and Omaesaki on the Pacific side.

Figure 5: Time series of the significant wave heights (SWH) during February 2020, provided by NOWPHAS [24]. All data for 2020 are shown in Appendix A.

Figure 6: The cumulative ratio (subtracted from 1) of the SWH at the four sites (Toyama, Wajima, Kashima, and Omaesaki) for 2020, for each season: winter (January, February, and December), spring (from March to May), summer (from June to August), and autumn (from September to November).

Figure 7: Time series of the significant wave heights (SWH) at three sites (Wajima, Kashima, and Omaesaki), during the typhoon period in October 2019.
This is reasonable because the KAGRA site is located about 70 km from the Sea of Japan and about 200 km from the Pacific Ocean at the shortest distances. The BLRMS of the ground velocity \(v\) and the SWH \(H_{1/3}\) are expected to be related by a simple equation. For example, Ferretti _et al._ introduced the relation

\[H_{1/3}(t)=\exp\Big{[}a+b\ln v(t)\Big{]}, \tag{1}\]

where \(a,b\) are constants and \(b=0.66\) is derived by fitting [25]. The goal of this section is to derive a similar equation between the microseismic motion at the KAGRA site and the SWH data from NOWPHAS.

Figure 8: Correlation coefficient of the significant wave height (SWH) between the NOWPHAS sites used in this study.

Here, we predict the microseismic motion at the KAGRA site using wave data from the 12 NOWPHAS locations other than Toyama Bay. In this study, the ground velocity BLRMS \(v\) and the SWH \(H_{1/3}\) are used as squared values, corresponding to energies. The SWH data are strongly correlated, as shown in Fig. 8. To resolve this degeneracy of the data, principal component analysis (PCA) is applied to three areas of the ocean: the Sea of Japan side (Niigata, Naoetsu, Wajima, Fukui, Tsuruga, and Shibayama), the eastern Pacific side (Soma, Onahama, and Kashima), and the southern Pacific side (Shimoda, Shimizu, and Omaesaki). Each squared SWH \(H_{1/3,j}^{2}(t)\) is standardized to zero mean and unit standard deviation,

\[x_{j}(t)=\frac{H_{1/3,j}^{2}(t)-\mu_{j}}{\sigma_{j}}, \tag{2}\]

where \(x_{j}(t)\) is the standardized wave data and \(\mu_{j},\sigma_{j}\) are the mean and the standard deviation of \(H_{1/3,j}^{2}(t)\) for the \(j\)-th NOWPHAS site. The PCA (via scikit-learn 1.0.2) is performed for the three areas (Sea of Japan side, eastern Pacific side, and southern Pacific side), labeled by the index \(i\):

\[PC_{i}(t)=\sum_{j}c_{ij}x_{j}(t), \tag{3}\]

where \(PC_{i}(t)\) is the first principal component and \(c_{ij}\) is its eigenvector for the \(i\)-th area. The PCA parameter values are summarized in Table 1.

Figure 9: The correlation coefficient between the microseismic motion at the KAGRA site and the significant wave heights (SWH) at the 13 coasts in Japan, in 2020.

These principal components are difficult to compare directly with the microseismic motion because they are dimensionless and include negative values. Therefore, the weighted average of the squared SWH,

\[\bar{H}_{i}^{2}(t)=\sum_{j}w_{ij}H_{1/3,j}^{2}(t),\quad w_{ij}=\frac{c_{ij}/\sigma_{j}}{\sum_{k}c_{ik}/\sigma_{k}}, \tag{4}\]

provides a better representation of the wave level for each ocean area. Figures 10 and 11 show the scatter plots and the correlation coefficients for the observed microseismic motion \(v^{2}(t)\) and the wave levels of the three ocean areas \(\bar{H}_{i}^{2}(t)\). There is no strong correlation among the individual representative wave levels, and the plots show that the waves on the Sea of Japan side strongly influence the ground vibration values.
\begin{table} \begin{tabular}{l|c c c c c} \hline & \(\mu\) [m\({}^{2}\)] & \(\sigma\) [m\({}^{2}\)] & \multicolumn{3}{c}{\(c_{ij}\)} \\ & & & Sea of Japan & eastern Pacific & southern Pacific \\ \hline Niigata & 1.7 & 3.1 & 0.39 & – & – \\ Naoetsu & 1.8 & 3.1 & 0.42 & – & – \\ Wajima & 2.3 & 3.5 & 0.43 & – & – \\ Fukui & 2.3 & 3.9 & 0.42 & – & – \\ Tsuruga & 1.3 & 2.5 & 0.40 & – & – \\ Shibayama & 2.2 & 3.3 & 0.39 & – & – \\ Soma & 1.2 & 1.8 & – & 0.58 & – \\ Onahama & 1.5 & 2.1 & – & 0.58 & – \\ Kashima & 2.4 & 3.3 & – & 0.57 & – \\ Shimoda & 1.0 & 1.0 & – & – & 0.58 \\ Shimizu & 0.26 & 0.32 & – & – & 0.59 \\ Omaesaki & 0.91 & 1.09 & – & – & 0.57 \\ \hline \end{tabular} \end{table} Table 1: Summary of the ocean wave data. \(\mu\) and \(\sigma\) are the mean and the standard deviation of each squared wave height, and the remaining columns give the PCA eigenvectors \(c_{ij}\) for each area: the Sea of Japan side, the eastern Pacific side, and the southern Pacific side.

Figure 11: Correlation coefficients among the observed microseismic motion at the KAGRA site and the representative wave levels for the Sea of Japan side, the eastern Pacific side, and the southern Pacific side.

Figure 10: Scatter plot matrix of the observed microseismic motion at the KAGRA site and the representative wave levels for the Sea of Japan side, the eastern Pacific side, and the southern Pacific side.

The prediction for the microseismic motion at the KAGRA site is written as

\[v_{\rm pred}^{2}(t)=\sum_{i}\alpha_{i}^{2}\bar{H}_{i}^{2\beta_{i}}(t), \tag{5}\]

where \(\alpha_{i}\) and \(\beta_{i}\) are constants to be derived by fitting to the data with the nonlinear least-squares method (via scipy.optimize.curve_fit). The results are shown in Fig. 12, where the black markers are the observed data and the red line is the prediction of the ground vibration BLRMS at the KAGRA site for one year. The values of the fitting parameters are given in Table 2. Good agreement between the prediction and the observed data was found, except for the typhoon period, which was not used in the PCA or the fitting. The values of the index \(\beta_{i}\) are close to \(b^{-1}\sim 1.5\) of Ferretti _et al._ [25].

Figure 12: Comparison of the microseismic motion at the KAGRA site between the observed data (black) and the prediction from the ocean waves (red, with the \(1\sigma\) error band). The typhoon period (gray hatched) is not used for the prediction.
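The pipeline of Eqs. (2)-(5) can be sketched as follows; this is our own illustration, not the authors' code, and the array names, shapes, and initial guesses are assumptions.

```python
# Standardize squared SWH, extract the first principal component per area,
# build the weighted wave level of Eq. (4), then fit Eq. (5).
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import curve_fit

def area_wave_level(H):                   # H: (n_times, n_sites) SWH in metres
    H2 = H**2
    mu, sigma = H2.mean(axis=0), H2.std(axis=0)
    x = (H2 - mu) / sigma                 # Eq. (2)
    c = PCA(n_components=1).fit(x).components_[0]   # Eq. (3); sign is arbitrary
    w = (c / sigma) / np.sum(c / sigma)   # Eq. (4) weights
    return H2 @ w                         # weighted average of squared SWH

def v2_model(Hbar2, a1, b1, a2, b2, a3, b3):        # Eq. (5)
    H1, H2_, H3 = Hbar2
    return a1**2 * H1**b1 + a2**2 * H2_**b2 + a3**2 * H3**b3

# Hbar2 = (area_wave_level(H_sea_of_japan), area_wave_level(H_pacific_east),
#          area_wave_level(H_pacific_south))
# params, cov = curve_fit(v2_model, Hbar2, v_obs**2, p0=[0.3, 1.3] * 3)
```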
## 5 Conclusion and prospects In this study, we investigated the properties of the microseismic motion at the KAGRA site and the significant wave heights on the coasts of Japan measured and opened by NOWPHAS for 2020. The degeneracy of the wave data was solved using a principal component analysis, and the first-principal components of the three ocean regions were extracted. Microseismic motion observed at the KAGRA site was almost predicted by the combination of these components of the ocean waves within the standard deviation of \(7\times 10^{-2}\mu\)m/s. \begin{table} \begin{tabular}{c|c c} \hline & \(\alpha_{i}\) [\(\mu\)m/s/m\({}^{\beta_{i}}\)] & \(\beta_{i}\) \\ \hline Sea of Japan & \(0.358\pm 0.001\) & \(1.314\pm 0.005\) \\ Pacific east & \(0.104\pm 0.002\) & \(1.644\pm 0.024\) \\ Sacific south & \(0.092\pm 0.002\) & \(1.687\pm 0.024\) \\ \hline \end{tabular} \end{table} Table 2: The results of the fitting parameters in Eq. (5). Figure 13: Left: Histogram of the difference between observed values and predicted values of the microseismic motion. Right: 2D histogram of the observed values and predicted values of the microseismic motion. To improve the accuracy of the prediction, it may be necessary to reflect the local variations of each coast trashed using the first-principal components in this study. Other ocean waves data, such as significant wave periods and wind direction, are also important. Machine learning is a possible way to improve this analysis. To include typhoon days, the development of special treatments is necessary, _e.g._ using the position and magnitude of the typhoon. A useful application of this study is _the microseismic forecast_. Future microseismic motion can be forecasted by inputting wave information from commercial weather forecasts into our equation. Figure 14 shows an example of a microseismic forecast using a 1-week weather forecast on the website Otenki.com provided by Bellsystem24 Inc [27]. The blue line represents the forecast with the error band of microseismic motion at the KAGRA site. The vertical black dotted line represents the current time, and the right and left-hand sides represent the future and past, respectively. The horizontal dotted lines (red and yellow) correspond to the benchmark microseismic levels discussed in Section 2. This plot is automatically updated and shown on a website (internal page) and contributes to commissioning works at the KAGRA observatory, especially for its scheduling. ## Acknowledgement This research has made use of data, software, and web tools obtained or developed by the KAGRA Collaboration, especially the administrators of the digital system, and managers of the KAGRA site. The NOWPHAS data is provided by the Port and Harbor Bureau, Ministry of Land, Infrastructure, Transport and Tourism, Japan, and Figure 14: An example of the microseismic forecast graph. The black line shows the current date, in this case, 2023-05-18 11:09 (JST). The right and left sides of it are the future and past, respectively. The horizontal dotted lines (red and yellow) correspond to the benchmark microseismic level discussed in Sec. 2, red is 0.5\(\mu\)m/s, and yellow is 0.3\(\mu\)m/s. the weather forecast data is provided by Bellsystem24 Inc. The KAGRA project is supported by MEXT, JSPS Leading-edge Research Infrastructure Program, JSPS Grant-in-Aid for Specially Promoted Research 26000005, JSPS Grant-in-Aid for Scientific Research on Innovative Areas 2905: JP17H06358, JP17H06361 and JP17H06364, JSPS Core-to-Core Program A. 
Advanced Research Networks, JSPS Grant-in-Aid for Scientific Research (S) 17H06133 and 20H05639, JSPS Grant-in-Aid for Transformative Research Areas (A) 20A203: JP20H05854, the joint research program of the Institute for Cosmic Ray Research, University of Tokyo, National Research Foundation (NRF), Computing Infrastructure Project of Global Science experimental Data hub Center (GSDC) at KISTI, Korea Astronomy and Space Science Institute (KASI), and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS), AS Grid Center (ASGC) and the National Science and Technology Council (NSTC) in Taiwan under grants including the Rising Star Program and Science Vanguard Research Program, Advanced Technology Center (ATC) of NAOJ, and Mechanical Engineering Center of KEK. In particular, this work was funded by JSPS Grant-in-Aid for JSPS Fellows 19J01299 and the Joint Research Program of the Institute for Cosmic Ray Research (ICRR), University of Tokyo 2020-G07, 2020-G12, 2020-G21, 2021-G07, 2021-G09, 2021-G10, 2022-G07, 2022-G12, 2022-G21. We would like to thank Editage (www.editage.com) for English language editing. ## Appendix A One year data of the SWH and the microseismic motion
2306.15358
Vortex solitons in moire optical lattices
We show that optical moire lattices enable the existence of vortex solitons of different types in self-focusing Kerr media. We address the properties of such states both in lattices having commensurate and incommensurate geometries (i.e., constructed with Pythagorean and non-Pythagorean twist angles, respectively), in the different regimes that occur below and above the localization-delocalization transition. We find that the threshold power required for the formation of vortex solitons strongly depends on the twist angle and, also, that the families of solitons exhibit intervals where their power is a nearly linear function of the propagation constant and they exhibit strong stability. Also, in the incommensurate phase above the localization-delocalization transition, we found stable embedded vortex solitons whose propagation constants belong to the linear spectral domain of the system.
Sergey K. Ivanov, Vladimir V. Konotop, Yaroslav V. Kartashov, Lluis Torner
2023-06-27T10:15:15Z
http://arxiv.org/abs/2306.15358v1
# Vortex solitons in moire optical lattices ###### Abstract We show that optical moire lattices enable the existence of vortex solitons of different types in self-focusing Kerr media. We address the properties of such states both in lattices having commensurate and incommensurate geometries (i.e., constructed with Pythagorean and non-Pythagorean twist angles, respectively), in the different regimes that occur below and above the localization-delocalization transition. We find that the threshold power required for the formation of vortex solitons strongly depends on the twist angle and, also, that the families of solitons exhibit intervals where their power is a nearly linear function of the propagation constant and they exhibit strong stability. Also, in the incommensurate phase above the localization-delocalization transition, we found stable embedded vortex solitons whose propagation constants belong to the linear spectral domain of the system. Optical moire lattices (MLs) have been shown to be a versatile tool for controlling and manipulating light propagation. They enable light localization [1, 2], specific reflectivity by metasurfaces [3, 4], magic-angle lasers [5], and can be used to create flat-bands [1, 2] and topologically nontrivial structures [6, 7, 8]. Both mono-layered and bi-layered MLs are formed by the overlap of two identical sublattices rotated with respect to each other. Depending on the twist angle, an ML is either a commensurate (i.e., periodic) or an incommensurate (i.e., aperiodic) structure. Importantly, in both cases, MLs inherit the rotational symmetry of the sublattices. The spectral changes inherent to the transition between commensurate and incommensurate configurations are responsible for rich physical phenomena, including the occurrence of a localization-delocalization transition (LDT) of light beams [1, 2]. In nonlinear media, MLs significantly affect the properties of existing soliton families. In particular, optical moire lattices can support thresholdless two-dimensional (2D) Kerr solitons [9], while multifrequency solitons in quadratic media acquire properties that are unusual for translationally periodic lattices [10]. In this context, the impact of incommensurability on the potential existence and properties of vortex solitons (VSs) remains unexplored to date. In this Letter, we predict that mono-layered MLs may sustain stable VSs and that the localization properties of the linear spatial eigenmodes of MLs have a strong impact on the power threshold necessary for the formation of the solitons. In the incommensurate phase, MLs enable the excitation of thresholdless VSs that remain spatially localized even in the low-amplitude limit. Furthermore, we find that stability domains for VSs occur above the LDT threshold. We consider the propagation of paraxial light beams in a nonlinear cubic medium with a transverse shallow modulation of the refractive index that has the form of a Pythagorean ML, which is described by the nonlinear Schrodinger equation for the dimensionless light field amplitude \(\psi\) \[i\frac{\partial\psi}{\partial z}=H_{0}\psi-|\psi|^{2}\psi,\quad H_{0}=-\frac{1}{2}\nabla^{2}-\mathcal{P}(\mathbf{r}). \tag{1}\] Here \(\mathbf{r}=(x,y)\) and \(\nabla=(\partial_{x},\partial_{y})\). The transverse coordinates \(\mathbf{r}\) and the propagation distance \(z\) are normalized to the characteristic transverse scale \(w\) and the diffraction length \(\kappa w^{2}\), respectively, where \(\kappa\) is the wavenumber. 
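Equation (1) lends itself to integration with the standard split-step Fourier method. The following is a minimal sketch, not the authors' code: the grid size, sublattice depths \(p_{1,2}\), and step size are illustrative choices, while the sublattice period \(a=2.5\), the waveguide width \(\sigma=0.5\), and the Pythagorean twist construction follow the description given in the next paragraph.

```python
import numpy as np

N, L = 512, 40.0                        # grid points and box size (illustrative)
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
a, sigma, p1, p2 = 2.5, 0.5, 0.5, 0.5   # a, sigma from the text; p1, p2 illustrative

def sublattice(X, Y, n_sites=8):
    """Square array of Gaussian waveguides, V(r) = sum_{m,n} Q(x-na+a/2, y-ma+a/2)."""
    V = np.zeros_like(X)
    for m in range(-n_sites, n_sites + 1):
        for n in range(-n_sites, n_sites + 1):
            V += np.exp(-((X - n*a + a/2)**2 + (Y - m*a + a/2)**2) / sigma**2)
    return V

# Pythagorean twist from (m, n) = (2, 1): triple (3, 4, 5), theta = arctan(3/4),
# giving a commensurate (periodic) moire lattice.
theta = np.arctan(3/4)
Xr = X*np.cos(theta) + Y*np.sin(theta)   # rotated coordinates R(theta)r
Yr = -X*np.sin(theta) + Y*np.cos(theta)
P = p1*sublattice(Xr, Yr) + p2*sublattice(X, Y)

# One step of Eq. (1), first-order splitting: full linear step in Fourier
# space, then the potential + Kerr nonlinearity applied in real space.
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k)

def step(psi, dz):
    psi = np.fft.ifft2(np.exp(-0.5j*dz*(KX**2 + KY**2)) * np.fft.fft2(psi))
    return psi * np.exp(1j*dz*(P + np.abs(psi)**2))
```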
The function \(\mathcal{P}(\mathbf{r})=p_{1}V(R(\theta)\mathbf{r})+p_{2}V(\mathbf{r})\) describes MLs created by the superposition of two square sublattices \(V(\mathbf{r})=\sum_{m,n}Q(x-na+a/2,y-ma+a/2)\) with \(Q(x,y)=\exp\left[-\left(x^{2}+y^{2}\right)/\sigma^{2}\right]\) and different waveguide depths \(p_{1}\) and \(p_{2}\) corresponding to the refractive index contrast \(\delta n\) via \(p=\kappa^{2}w^{2}\delta n/n_{0}\), where \(n_{0}\) is the unperturbed refractive index, and \(R(\theta)\) is the operator of 2D rotation by the angle \(\theta\). In the numerical calculations reported below, each sublattice is set to have the period \(a=2.5\) and to be composed of identical waveguides of width \(\sigma=0.5\) [Figs. 1(a) and (b)]. To each Pythagorean angle, defined by \(\theta=\arctan[(m^{2}-n^{2})/2mn]\), with \(m>n>0\), and \(m,n\in\mathbb{N}\), one can associate the Pythagorean triple \((m^{2}-n^{2},2mn,m^{2}+n^{2})\). When the twist angle is a Pythagorean one, the ML is periodic (commensurate) [Fig. 1(c)]. Otherwise, the array is aperiodic (incommensurate) [Fig. 1(d)]. Such MLs can be induced in suitable photosensitive materials [2, 9] or inscribed with fs laser pulses in transparent dielectrics [11]. The properties of the linear modes supported by the MLs may change qualitatively not only due to the change of the twist angle but also when the depths of the sublattices vary [1, 2]. Therefore, we first calculate the eigenmodes of the linear Hamiltonian \(H_{0}\). Such linear modes can be written in the form \(\psi(\mathbf{r},z)=u(\mathbf{r})e^{\mathrm{i}b_{\mathrm{lin}}z}\), where \(b_{\mathrm{lin}}\) is the eigenvalue, i.e., the propagation constant of the mode and the function \(u(\mathbf{r})\) describes the transverse mode
2304.03343
Spintronic Physical Reservoir for Autonomous Prediction and Long-Term Household Energy Load Forecasting
In this study, we have shown autonomous long-term prediction with a spintronic physical reservoir. Due to the short-term memory property of the magnetization dynamics, non-linearity arises in the reservoir states which could be used for long-term prediction tasks using simple linear regression for online training. During the prediction stage, the output is directly fed to the input of the reservoir for autonomous prediction. We employ our proposed reservoir for the modeling of chaotic time series such as Mackey-Glass and dynamic time-series data, such as household building energy loads. Since only the last layer of a RC needs to be trained with linear regression, it is well suited for learning in real time on edge devices. Here we show that a skyrmion based magnetic tunnel junction can potentially be used as a prototypical RC, but any nanomagnetic magnetic tunnel junction with nonlinear magnetization behavior can implement such a RC. By comparing our spintronic physical RC approach with energy load forecasting algorithms, such as LSTMs and RNNs, we conclude that the proposed framework presents good performance in achieving high prediction accuracy, while also requiring low memory and energy, both of which are at a premium in hardware-resource- and power-constrained edge applications. Further, the proposed approach is shown to require very small training datasets while being at least 16X more energy efficient compared to the sequence to sequence LSTM for accurate household load predictions.
Walid Al Misba, Harindra S. Mavikumbure, Md Mahadi Rajib, Daniel L. Marino, Victor Cobilean, Milos Manic, Jayasimha Atulasimha
2023-04-06T19:42:09Z
http://arxiv.org/abs/2304.03343v2
Spintronic Physical Reservoir for Autonomous Prediction and Long-Term Household Energy Load Forecasting ###### Abstract **ABSTRACT:** In this study, we have shown autonomous long-term prediction with a spintronic physical reservoir. Due to the short-term memory property of the magnetization dynamics, non-linearity arises in the reservoir states which could be used for long-term prediction tasks using simple linear regression for online training. During the prediction stage, the output is directly fed to the input of the reservoir for autonomous prediction. We employ our proposed reservoir for the modeling of chaotic time series such as Mackey-Glass and dynamic time-series data, such as household building energy loads. Since only the last layer of a RC needs to be trained with linear regression, it is well suited for learning in real time on edge devices. Here we show that a skyrmion based magnetic tunnel junction can potentially be used as a prototypical RC, but any nanomagnetic magnetic tunnel junction with nonlinear magnetization behavior can implement such a RC. By comparing our spintronic physical RC approach with state-of-the-art energy load forecasting algorithms, such as LSTMs and RNNs, we conclude that the proposed framework presents good performance in achieving high prediction accuracy, while also requiring low memory and energy, both of which are at a premium in hardware-resource- and power-constrained edge applications. Further, the proposed approach is shown to require very small training datasets while being at least 16X more energy efficient compared to the state-of-the-art sequence to sequence LSTM for accurate household load predictions. ## I Introduction Recurrent neural networks (RNNs) [1, 2] have been shown to be more suitable for temporal data processing tasks than traditional feedforward neural networks (FNNs) because of the recurrent connections among their constituent neurons. However, RNNs often suffer from the vanishing and exploding gradients problem due to the long-term dependencies that can arise in the recurrent layers. To circumvent these issues, variants of the RNN have been proposed, i.e., long short-term memory (LSTM) [3] and reservoir computing (RC) [4, 5]. In RC, the reservoir consists of an RNN which maps temporal inputs to higher-dimensional features, due to the short-term memory property that exists in the reservoir, and a read-out layer that analyzes the features stored as reservoir states. The RNN connections are fixed and only the read-out layer is trained [4]. Thus, the training can be performed with simple learning rules such as linear regression, which makes RC much simpler to implement, with low training cost. Recently, software-based RC systems have been shown to achieve state-of-the-art performance in speech recognition tasks [6] and superior performance in forecasting tasks, such as prediction of financial systems [7], water inflow [8], and chaotic system prediction [9]. Since the essence of RC is to employ non-linearity to transform the input into a high-dimensional space, any physical dynamic non-linear system can work as a reservoir. For a typical RNN implemented on hardware, the required training is performed for all layers of the neural network [10]. This can be implemented on neuromorphic chips [11]. However, in RC the inference is performed using physical phenomena, and only linear regression is used to train the weights between select physical reservoir states and the output. This makes information processing much faster and incurs a low training cost. 
These features make physical systems the preferred candidate for hardware implementations of RC. Towards this end, various physical systems have been proposed as reservoirs, such as optical systems [12-13], mechanical systems [14], analogue circuits [15], memristors [16-18], and spintronics [19-22]. Spintronic reservoirs have great potential due to their high speed of operation, compatibility with CMOS technology, high endurance, and energy efficiency [23-24]. Confined spin textures such as magnetic skyrmions [21,25-26] and patterned nanomagnets [22,27] have been shown to perform RC and can be operated with ultra-low energy using voltage-control mechanisms [28-33]. The performance of RC greatly depends on the dimensionality of the reservoir space; however, increasing the number of reservoir nodes can pose experimental challenges. Interestingly, a single physical nonlinear node subjected to delayed feedback can act as a chain of virtual nodes [34] without much performance degradation compared to a conventional reservoir. Various hardware implementations adopt this concept for RC [17,19]. Although physical RCs have been shown to implement prediction tasks, most of the works attempt to perform one-step-ahead prediction. Long-term prediction is important for real-world data, as knowledge of the future evolution can facilitate more informed decision making and customized policy making. However, multi-step prediction itself is challenging due to the non-linear nature of most real-world data; typically, inaccurate predictions of the immediate future accumulate very quickly and cause the future predictions to diverge over longer times. The authors of [35] showed multi-step prediction for high-spatial-dimension data using a spatiotemporal transformation and an encoder-decoder-like reservoir. However, this approach can fall short for univariate data such as individual household power consumption prediction. In household load forecasting tasks, the real-time power value is usually readily available from a wattmeter; however, the voltage, current, and other parameters are unknown. In [36], autonomous multi-step prediction has been shown for the chaotic Mackey-Glass (MG) series by feeding the delayed output to the reservoir. However, the reservoir response is transformed using a non-linear function, which can be costly and also diminishes the benefits obtained from linear reservoir operation. Recently, delayed inputs [37] and polynomial transformations of the delayed inputs [38] have been used to improve the prediction performance of the reservoir; however, these works only address optimized one-step-ahead prediction. In this study, we have shown multi-step autonomous prediction using a spintronic, magnetic-skyrmion-based RC system. For autonomous prediction, the predicted output is fed directly to the input. To adequately extract the non-linearity that arises in the reservoir dynamics, we include several previous states of the reservoir during the training. This obviates the need to perform any non-linear (i.e., trigonometric or polynomial) transformation of the reservoir states. For long-term prediction, our skyrmion-based RC employs the virtual node concept, as shown in Fig. 1. First, we show long-term prediction for the chaotic MG time series, since it has been frequently used for benchmarking forecasting tasks. Moreover, the long-term prediction can diverge more quickly due to the sensitivity of chaotic systems to error. Next, we show individual household power demand forecasting, which is an active research area. 
According to a recent study, the amount of energy wasted in a commercial building can reach up to 40% if energy consumption is not properly managed [39]. With an energy management system (EMS), a commercial building can save up to 25.6% of its total energy consumption [40]. An EMS in a building requires accurate load forecasting to maintain stability, improve performance, and detect abnormal system behavior. However, accurate long-term forecasting, especially for a single household building, is very challenging due to the volatile and univariate nature of the household power consumption data. Traditional statistical approaches, such as the autoregressive integrated moving average (ARIMA) [41] and time-series statistical models [42], suffer from low prediction accuracy due to the parameters assumed in the model and the complexity of the systems. Machine learning based models such as RNNs [43] and LSTMs [44] offer more flexibility in this regard, as they do not depend on the parameters of the system. Instead, they are driven by the observed past and present data, albeit with significant training cost. RC can perform the prediction tasks with much more efficiency due to its low training cost and is thus well suited to edge computing platforms, which are equipped with low-power devices. We use three decoupled and patterned skyrmion devices as our reservoir, where the temporal correlation of the inputs is captured by the inherent short-term memory of the breathing skyrmions. Upon excitation, the skyrmions undergo oscillations, and the skyrmion states are read at a regular interval and processed with linear regression for the prediction. Autonomous prediction up to 30 time steps for the MG time series and up to 23 hours (equivalent to 23 time steps, as only hourly demand is typically required) for the household power prediction has been demonstrated by using the predicted output as the input for the next time-step prediction. The rest of the paper is organized as follows. In Section II, we detail the architecture of the skyrmion reservoir and the corresponding magnetization dynamics; in Section III, we describe the process of setting up, training, and testing the reservoir; in Section IV, we discuss the results; and we summarize our findings in the conclusion (Section V). ## II Methodology ### Proposed Physical Reservoir Fig. 1a shows a conventional reservoir computing system, which consists of an input layer, a reservoir block having recurrent connections among the constituent nodes, and an output layer. The solid arrows show the connections that are fixed, and the dashed arrows are the connections that need to be trained. We propose to replace the reservoir block with three patterned and decoupled skyrmions whose magnetization dynamics we simulate. Each of the individual skyrmions is hosted in a ferromagnetic thin film with perpendicular magnetic anisotropy (PMA), as shown in Fig. 1b. A ferromagnetic reference layer, a tunnel barrier (MgO), and a synthetic antiferromagnetic (SAF) layer are patterned on top of the ferromagnetic free layer (that hosts the skyrmion) to create the magnetic tunnel junction (MTJ), as shown in Fig. 1c. This facilitates the read and write operations. The temporal inputs are linearly mapped into voltage pulses and applied across the MTJs to modulate the PMA using the voltage-controlled magnetic anisotropy (VCMA) effect [45, 46, 47]. All the patterned skyrmions are subjected to the same set of inputs. 
When the PMA is modulated within a certain range, the skyrmions generate an oscillatory response (skyrmion breathing), as shown in Fig. 2. The responses of the skyrmions are read with the MTJs and processed with linear regression to compute the predicted values of the temporal time series. For a typical reservoir consisting of \(N\) nodes, as seen in Fig. 1a, the time-discretized states of the nodes, \(r_{i}^{n}\), can be represented as follows: \[r_{i}^{n+1}=f(\sum_{j=0}^{N-1}p_{ij}r_{j}^{n}+q_{i}u^{n}) \tag{1}\] Here, the \(p_{ij}\), \(q_{i}\) are time-independent coefficients that are drawn from a random distribution having a mean of 0, and the standard deviations are adjusted for optimal performance. Also, \(f\) is the activation function, which can be linear or non-linear, and \(u^{n}\) is the input. Here, the "fading memory" or short-term memory (an essential property of the reservoir) is achieved by using a large number of nodes and their recurrent connections. In comparison, the skyrmion systems have an inherent memory effect in their responses; thus, instead of using several nodes, only one skyrmion device can work as a reservoir. However, to increase the dimensionality, the states of this reservoir can be read at regular intervals for a particular input; these readings act as the virtual nodes of the reservoir. The concept was originally developed for reservoirs with delayed feedback, where a nonlinear node subjected to an input and delayed feedback acts as a chain of virtual nodes and provides performance similar to that of a typical reservoir [34]. Later, it was shown that virtual nodes derived from reservoir responses subjected to only the input signal can provide optimal performance [17, 19]. In such a scenario, if the virtual node interval (as shown by \(\theta\) in Fig. 1d) is shorter than the characteristic time (relaxation time) of the reservoir, the node states are influenced not only by their own previous states, but also by the neighboring node states and the input excitation. This allows for non-linear coupling among the nodes. Thus, the interconnection matrix in Eq. 1 is simplified in the skyrmion reservoir case, where the virtual nodes are assumed to be connected in a ring topology (as seen in Fig. 1d). The resulting node states of the reservoir can be expressed as: \[\begin{split} r_{0}^{n+1}&=f(r_{0}^{n}+r_{N-1}^{n-1}+u^{n})\\ r_{i}^{n+1}&=f(r_{i}^{n}+r_{i-1}^{n}+u^{n})\end{split} \tag{2}\] Here, we have used a linear activation function, \(f(w)=w\); thus, the read-out reservoir states are used for training without any post-processing or non-linear transformation. The node states of the reservoir act as features, which are used to generate the output. Since we are using only a linear activation, we also include several previous responses of the reservoir for generating the output. Due to the short-term memory effect inherent to the skyrmion dynamics, these previous states provide additional non-linear effects. The output of the reservoir can be expressed as follows: \[y^{n}=\sum_{j=n-d}^{n}\sum_{i=0}^{N-1}w_{ij}r_{i}^{j} \tag{3}\] where \(d\) represents the number of time steps for which the previous reservoir responses are included. The optimal weights \(w_{ij}\) can be obtained by optimizing a cost function. We use the mean squared error as our cost function: \[c=\langle(y^{n}-t^{n})^{2}\rangle \tag{4}\] where \(t^{n}\) represents the teacher or target output of the system at the nth time step. 
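To make Eqs. (2) and (3) concrete, the toy sketch below mimics the ring-coupled virtual nodes in plain Python. It is a stand-in for the simulated skyrmion responses, not a model of the magnetization dynamics; the leak factor is an added stabilizing assumption that is not present in the equations above.

```python
import numpy as np

def reservoir_states(u, N=6, leak=0.9):
    """Virtual-node states per Eq. (2): ring topology, linear f(w) = w.
    The leak factor (an assumption, not in Eq. (2)) keeps the purely
    linear update from diverging for long input sequences."""
    T = len(u)
    r = np.zeros((T + 1, N))
    for n in range(T):
        # node 0 couples to the last node of the previous cycle
        r[n + 1, 0] = leak * (r[n, 0] + r[n - 1, N - 1]) + u[n]
        for i in range(1, N):
            r[n + 1, i] = leak * (r[n, i] + r[n, i - 1]) + u[n]
    return r[1:]                      # row n holds the state after input u[n]

def features(r, d=30):
    """Concatenate the current and d previous node states, as in Eq. (3)."""
    T, N = r.shape
    return np.stack([r[n - d:n + 1].ravel() for n in range(d, T)])
```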
The optimization can be performed off-line using linear regression with regularization (ridge regression) or on-line using a gradient descent optimizer. During the training or weight optimization stage, the teacher input \(\mathit{I}^{n}=u^{n}\) is applied to the reservoir to predict the next time-step value \(\mathit{y}^{n}=u^{n+1}\), as seen in Fig. 1d. Once the optimized weights are obtained, the testing phase begins, where the inputs are disconnected and the output of the reservoir is connected directly to the input, \(\mathit{I}^{n}=\mathit{y}^{n-1}\), as can be seen in Fig. 1e. ### Magnetization dynamics The magnetization dynamics of the skyrmions induced by PMA modulation in the thin film is simulated by solving the Landau-Lifshitz-Gilbert (LLG) equation using the MUMAX3 simulation package [48]: \[(1+\alpha^{2})\frac{d\vec{m}}{dt}=-\gamma\vec{m}\times\vec{H}_{eff}-\alpha \gamma\left(\vec{m}\times\left(\vec{m}\times\vec{H}_{eff}\right)\right) \tag{5}\] where \(\gamma\) and \(\alpha\) represent the gyromagnetic ratio and the Gilbert damping coefficient, respectively. \(\vec{m}\) stands for the normalized magnetization vector, which is found by normalizing the magnetization vector (\(\vec{M}\)) with respect to the saturation magnetization (\(\mathrm{M_{s}}\)). The thin films are discretized into cells with dimensions of 2nm\(\times\)2nm\(\times\)1nm, which are much shorter than the exchange length (\(\sqrt{\frac{2A_{ex}}{\mu_{0}M_{S}^{2}}}\)). In equation (5), the effective field \(\vec{H}_{eff}\) accounts for the contributions from the perpendicular magnetic anisotropy (PMA), demagnetization, and the exchange fields arising from the Heisenberg and Dzyaloshinskii-Moriya interactions (DMI); \(\vec{H}_{eff}\) can be expressed as follows: \[\vec{H}_{eff}=\vec{H}_{anis}+\vec{H}_{demag}+\vec{H}_{exch,Heis}+\vec{H}_{exch,DMI} \tag{6}\] \(\vec{H}_{exch,DMI}\) is the effective field due to the Dzyaloshinskii-Moriya interaction, which is defined as: \[\vec{H}_{exch,DMI}=\frac{2D}{\mu_{0}M_{s}}\left(\frac{\partial m_{z}}{\partial x},\frac{\partial m_{z}}{\partial y},-\frac{\partial m_{x}}{\partial x}-\frac{\partial m_{y}}{\partial y}\right) \tag{7}\] where D is the DMI constant and \(m_{x}\), \(m_{y}\), and \(m_{z}\) are the components of the unit magnetization vector \(\vec{m}\) along the x, y, and z directions, respectively. Figure 1: **a. A conventional reservoir computing system with an input layer, a reservoir block with recurrent connections among nodes, and the output layer. b. The reservoir block is replaced by a set of patterned skyrmion devices, where each of the ferromagnetic films with PMA hosts a single skyrmion. c. Stacks of a skyrmion device with a metallic electrode and MTJ. d. Training of a skyrmion reservoir for the autonomous prediction task. The temporal input data is mapped into voltage values which are applied to each of the skyrmion devices and the responses are collected. The responses are read at regular intervals and the read-out values act as the virtual nodes, represented by \(r_{i}^{j}\). The states of the nodes (or reservoir responses) are used to predict the next time-step value of the input time series. The weights are trained by computing the error between the predicted and target values, accomplished with a simple pseudoinverse operation. e. During testing, the predicted output value is directly fed as input to the reservoir in order to perform multi-step autonomous prediction.** 
\(\vec{H}_{anis}\) is the effective field due to the perpendicular anisotropy and is expressed by the following equation: \[\vec{H}_{anis}=\frac{2K_{u}}{\mu_{0}M_{s}}\left(\vec{u}\cdot\vec{m}\right)\vec{u} \tag{8}\] where \(K_{u}\) is the first-order uniaxial anisotropy constant and \(\vec{u}\) is a unit vector along the anisotropy direction. Applying a voltage across the MTJ changes the PMA, which is modeled by changing the \(K_{u}\) value. We note that PMA (or the \(K_{u}\) coefficient) can be increased (decreased) by applying a negative (positive) voltage [28, 49]. Applying a positive (negative) voltage decreases (increases) the energy barrier that exists between the perpendicular and in-plane magnetized states of the free layer. When the barrier is perturbed, the skyrmion undergoes an oscillatory response, which eventually settles down after some time (a few hundred nanoseconds) if the perturbation is kept small (a large enough perturbation can switch the state, which is not desired). The simulations have been carried out without thermal noise (T=0 K); however, our previous study showed that the short-term memory property of the skyrmion reservoir does not degrade much in the presence of room-temperature thermal noise [21]. The simulation parameters are listed in Table I. TABLE I SIMULATION PARAMETERS \begin{tabular}{l l} \hline \hline Parameters & Values \\ DMI constant (D) & \(0.0006\,Jm^{-2}\) \\ Gilbert damping (\(\alpha\)) & \(0.015\) \\ Saturation magnetization (\(M_{s}\)) & \(10^{6}\,Am^{-1}\) \\ Exchange constant (\(A_{ex}\)) & \(2\times 10^{-11}Jm^{-1}\) \\ Saturation magnetostriction (\(\lambda_{s}\)) & \(250\,ppm\) \\ Perpendicular Magnetic Anisotropy (\(K_{u}\)) & \(7.5\times 10^{5}\,Jm^{-3}\) \\ \hline \end{tabular} ### Dataset We evaluated the long-term prediction performance of our proposed reservoir on two different time-series forecasting datasets. The temporal inputs are linearly mapped into voltage pulse amplitudes through the VCMA coefficient, \(\varepsilon=\frac{\Delta K_{sl}}{\Delta V/t_{MgO}}\) (described later in Section IV), in the skyrmion devices. For modeling the response, we use voltage pulses which are applied sequentially with a 2 ns duration. After applying each input pulse, the system is relaxed for 16 ns. We note that instead of applying the pulse for 18 ns, we opt for a shorter write pulse, which not only saves energy but also triggers the rich dynamics that occur during the relaxation phase of the skyrmion device. Moreover, the relaxation offers flexibility in terms of the post-processing time required for multi-step autonomous prediction (the current prediction is provided as input for the next prediction). Fig. 2 shows the magnetization responses of the different reservoirs during the training phase of the MG time series at time steps 231-235. The reservoir responses for a single period (18 ns) are read at 3 ns intervals (6 times). These 6 values act as the virtual nodes of the reservoir (see the red diamond marks in Fig. 2). More than 6 nodes can be selected; however, this does not improve the performance, as 6 nodes can adequately capture the amount of information in one period. Next, instead of using only the states in the current period (as was done in our previous studies for one-step-ahead prediction [21,22]), we also include reservoir states from the previous 30 periods of data (31 periods in total, 31*6=186 states for one skyrmion device) for the autonomous long-term prediction of the MG series. The short-term memory capacity of the patterned skyrmion is shown to be \(\sim\) 4 bits [21] (up to 4 periods). 
Thus, the reservoir is expected to remember inputs from the previous 4 periods and non-linearly transform its states based on the memorized inputs. However, when tasked with autonomous prediction using only the current states, the prediction quickly diverges (large prediction errors after the third time step). Thus, for multi-step prediction, previous responses are included to enable the RC to utilize important context from the past few observations. Figure 2: **a.** Three ferromagnetic thin films, each hosting a magnetic skyrmion, worked as the RC. The responses of the respective skyrmion devices are shown side by side when the thin films are perturbed by the inputs of the MG time series from time step 231 to 235 (inputs are mapped into voltage pulse amplitudes). The PMA modulation by the input voltage pulses is shown in orange. The virtual nodes are marked with red diamonds. Once the reservoir states are obtained from all three skyrmion devices, ridge regression (Tikhonov regularization) is performed for training and computing the optimal weights. The mean squared error is considered as the cost function (shown in Eq. 4), and the activation functions for the reservoir nodes are considered to be linear. With these assumptions, the optimal weights can be found using the following: \[w_{ij}^{opt}=(AA^{T}+\lambda I)^{-1}A^{T}B \tag{10}\] where A is the reservoir response matrix including the present and past observations for all the training inputs, \(\lambda\) is the regularization coefficient, and B is the vector of labels containing the target outputs. For MG series forecasting, we choose the regularization coefficient to be \(\lambda=10^{-8}\). For the household power prediction task, slightly modified geometries are used. Ferromagnetic thin films with side lengths of 1050 nm, 850 nm, and 750 nm are used. The larger side lengths of the ferromagnetic regions provide stability to the skyrmions against the stochastic changes present in the household load data. The 15 nm etched region and the 500 nm middle regions for hosting the skyrmions remain the same. The voltage pulse duration of 2 ns and the relaxation period of 16 ns are kept the same. Here, the reservoir states for a total of 21 periods (the current period and the previous 20 periods) are used for the forecasting task. The optimal weights are computed using ridge regression, where the regularization parameter is chosen to be \(\lambda=10^{-1}\). Due to the volatile and stochastic nature of the household load data, it is difficult to train the reservoir accurately without overfitting; thus, a large regularization coefficient is required. ### Training and Testing the Reservoir At first, for each input, the reservoir states are read at 3 ns intervals up to 18 ns. The state vector can be expressed as \(R_{n}=\{r_{0}^{n},r_{1}^{n},..,r_{5}^{n}\}\), where the superscript n in r represents the nth input of the temporal series and the subscript represents the virtual node number. For the MG series prediction task, all 400 training inputs are applied sequentially to all of the skyrmion devices and the corresponding \(R_{n}\) are collected. The label \(t^{n}\) for the prediction task is the next time-step MG function value, \(t^{n}=u^{n+1}\), where \(u^{n}\) is the MG function value at the nth time step. 
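Concretely, the readout training of Eq. (10) and the closed-loop testing of Fig. 1e can be sketched as follows, reusing the hypothetical `reservoir_states` and `features` helpers from the earlier sketch (re-running the toy reservoir from scratch at every step is wasteful but keeps the code short):

```python
import numpy as np

def train_readout(F, targets, lam=1e-8):
    """Ridge-regression readout of Eq. (10), written in the equivalent
    (A^T A + lam I)^{-1} A^T B form for a (samples x features) matrix A = F."""
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]),
                           F.T @ targets)

def autonomous_predict(u_warmup, W, n_steps, N=6, d=30):
    """Warm up on known inputs, then feed each prediction back as the
    next input (Fig. 1e). u_warmup must be longer than d."""
    u = list(u_warmup)
    preds = []
    for _ in range(n_steps):
        r = reservoir_states(np.array(u), N=N)
        y = features(r, d=d)[-1] @ W       # readout of Eq. (3)
        preds.append(y)
        u.append(y)                        # output becomes the next input
    return np.array(preds)

# Training pairs: features at step n predict u[n+1] (the teacher label t^n),
# so the last feature row has no label and the first d inputs only warm up:
# r = reservoir_states(u_train); F = features(r, d=30)
# W = train_readout(F[:-1], u_train[31:])
```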
The training could be performed using the reservoir response vector, A\(=[(R_{1},...,R_{d},R_{d+1})^{\prime},...,(R_{n-d-1},...,R_{n-2},R_{n-1})^{\prime},(R_{n-d},..,R_{n-1},R_{n})^{\prime}]\), and the corresponding target output vector, B\(=[t^{1},...,t^{n-1},t^{n}]\), and using the ridge regression equation in Eq. 10. The \({}^{\prime}\) symbol denotes the transpose operation. Once the optimal weights are obtained after training, the reservoir is ready for testing and performing autonomous prediction. In an actual hardware implementation, the weight optimization (such as the pseudoinverse) operation can take time. By the time the optimal weights are computed, the reservoir may have sufficiently relaxed and lost its memory. Thus, before starting the testing phase, the reservoir needs to warm up. The same temporal series used in training can be used for warming up or initializing the reservoir. Depending on the postprocessing optimization time, instead of using all the training data for weight optimization, some of the training data can be saved for reservoir initialization. In the MG series prediction task, during the warm-up, the inputs are applied sequentially up to the 401st input. Once the reservoir responses are collected, we predict the 402nd time-step value of the series, and then this predicted output is fed directly to the reservoir as input, as shown in Fig. 1e. We repeatedly performed these steps to autonomously predict the time series up to the 431st time step. For autonomously predicting household power consumption, the reservoir is trained and initialized by providing input up to the 261st hour of data. Then, the reservoir responses are collected and, using the optimal weights, the 262nd hour of data is predicted. The predicted output is then directly fed as the input. Autonomous prediction up to the 284th hour is performed following these steps. ### Sequence-to-sequence LSTM Architecture The performance of the reservoir is also compared with a state-of-the-art sequence-to-sequence (S2S) LSTM. S2S is an architecture that was proposed to map sequences of different lengths [51]. The architecture consists of two LSTM networks: an encoder and a decoder. The encoder's job is to transform variable-length input sequences into fixed-length vectors, which serve as the input state for the decoder. Afterward, the decoder produces an output sequence of length n; in this case, the output sequence is the energy load projection for the following n steps. This architecture's key benefit is that it accepts inputs of any length. In other words, the load for an arbitrary number of future time steps can be predicted using any number of available load measurements from previous time steps as inputs. The task is to predict the electricity load (active power) for one or multiple time steps in the future, given historical electricity load data; i.e., having \(M\) load measurements available, which can be expressed as \[y\;=\;\{y\;[0],y\;[1],\ldots,y[M-1]\} \tag{11}\] where \(y[t]\) is the actual load measurement for time step t, the load for the following \(T-M\) time steps should be predicted. 
The predicted load values can be expressed as \[\hat{y}\;=\;\{\hat{y}\;[M],\hat{y}[M+1],\ldots\hat{y}[T]\} \tag{12}\] For training, the encoder network is pre-trained to minimize the following error: \[\text{LE}=\sum_{i=1}^{M}(y[i]-\hat{y}[i])^{2} \tag{13}\] Then the encoder is plugged into the decoder network, and we train the two networks to minimize the objective function: \[\text{LD}=\sum_{i=M+1}^{T}(y[i]-\hat{y}[i])^{2} \tag{14}\] The error of the network is minimized using the backpropagation algorithm. Back-propagation signals are allowed to flow from the decoder to the encoder. Therefore, the weights of both the encoder and decoder are updated in order to minimize the objective function expressed in Eq. 14. Both the decoder and encoder are updated because pre-training of the encoder alone is insufficient to achieve good performance. In this paper, we tested multiple layers with different numbers of neuron units per layer, and we found that a 2-layer network with 50 units in each layer gave the best performance on the training dataset. Increasing the capacity of the network did not improve performance on the testing data. ## IV Results and Discussions ### Autonomous Prediction With Reservoir Long-term prediction results of the proposed reservoir for the MG time series are shown in Fig. 3(a). After training the reservoir on up to 400 time steps of data, the input is disconnected and the predicted output is connected to the reservoir input. The reservoir is able to predict the next 30 time steps of output with very good accuracy, with a root mean squared error (RMSE) of 0.0015. As the errors after 30 time steps of prediction are extremely small, further prediction is possible. However, we restrict our effort due to the simulation complexity and limited hardware resources. Figs. 3c and 3d show the phase plots of the training and testing data of the chaotic MG attractor. The overlapping plots in Figs. 3c and 3d show that the reservoir was able to accurately predict both the training and test data. When the same task is given to the LSTM, as shown in Fig. 3b, it performs well, and the resulting RMSE was 0.0000013. This performance improvement can be due to the dependencies that arise among the LSTM cell states in the 2-layer deep architecture because of the use of forget gates (which control how many previous states to remember). In addition, non-linear activation functions are used for the input, output, and forget gates, which provide the non-linear transformation effect. Despite using a much simpler architecture and linear activations, the reservoir is able to predict the chaotic trend with competitive accuracy. In our reservoir, during testing, the output is directly fed as the input, and this connection is not scaled or optimized (as has been done in [36]). Moreover, we did not use any non-linear transformation of the reservoir states; rather, we use the states as they are and include several previous states. Thus, the success of the reservoir for long-term prediction can be attributed to the use of previous reservoir states during the training and testing, which provides the necessary non-linearity. Interestingly, here the non-linearity arises from the physical reservoir's dynamic responses rather than using 
any external non-linear transformation (hyperbolic tangent, sine, or polynomial activation). Furthermore, we did not use any temporal mask to encode an input for generating diversified features; instead, we used skyrmion devices with different geometries for variability. The only parameter that is scanned and optimized during training is the number of previous reservoir states. For MG prediction, we used reservoir states for 30 previous inputs, which is shown to provide adequate non-linearity. For long-term prediction with output feedback, it is extremely important to predict the immediate future steps as accurately as possible, as any error in the near future can accumulate quickly and make the prediction diverge. Previously, a reservoir with delayed input was shown to improve the one-step-ahead prediction of the reservoir [34], which demonstrates the importance of the non-linear effect coming from the delayed input to the prediction performance. In our proposed reservoir, the accuracy of the long-term prediction is maintained due to the inclusion of previous states (a similar effect to that of the delayed input). Figure 3: **a.** Long-term autonomous prediction of the chaotic MG time series with the skyrmion reservoir. The dataset is trained with time-step 31-400 data. The reservoir is tasked to predict the next 30 time steps of data, from 402 to 431. The overlapping of the predicted test data with the actual labels suggests accurate prediction. **b.** Prediction trend for the MG time series with the 2-layer deep sequence-to-sequence LSTM architecture. The LSTM is able to accurately predict the trend. Although the RMSE magnitude of the LSTM is lower than that of the reservoir, the prediction errors for both predictions remain extremely small. **c.** Phase diagram of the chaotic MG attractor during training with the reservoir. The predicted training data overlap with the actual labels, implying the efficacy of the ridge regression training. **d.** Phase diagram of the reservoir for autonomous prediction. The superimposed plots suggest good prediction accuracy of the reservoir on the test data. After the successful performance of the proposed reservoir on the long-term chaotic time series prediction task, we focus on forecasting the long-term individual household power consumption. The task is challenging due to the volatile and univariate nature of the data (only the power value is readily available; other parameters are unknown), especially when the dataset is small. However, we find that the proposed skyrmion reservoir can achieve good accuracy when we use several previous reservoir responses. The total number of previous states included in the training is optimized, and reservoir states for 20 previous inputs are used. The autonomous long-term prediction results are presented in Fig. 4a and 4b for the proposed reservoir and the 2-layer deep S2S LSTM architecture, respectively. From Fig. 4a, it is clear that the reservoir is able to predict the household power demand with good accuracy. The RMSE after 23 hours of prediction is calculated to be 0.0885. The prediction accuracy of the reservoir is good in the first several hours of the prediction (see the hourly RMSE plot in Fig. 5 labeled as reservoir: 262-284). In contrast, the accuracy of the LSTM is poor at the beginning of the prediction; however, it regains accuracy in the next few predictions, and the overall RMSE is calculated to be 0.0831. Figure 4: **a.** Long-term autonomous forecasting of individual household active power load with the proposed reservoir. The reservoir is trained with 21-261 hours of data and tasked with predicting the next 23 hours of data. The predicted trend closely follows the actual load level, suggesting good prediction accuracy with the skyrmion reservoir. **b.** The same task is performed with a 2-layer sequence-to-sequence LSTM architecture. Although the LSTM is able to capture the trend, the prediction accuracy is lower than that of the proposed reservoir for the first several hours of prediction. The accuracy degradation of the LSTM for the first few predictions could be due to a lack of training data, as LSTMs typically require large numbers of observations to find the underlying dependencies in the data. Further, the reservoir is tasked with predicting hours 286-308 of the data, where the load consumption trend is significantly stochastic. The hourly RMSE for both the reservoir and LSTM predictions for two different intervals (262 to 284 hours of load and 286 to 308 hours of load) are presented in Fig. 5. The training and testing trend of the reservoir for the 286-308-hour interval is shown in the inset of Fig. 5. From the RMSE plot we can see that, in the 286-308-hour interval, the initial prediction error of the reservoir is beyond the magnitude of the regularization (\(\lambda=0.1\)); however, the reservoir is able to minimize the error in the next few predictions and follow the load trend. The large prediction error of the reservoir for the 286-308 hours of load compared to the previous set of predictions (262-284 hours) arises from the significantly stochastic load behavior, which is difficult to track with the reservoir. The limited ability of the reservoir to track unstable stochastic trends was also observed in a previous study [35]. For both of the prediction intervals, the reservoir prediction accuracy starts to diverge after several hours. As mentioned earlier, long-term prediction with feedback depends on the accurate prediction of the immediate future. However, due to the volatile and stochastic trend of the household load data, the prediction accuracy degrades quickly; nonetheless, good accuracy is found up to 20 hours. Compared to the reservoir, the LSTM prediction accuracy is poor for such stochastic load trends. The steady and higher prediction accuracy of the reservoir compared to the LSTM further proves the efficacy of the RC for the smaller dataset. ### Energy Dissipation There are two main contributions to the energy consumed for the reservoir computing discussed here. First, energy is needed to modulate the PMA of the ferromagnetic layers. The maximum change in the PMA coefficient is \(\Delta PMA=0.75\times 10^{3}\,\mathrm{J/m^{3}}\). Thus, the maximum change in the surface anisotropy coefficient is estimated to be \(\Delta K_{sl}=\frac{\Delta PMA}{t_{CoFeB}}=0.75\times 10^{3}/10^{-9}=0.75\times 10^{12}\). Assuming the VCMA coefficient of the MTJ to be \(\varepsilon=\frac{\Delta K_{sl}}{\Delta V/t_{MgO}}=31\) fJ/(V\(\cdot\)m) [52] and the thickness of the MgO tunneling barrier to be \(t_{MgO}=1\) nm, the voltage magnitude needed to perform the maximum PMA modulation can be calculated to be \(\Delta V=0.24\) V. Assuming the relative permittivity of MgO to be 7, the total capacitance can be calculated to be \(C=\frac{\epsilon_{0}\epsilon_{r}(L\times L)}{t_{MgO}}\sim 62\) fF. Here, we assume L=1050 nm (so our estimate is conservative) for the length of the side of the square region of the ferromagnetic layer. Thus, the total write energy to charge the capacitive tunneling region is estimated to be \(\frac{1}{2}CV^{2}\sim 2\) fJ. Second, the reservoir responses that are read at a certain interval (3 ns), known as virtual nodes, also consume energy. The read energy can be estimated to be ~1.24 fJ with a read delay of ~0.3 ns [53] (which is well within the 3 ns interval). 
In a period of 18 ns, the read operation is performed 6 times. Thus, the total write (PMA modulation) and read energy is calculated to be ~9.44 fJ. For the household prediction task, a total of 21 hours of data (which translates to 21 discrete data points for the reservoir computing) are used to predict the next-hour household power. Thus, the total energy dissipation for one skyrmion reservoir is 21 × 9.44 ≈ 198 fJ. During the reservoir initialization stage, the PMA is modulated; however, the states are not read. The total energy during the reservoir initialization stage is estimated to be ~440 fJ. Including the reservoir initialization energy, the total energy consumption for the 3-skyrmion reservoir to predict one future value of individual household energy consumption is estimated to be ~651 fJ (assuming the worst-case scenario). For the LSTM implementation, the GPU energy is calculated to be ~0.68 J per prediction. With the reservoir implementation, the output layer is a feedforward layer that is implemented on a GPU as well for a fair comparison. The GPU energy consumption for the reservoir feedforward layer is calculated to be 0.043 J per prediction. Thus, the reservoir is able to predict one instance with 16\(\times\) lower energy compared to the LSTM. We note that further reduction in energy consumption with the reservoir can be achieved by implementing the feedforward output layer with a crossbar array of non-volatile memory using in-memory computing [54, 55]. Thus, we could get an overall (at the system level, which is the key metric of interest) 10X to 100X reduction in the computing energy needed to predict building energy. At the individual RC MTJ device level, the energy savings would be enormous (picojoules instead of millijoules), but we do not use this metric, as it is not a fair comparison if the overall architecture and system are not considered. ## V Conclusion We have shown long-term autonomous prediction with a skyrmion reservoir. The reservoir is tasked with predicting the chaotic MG time series and real-world individual household loads, and is able to predict long-term trends with competitive accuracy. The proposed reservoir is set up using three patterned skyrmions having slightly different geometries. All the skyrmions are provided with the same temporal input series, and the resulting skyrmion oscillations are read at regular intervals and processed with simple linear regression. After training, the output is fed as the reservoir input to perform autonomous long-term prediction. The prediction performance greatly improves due to the inclusion of previous states in addition to the current states of the reservoir, as these previous states provide the non-linear effect necessary for accurate prediction. Furthermore, the physical reservoir does not require masking of the input, and only the output weights and the number of previous states included during training are optimized. Energy consumption estimates show that the skyrmion reservoir can perform autonomous prediction with an energy consumption of 0.043 J per prediction, which is at least 16X less than the state-of-the-art LSTM. In addition, we show that with our proposed physical RC one can achieve competitive accuracy with a much smaller dataset. Furthermore, with VCMA control, the skyrmion reservoir can be operated with ultra-low power, as the anisotropy modulation is performed with voltage, as opposed to energy-hungry current control. 
Figure 5: **a.** Hourly RMSE of the prediction accuracy for the individual household load forecasting task, for both the proposed reservoir and the LSTM. RMSE plots for two different long-term autonomous predictions, 262-284 hours and 286-308 hours, are shown. The inset shows the prediction trend of the reservoir for the prediction from 286 to 308 hours. The RMSE plots indicate higher prediction accuracy of the proposed reservoir compared to the LSTM, even for a highly stochastic trend such as the 286-308 hours of data. Since in RC only the last layer is trained, our skyrmion reservoir provides a pathway to implement extremely energy-efficient long-term prediction of real-world problems with high accuracy, which is specifically attractive in hardware- and memory-constrained edge computing platforms, where energy is at a premium. ## Acknowledgement This work was supported in part by the Commonwealth Cyber Initiative (CCI) Cybersecurity Research Collaboration Grant and the Central Virginia Node (CVN) of the Commonwealth Cyber Initiative (CCI), research award number VV-1Q23-008.
2307.15071
Writer adaptation for offline text recognition: An exploration of neural network-based methods
Handwriting recognition has seen significant success with the use of deep learning. However, a persistent shortcoming of neural networks is that they are not well-equipped to deal with shifting data distributions. In the field of handwritten text recognition (HTR), this shows itself in poor recognition accuracy for writers that are not similar to those seen during training. An ideal HTR model should be adaptive to new writing styles in order to handle the vast amount of possible writing styles. In this paper, we explore how HTR models can be made writer adaptive by using only a handful of examples from a new writer (e.g., 16 examples) for adaptation. Two HTR architectures are used as base models, using a ResNet backbone along with either an LSTM or Transformer sequence decoder. Using these base models, two methods are considered to make them writer adaptive: 1) model-agnostic meta-learning (MAML), an algorithm commonly used for tasks such as few-shot classification, and 2) writer codes, an idea originating from automatic speech recognition. Results show that an HTR-specific version of MAML known as MetaHTR improves performance compared to the baseline with a 1.4 to 2.0 improvement in word error rate (WER). The improvement due to writer adaptation is between 0.2 and 0.7 WER, where a deeper model seems to lend itself better to adaptation using MetaHTR than a shallower model. However, applying MetaHTR to larger HTR models or sentence-level HTR may become prohibitive due to its high computational and memory requirements. Lastly, writer codes based on learned features or Hinge statistical features did not lead to improved recognition performance.
Tobias van der Werff, Maruf A. Dhali, Lambert Schomaker
2023-07-11T11:35:08Z
http://arxiv.org/abs/2307.15071v1
# Writer adaptation for offline text recognition: An exploration of neural network-based methods ###### Abstract Handwriting recognition has seen significant success with the use of deep learning. However, a persistent shortcoming of neural networks is that they are not well-equipped to deal with shifting data distributions. In the field of handwritten text recognition (HTR), this shows itself in poor recognition accuracy for writers that are not similar to those seen during training. An ideal HTR model should be adaptive to new writing styles in order to handle the vast amount of possible writing styles. In this paper, we explore how HTR models can be made writer adaptive by using only a handful of examples from a new writer (e.g., 16 examples) for adaptation. Two HTR architectures are used as base models, using a ResNet backbone along with either an LSTM or Transformer sequence decoder. Using these base models, two methods are considered to make them writer adaptive: 1) model-agnostic meta-learning (MAML), an algorithm commonly used for tasks such as few-shot classification, and 2) writer codes, an idea originating from automatic speech recognition. Results show that an HTR-specific version of MAML known as MetaHTR improves performance compared to the baseline with a 1.4 to 2.0 improvement in word error rate (WER). The improvement due to writer adaptation is between 0.2 and 0.7 WER, where a deeper model seems to lend itself better to adaptation using MetaHTR than a shallower model. However, applying MetaHTR to larger HTR models or sentence-level HTR may become prohibitive due to its high computational and memory requirements. Lastly, writer codes based on learned features or Hinge statistical features did not lead to improved recognition performance. Footnote 1: Code used for this research can be found at [https://github.com/tobiasvanderwerff/master-thesis](https://github.com/tobiasvanderwerff/master-thesis) Keywords: Offline handwritten text recognition · Writer adaptation · Few-shot adaptation · Conditionality ## 1 Introduction Handwriting recognition has seen major successes using deep learning, manifested in domains like handwritten text recognition (Michael et al., 2019; Ameryan and Schomaker, 2021), writer identification (Yang et al., 2016; He and Schomaker, 2020), binarization (Dhali et al., 2019), and word spotting (Chanda et al., 2018). However, neural networks are often still lacking when it comes to adapting to novel environments (Kouw and Loog, 2019). Arguably, much of the modern success of deep learning can be attributed to collecting massive amounts of data to cover as many parts of the underlying data distribution as possible, combined with a proportional increase in computing power and model size (Kaplan et al., 2020). However, such a brute-force approach to learning is often not practical for handwriting recognition tasks. Large, high-quality corpora of annotated handwritten texts are often scarce, especially for historical handwriting. In this case, more efficient use of data and reusability of previously learned representations become important. In this paper, we focus on improving one of the most common handwriting recognition tasks: handwritten text recognition (HTR), which refers to the process of automatically turning images of handwritten text into letter codes. HTR remains a challenging problem, mainly due to the large number of possible handwriting variations (Fig. 1). 
In this research, we attempt to make modern HTR models _writer adaptive_, referring to the idea that when a trained HTR model is presented with a novel writing style, it is able to modify its internal representations in such a way as to improve recognition performance for that style. We focus on cases with limited data available for adaptation (10-20 samples), as this represents a realistic scenario for real-time adaptation. In a practical setting, a user of an HTR system could be asked to supply a handful of handwriting examples in order to improve recognition performance on their writing style. How to perform writer-specific adaptation effectively remains an open problem. A popular approach for adapting existing deep learning models is _transfer learning_, where previously learned model parameters are reused for a new but related task that has only a modest amount of training data, leading to notable successes in fields such as natural language processing (Devlin et al., 2018) and computer vision (Oquab et al., 2014). It is important to note that the potential benefit of including writer identity as a conditional variable cannot easily be decoupled from architectural choice. For example, Hidden Markov Models (Baum and Petrie, 1966) have been a common choice for HTR in the past, and methods have been developed to include writer identity in such models. However, these methods are often not usable for modern approaches to HTR using deep neural networks, which use powerful hierarchical representations that outperform past methods. In this sense, a relevant question is whether state-of-the-art deep learning approaches to HTR can benefit from explicit writer information _in the first place_. We will show that this benefit is not obvious, providing at best modest improvements compared to a writer-unaware baseline. There are several problems at hand. In order to adapt effectively based on style information, there is a clear need to identify _what exactly a deep learning model has not learned yet_. The question can be formulated as "What novelty does this new writer introduce that is not effectively handled by the neural network?". Another relevant question is what signal source can be provided to allow for adaptation, and the non-trivial question of effectively including such information into an HTR model. We draw inspiration from a recently published paper by Bhunia et al. (2021), which employs meta-learning to flexibly adapt HTR models to different writers, seemingly with great success. Meta-learning (also known as learning-to-learn) is currently an active area of research (Hospedales et al., 2020), concerned with improving the learning algorithm itself. Often, the idea is to adapt a learning algorithm to a new task based on a small number of task-specific examples. The aim is to learn underlying _meta-knowledge_ that can be transferred to various tasks, even those unseen during training. The paper by Bhunia et al. (2021) makes use of a modified form of model-agnostic meta-learning (MAML) (Finn et al., 2017), which they call MetaHTR.

Figure 1: The word “algebra” written by different writers. Each row contains handwriting for a single writer, recorded at four different times. Note that variation manifests itself between writers but also within individual writers. Figure taken from Schomaker (2002).
As this is one of the more promising ideas for writer-aware adaptation, we explore several versions of the MAML approach and test its ability to perform writer-specific adaptation. Additionally, we experiment with another approach, based on _writer codes_: compact vector representations of individual writers that are supposed to capture the most relevant information about a writer to allow for effective adaptation. Writer codes can be learned or explicitly given as part of the model input. The codes are inserted into a trained HTR model by adjusting the parameters of batch normalization layers. We experiment with several approaches to creating such a writer code: one based on learned feature vectors and one based on traditional handcrafted features used for writer identification. Although this approach is conceptually appealing, our version of writer codes does not yield concrete benefits for adaptation. We summarize the contributions in this paper as follows: * We show that MAML-based methods applied to a trained HTR model can lead to improved data efficiency, showing an improvement between 1.4 and 2.0 word error rate compared to a naive fine-tuning baseline; * We test the capability of MetaHTR to perform writer-specific adaptation, finding that it leads to an improvement of 0.7 word error rate for a deep HTR model, but shows no significant effect for smaller models; * We analyze how a trained HTR model can be effectively adapted based on writer-specific vector representations, finding that fine-tuning batch normalization scale and bias parameters can be an effective way to obtain additional performance gains, even without writer-specific information; * We show that writer codes based on learned features or Hinge statistical features do not lead to improved recognition performance. This paper is structured as follows. In Section 2, we review related work. In Section 3, we propose several techniques for writer-adaptive HTR and experiments to verify their performance. In Section 4, we outline our experimental setup. In Section 5, we show results for the proposed methods, and finally, in Section 6 and Section 7, we discuss the results and future work. ## 2 Related works Handwritten text recognition: Early approaches to HTR often employed Hidden Markov Models (HMMs) (Bianne-Bernard et al., 2011). More recently, the field of HTR has progressed from HMM-based methods to end-to-end trainable neural networks with many layers. Recurrent neural networks (RNNs), and in particular Multi-dimensional Long Short-Term Memory (MDLSTM) networks (Graves et al., 2007), have been commonly used sequence modeling architectures for HTR models (Puigcerver, 2017). The MDLSTM architecture, in combination with the Connectionist Temporal Classification (CTC) loss (Graves et al., 2006), served as a replacement for Hidden Markov Model-based methods (Graves and Schmidhuber, 2008). Whereas standard RNN architectures process data along a one-dimensional axis - e.g., a time axis -, the MDLSTM architecture allows recurrence across multi-dimensional sequences, such as images. In more recent years, it has been observed that the expensive recurrence of the MDLSTM could be replaced by a CNN + bidirectional LSTM architecture (Shi et al., 2016; Puigcerver, 2017). The CNN-RNN hybrid + CTC has been a commonly used architecture (e.g., Dutta et al. (2018); Sueiras et al. (2018); Wigington et al. (2017)). For example, in Dutta et al.
(2018), a spatial transformer network, residual convolutional blocks (ResNet-18), stacked BiLSTMs, and a CTC layer are used. Although CTC has been a common decoding method, some of its downsides - such as the inability to consider linguistic dependency across tokens - have led to architectures that replace CTC in favor of attention modules (Bahdanau et al., 2014). Attention-based encoder-decoder architectures have reached state-of-the-art performance in recent years (Michael et al., 2019). Attention alleviates constraints on input image sizes and the need for segmentation or image rectification (Jaderberg et al., 2015) for irregular images. This thus allows for simplification in the design of HTR architectures. In Li et al. (2019), a ResNet-31 is combined with an LSTM-based encoder-decoder along with a 2-dimensional attention module for irregular text recognition in natural scene images. A trend in recent years has been to replace the linear recurrence of RNNs with the more parallelizable Transformer architecture and attention-based approaches more broadly. In a recent work (Diaz et al., 2021), various architectures for universal text line recognition are studied, using various encoder and decoder families. The authors find that a CNN backbone for extracting visual features, coupled with a Transformer encoder, a CTC decoder, and an explicit language model, is the most effective approach for recognizing line strips. Building on top of the idea [14] of using Transformer-only architectures for vision tasks, Li et al. [2021] explore an end-to-end Transformer encoder-decoder architecture for text recognition, initialized with a pretrained vision Transformer for extracting visual features and a pretrained RoBERTa [15] Transformer for sequence decoding. After initialization, the model is pretrained on large-scale synthetic handwritten images and fine-tuned on a human-labeled dataset. Meta-learning:Meta-learning, or learning-to-learn, is an alternative paradigm to traditional neural network training, which aims to improve the learning algorithm itself [16]. By learning shared knowledge across various tasks over multiple learning episodes, the aim is to improve future learning performance. The main meta-learning method we focus on here is Model-Agnostic Meta-Learning [17] (MAML). MAML aims to find a parameter initialization such that a small number of gradient updates using a handful of labeled samples produces a classifier that works well on validation data. MAML is related to transfer learning, in the sense that finding good initialization parameters for a model to facilitate adaptation to various tasks plays a central role. Due to its model-agnostic nature, MAML can be applied to various application domains without significant modifications. Due to the inner/outer-loop optimization process, MAML has great flexibility in terms of the kinds of parameters that can be learned in the inner loop, e.g., parameterized loss functions [1], learning rates [11], and attenuation weights [1]. Meta-learning has been applied to various areas such as reinforcement learning and few-shot classification, but, notably, also to speech recognition, in the form of accent adaptation [18] and speaker adaptation [10]. MetaSGD [11] is a modification of MAML and involves learning the update direction and learning rate along with the parameter initialization. MAML++ [15] addresses the training instability of MAML that is commonly observed. MAML has also been used in combination with other types of meta-learning. 
For example, in Rusu et al. [2018], the authors combine MAML with model-based meta-learning, using a latent generative representation of model parameters and applying MAML in this lower-dimensional latent space. Writer adaptation: Many early approaches for writer adaptation were proposed for HMMs using Gaussian Mixture Models. For example, Vinciarelli and Bengio [2002] use linear transformations between original parameters and re-estimated parameters for adjusting GMM parameters using maximum likelihood linear regression. More recently, there have been several attempts at adaptation in the space of HTR using neural networks. In Nair et al. [2018], the authors perform simple fine-tuning on a new handwriting collection, showing that this can lead to efficient transfer between datasets using a limited amount of fine-tuning data. In Szummer and Bishop [2006], the authors cluster writers by style and train a classifier for each cluster, using a mixture-of-experts setup for choosing the best combination of classifiers. For a new writer, the combination of classifiers is based on classification confidence for that writer. In Zhang and Liu [2012], the authors learn a linear writer-specific feature transformation in order to create a style-invariant classifier, which they call Style Transfer Mapping (STM). Whereas the original approach was not used in the context of neural networks, a later approach [19] uses STM for neural networks in the context of Chinese character recognition. In Wang et al. [2020], the authors employ writer codes for writer-specific Chinese handwritten text recognition using a CNN-HMM hybrid model. They feed a writer code into adaptation layers tied to individual convolution layers. The result is added element-wise to the intermediate CNN feature maps. At train time, writer codes are jointly learned with the adaptation layers. At test time, codes for new writers are randomly initialized and optimized using one to three gradient steps. Recently, Wang and Du [2022] used a style extractor network trained on a writer identification task to extract a writer code, used to adapt a writer-independent recognizer. Specifically, the writer code is added to the convolutional layer output after being fed through a fully-connected layer. The writer adaptation problem has also been formulated as a domain adaptation problem [19, Kang et al., 2020, Yang et al., 2018]. In Zhang et al. [2019], a gated attention similarity unit is used to find character-level writer-invariant features. In Kang et al. [2020], the authors employ an adversarial learning approach using synthetic data. A generic HTR model is initially trained using synthetic data and adapted to new writers using a domain discriminator network. ## 3 Methodology Overview: An HTR model \(f_{\theta}\) - corresponding to a deep neural network - is trained to maximize the probability \(p(Y|\mathcal{I};\theta)\) of the correct transcription given an input image \(\mathcal{I}\) and ground truth character sequence \(Y=(y_{1},y_{2},\dots,y_{L})\), where each \(y_{i}\) is picked from a vocabulary \(V\) (e.g., ASCII characters). A training dataset \(\mathcal{D}=\{(\mathcal{I}_{1},Y_{1}),(\mathcal{I}_{2},Y_{2}),\dots,(\mathcal{I}_{N},Y_{N})\}\) consists of tuples containing an image \(\mathcal{I}_{i}\) and the corresponding character sequence \(Y_{i}\).
The cost function is derived from cross-entropy, which, for a single example, is of the following form: \[\mathcal{L}(\mathcal{I},Y;\theta)=-\frac{1}{L}\sum_{t=1}^{L}\log p(Y_{t}=y_{t}|y _{<t},\mathcal{I};\theta). \tag{1}\] ### Base models We make use of two base models: FPHTR (Singh and Karayev, 2021) and SAR (Li et al., 2019). FPHTR builds on the Transformer architecture, and SAR on the LSTM architecture. In Fig. 2, we show a high-level overview of both models to highlight their overall structure and similarity. For both models, we use two versions: a smaller version using an 18-layer ResNet backbone and a larger version with a 31-layer ResNet backbone (see Appendix 8 for parameter counts). The base models are standard HTR models that do not make use of explicit writer information, chosen based on their competitive performance on common benchmarks. Their performance serves as a baseline for "writer-unaware" HTR models. #### 3.1.1 Sar The SAR model (Li et al., 2019) is based on the Long Short-Term Memory (LSTM) architecture (Hochreiter and Schmidhuber, 1997). It consists of a ResNet image processing backbone, LSTM encoder, LSTM decoder, and a 2-dimensional attention module. The CNN backbone consists of a modified ResNet (He et al., 2016; Shi et al., 2016), which outputs a 2-dimensional feature map \(\mathbf{V}\). This is used by the consecutive LSTM encoder to extract a holistic feature vector for the whole image and also serves as context for the 2D attention network. The final encoder hidden state \(\mathbf{h}_{W}\) is fed as the initial input to the LSTM decoder. A special start-of-sequence token (<SOS>) is fed as input to the decoder. At each timestep of the LSTM, a new character is sampled autoregressively. Each input at the timesteps that follow is either 1) the previous character from the ground truth character sequence (also known as _teacher forcing_), or 2) the sampled character from the previous timestep (at test time). If the latter is the case, the end of the sampling procedure is signified by sampling a special end-of-sequence token (<EOS>). All token inputs are fed in as vector representations, followed by a linear transformation, \(\psi(.)\). After being fed through an LSTM cell along with the previous hidden state, the timestep prediction is then calculated as \(\mathbf{y}_{t}=\phi(\mathbf{h}^{\prime}_{t},\mathbf{g}_{t})=\text{softmax}(\mathbf{W}_{o}[\bm {h}^{\prime}_{t};\mathbf{g}_{t}])\), where \(\mathbf{h}^{\prime}_{t}\) is the current hidden state and \(\mathbf{g}_{t}\) is the output of the attention module. \(\mathbf{W}_{o}\) is a linear transformation, which maps the features to a vector whose size is equal to the number of character classes. The attention module is a modification of the standard 1D attention module for dealing with a 2D spatial layout. It takes into account neighborhood information in Figure 2: Schematic overview of the two base models: FPHTR and SAR. 
the 2D plane: \[\begin{cases}\mathbf{e}_{ij}&=\text{tanh}(\mathbf{W}_{v}\mathbf{v}_{ij}+\sum_{p,q \in\mathcal{N}_{ij}}\mathbf{\tilde{W}}_{p-i,q-j}\cdot\mathbf{v}_{pq}+\mathbf{W} _{h}\mathbf{h}_{t}^{\prime})\\ \alpha_{ij}&=\text{softmax}(\mathbf{w}_{c}^{r}\cdot\mathbf{e}_{ij})\\ \mathbf{g}_{t}&=\sum_{i,j}\alpha_{ij}\mathbf{v}_{ij},\quad i=1,\dots,H,\quad j= 1,\dots,W\end{cases}\] Explanation of the symbols: \(\mathbf{v}_{ij}\) is the local feature vector at position \((i,j)\) in \(\mathbf{V}\); \(\mathcal{N}_{ij}\) is the eight-neighborhood around this position; \(\mathbf{W}_{v},\mathbf{W}_{h},\mathbf{\tilde{W}}\) are learned linear transformations; \(\alpha_{ij}\) is the attention weight at location \((i,j)\); and \(\mathbf{g}_{t}\) is the weighted sum of local features, also known as a _glimpse_. The difference with a traditional attention module is the addition of the \(\sum_{p,q\in\mathcal{N}_{ij}}\mathbf{\tilde{W}}_{p-i,q-j}\cdot\mathbf{v}_{pq}\) term when weighing \(\mathbf{v}_{ij}\). #### 3.1.2 Fphtr FPHTR (Singh and Karayev, 2021) is a Transformer-based architecture, consisting of a CNN backbone combined with a Transformer (Vaswani et al., 2017) module for decoding the visual feature map into a character sequence. The architecture was originally proposed for full-document HTR, but due to its generic nature, it can easily be applied to both word and line images without any real modifications. The CNN takes an image as input and produces a 2D feature map with hidden size \(d_{model}\) as output. A 2D position encoding based on sinusoidal functions is added, and the feature map is flattened into a 1D sequence of feature vectors - each representing a position in the image -, that can be processed by the Transformer decoder. The Transformer decoder is a standard Transformer architecture (Vaswani et al., 2017) with non-causal attention to the encoder output (it can attend to the entire output of the encoder) and causal self-attention (it can only attend to past positions of its character input). Input vectors are enhanced with 1D position encodings. Sampling is done autoregressively, in the same way as the SAR model. ### Meta-learning Our first attempt to make HTR models writer adaptive involves meta-learning (Hospedales et al., 2020). Adaptation occurs by providing the model with labeled examples of a writer that it should adapt to, after which the weights of the model are updated using the model-agnostic meta-learning algorithm. We first provide a brief overview of model-agnostic meta-learning in Section 3.2.1, then turn to the MetaHTR approach in Section 3.2.2. The explanation of these methods will be brief; for a more detailed explanation, we refer the reader to the original papers. #### 3.2.1 Model-agnostic meta-learning Model-agnostic meta-learning (MAML) (Finn et al., 2017) is an approach to meta-learning aimed at finding initial parameters that facilitate rapid adaptation. Let \(p(\mathcal{T})\) be a distribution over tasks to which a model should be able to adapt. During meta-training, a batch of tasks \(\mathcal{T}_{i}\sim p(\mathcal{T})\) is sampled, where samples from each task are split up in a support set \(D^{tr}\) of size \(K\) for adaptation (where typically \(K\) is relatively small, e.g., \(K\leq 16\)), and a query set \(D^{val}\) for testing the task-specific performance after adaptation. 
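To make the episode construction concrete, writer-specific task sampling can be sketched as follows (a minimal Python sketch, with \(K=16\) examples per set and \(N=8\) writers per meta-batch as in Section 4.2; the function and variable names are ours, not the authors'):

```python
import random
from collections import defaultdict

def sample_writer_tasks(dataset, n_writers=8, k=16):
    """Build one meta-batch of writer-specific tasks.

    dataset: iterable of (image, transcription, writer_id) triples.
    Returns n_writers tasks, each a (support, query) pair of k examples
    each, mirroring the D^tr / D^val split described above. Assumes
    every sampled writer has at least 2k labeled examples.
    """
    by_writer = defaultdict(list)
    for image, text, writer_id in dataset:
        by_writer[writer_id].append((image, text))
    tasks = []
    for writer_id in random.sample(list(by_writer), n_writers):
        batch = random.sample(by_writer[writer_id], 2 * k)
        tasks.append((batch[:k], batch[k:]))  # support D^tr, query D^val
    return tasks
```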
Training is done using stochastic gradient descent (SGD), where the model parameters \(\theta\) are adapted to a task as follows: \[\theta_{i}^{\prime}=\theta-\alpha\nabla_{\theta}\mathcal{L}^{inner}(D_{i}^{tr}; \theta). \tag{2}\] This is referred to as the _inner loop_, using an inner loop learning rate \(\alpha\). After inner loop adaptation, the adapted parameters \(\theta_{i}^{{}^{\prime}}\) are evaluated on the query set, and the original parameters are updated by aggregating the loss over the sampled tasks, using an _outer loop_ learning rate \(\beta\): \[\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{\mathcal{T}_{i}\sim p( \mathcal{T})}\mathcal{L}^{outer}(D_{i}^{val};\theta_{i}^{\prime}). \tag{3}\] Whereas the inner loop optimizes for task-specific performance, the outer loop optimizes for a parameter set \(\theta\) so that the task-specific training is more efficient, aiming to achieve good generalization across various tasks. #### 3.2.2 MetaHTR MetaHTR is a modification of the MAML algorithm optimized for text recognition. Within the MetaHTR framework, each task instance \(\mathcal{T}_{i}\) corresponds to a different writer. The full training process is summarized in Algorithm 1. Once MetaHTR is trained, it can be used to rapidly adapt to specific writers at inference time. This is shown in Algorithm 2. With respect to MAML, MetaHTR introduces two modifications: _character instance-specific weights_, and _learnable layer-wise learning rates_. Character instance-specific weights:Instance-specific weight values are added to the inner loop loss such that the model can adapt better with respect to characters having a high discrepancy. Given a ground truth character sequence \(Y=\{y_{1},y_{2},\ldots,y_{L}\}\) and image \(\mathcal{I}\), the inner loop loss now adds a value \(\gamma_{t}\) for each time-step \(t\): \[\mathcal{L}^{inner}=-\frac{1}{L}\sum_{t=1}^{L}\gamma_{t}\log p(y_{t}|\mathcal{I };\theta), \tag{4}\] which is a modified version of cross-entropy, including \(\gamma_{t}\) values inside the summation. In order to calculate \(\gamma_{t}\), gradient information from the final classification layer is used. The idea is that the gradients provide information related to disagreement, i.e., what knowledge is missing from the model that still needs to be learned. Specifically, let the weights of the final classification be denoted as \(\phi\). The gradients of the \(t\)'th instance loss with respect to the weights of the final classification layer are used, denoted as \(\nabla_{\phi}\mathcal{L}^{t}\), in combination with the gradients of the mean loss (Eq. 1), denoted as \(\nabla_{\phi}\mathcal{L}\). Both inputs are concatenated and fed as input to a network \(g_{\psi}\), leading to character instance-specific weight \(\gamma_{t}\), where \(\gamma_{t}=g_{\psi}([\nabla_{\phi}\mathcal{L}^{t};\nabla_{\phi}\mathcal{L}])\). \(g_{\psi}\) takes the form of a 3-layer MLP with parameters \(\psi\), followed by a sigmoid layer to produce a scalar output value in the range [0, 1]. Learnable layer-wise learning rates:The inner loop learning rate used in MAML is replaced by a learnable one [Li et al., 2017]. Specifying a learnable learning rate for every model parameter allows the model to express differences between what parameters should be updated more or less. However, using a learning rate for every parameter also doubles the parameter count, which is prohibitive. Therefore, learning rates are used for individual layers in the model, which are trained along with all the other parameters. 
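Putting Eqs. 2 and 3 together with the learnable layer-wise learning rates, one meta-update can be sketched as follows. This is a simplified PyTorch sketch with a single inner step and one learnable rate per parameter tensor; the names and the use of `torch.func.functional_call` are ours - the experiments (Section 4.2) rely on the learn2learn library instead:

```python
import torch
from torch.func import functional_call  # PyTorch >= 2.0

def meta_update(model, tasks, inner_lrs, outer_opt, loss_fn):
    """One outer-loop update over a meta-batch of writer tasks."""
    names = [n for n, _ in model.named_parameters()]
    theta = [p for _, p in model.named_parameters()]
    outer_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:  # support / query per writer
        # Inner loop (Eq. 2): one adaptation step on the support set.
        pred_s = functional_call(model, dict(zip(names, theta)), (x_s,))
        grads = torch.autograd.grad(loss_fn(pred_s, y_s), theta,
                                    create_graph=True)  # second-order MAML
        adapted = [p - lr * g for p, lr, g in zip(theta, inner_lrs, grads)]
        # Outer objective (Eq. 3), evaluated with the adapted parameters.
        pred_q = functional_call(model, dict(zip(names, adapted)), (x_q,))
        outer_loss = outer_loss + loss_fn(pred_q, y_q)
    outer_opt.zero_grad()
    outer_loss.backward()  # backpropagates through the inner step
    outer_opt.step()       # updates theta and inner_lrs jointly
```

Here `outer_opt` is assumed to be constructed over both the model parameters and `inner_lrs`, so that the initialization and the layer-wise learning rates are meta-learned jointly, matching the meta-parameter update in Algorithm 1 below.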
This is also shown in Algorithm 1. ``` 0: Training dataset \(\mathcal{D}=\left\{\mathcal{D}_{1},\mathcal{D}_{2},\ldots,\mathcal{D}_{| \mathcal{W}^{test}|}\right\}\) 0:\(\beta\): learning rate 1: Initialize \(\theta,\psi,\alpha\) 2:while not done do 3: Sample writer-specific \(\mathcal{T}_{i}=\left\{D_{i}^{tr},D_{i}^{val}\right\}\sim p(\mathcal{T})\) 4:for all\(\mathcal{T}_{i}\)do 5: Evaluate inner objective: \(\mathcal{L}^{inner}(\theta;D_{i}^{tr})\) 6: Adapt: \(\theta_{i}^{\prime}=\theta-\alpha\nabla_{\theta}\mathcal{L}^{inner}(\theta;D_ {i}^{tr})\) 7: Compute outer objective: \(\mathcal{L}^{outer}(\theta_{i}^{\prime};D_{i}^{val})\) 8:endfor 9: Update meta-parameters: \((\theta,\psi,\alpha)\leftarrow(\theta,\psi,\alpha)-\beta\nabla_{(\theta,\psi, \alpha)}\sum_{\mathcal{T}_{i}}\mathcal{L}^{outer}(\theta_{i}^{\prime};D^{val})\) 10:endwhile ``` **Algorithm 1** Training for MetaHTR, adapted from Bhunia et al. [2021]. #### 3.2.3 Meta-learning evaluation We evaluate several variants of the MAML/MetaHTR approach. One downside of the MAML approach and MetaHTR, in particular, is that it leads to a notable increase in memory and computational requirements. We, therefore, analyze variations of the MAML-based approach to investigate to what degree it can be simplified. Concretely, we experiment with three different models: MAML, MAML + llr, and MetaHTR. 1. **MAML:** The original MAML algorithm, as proposed in Finn et al. (2017), using the sequence-based cross-entropy loss function shown in 1. 2. **MAML + llr:** The MAML algorithm is complemented with learnable inner loop learning rates (Section 3.2.2). This alleviates the need to manually set the inner loop learning rate, at the cost of only a few hundred additional parameters (see Appendix 10) 3. **MetaHTR:** The full MetaHTR model is explained in Section 3.2.2. A downside of the MetaHTR approach is the additional complexity that it introduces. Next to the calculation of higher-order derivatives as part of the MAML algorithm, MetaHTR also requires an additional backward pass in order to calculate the instance-specific weights. This makes the approach expensive both in terms of computation and in terms of memory usage, therefore making it challenging to scale to larger contexts such as sentence-level HTR. ### Writer codes Our second attempt to include writer information into the base HTR models is based on the idea of representing style or writer information as a compact feature vector. In speech recognition, such a code is known as a _speaker code_(Abdel-Hamid and Jiang, 2013). We take a similar approach by trying to model writers or styles using a small feature vector, which is used to adapt the weights of an existing HTR model. We will refer to such vectors as _writer codes_. A writer code is a dense feature vector \(\mathbf{x}\in\mathbb{R}^{M}\), where \(M\) is set based on the desired representational capacity. A relevant property of writer codes is that they should be able to obtain them even for writers that are not part of the initial training set. Writer codes have certain properties that make them appealing as a method for writer-adaptive HTR: they are efficient to compute and often require minimal changes to a base architecture. #### 3.3.1 Code insertion First, we address the question of how the codes should be inserted into the base model for effective adaptation. 
A comprehensive evaluation of possible methods for code insertion is beyond the scope of this paper, but we note here that, based on various experiments, naive insertion of codes into the base models can easily deteriorate base-level performance. Notably, naively modifying batch normalization (batch norm) parameters can lead to catastrophic forgetting. Furthermore, we found that adapting only certain key layers of the network, such as the last layers of the ResNet backbone, was not sufficient to allow for effective adaptation. Instead, an effective form of vector-based adaptation comes from fine-tuning the normalization layers of the model. This approach is inspired by work on generative models, such as conditional GANs (Karras et al., 2019; Zhang and Schomaker, 2022) and methods for style transfer (Dumoulin et al., 2016; Ulyanov et al., 2016). Previous work in the field of style transfer suggests that in order to adapt features to a particular style, it can be sufficient to specialize scaling and shifting parameters after normalization layers, conditioned on style information (Dumoulin et al., 2016). We adopt a similar approach, where we update the learnable weights of the normalization layers in our network, conditioned on a specific writer code. Specifically, we focus on batch normalization layers, which are present in the ResNet backbone2. Given a minibatch of activations \(B=\{x_{1,\dots,m}\}\), batch normalization layers are of the following form: Footnote 2: It is worth noting that for the FPHTR model, layer normalization is used in addition to batch normalization. However, we found no concrete benefit in adjusting these normalization layers. \[y_{i}=\frac{x_{i}-\mu_{B}}{\sqrt{\sigma_{B}^{2}+\epsilon}}\cdot\gamma+\beta, \tag{5}\] where \(\gamma\) and \(\beta\) are learnable parameter vectors of size equal to the number of channels in the input. The \(\epsilon\) parameter is a small constant added for numerical stability. The normalization statistics are calculated along the batch dimension: \[\mu_{B}=\frac{1}{m}\sum_{i=1}^{m}x_{i},\qquad\sigma_{B}^{2}=\frac{1}{m}\sum_{i =1}^{m}(x_{i}-\mu_{B})^{2}. \tag{6}\] For inserting writer codes into the neural network, we modify the \(\beta\) and \(\gamma\) parameters based on an input code (corresponding to an approach called _conditional batch normalization_(De Vries et al., 2017)). Given pretrained parameters \(\beta_{c}\) and \(\gamma_{c}\), changes in these parameters are predicted based on an input code \(e\) and a two-layer MLP: \[\Delta\beta=\phi_{1}(e),\qquad\Delta\gamma=\phi_{2}(e), \tag{7}\] where \(\phi_{1}\) and \(\phi_{2}\) are MLPs. The predicted deltas are then added to the original \(\beta_{c}\) and \(\gamma_{c}\) parameters: \(\hat{\beta}_{c}=\beta_{c}+\Delta\beta_{c},\hat{\gamma}_{c}=\gamma_{c}+\Delta \gamma_{c}\), where \(\hat{\beta}_{c}\) and \(\hat{\gamma}_{c}\) replace the batch norm parameters for the current forward pass. All other parameters are frozen during training, including \(\beta\) and \(\gamma\). By changing the \(\gamma\) and \(\beta\) affine parameters that follow normalization, there is great flexibility in changing the intermediate feature maps according to the specifics of a particular code, while the risk of catastrophic forgetting is mitigated by keeping the original batch normalization weights largely intact. #### 3.3.2 Code creation Given the conditional batch normalization method for inserting writer codes into an HTR model, we turn to the question of how we create writer codes. 
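Before doing so, the insertion mechanism of Section 3.3.1 can be made concrete with a minimal PyTorch sketch; the module structure, the hidden width of the two-layer MLPs, and the assumption of one code shared by the whole batch are ours:

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """A frozen BatchNorm2d whose affine parameters are shifted by
    deltas predicted from a writer code (Eqs. 5-7). A sketch; module
    and variable names are ours, not the authors'."""

    def __init__(self, bn: nn.BatchNorm2d, code_size: int = 64):
        super().__init__()
        self.bn = bn  # pretrained layer; keep its weights frozen
        for p in self.bn.parameters():
            p.requires_grad_(False)
        c = bn.num_features
        self.phi_beta = nn.Sequential(nn.Linear(code_size, c),
                                      nn.ReLU(), nn.Linear(c, c))
        self.phi_gamma = nn.Sequential(nn.Linear(code_size, c),
                                       nn.ReLU(), nn.Linear(c, c))

    def forward(self, x: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # Normalize with the frozen running statistics, no affine yet.
        y = nn.functional.batch_norm(
            x, self.bn.running_mean, self.bn.running_var,
            weight=None, bias=None, training=False, eps=self.bn.eps)
        gamma = self.bn.weight + self.phi_gamma(code)  # gamma_c + Delta
        beta = self.bn.bias + self.phi_beta(code)      # beta_c + Delta
        return y * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)
```

Only the two MLPs (and, later, the codes themselves) receive gradients; the pretrained statistics and affine weights stay fixed, which is what limits the risk of catastrophic forgetting.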
An important criterion is that writer codes are not created under a closed writer set assumption; we should be able to instantiate them for novel writers as well. We experiment with two kinds of writer codes: learned codes, and codes based on statistical writer information (Hinge codes and style codes). Learned codes:Learned writer codes are obtained by training them in the same way as the weights of the network. A similar idea is commonly seen in NLP (e.g., Devlin et al. (2018)), where for each token in a predefined vocabulary, an associated vector representation is learned (often called an "embedding") that is more expressive than a one-hot vector indicating the identity of the token. Note that this approach implies a fixed set of writer codes initialized at the start of training - one for each writer in the training set. In the case when a new writer is presented that is unseen during training, we follow Abdel-Hamid and Jiang (2013) by randomly initializing a new writer code, followed by one or several gradient steps on the newly initialized code, using a small batch of labeled writer-specific data. Hinge codes:When it comes to capturing writer individuality, there exists a rich literature on this topic in the field of writer identification (Schomaker, 2007). In contrast to the learned features discussed in the previous section, features for writer identification are often handcrafted or statistical in nature. One of the more successful features for writer identification is the Hinge feature (Bulacu and Schomaker, 2007), which uses a probability distribution of the angle combination of two hinged edge fragments to characterize writer individuality. The assumption here is that these features can lead to a meaningful clustering of writers based on their style differences. These writer codes are attractive because they are easy to calculate and do not require additional adaptation data at inference time. Style codes:We also focus on generic style clusters in feature space, rather than features that are highly writer-specific. For example, style clusters could point to high-level writing styles such as cursive or mixed cursive. We perform k-means clustering on Hinge codes to obtain generic style clusters. For each style cluster, we train a writer code using backpropagation. Thus, given an image input, we find the closest style cluster based on the Hinge features and map the style cluster identity to a learned writer code that is updated using gradient descent. ## 4 Experiments ### Dataset We use the IAM dataset (Marti and Bunke, 2002) for evaluation, using word-level images. The dataset consists of English handwritten texts contributed by 657 writers, making a total of 1,539 handwritten pages consisting of 115,320 segmented words. The data is labeled at the sentence, line, and word level. Examples of word images are shown in Fig. 3. For splitting the data into a training, validation, and test set, we use the widely used Aachen splits (SLR, 2023). An important property of these splits is that the writer sets are disjoint, i.e., writers seen during training are not seen during testing. The Aachen splits contain 500 writers making up a total of 75,476 images. ### Implementation details Base models:Character error rate (CER) and word error rate (WER) are used for evaluation, with the best model chosen based on the lowest WER. We use a character-level vocabulary, converting all characters to lowercase. No Figure 3: Examples of word images from the IAM dataset. 
linguistic post-processing on word predictions is used. We report average performance over five random seeds, along with standard deviations for all results. For training of the base models, the Adam optimizer (Kingma and Ba, 2014) is used, with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). We use gradient clipping to avoid exploding gradients based on the L2-norm of the gradient vector. All models are implemented using PyTorch (Paszke et al., 2019), using a single Nvidia V100 GPU with 32GB of memory. See appendix Table 5 for full details about hyperparameter settings. We use random image rotation, scaling, brightness, contrast adjustment, and Gaussian noise to increase image diversity. We reduce the resolution by 50% to reduce memory footprint while keeping the text legible. Meta-learning: Given the \(K\)-shot \(N\)-way meta-learning formulation, we use \(K=16\) and \(N=8\), following Bhunia et al. (2021). This means that during adaptation, a batch of \(K=16\) writer-specific examples is used to adapt the model to a specific writer, and outer loop gradients are averaged over \(N=8\) writers (see Eq. 3). During training, we randomly sample writer-specific batches of size \(2K\), split into a support and query set of size \(K\). At test time, we use all examples for a given writer: given the \(j\)'th writer with \(N_{j}\) total examples, we randomly split the data into a support batch (adaptation batch) of size \(K\), and use the remaining \(N_{j}-K\) examples for evaluation of the adapted model. Performance per writer is averaged over ten runs. For all models, we use dropout in the outer loop. Batch norm statistics are fixed to their running values and not updated during training, as this led to more stable performance (see Appendix A for a more extensive discussion concerning the particulars of using batch normalization in combination with MAML). We use the learn2learn library (Arnold et al., 2020) for implementing all meta-learning methods. Full hyperparameter settings are shown in the Appendix (Table 7). Writer codes: For the learned writer codes discussed in Section 3.3.2, we require adaptation data at test time to initialize codes for novel writers. Splitting of writer data is done in the same way as for meta-learning. During training, the weights of the trained HTR model are frozen, and only the writer code values and the parameters of the conditional batch norm MLPs are updated. We use a code size of 64 and an adaptation batch size of 16. For style codes, we use k-means clustering with \(k=3\), based on validation set performance. Complete hyperparameters are shown in Table 6 in the Appendix. ## 5 Results ### Base models The results for the base models on the IAM validation and test set are shown in Table 1. We report average performance as well as the performance of the best run. From the results in Table 1, we can see that the Transformer-based model (FPHTR) outperforms the LSTM-based model (SAR) on both the validation and test sets, both for the smaller 18-layer case (15-18M weights) and the larger 31-layer case (52-58M weights). This difference is significant in the case of the larger 31-layer models, with FPHTR outperforming SAR on the test set by a difference of 4.1 WER and 4.8 CER. For the smaller 18-layer models, FPHTR outperforms SAR by a difference of 0.5 WER and 0.7 CER. ### Meta-learning Results for meta-learning are shown in Table 2.
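For clarity, the per-writer evaluation protocol behind these numbers (Section 4.2) amounts to the following sketch, where `adapt_fn` and `wer_fn` are placeholders for MetaHTR adaptation (Algorithm 2) and word-error-rate computation:

```python
import random
import statistics

def evaluate_writer(adapt_fn, wer_fn, writer_examples, k=16, runs=10):
    """Adapt on a random support batch of K examples, evaluate WER on
    the remaining N_j - K examples, and average over several runs."""
    wers = []
    for _ in range(runs):
        shuffled = random.sample(writer_examples, len(writer_examples))
        support, held_out = shuffled[:k], shuffled[k:]
        adapted_model = adapt_fn(support)
        wers.append(wer_fn(adapted_model, held_out))
    return statistics.mean(wers)
```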
It should be noted that since all models presented here make use of additional adaptation data at test time, a direct comparison with the base models in Table 1 is not meaningful. In other words, the MAML-based models have access to parts of the test data as part of their adaptation procedure. Therefore, we devise a different baseline, by evaluating the base models after performing fine-tuning on the same adaptation data that is made available to the MAML-based models. Specifically, we fine-tune the final classification layer of a base model using the adaptation data. We use the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 1e-3 for 3 optimization steps.

\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Val**} & \multicolumn{4}{c}{**Test**} \\ \cline{2-5} \cline{6-9} & \multicolumn{2}{c}{**WER**} & \multicolumn{2}{c}{**CER**} & \multicolumn{2}{c}{**WER**} & \multicolumn{2}{c}{**CER**} \\ & Avg. & Best & Avg. & Best & Avg. & Best & Avg. & Best \\ \hline SAR-18 & \(16.3\pm 0.6\) & 15.5 & \(13.5\pm 1.0\) & 12.2 & \(20.7\pm 0.8\) & 19.7 & \(17.3\pm 0.8\) & 15.8 \\ FPHTR-18 & \(\mathbf{16.0\pm 0.4}\) & 15.3 & \(\mathbf{12.6\pm 0.4}\) & 12.1 & \(\mathbf{20.2\pm 0.2}\) & 19.9 & \(\mathbf{16.6\pm 0.3}\) & 16.4 \\ \hline SAR-31 & \(14.9\pm 0.2\) & 14.7 & \(11.3\pm 0.5\) & 10.6 & \(19.7\pm 0.7\) & 18.8 & \(15.7\pm 1.0\) & 14.5 \\ FPHTR-31 & \(\mathbf{11.6\pm 0.3}\) & 11.1 & \(\mathbf{7.9\pm 0.4}\) & 7.5 & \(\mathbf{15.6\pm 0.8}\) & 14.6 & \(\mathbf{10.9\pm 0.7}\) & 10.0 \\ \hline \hline \end{tabular}
\end{table} Table 1: Results of the base models on the IAM val and test set (lower is better).

Due to persistent out-of-memory errors for the SAR-31 MetaHTR model3, we only include FPHTR-31 in addition to the smaller 18-layer variants. From these results, we can see that MetaHTR performs best, improving upon the baseline by 1.4, 2.0, and 1.7 WER for FPHTR-18, SAR-18, and FPHTR-31, respectively. Footnote 3: Another performance-related issue worth mentioning is that MetaHTR requires calculation of instance-specific gradients, which, at the time of running the experiments, is something that is not supported in batch form in the PyTorch library. Therefore, this required a manual calculation of instance-specific gradients using a for-loop, which made the MetaHTR training procedure considerably slower than MAML. This problem is something that can be fixed using additional software, but the additional complexity of MetaHTR due to the extra backward pass remains. We plot the learned inner loop learning rates in Fig. 4, to get an idea of the relative weight assigned to each layer in the adaptation process. We show learned inner loop learning rates for two randomly chosen runs of the FPHTR-18 and FPHTR-31 models using MAML + llr (we include the figure for FPHTR-18 in the appendix, Fig. 6). Looking at these plots, we see a relatively high weight assigned to the ResNet layers, decreasing towards the head of the network. For the Transformer module, we observe an increasing trend in the learning rates across layers. This is an indication that the lower layers of the Transformer network require relatively less adaptation than layers closer to the output, with the final classification layer requiring the most adaptation.
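A Fig. 4-style inspection of the learned rates takes only a few lines (a sketch; it assumes the llr variant stores one learnable scalar per layer, in the same order as the layer names):

```python
import matplotlib.pyplot as plt

def plot_inner_lrs(layer_names, inner_lrs):
    """Bar plot of learned layer-wise inner-loop learning rates."""
    values = [float(lr) for lr in inner_lrs]
    plt.figure(figsize=(10, 3))
    plt.bar(range(len(values)), values)
    plt.xticks(range(len(values)), layer_names, rotation=90, fontsize=6)
    plt.ylabel("learned inner-loop learning rate")
    plt.tight_layout()
    plt.show()
```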
It is worth noting here that the performance improvements for MetaHTR (between 1.4 to 2.0 WER compared to the baseline) are much smaller than reported in the original paper (Bhunia et al., 2021), where MetaHTR improved upon the SAR baseline by a difference of 7.1 WER, and 6.8 after naive fine-tuning on the adaptation data. In email correspondence with the authors of the MetaHTR paper, we were not able to resolve the cause of this discrepancy. Furthermore, due to the lack of published code by the MetaHTR authors, it is difficult to cross-verify the MetaHTR results.

\begin{table} \begin{tabular}{c c c c} \hline & FPHTR-18 & SAR-18 & FPHTR-31 \\ \hline Baseline & \(20.0\pm 0.2\) & \(20.6\pm 0.6\) & \(15.3\pm 0.7\) \\ \hline MAML & \(19.1\pm 0.3\) & \(19.5\pm 0.7\) & \(14.3\pm 0.3\) \\ MAML + llr & \(19.3\pm 0.5\) & \(19.3\pm 0.7\) & \(14.3\pm 0.2\) \\ MetaHTR & \(\mathbf{18.6\pm 0.4}\) & \(\mathbf{18.6\pm 0.5}\) & \(\mathbf{13.5\pm 0.2}\) \\ \hline \end{tabular} \end{table} Table 2: Meta-learning results on the IAM test set, measured in WER (lower is better).

Figure 4: Learned per-layer learning rates for the MAML + llr model, for FPHTR-31.

#### 5.2.1 Testing the adaptation premise of MetaHTR An important question concerning the efficacy of MetaHTR is to what degree it truly _adapts_ based on a set of writer-specific images at test time. This is an important premise, since the additional computational overhead of MetaHTR as well as the increased complexity compared to regular neural network training is supposedly warranted by a clear goal: an ability to adapt in a flexible way to various writers leading to a performance improvement compared to a writer-unaware model. In the words of the authors, the goal of MetaHTR is to offer an "adapt to my writing" button (Bhunia et al., 2021), where one is asked to write a specific sentence in order to make recognition performance of that handwriting more accurate. Note that because the MetaHTR objective function and training procedure are different from the training procedure used for the baseline, it is not clear that the improved performance of MetaHTR is due to writer adaptation. The MetaHTR objective function is designed for writer-specific adaptation, but it may simply be a more effective way to train the neural network, regardless of whether writer adaptation is performed or not. The writer adaptation performed at test time is what is supposed to make MetaHTR writer adaptive. Therefore, if it is writer adaptive, it should perform better than MetaHTR _without_ writer adaptation at test time. In order to test this, we leave out the writer-specific adaptation. More concretely, we train MetaHTR the same way as done before but evaluate it without performing inner loop adaptation on a support batch of \(K\) images. Results are shown in Table 3. The additional benefit of adaptation is 0.2 WER for FPHTR-18, 0.7 WER for SAR-18, and 0.7 WER for FPHTR-31. We use a two-sample t-test to measure the statistical significance of the difference in results. Using a significance level \(\alpha=0.05\), we observe that the difference in results is not significant for FPHTR-18 (\(p=0.4143\)) and SAR-18 (\(p=0.0832\)), but _is_ significant for FPHTR-31 (\(p=0.0001\)). In other words, adaptation only shows a significant effect for the larger FPHTR-31 model, but not for the smaller 18-layer variants. ### Writer codes We show results for all writer codes in Table 4. From the table, it can be seen that the learned codes do not improve upon the performance of the baseline.
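Recall the test-time procedure for unseen writers from Section 3.3.2; as a sketch (with `model.loss` an assumed interface, and the step count and learning rate illustrative):

```python
import torch

def init_code_for_new_writer(model, adapt_batch, code_size=64,
                             steps=3, lr=1e-3):
    """Randomly initialize a code for an unseen writer, then take a
    few gradient steps on a small labeled batch while the HTR model
    itself stays frozen."""
    code = torch.randn(code_size, requires_grad=True)
    opt = torch.optim.Adam([code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = model.loss(adapt_batch, writer_code=code)  # hypothetical API
        loss.backward()
        opt.step()
    return code.detach()
```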
The fact that writer codes at test time are created by random initialization followed by only a small number of gradient steps is a potential factor here - codes trained in this way seem to hurt performance rather than improve it. Next, we consider Hinge and style codes. Both methods outperform the baseline. For the Hinge code, this is a difference of 1.7 and 1.6 WER for FPHTR and SAR, respectively. A similar performance improvement can be seen for the style code, obtained by clustering Hinge codes with a single learned code per style cluster. In this case, the difference is 1.8 and 1.7 WER for FPHTR and SAR, respectively. Although these results show improvement compared to the baselines, they do not provide adequate insight into the efficacy of the codes themselves. Recall from Section 3.3.1 that conditional batch normalization uses a 3-layer MLP with the writer codes as input to predict changes to the original batch norm weights. It is possible that the MLP learns effective bias vectors that improve performance regardless of the writer code input, i.e., the writer code could simply be ignored (e.g., assigned zero weights). To test this, we replace the writer codes with a zero code that contains no writer information whatsoever, i.e., a vector with only zero values. As seen from Table 4, this leads to almost identical performance compared to both the Hinge and style code. This is a strong indication that writer information is not the direct cause of the increase in performance, but rather, _conditional batch normalization seems to be an effective way to fine-tune the batch norm weights, even without the presence of conditional information_. Although this may be an interesting way to perform general fine-tuning, it does not rely on writer-specific information to make it possible. \begin{table} \begin{tabular}{l c c} \hline \hline & FPHTR-18 & SAR-18 \\ \hline Baseline & \(20.2\pm 0.2\) & \(20.7\pm 0.8\) \\ \hline Learned code & \(24.5\pm 0.3\) & \(23.7\pm 0.4\) \\ Hinge code & \(\mathbf{18.5\pm 0.2}\) & \(\mathbf{19.1\pm 0.6}\) \\ Style code & \(\mathbf{18.4\pm 0.2}\) & \(\mathbf{19.0\pm 0.6}\) \\ Zero code & \(\mathbf{18.5\pm 0.3}\) & \(\mathbf{19.0\pm 0.5}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Writer code results on the IAM test set, measured in WER (lower is better). \begin{table} \begin{tabular}{l c c c} \hline \hline & FPHTR-18 & SAR-18 & FPHTR-31 \\ \hline w/ adaptation & \(18.6\pm 0.4\) & \(18.6\pm 0.5\) & \(13.5\pm 0.2\) \\ w/o adaptation & \(18.8\pm 0.4\) & \(19.3\pm 0.5\) & \(14.2\pm 0.2\) \\ \hline \hline \end{tabular} \end{table} Table 3: MetaHTR performance with and without writer adaptation, measured in WER. ## 6 Discussions ### Meta-learning An appealing aspect of the meta-learning approach is that there is a great deal of flexibility in the way the model can adapt to a writer by differentially updating the layers of the model (e.g., as demonstrated in Fig. 4). Nevertheless, the added benefit of writer adaptation using MetaHTR is not obvious, as shown in Section 5.2.1. Even without using any adaptation data at test time, the MetaHTR model still improves upon the baseline performance. This indicates that more effective representations play a role in the additional performance gains, rather than rapid adaptability of the model parameters, a phenomenon observed before in the literature on meta-learning (Raghu et al., 2020). This makes MetaHTR interesting for improving overall model performance, but not necessarily for writer-specific adaptation. 
Another downside of the MetaHTR approach is the additional complexity that it introduces. Next to the calculation of higher-order derivatives as part of the MAML algorithm, MetaHTR requires an additional backward pass to calculate the instance-specific weights (Section 3.2.2). This makes the approach expensive both in terms of computation and memory usage and makes it challenging to scale to larger contexts such as sentence-level HTR. This is exemplified by the fact that we were not able to train MetaHTR in combination with the SAR-31 base model on a 32GB GPU due to persistent out-of-memory errors. This is somewhat problematic given our finding that a deeper model lends itself better to adaptation using MetaHTR than a shallower one. Another example of additional complexity is the difficulty caused by the interaction of MAML with batch normalization (see Appendix A for a more extensive discussion on this topic). Moreover, training of MetaHTR requires a good deal of fine-tuning of various hyperparameters to make it work well, which is something that has also been observed for MAML more broadly (Antoniou et al., 2018). Given the modest benefits for writer adaptation (0.7 WER in the best case), combined with the increased model complexity, it can be argued that MetaHTR is perhaps not worth the extra investment for writer adaptation. This is especially true given that when more labeled examples are available, a simpler method, such as transfer learning, may be more cost and time effective. ### Writer codes The results in Table 4 show the limited effectiveness of the writer code idea. We showed that statistical features for characterizing writer identity do not show a benefit over a constant zero vector. The fact that the Hinge feature is designed to be independent of the textual context of the handwriting samples may play a role here (Bulacu and Schomaker, 2007). An option for future work would be to explore features that better characterize the most relevant writer characteristics, such as idiosyncratic letter shapes that are difficult to classify. For example, a Fraglet approach based on shape codebooks (Bulacu and Schomaker, 2007) may capture the individual shape features of a particular handwriting more appropriately (see Fig. 5). A histogram can be compiled by matching codebook prototypes with the character shapes observed for an individual writer, counting the matched codebook entries. The normalized histogram can subsequently be used as a vector representation. One factor which may play an important role here is data volume. For example, consider automatic speech recognition, where the notion of "speaker adaptation" appears to be more common. One facet in which speech and text recognition diverge is the availability of large-scale labeled datasets. Whereas collecting and labeling handwriting samples can be cumbersome and labor-intensive, speech transcriptions are generally easier to obtain. Thus, if data volume is the critical bottleneck for learning robust representations that lend themselves well to adaptation, methods used in speech recognition relying on large-scale datasets may not transfer as well to HTR.

Figure 5: Examples of codebooks that capture shape information based on clustering of character shapes. The codebook entries act as prototypes representative of the types of shapes commonly seen in handwriting. Figure taken from Bulacu and Schomaker (2007).
Indeed, as shown by recent work on large language models (Brown et al., 2020), scale may be a major enabling factor for effective few-shot adaptation. ## 7 Conclusion In this paper, we studied various methods for making neural network-based HTR models writer adaptive. Meta-learning showed the most promising results, with both MAML and MetaHTR leading to improved performance compared to baseline models. However, we showed that only a relatively small portion of these improvements (between 14-39%, or 0.2-0.7 WER) can be attributed to writer adaptation, with most of the improvements coming from changes in the way the neural network is trained. It remains to be seen whether MetaHTR could be used to handle more radical domain shifts, as seen, for example, in historical handwriting. Given the observation that writer adaptation using MetaHTR may work better for deeper models, potential future work may focus on scaling up MetaHTR to deeper models. However, memory and/or computational requirements may become prohibitive in this case. Lastly, results show that writer code-based adaptation using learned features or statistical Hinge features does not lead to increased performance. However, updating batch normalization weights may be an effective way to perform general fine-tuning.
2305.15608
Semantic Segmentation by Semantic Proportions
Semantic segmentation is a critical task in computer vision that aims to identify and classify individual pixels in an image, with numerous applications, for example autonomous driving and medical image analysis. However, semantic segmentation can be highly challenging, particularly due to the need for large amounts of annotated data. Annotating images is a time-consuming and costly process, often requiring expert knowledge and significant effort. In this paper, we propose a novel approach for semantic segmentation that eliminates the need for ground-truth segmentation maps. Instead, our approach requires only rough information on the individual semantic class proportions, shortened to semantic proportions. It greatly simplifies the data annotation process and thus will significantly reduce the annotation time and cost, making it more feasible for large-scale applications. Moreover, it opens up new possibilities for semantic segmentation tasks where obtaining the full ground-truth segmentation maps may not be feasible or practical. Extensive experimental results demonstrate that our approach can achieve performance comparable to, and sometimes even better than, that of the benchmark method relying on the ground-truth segmentation maps. Utilising the semantic proportions suggested in this work offers a promising direction for future research in the field of semantic segmentation.
Halil Ibrahim Aysel, Xiaohao Cai, Adam Prügel-Bennett
2023-05-24T22:51:52Z
http://arxiv.org/abs/2305.15608v1
# Semantic Segmentation by Semantic Proportions ###### Abstract Semantic segmentation is a critical task in computer vision that aims to identify and classify individual pixels in an image, with numerous applications, for example autonomous driving and medical image analysis. However, semantic segmentation can be highly challenging, particularly due to the need for large amounts of annotated data. Annotating images is a time-consuming and costly process, often requiring expert knowledge and significant effort. In this paper, we propose a novel approach for semantic segmentation that eliminates the need for ground-truth segmentation maps. Instead, our approach requires only rough information on the individual semantic class proportions, shortened to semantic proportions. It greatly simplifies the data annotation process and thus will significantly reduce the annotation time and cost, making it more feasible for large-scale applications. Moreover, it opens up new possibilities for semantic segmentation tasks where obtaining the full ground-truth segmentation maps may not be feasible or practical. Extensive experimental results demonstrate that our approach can achieve performance comparable to, and sometimes even better than, that of the benchmark method relying on the ground-truth segmentation maps. Utilising the semantic proportions suggested in this work offers a promising direction for future research in the field of semantic segmentation. ## 1 Introduction Semantic segmentation is widely used in a variety of fields such as autonomous driving [25], medical imaging [1; 28], augmented reality [29] and robotics [18]. Impressive improvements have been shown in those areas with the recent development of deep neural networks (DNNs), benefiting from the availability of extensive annotated segmentation datasets at a large scale [10; 12]. However, creating such datasets can be very expensive and time-consuming due to the usual need to annotate pixel-wise labels: it takes between 54 and 79 seconds per object [3], and thus a couple of minutes per image with a few objects. Moreover, requiring full supervision is rather impractical in some cases, for example medical imaging, where expert knowledge is required. Annotating 3D data for semantic segmentation is even more costly and time-consuming due to the additional complexity and dimensionality of the data, which generally requires voxel (i.e., point in 3D space) annotation. Skilled annotators from outsourcing companies that are dedicated to data annotation may be needed for specific requests to ensure annotation accuracy and consistency, adding further to the cost [11]. Different approaches have been proposed to reduce the fine-grained level (e.g. pixel-wise) annotation costs. One line of research is to train segmentation models in a weakly supervised manner by requiring image-level labels [23; 26], scribbles [15], eye tracks [21] or point supervision [3; 17] rather than costly segmentation masks of individual semantic classes. In contrast, we in this paper propose to utilise the proportion (i.e., percentage information) of each semantic class present in the image for semantic segmentation. For simplicity, we call this type of annotation _semantic (class) proportions_ (SP). To the best of our knowledge, this is the first time SP has been utilised for semantic segmentation. This innovative approach, different from the existing ones, could significantly simplify and reduce the human involvement required for data annotation in semantic segmentation.
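To make the SP annotation concrete: when a ground-truth label map happens to be available (e.g. when constructing benchmarks), the class proportions it induces can be computed as in the sketch below; in the intended use case, SP would instead be estimated roughly by an annotator rather than derived from a mask.

```python
import numpy as np

def semantic_proportions(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Return the SP vector for one image: entries in [0, 1], summing
    to 1. label_map is an (M, H) integer array of per-pixel class ids."""
    counts = np.bincount(label_map.ravel(), minlength=num_classes)
    return counts / counts.sum()
```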
Our proposed semantic segmentation approach by utilising the SP annotation can achieve comparable and sometimes even better performance in comparison to the benchmark method with full supervision utilising the ground-truth segmentation masks, see for example Figure 1. Our main contributions are: i) we propose a new semantic segmentation methodology utilising SP annotations; ii) we conduct extensive experiments on representative benchmark datasets from distinct fields to demonstrate the effectiveness and robustness of the proposed approach; iii) we provide an insightful discussion on semantic segmentation with weakly annotated data and future directions.

## 2 Methodology

The semantic segmentation task is to segment an image into different semantic classes/categories.

**Notation**. Let \(\mathcal{X}\) be a set of images. Without loss of generality, we assume each image in \(\mathcal{X}\) contains no more than \(C\) semantic classes. \(\forall\mathbf{X}_{i}\in\mathcal{X}\), \(\mathbf{X}_{i}\in\mathbb{R}^{M\times H}\), where \(M\times H\) is the image size. Let \(\mathcal{X}_{\mathrm{T}}\subset\mathcal{X}\) and \(\mathcal{X}_{\mathrm{V}}\subset\mathcal{X}\) be the training and validation (test) sets, respectively. Let \(\Omega_{\mathrm{T}}\subset\mathbb{N}\) be the set containing the indexes of the images in \(\mathcal{X}_{\mathrm{T}}\). \(\forall\mathbf{X}_{i}\in\mathcal{X}_{\mathrm{T}}\), annotations are available. The most general annotation is the ground-truth segmentation maps, say \(\{\mathbf{Y}^{*}_{ij}\}_{j=1}^{C}\), for \(\mathbf{X}_{i}\), where each \(\mathbf{Y}^{*}_{ij}\in\mathbb{R}^{M\times H}\) is a binary mask for the semantic class \(j\) in \(\mathbf{X}_{i}\). For simplicity, let \(\mathbf{Y}^{*}_{i}\) be a tensor formed by \(\{\mathbf{Y}^{*}_{ij}\}_{j=1}^{C}\), where its \(j\)-th channel is \(\mathbf{Y}^{*}_{ij}\). Note, importantly, that the ground-truth segmentation maps are not used in our approach for semantic segmentation in this paper unless specifically stated; instead, they are used by the benchmark method for comparison purposes. Analogously, let \(\mathbf{Y}_{i}\) be the predicted segmentation maps following the same format as \(\mathbf{Y}^{*}_{i}\). Let \(\mathbf{\rho}^{*}_{i}=(\rho^{*}_{i1},\cdots,\rho^{*}_{iC})\) be the given SP annotation of image \(\mathbf{X}_{i}\in\mathcal{X}_{\mathrm{T}}\), which will be used to train our approach, where each \(\rho^{*}_{ij}\in[0,1]\) and \(\sum_{j=1}^{C}\rho^{*}_{ij}=1\).

**Loss function**. Two types of loss functions are introduced in the architectures of our method. One is based on the mean squared error (MSE). MSE is commonly used to evaluate the performance of regression models where there are numerical target values to predict. We employ MSE to measure the discrepancy between the ground-truth SP and the predicted ones. For ease of reference, we call this loss function \(\mathcal{L}_{\mathrm{sp}}\) throughout the paper, i.e.,

\[\mathcal{L}_{\mathrm{sp}}=\frac{1}{|\Omega_{\mathrm{T}}|}\sum_{i\in\Omega_{\mathrm{T}}}\|\mathbf{\rho}^{*}_{i}-\mathbf{\rho}_{i}\|^{2}, \tag{1}\]

where \(\mathbf{\rho}_{i}\) is the predicted SP for image \(\mathbf{X}_{i}\in\mathcal{X}_{\mathrm{T}}\) and \(|\Omega_{\mathrm{T}}|\) is the cardinality of the set \(\Omega_{\mathrm{T}}\). The other loss function is defined based on the binary cross-entropy (BCE); see Section 2.2 for details. BCE is a commonly used loss function in binary classification problems and measures the discrepancy between the predicted probabilities and the true binary ones.
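For concreteness, the \(\mathcal{L}_{\mathrm{sp}}\) loss of Eq. (1) is a one-liner in code. Below is a minimal sketch (our illustration in PyTorch; the paper does not prescribe a framework, and the batch-wise tensor layout is an assumption):

```python
import torch

def sp_loss(pred_props: torch.Tensor, true_props: torch.Tensor) -> torch.Tensor:
    """Eq. (1): MSE between predicted and ground-truth semantic proportions.

    pred_props, true_props: (batch, C) tensors whose rows sum to 1.
    """
    # Squared Euclidean distance per image, averaged over the batch
    # (the batch plays the role of Omega_T in Eq. (1)).
    return ((true_props - pred_props) ** 2).sum(dim=1).mean()
```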
Below we define the BCE function for the \(j\)-th semantic class of image \(\mathbf{X}_{i}\) at coordinate (location) \((m,h)\in[1,M]\times[1,H]\) as

\[f_{ij}(m,h)=-\left(\mathbf{Y}^{*}_{ij}[m,h]\log(\mathbf{Y}_{ij}[m,h])+(1-\mathbf{Y}^{*}_{ij}[m,h])\log(1-\mathbf{Y}_{ij}[m,h])\right), \tag{2}\]

where \(\mathbf{Y}_{ij}\) is the predicted segmentation map for the \(j\)-th semantic class of image \(\mathbf{X}_{i}\) and \(\mathbf{Y}_{ij}[m,h]\) is the value of \(\mathbf{Y}_{ij}\) at coordinate \((m,h)\).

Figure 1: Difference between the proposed semantic segmentation approach and benchmark methods.

### Proposed SP-based Semantic Segmentation Architecture

The proposed SP-based semantic segmentation (SPSS) architecture is shown in Figure 2. It contains two main parts, see below. The first part of the SPSS architecture is feature extraction. Employing a convolutional neural network (CNN) is a common approach in current state-of-the-art semantic segmentation methods. In our SPSS, a CNN (or another type of DNN) is utilised as its backbone to extract high-level image features \(\mathbf{Y}_{i}\) from the input image \(\mathbf{X}_{i}\). The second part of the SPSS architecture is a global average pooling (GAP) layer, which takes the image features \(\mathbf{Y}_{i}\) to generate the SP, \(\mathbf{\rho}_{i}\), for the input image \(\mathbf{X}_{i}\). The SPSS architecture is then trained by using the loss function \(\mathcal{L}_{\mathrm{sp}}\) defined in Eq. (1). After training the SPSS architecture, the extracted features \(\mathbf{Y}_{i}\) of the trained CNN are, surprisingly, the prediction of the class-wise segmentation masks; that is how the SPSS architecture performs semantic segmentation by just using the SP rather than the ground-truth segmentation maps. We remark that both parts in the SPSS architecture are well-known and commonly employed for e.g. computer vision tasks. To the best of our knowledge, this is the first time these two parts have been combined for semantic segmentation, reducing the need for labour-intensive (fine-grained) ground-truth segmentation masks to the (coarse-grained) SP level.

### Architecture Enhancement

The proposed SPSS architecture in Figure 2 only uses the SP annotation for semantic segmentation, which is quite cheap in terms of annotation generation. Moreover, SPSS is also very flexible, i.e., it can be enhanced straightforwardly when additional annotation information is available. Below we showcase how to incorporate a few annotated _semantic keypoints_ for some semantic classes (e.g. the minority classes) to enhance the SPSS architecture, see Figure 3. For ease of reference, we call the enhanced architecture in Figure 3 _SPSS+_. Let \(\Lambda^{ij}\subset[1,M]\times[1,H]\) be the set containing a small number of annotated semantic keypoints for the \(j\)-th semantic class of image \(\mathbf{X}_{i}\), i.e., additional annotation information on top of the SP. We remark that the semantic keypoints in \(\Lambda^{ij}\) could be obtained by dilating, for example, only two or three semantic keypoints given by experts. Let \(\Omega_{\mathrm{sk}}\subset\mathbb{N}\) be the set containing the indexes of the images in \(\mathcal{X}_{\mathrm{T}}\) which have annotated semantic keypoints and \(\Omega^{i}_{\mathrm{sc}}\subset\mathbb{N}\) be the set containing the indexes of the semantic classes with annotated semantic keypoints for image \(\mathbf{X}_{i}\).
Note that in this case the limited additional annotation information implies \(|\Omega_{\mathrm{sk}}|\ll|\mathcal{X}_{\mathrm{T}}|\) and \(|\Omega^{i}_{\mathrm{sc}}|\ll C\) for most \(i\in\Omega_{\mathrm{sk}}\). Below we define a new loss, \(\mathcal{L}_{\mathrm{sk}}\), for the annotated semantic keypoints, i.e.,

\[\mathcal{L}_{\mathrm{sk}}=\sum_{i\in\Omega_{\mathrm{sk}}}\sum_{j\in\Omega^{i}_{\mathrm{sc}}}\sum_{(m,h)\in\Lambda^{ij}}f_{ij}(m,h)/|\Lambda^{ij}|, \tag{3}\]

where \(f_{ij}(m,h)\) is defined in Eq. (2). Finally, the total loss for the SPSS+ architecture is

\[\mathcal{L}_{\mathrm{total}}=\alpha\mathcal{L}_{\mathrm{sp}}+(1-\alpha)\mathcal{L}_{\mathrm{sk}}, \tag{4}\]

where \(\alpha\) is an adjustable weight to determine the trade-off between the loss \(\mathcal{L}_{\mathrm{sp}}\) and the loss \(\mathcal{L}_{\mathrm{sk}}\).

Figure 2: The SPSS (SP-based semantic segmentation) architecture. In the training stage, features are first extracted by a CNN from the input; the extracted features are then passed through a GAP layer to calculate the SP. After training using the loss function \(\mathcal{L}_{\mathrm{sp}}\), the proposed SPSS architecture can force the extracted features to be the prediction of the class-wise segmentation masks.

The SPSS+ architecture (in Figure 3) uses the loss \(\mathcal{L}_{\mathrm{total}}\), which considers the annotations of the SP and semantic keypoints, to train the CNN backbone. Similar to the SPSS architecture (in Figure 2), the extracted features \(\boldsymbol{Y}_{i}\) of the trained CNN in the SPSS+ architecture are the prediction of the class-wise segmentation masks, i.e., the semantic segmentation results. Our SPSS can generally achieve comparable performance against benchmark semantic segmentation methods. Moreover, we find that for quite challenging semantic segmentation problems, for example those with severe semantic class imbalance, our SPSS+ can still perform effectively. Please see Section 3 for more details regarding validation and comparison.

## 3 Experiments

The proposed SP-based methodology for semantic segmentation is trained and tested on two benchmark datasets. The details of the data, implementation and experimental results are given below.

**Data.** Satellite images of Dubai, i.e., Aerial Dubai, is an open-source aerial imagery dataset presented as part of a Kaggle competition2. The dataset includes 8 tiles and each tile has 9 images of various sizes and their corresponding ground-truth segmentation masks for 6 classes, _i.e., building, land, road, vegetation, water and unlabeled_. The other dataset used in this work is the Electron Microscopy dataset3, which is a binary segmentation problem and contains 165 slices of microscopy images of size \(768\times 1024\). The primary aim for this medical dataset is to identify and classify mitochondria pixels. This dataset is quite challenging since its semantic classes are severely imbalanced, i.e., the size of the mitochondria in most slices is very small (e.g. see Figure 6).

Footnote 2: [https://www.kaggle.com/datasets/humansintheloop/semantic-segmentation-of-aerial-imagery](https://www.kaggle.com/datasets/humansintheloop/semantic-segmentation-of-aerial-imagery)

Footnote 3: [https://www.epfl.ch/labs/cvlab/data/data-em/](https://www.epfl.ch/labs/cvlab/data/data-em/)

**Implementation setup**.

* Both datasets include large images.
They are cropped into smaller patches, and we obtain 1647 patches of size \(224\times 224\times 3\) and 1980 patches of size \(256\times 256\) for the Aerial Dubai and Electron Microscopy datasets, respectively.
* The CNN backbone utilised in our SPSS and SPSS+ architectures is a modified version of U-Net [24], which involves four blocks in the contracting path with an increasing number of filters. Each block has two convolutional layers with \(3\times 3\) filters and ReLU activation. Each block then reduces the size of its input to half by a \(2\times 2\) max-pooling layer. The expansive path also has four blocks but with a decreasing number of filters. Each block involves two transpose convolution layers with a \(3\times 3\) filter and ReLU activation. There are also four concatenation operations that stack the resulting components of each of the four blocks in the contracting path with its same-size counterpart in the expansive path. Finally, a \(1\times 1\) convolutional layer with \(n\) filters and softmax activation is employed to match the number \(C\) of semantic classes. Thus \(n\) is set to 6 and 1 to output feature maps of size \(224\times 224\times 6\) and \(256\times 256\times 1\), respectively, for the Aerial Dubai and Electron Microscopy datasets. Note that there is no need to set \(n\) to 2 for the binary segmentation dataset Electron Microscopy.
* A GAP layer is then applied to these resulting feature maps to obtain 6 and 1 values, i.e., the predicted SP, for the Aerial Dubai and Electron Microscopy datasets, respectively. To obtain segmentation maps during the test stage, we extract the feature maps prior to the GAP layer and visualise them per semantic class (see e.g. Figure 5 for visualisation).
* For all experiments, an 80/20 training/test split, the Adam optimizer with a learning rate of \(10^{-3}\), and a batch size of 16 are chosen. The number of epochs is set to 100, with early stopping applied with patience set to 10 based on the validation loss. All the experiments were implemented on a personal laptop with the following specifications: i7-8750H CPU, GeForce GTX 1060 GPU and 16GB RAM. Training of SPSS and SPSS+ takes around 30 minutes and 20 minutes, respectively.

Figure 3: The SPSS+ architecture (_cf._ the SPSS architecture in Figure 2). In the training stage, features are first extracted by a CNN from the input; the extracted features are then passed through a GAP layer to calculate the SP. After training using the loss function \(\mathcal{L}_{\mathrm{total}}\) (see Eq. (4)), the SPSS+ architecture can force the extracted features to be the prediction of the class-wise segmentation masks.

We emphasise that our main aim here is to show that semantic segmentation can be achieved with significantly weaker annotations, i.e., the SP annotation. We do not focus on performance improvement compared to the _benchmark method_, i.e., the CNN backbone (i.e., U-Net [24]) trained end-to-end for semantic segmentation using the ground-truth segmentation masks, which are generally much more expensive to annotate compared to the SP that our proposed SPSS and SPSS+ methods utilise. To make a fair comparison, the same training images are used to train all the models. Given our limited resources, the image sizes mentioned above are the biggest we could try when the batch size is set to 16. Therefore, all the models are likely to improve their performances with bigger image sizes and hyper-parameter fine-tuning.
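To summarise the pipeline described above, here is a schematic PyTorch sketch of the SPSS forward pass (the `backbone` placeholder stands in for the modified U-Net; all names are our own assumptions, not the authors' released code):

```python
import torch
import torch.nn as nn

class SPSS(nn.Module):
    """Backbone -> per-class feature maps -> softmax -> GAP -> predicted SP."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # e.g. a U-Net returning (B, C, M, H) logits

    def forward(self, x: torch.Tensor):
        logits = self.backbone(x)            # (B, C, M, H)
        maps = torch.softmax(logits, dim=1)  # per-pixel class scores Y_i
        props = maps.mean(dim=(2, 3))        # GAP over spatial dims -> rho_i, shape (B, C)
        return maps, props                   # maps double as predicted segmentation masks

# Training step with the SP annotation only (SPSS), assuming sp_loss from the
# earlier sketch; for SPSS+, add the keypoint BCE term weighted by alpha (Eq. 4):
# maps, props = model(images)
# loss = sp_loss(props, true_props)
```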
Note again that the difference between SPSS and SPSS+ is just the way of using the annotations for their training, i.e., SPSS+ addresses the scenario where, besides the SP, a few more cheap annotations are available. Figure 4 illustrates the difference by utilising the SPSS and SPSS+ architectures on the datasets Aerial Dubai and Electron Microscopy, respectively.

### Semantic Segmentation Performance Comparison

**Quantitative comparison**. Table 1 and Table 2 give the quantitative results of our method and the benchmark method for the Aerial Dubai and Electron Microscopy datasets, respectively. Well-known evaluation metrics, i.e., mean intersection over union (Mean IoU) scores, per-class F1 scores and mean accuracy, are employed. Variances are obtained by training the models three times with randomly initialised weights. Tables 1 and 2 show that our model performs comparably to the benchmark segmentation method in both tasks; particularly for the challenging Electron Microscopy dataset, the mean accuracy of our method is just \(\sim 1\%\) less than that of the benchmark method, demonstrating the great performance of our methods and the validity of utilising the SP annotation for semantic segmentation that our methodology introduces.

Figure 4: Diagrams of the proposed models SPSS and SPSS+ on the datasets Aerial Dubai (_left_) and Electron Microscopy (_right_), respectively.

**Qualitative comparison**. Figure 5 shows the qualitative results of our method and the benchmark method for the Aerial Dubai dataset. Surprisingly, the class-wise segmentation maps that our method achieves (middle of Figure 5) are visually significantly better than those of the benchmark method (right of Figure 5) in terms of the binarisation ability, indicating the effectiveness of the loss \(\mathcal{L}_{\mathrm{sp}}\) (defined in Eq. (1)) using the SP annotation introduced in our method. For the severely class-imbalanced Electron Microscopy dataset, Figure 6 shows the qualitative results of our method and the benchmark method for some challenging cases. Again, our method exhibits superior performance against the benchmark method. For example, our method accurately segments the mitochondria in the top-left corner of the second image despite employing much less annotation, but the benchmark method completely misses it even though trained using the ground-truth segmentation masks. This again validates the soundness of utilising the SP annotation for semantic segmentation that our methodology introduces. Most importantly, due to the great binarisation ability of the loss function introduced in our method using SP, it may serve as an auxiliary loss even in scenarios where ground-truth segmentation masks are available, so as to enhance the semantic segmentation performance of many existing methods.

### Sensitivity Analysis

The task of obtaining highly precise SP annotations may be challenging, and as a result, annotators may provide rough estimates instead. Below we investigate the robustness of our models with respect to the quality of the SP. Two extreme ways of degrading the SP are examined: one adds noise to the SP directly, and the other assigns the same SP to all images in individual clusters.
\begin{table}
\begin{tabular}{c||c|ccccc|c}
\hline
Model & Mean IoU & \multicolumn{5}{c|}{Per-class F1 score} & Mean accuracy \\
\hline \hline
_Benchmark_ & \(52.3\pm 1.3\) & \(75.2\pm 0.9\) & \(84.5\pm 1.3\) & \(68.3\pm 1.2\) & \(67.7\pm 0.6\) & \(90.1\pm 0.4\) & \(82.3\pm 0.5\) \\
_SPSS (ours)_ & \(45.4\pm 0.9\) & \(63.5\pm 1.1\) & \(85.4\pm 0.8\) & \(46.1\pm 1.1\) & \(61.2\pm 0.8\) & \(89.5\pm 0.5\) & \(75\pm 0.7\) \\
\hline
\end{tabular}
\end{table}
Table 1: Quantitative semantic segmentation results on the Aerial Dubai dataset.

\begin{table}
\begin{tabular}{c||c|cc|c}
\hline
Model & Mean IoU & \multicolumn{2}{c|}{Per-class F1 score} & Mean accuracy \\
\hline \hline
_Benchmark_ & \(69.2\pm 0.3\) & \(99.1\pm 0.2\) & \(81.3\pm 0.5\) & \(98.4\pm 0.2\) \\
_SPSS+ (ours)_ & \(65.3\pm 0.8\) & \(98.1\pm 0.8\) & \(79.3\pm 0.3\) & \(97.7\pm 0.8\) \\
\hline
\end{tabular}
\end{table}
Table 2: Quantitative semantic segmentation results on the Electron Microscopy dataset.

Figure 5: Qualitative semantic segmentation comparison between our SPSS method (_middle_) and the benchmark method (_right_) on the Aerial Dubai dataset.

**SP degraded by Gaussian noise**. We firstly conduct the sensitivity analysis of our method by systematically adding Gaussian noise to the SP. Let \(\mathcal{N}(0,\sigma)\) be the normal distribution with mean 0 and standard deviation \(\sigma\). For the given SP \(\boldsymbol{\rho}_{i}^{*}=(\rho_{i1}^{*},\cdots,\rho_{iC}^{*})\) of each \(\boldsymbol{X}_{i}\in\mathcal{X}_{\text{T}}\), let \(\boldsymbol{\tilde{\rho}}_{i}^{*}=(\tilde{\rho}_{i1}^{*},\cdots,\tilde{\rho}_{iC}^{*})\), where

\[\tilde{\rho}_{ij}^{*}=\rho_{ij}^{*}+\mathcal{N}(0,\sigma),\;\;j=1,\cdots,C. \tag{5}\]

Then the softmax operator is used to normalise \(\boldsymbol{\tilde{\rho}}_{i}^{*}\), and the normalised \(\boldsymbol{\tilde{\rho}}_{i}^{*}\) is used as the new SP to train our model. Here the standard deviation \(\sigma\) controls the level of the Gaussian noise being added to the SP; e.g. \(\sigma=0.1\) represents \(10\%\) Gaussian noise. Table 3 showcases the robustness of our methodology, as it continues performing well even with the SP degraded by quite high levels of noise; e.g., the Mean IoU our method achieves only drops \(4\%\) when \(10\%\) Gaussian noise is added to the SP. Our method can still work to some extent even with the SP degraded by \(50\%\) Gaussian noise. This shows that our method is indeed quite robust with respect to the SP, which means the annotators could in practice spend much less effort providing a rough SP rather than the precise SP.

**SP degraded by clustering**. We now conduct the sensitivity analysis of our method by degrading the SP of the training images via clustering. The degradation procedure is: i) clustering the set of the given SP, i.e., \(\{\boldsymbol{\rho}_{i}^{*}\}_{i\in\Omega_{\text{T}}}\), into \(K\) clusters by \(K\)-means; ii) clustering the training set \(\mathcal{X}_{\text{T}}\) into the same \(K\) clusters, say \(\mathcal{X}_{\text{T}}^{k},k=1,\ldots,K\), corresponding to the SP clusters; and iii) assigning all the training images in cluster \(\mathcal{X}_{\text{T}}^{k}\) the same SP, which is randomly selected from the SP of one image in this cluster; see Figure 7 for an illustration.
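The two degradation schemes can be sketched as follows (our own NumPy/scikit-learn illustration; function names are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

def degrade_by_noise(sp: np.ndarray, sigma: float, rng=np.random.default_rng(0)):
    """Eq. (5): add Gaussian noise to each SP vector, then softmax-normalise."""
    noisy = sp + rng.normal(0.0, sigma, size=sp.shape)
    e = np.exp(noisy - noisy.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

def degrade_by_clustering(sp: np.ndarray, k: int, rng=np.random.default_rng(0)):
    """Steps i)-iii): cluster the SP vectors with K-means and give every image
    in a cluster the SP of one randomly chosen member of that cluster."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(sp)
    degraded = sp.copy()
    for c in range(k):
        members = np.flatnonzero(labels == c)
        degraded[members] = sp[rng.choice(members)]
    return degraded
```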
Obviously, with this way of degrading the SP, the SP of all the images in the training set \(\mathcal{X}_{\text{T}}\) are changed except for \(K\) (i.e., the number of clusters) images. The smaller the number \(K\), the more severe the SP degradation. The performance of our method with the SP degraded by clustering is shown in Table 4, indicating again the robustness of our methodology with respect to the SP. For example, after just using \(K=100\) images' SP for the whole training set \(\mathcal{X}_{\text{T}}\), the Mean IoU of our method only drops 1%; and just using \(K=5\) images' SP for the whole training set, our method can still work to some extent (i.e., the Mean IoU drops by less than half). This again shows that our method is indeed quite robust with respect to the SP. Most importantly, this also suggests that the annotators could in practice just provide rough SP for some representative images from the whole training set rather than the SP for the whole training set, which will further significantly reduce the annotation effort.

\begin{table}
\begin{tabular}{c||c|c}
\hline
Data & \# Clusters & Mean IoU \\
\hline \hline
 & 100 & 44 \\
 & 50 & 42 \\
Aerial & 30 & 41 \\
Dubai & 20 & 37 \\
 & 10 & 36 \\
 & 5 & 27 \\
\hline
\end{tabular}
\end{table}
Table 4: Performance of our model trained by using the SP degraded by clustering.

Figure 6: Qualitative semantic segmentation comparison between our SPSS+ method (_upper_) and the benchmark method (_lower_) on some images from the Electron Microscopy dataset.

\begin{table}
\begin{tabular}{c||c|c}
\hline
Data & Noise (\%) & Mean IoU \\
\hline \hline
 & 0 & 45 \\
 & 5 & 42 \\
 & 10 & 41 \\
Aerial & 15 & 38 \\
Dubai & 20 & 32 \\
 & 30 & 29 \\
 & 40 & 28 \\
 & 50 & 26 \\
\hline
\end{tabular}
\end{table}
Table 3: Performance of our model trained by using the SP degraded by Gaussian noise.

## 4 Discussion and Limitations

**SP (semantic proportions)**. SP for each training image is required as annotation/label information for the presented semantic segmentation model. In this work, we obtained these proportions from the segmentation maps available for the chosen datasets just to demonstrate the effectiveness and robustness of our proposed SP-based methodology. We would like to stress that the reason why we benefited from the existing segmentation maps, which may seem contradictory to our main aim at first glance, is to show that the proposed methodology is feasible in the presence of SP. Arguably, the most accurate proportions can be extracted from these ground-truth segmentation maps if they are annotated properly. Therefore, obtaining SP from the readily available maps to achieve our aim is sensible. Clearly, our goal is to train our proposed model when the segmentation maps are unavailable. It is evident that obtaining the SP annotation could be much cheaper than obtaining the precise segmentation maps, particularly for high-dimensional data. There are various ways to obtain SP readily in the absence of the segmentation maps, such as by employing crowdsourcing services (e.g., Amazon Mechanical Turk) or utilising pre-trained large language models, e.g., ChatGPT [20].
The results that we present in Section 3 are promising, and one may wonder if the exact proportions are a must, which would make the proposed setting as expensive as the traditional one. To demonstrate that this is not the case and that our methodology only needs rough SP, we presented a sensitivity analysis regarding SP, where we added various amounts of noise to the extracted SP and demonstrated that the model performs satisfactorily well when trained with noisy SP. We also presented a sensitivity analysis investigating SP degraded by clustering to further support the robustness of our methodology when the precise SP is unavailable. The analysis suggests that our methodology not only works well with rough SP, but also with rough SP for only some representative images from the whole training set, indicating that it needs significantly less annotation effort.

**Additional annotations**. In many scenarios, different types of annotations may exist. This raises the question of whether it is feasible for semantic segmentation methods to use a combination of different types of annotations to boost their performance. In this regard, our proposed semantic segmentation methodology based on SP delivers quite promising results. For datasets where the ground-truth segmentation maps are available, the SP annotation can be calculated directly from them. This naturally augments the annotation types from one to two already. Our proposed model, i.e., SPSS+, can directly utilise both annotation types. The results shown in Section 3 on the Electron Microscopy dataset (with significant class imbalance) demonstrated the great performance of SPSS+ by just using several annotated points from the full ground-truth segmentation maps. The enhanced performance of our method by utilising both annotation types may benefit from our introduced loss function \(\mathcal{L}_{\mathrm{total}}\) in Eq. (4). It contains the \(\mathcal{L}_{\mathrm{sp}}\) loss defined in Eq. (1), which measures the MSE between the predicted SP and the given SP. The visualisation results in Figure 5 showed that our \(\mathcal{L}_{\mathrm{sp}}\) loss is much better than the loss directly measuring the segmentation maps (which the benchmark method uses) in terms of the binarisation ability. Therefore, combining the \(\mathcal{L}_{\mathrm{sp}}\) loss with the \(\mathcal{L}_{\mathrm{sk}}\) loss to form the \(\mathcal{L}_{\mathrm{total}}\) loss could boost the semantic segmentation performance, e.g. see the visualisation given in Figure 6.

Figure 7: Diagram of the SP annotation degraded by clustering. Images are clustered corresponding to the SP clusters, which are obtained by applying \(K\)-means on the SP set. An SP annotation for one image in each image cluster is then randomly selected from that cluster and is assigned to all the images in that image cluster. Therefore, after degradation, only \(K\) images have their original SP annotation if we assume every training image has a different SP annotation in the original SP set.

On the whole, we in this work proposed a new semantic segmentation methodology by introducing the SP annotation. In the scenario of quite limited annotation, using SP for semantic segmentation can already achieve competitive results. If additional annotations are available, our method can easily utilise them for a performance boost.
On the other hand, for existing segmentation methods that use different types of annotations, we also suggest involving SP in these methods; e.g., our proposed \(\mathcal{L}_{\mathrm{sp}}\) loss could serve as a type of regularisation given its effectiveness in binarisation.

## 5 Related Work

**Supervision levels in semantic segmentation**. In recent years, more and more researchers have focused on reducing the annotation cost for semantic segmentation tasks. One such approach is to use weakly supervised learning techniques that require less precise or less expensive forms of supervision. For instance, Wei et al. [26] proposed a method that utilises image-level labels, the work in [22; 8] uses bounding boxes, and the methods in [15; 14] feed scribbles as labels instead of precise annotations to conduct semantic segmentation. Those approaches can significantly reduce the annotation cost, as they require less manual effort to annotate the data. However, there is always a trade-off between the annotation cost and the model performance, i.e., models trained with higher levels of supervision generally perform better than weakly supervised models. Active learning is an alternative approach to reduce the annotation cost by selecting the most informative samples to annotate based on the current model's uncertainty. With the selected most informative samples, active learning can reduce the amount of data that needs to be labelled, thus reducing the annotation cost [19; 27]. It is worth mentioning that this is actually similar to the approach we used when degrading the SP by clustering. Reducing the annotation cost could also be achieved by generating synthetic data that can be used to augment the real-world data [7]. Synthetic data can be generated using e.g. computer graphics or other techniques to simulate realistic images and labels.

**DNNs for semantic segmentation**. Long et al. [16] made a breakthrough by proposing fully convolutional networks (FCNs) for semantic segmentation. FCNs utilise CNNs to transform input images into a probability map, where each entry of the probability map represents the likelihood of the corresponding image pixel belonging to a particular class. This approach allows the model to learn spatial features and eliminates the need for hand-crafted features. Following FCN, several variants have been proposed to improve the segmentation performance. For example, SegNet [2] is a modification of FCN employing an encoder-decoder architecture to achieve better performance; and DeepLab [6] introduced a novel technique called atrous spatial pyramid pooling to capture multi-scale information from the input image. U-Net [24], the backbone used in our proposed methodology, is a type of CNN consisting of a contracting path and an expansive path. The skip connections in U-Net allow the network to retain and reuse high-level feature representations learned in the contracting path, helping to improve segmentation accuracy. The U-Net architecture has been widely used for biomedical image segmentation tasks such as cell segmentation [13], organ segmentation [5] and lesion detection [9; 4], due to its ability to accurately segment objects within images while using relatively few training samples. Furthermore, its modular architecture and efficient training make it adaptable to a wide range of segmentation tasks. Therefore, to demonstrate our methodology utilising SP, we employ a modified and relatively basic version of the U-Net architecture as the backbone of our models.
## 6 Conclusion Semantic segmentation methodologies generally require costly annotations such as the ground-truth segmentation masks in order to achieve satisfying performance. Motivated by reducing the annotation time and cost for semantic segmentation, we in this paper presented a new methodology - SPSS - which relies on the SP annotation instead of the costly ground-truth segmentation maps. Extensive experiments validated the great potential of the proposed methodology in reducing the time and cost required for annotation, making it more feasible for large-scale applications. Furthermore, this innovative design opens up new opportunities for semantic segmentation tasks where obtaining the full ground-truth segmentation maps may not be feasible or practical. We believe that the use of the SP annotation suggested in this paper offers a new and promising avenue for future research in the field of semantic segmentation, and wide real-world applications are evident.
2303.00835
Bayesian inference for the Net Promoter Score
The Net Promoter Score is a simple measure used by several companies as an indicator of customer loyalty. Studies that address the statistical properties of this measure are still scarce and none of them considered the sample size determination problem. We adopt a Bayesian approach to provide point and interval estimators for the Net Promoter Score and discuss the determination of the sample size. Computational tools were implemented to use this methodology in practice. An illustrative example with data from financial services is also presented.
Eliardo G. Costa, Rachel Tarini Q. Ponte
2023-03-01T21:47:05Z
http://arxiv.org/abs/2303.00835v1
# Bayesian inference for the Net Promoter Score

###### Abstract

The Net Promoter Score is a simple measure used by several companies as an indicator of customer loyalty. Studies that address the statistical properties of this measure are still scarce and none of them considered the sample size determination problem. We adopt a Bayesian approach to provide point and interval estimators for the Net Promoter Score and discuss the determination of the sample size. Computational tools were implemented to use this methodology in practice. An illustrative example with data from financial services is also presented.

_Keywords:_ customer loyalty, multinomial distribution, Dirichlet distribution, sample size, average length criterion.

## 1 Introduction

Reichheld (2003) proposed a statistic called the Net Promoter Score (NPS) that may be used by a company as an indicator of customer loyalty. The author applied a questionnaire with some questions related to loyalty to a sample of customers of some industries, and with the purchase history of each customer it was possible to determine which questions had the strongest statistical correlation with repeat purchases or referrals. One of these questions performed better in most industries: "How likely is it that you would recommend [company X] to a friend or colleague?". Reichheld (2003) suggested that the response to this question be given on a 0-to-10 rating scale. Customers who respond with 9 or 10 are considered "promoters", those who respond with 7 or 8 "passives", and those who respond with 0 through 6 "detractors". The idea is that the more "promoters" company X has, the bigger its growth. An estimate of the NPS is computed as the difference between the proportions (or percentages) of "promoters" and "detractors". Keiningham et al. (2008) discuss the claims that the NPS is the single most reliable indicator of a company's ability to grow, and that it is a superior metric to customer satisfaction. Rocks (2016) presents a brief summary of some critiques of the NPS; see references therein. In the context of statistical modeling, Rocks (2016) focuses on interval estimation for the NPS in a frequentist approach via Wald intervals and score methods. Also, the author performs a simulation study to assess the coverage probability of the proposed interval estimates, and concludes that variations of the adjusted Wald and an iterative score method performed better. Markoulidakis et al. (2021) approach the customer experience as an NPS classification problem via machine learning algorithms. We may also cite Eskildsen & Kristensen (2011) and Kristensen & Eskildsen (2014) for related work. Studies that address the statistical properties of this measure are still scarce and none of them, to the best of our knowledge, considered the sample size determination problem. In this context, we propose a Bayesian model in order to make inference for the NPS and to establish a sample size determination methodology. See Rossi & Allenby (2003) for an exposition of the usefulness of Bayesian methods in marketing. In Section 2, we describe the Bayesian model and the methodologies to obtain point and interval estimates for the NPS. The problem of minimum sample size determination is discussed and implemented in Section 3. In Section 4 we present an illustrative example with data on financial services.
We conclude with some remarks in Section 5.

## 2 Bayesian model

Let \(\mathbf{\theta}=(\theta_{1},\theta_{2},\theta_{3})\), where \(\theta_{1},\theta_{2}\) and \(\theta_{3}\) are the proportions of detractors, passives and promoters in the customer population, respectively. Then, the NPS in the respective population is given by \(\Delta=\theta_{3}-\theta_{1}\), the parameter of interest. In a sample of \(n\) customers we count the number of customers in each category based on their responses to the aforementioned question. Let \(\mathbf{X}_{n}=(X_{1},X_{2},X_{3})\), where \(X_{1},X_{2}\) and \(X_{3}\) are the numbers of customers categorized as detractors, passives and promoters, respectively, in the customer sample. Given \(\mathbf{\theta}\), we assume a multinomial distribution for the counts \(\mathbf{X}_{n}\), and we denote \(\mathbf{X}_{n}|\mathbf{\theta}\sim\mathrm{Mult}(n,\mathbf{\theta})\). The respective probability distribution is given by

\[\mathbb{P}\left[X_{1}=x_{1},X_{2}=x_{2},X_{3}=x_{3}\right]=\frac{n!}{x_{1}!x_{2}!x_{3}!}\theta_{1}^{x_{1}}\theta_{2}^{x_{2}}\theta_{3}^{x_{3}},\]

where \(x_{1},x_{2},x_{3}=0,1,\ldots,n\) such that \(x_{1}+x_{2}+x_{3}=n\), and \(\theta_{1}+\theta_{2}+\theta_{3}=1\). The natural (conjugate) choice for the prior distribution of \(\mathbf{\theta}\) is a Dirichlet distribution; we denote \(\mathbf{\theta}\sim\mathrm{Dir}(\mathbf{\alpha})\) and the respective probability density function is given by

\[\pi(\mathbf{\theta})=\frac{\Gamma(\alpha_{1}+\alpha_{2}+\alpha_{3})}{\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(\alpha_{3})}\theta_{1}^{\alpha_{1}-1}\theta_{2}^{\alpha_{2}-1}\theta_{3}^{\alpha_{3}-1},\]

where \(\theta_{1}+\theta_{2}+\theta_{3}=1\), \(\mathbf{\alpha}=(\alpha_{1},\alpha_{2},\alpha_{3})\) is a vector of positive hyperparameters and \(\Gamma(\cdot)\) is the gamma function. The model may be written hierarchically as follows:

\[\mathbf{X}_{n}|\mathbf{\theta}\sim\mathrm{Mult}(n,\mathbf{\theta});\quad\mathbf{\theta}\sim\mathrm{Dir}(\mathbf{\alpha}). \tag{1}\]

In this setting, given an observation \(\mathbf{x}_{n}\) of \(\mathbf{X}_{n}\), we have that the posterior distribution for \(\mathbf{\theta}\) is a Dirichlet distribution with parameter \(\mathbf{\alpha}+\mathbf{x}_{n}\), _i.e._, \(\mathbf{\theta}|\mathbf{x}_{n}\sim\mathrm{Dir}(\mathbf{\alpha}+\mathbf{x}_{n})\) (Turkman et al., 2019). Also, Bayesian updating becomes straightforward since the current parameters of the posterior distribution may be used as the hyperparameters of the prior distribution in the next sampling of \(\boldsymbol{X}_{n}\). Given a way to generate random values from the Dirichlet distribution, this provides us with a simple way to draw values from the posterior distribution of \(\Delta\) in order to obtain, approximately, posterior summaries such as the mean, median, variance, quantiles, etc., and to make inferences about the NPS. An algorithm to obtain a sample of size \(N\) from the posterior distribution of \(\Delta\) is outlined as follows.

1. Set the values of \(\boldsymbol{\alpha}\), \(\boldsymbol{x}_{n}\) and \(N\) (_e.g._, \(N=1000\)).
2. Draw a value of \(\boldsymbol{\theta}=(\theta_{1},\theta_{2},\theta_{3})\) from the Dirichlet distribution with parameter \(\boldsymbol{\alpha}+\boldsymbol{x}_{n}\).
3. Compute \(\Delta=\theta_{3}-\theta_{1}\) and keep this value.
4. Repeat Steps 2-3 \(N\) times.

It is well known that the marginal distributions of a Dirichlet distribution are beta distributions.
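Steps 1-4 above translate directly into a few lines of code; a minimal NumPy sketch (our illustration, not the authors' released code):

```python
import numpy as np

def sample_nps_posterior(alpha, x_n, N=1000, rng=None):
    """Draw N values of Delta = theta_3 - theta_1 from the Dir(alpha + x_n) posterior."""
    rng = rng or np.random.default_rng(0)
    theta = rng.dirichlet(np.asarray(alpha) + np.asarray(x_n), size=N)  # (N, 3) draws
    return theta[:, 2] - theta[:, 0]

# Hypothetical counts (detractors, passives, promoters) with a uniform prior:
delta = sample_nps_posterior([1, 1, 1], [50, 30, 70], N=10_000)
print(delta.mean(), np.quantile(delta, [0.025, 0.975]))
```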
Let \(\boldsymbol{\alpha}^{*}=\boldsymbol{\alpha}+\boldsymbol{x}_{n}=(\alpha_{1}^{*},\alpha_{2}^{*},\alpha_{3}^{*})^{\top}\). Then, it follows that

\[\theta_{1}|\boldsymbol{x}_{n}\sim\mathrm{Beta}(\alpha_{1}^{*},\alpha_{2}^{*}+\alpha_{3}^{*})\quad\text{and}\quad\theta_{3}|\boldsymbol{x}_{n}\sim\mathrm{Beta}(\alpha_{3}^{*},\alpha_{1}^{*}+\alpha_{2}^{*}),\]

which gives us that the mean of the posterior distribution of the NPS is

\[\mathbb{E}\left[\Delta\mid\boldsymbol{x}_{n}\right]=\mathbb{E}\left[\theta_{3}-\theta_{1}\mid\boldsymbol{x}_{n}\right]=\frac{\alpha_{3}^{*}-\alpha_{1}^{*}}{\alpha_{0}^{*}}, \tag{2}\]

where \(\alpha_{0}^{*}=\alpha_{1}^{*}+\alpha_{2}^{*}+\alpha_{3}^{*}\). This mean may be used as a point estimator for the NPS. The respective variance is given by

\[\mathrm{Var}\left[\Delta\mid\boldsymbol{x}_{n}\right]=\mathrm{Var}\left[\theta_{3}\mid\boldsymbol{x}_{n}\right]+\mathrm{Var}\left[\theta_{1}\mid\boldsymbol{x}_{n}\right]-2\,\mathrm{Cov}(\theta_{3},\theta_{1}\mid\boldsymbol{x}_{n})=\frac{\alpha_{1}^{*}\alpha_{2}^{*}+\alpha_{2}^{*}\alpha_{3}^{*}+4\alpha_{1}^{*}\alpha_{3}^{*}}{(\alpha_{0}^{*})^{2}(\alpha_{0}^{*}+1)}. \tag{3}\]

A credible interval that we may construct is based on (2) and (3), _i.e._, \(\mathbb{E}\left[\Delta\mid\boldsymbol{x}_{n}\right]\pm\gamma\sqrt{\mathrm{Var}\left[\Delta\mid\boldsymbol{x}_{n}\right]}\), where \(\gamma\) is a fixed constant. We developed an Excel spreadsheet that computes this credible interval and a point estimate based on (2) and (3). See the Supplementary Material for more details. Another credible interval may be specified by the highest posterior density (HPD) interval. In this case we use a Monte Carlo approach to approximate the HPD interval. In other words, we use a sample drawn from the posterior distribution of \(\Delta\), which may be easily done since the posterior distribution is a Dirichlet distribution. See Turkman et al. (2019, pgs. 47-48) for more details.

## 3 Minimum sample size

To determine the minimum sample size required to estimate \(\Delta\) with a pre-specified precision, we consider a criterion based on the average length of credible intervals. The posterior credible interval accounts for the magnitude of the NPS, and this may help the company to know when to perform a gap analysis and create a business action plan in order to improve the NPS, _i.e._, increase the NPS until the company has more promoters than detractors (\(\Delta>0\)). Let \(a(\boldsymbol{x}_{n})\) and \(b(\boldsymbol{x}_{n})\) be the lower and upper bounds of the HPD interval for \(\Delta\). The rationale here is to set the minimum Bayesian coverage probability \(1-\rho\) and obtain the minimum sample size by requiring that the length of the HPD interval \(\ell(\boldsymbol{x}_{n})=b(\boldsymbol{x}_{n})-a(\boldsymbol{x}_{n})\) be such that

\[\int_{\mathcal{X}}\ell(\boldsymbol{x}_{n})g(\boldsymbol{x}_{n})\,d\boldsymbol{x}_{n}\leq\ell_{\max}, \tag{4}\]

where \(\ell_{\max}\) is the maximum admissible length for the HPD interval, \(\mathcal{X}\) is the sample space associated to \(\boldsymbol{x}_{n}\) and \(g(\boldsymbol{x}_{n})\) is the marginal probability function of the outcomes. This is called the average length criterion (ALC). See Costa et al. (2021) and references therein for more details about this criterion.
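The point estimator (2) and the moment-based credible interval translate into closed-form code; a minimal sketch (ours; \(\gamma=1.96\) is only an illustrative choice of the fixed constant):

```python
import numpy as np

def nps_point_and_interval(alpha, x_n, gamma=1.96):
    """Posterior mean (Eq. 2), variance (Eq. 3) and the mean +/- gamma*sd interval."""
    a1, a2, a3 = np.asarray(alpha, dtype=float) + np.asarray(x_n, dtype=float)
    a0 = a1 + a2 + a3
    mean = (a3 - a1) / a0
    var = (a1 * a2 + a2 * a3 + 4.0 * a1 * a3) / (a0**2 * (a0 + 1.0))
    half = gamma * np.sqrt(var)
    return mean, (mean - half, mean + half)
```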
Since it is impractical to obtain analytically the lower and upper bounds of the HPD interval for \(\Delta\), we use a Monte Carlo approach (Chen & Shao, 1999) to obtain the respective bounds as well as the respective integral. An algorithm to obtain the minimum sample size satisfying this criterion is outlined as follows.

1. Set values for \(\ell_{\max}\), \(\boldsymbol{\alpha}\), \(\rho\) and take \(n=1\).
2. Draw a sample of size \(L\) (_e.g._, \(L=1000\)) of \(\boldsymbol{x}_{n}\); to draw \(\boldsymbol{x}_{n}\), first draw one value of \(\boldsymbol{\theta}\) from the Dirichlet distribution with parameter \(\boldsymbol{\alpha}\) and, given this value, draw \(\boldsymbol{x}_{n}\) from the multinomial distribution with parameter \(\boldsymbol{\theta}\).
3. Obtain the HPD interval of probability \(1-\rho\) for each \(\boldsymbol{x}_{n}\) that was drawn and then the respective interval length: for each value drawn in Step 2, obtain the lower and upper bounds of the HPD interval of probability \(1-\rho\) as indicated in Chen & Shao (1999). Then, compute the difference between the upper and lower bounds for each value drawn in order to obtain the interval lengths.
4. Compute the average of the \(L\) HPD interval lengths.
5. If this average is lower than or equal to \(\ell_{\max}\), stop; the value of \(n\) obtained in this step is the required value. Otherwise, set \(n=n+1\) and return to Step 2.

We developed an R package (R Core Team, 2022) which provides a function to obtain point and interval estimates via Monte Carlo simulation as discussed in the previous section. Also, the package has a function to compute the minimum sample size to estimate the NPS through the HPD interval via the ALC (see Supplementary Material). In Tables 1-4, we present the minimum sample size to estimate the NPS using the HPD interval computed via the ALC for all the scenarios for the prior distribution of \(\boldsymbol{\theta}\) presented in Figure 1 and some values of \(\ell_{\max}\) and \(\rho\). For other scenarios the R package may be used. The cases where \(\alpha_{1}=\alpha_{2}=\alpha_{3}=1\) and \(\alpha_{1}=\alpha_{2}=\alpha_{3}=5\) represent scenarios in which the prior expected value of the NPS (\(\Delta\)) is equal to zero, but with different variability. The case where \(\alpha_{1}=2\), \(\alpha_{2}=5\) and \(\alpha_{3}=8\) represents a scenario where the prior expected value of the NPS (\(\Delta\)) is positive, and where \(\alpha_{1}=8\), \(\alpha_{2}=5\) and \(\alpha_{3}=2\) we have that the prior expected value of the NPS (\(\Delta\)) is negative, but the respective variances are equal. For fixed \(\rho\) (\(\ell_{\max}\)), the minimum sample size decreases as \(\ell_{\max}\) (\(\rho\)) increases, as expected (Tables 1-4). When all the \(\alpha_{i}\)'s are equal, the minimum sample size seems to increase as their common value increases, irrespective of the values of \(\ell_{\max}\) and \(\rho\) (Tables 1-2). The minimum sample sizes for the case where \(\alpha_{1}=2\), \(\alpha_{2}=5\) and \(\alpha_{3}=8\) are approximately equal to those with \(\alpha_{1}=8\), \(\alpha_{2}=5\), \(\alpha_{3}=2\) and the same \(\ell_{\max}\) and \(\rho\) (Tables 3-4); this makes sense since these scenarios are "complementary" with respect to the expected value but have the same variance. For the adopted model parameters, the running time to compute the minimum sample size varied from 47 seconds to 4.69 hours, depending on the setting. The smaller the values of \(\ell_{\max}\) and/or \(\rho\), the greater the running time.
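For illustration, the ALC search (Steps 1-5) with the sample-based HPD interval of Chen & Shao (1999) can be sketched as follows (our own schematic Python, not the released R package; in practice a coarser search over \(n\) would be used to cut the running times reported above):

```python
import numpy as np

def hpd_length(samples: np.ndarray, rho: float) -> float:
    """Length of the (1 - rho) HPD interval from posterior draws: the
    shortest window containing a fraction 1 - rho of the sorted samples."""
    s = np.sort(samples)
    m = int(np.ceil((1 - rho) * len(s)))
    return (s[m - 1:] - s[:len(s) - m + 1]).min()

def alc_min_sample_size(alpha, rho, ell_max, L=1000, N=1000, rng=None):
    """Smallest n whose average HPD length for Delta is <= ell_max (Steps 1-5)."""
    rng = rng or np.random.default_rng(0)
    alpha = np.asarray(alpha, dtype=float)
    n = 1
    while True:
        lengths = np.empty(L)
        for i in range(L):
            theta = rng.dirichlet(alpha)             # Step 2: prior draw
            x = rng.multinomial(n, theta)            # Step 2: predictive counts
            post = rng.dirichlet(alpha + x, size=N)  # posterior draws of theta
            lengths[i] = hpd_length(post[:, 2] - post[:, 0], rho)
        if lengths.mean() <= ell_max:                # Steps 4-5
            return n
        n += 1
```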
The computer that has been used has the following characteristics: OS Linux Ubuntu 20.04, RAM 7.7 GB, processor AMD PRO A8-8600B.

## 4 Illustrative example

In the situation where the sample has not been obtained yet, we may determine the sample size needed to obtain an HPD interval for the NPS via the ALC by setting \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\), \(\ell_{\max}\) and \(\rho\). For example, if we consider \(\alpha_{1}=\alpha_{2}=\alpha_{3}=1\), \(\ell_{\max}=0.10\) and \(\rho=0.05\), the minimum sample size is 655 (Table 1), _i.e._, to obtain an HPD interval with maximum length equal to 0.10 and respective probability equal to 0.95, we should ask 655 people the NPS question and then categorize them into detractors, passives and promoters in order to obtain the observed values of \(X_{1}\), \(X_{2}\) and \(X_{3}\), respectively.

\begin{table}
\begin{tabular}{c|rrr}
\hline
 & \multicolumn{3}{c}{\(\rho\)} \\
\(\ell_{\max}\) & 0.01 & 0.05 & 0.10 \\
\hline
0.02 & 37737 & 21516 & 15157 \\
0.04 & 9521 & 5396 & 3810 \\
0.06 & 4243 & 2386 & 1681 \\
0.08 & 2391 & 1330 & 944 \\
0.10 & 1515 & 860 & 599 \\
0.12 & 1046 & 590 & 409 \\
0.14 & 764 & 433 & 298 \\
0.16 & 585 & 326 & 224 \\
0.18 & 464 & 254 & 175 \\
0.20 & 371 & 204 & 139 \\
\hline
\end{tabular}
\end{table}
Table 2: ALC based minimum sample size to estimate the NPS through the HPD with \(\alpha_{1}=\alpha_{2}=\alpha_{3}=5\) for the prior distribution of \(\boldsymbol{\theta}\).

\begin{table}
\begin{tabular}{c|rrr}
\hline
 & \multicolumn{3}{c}{\(\rho\)} \\
\(\ell_{\max}\) & 0.01 & 0.05 & 0.10 \\
\hline
0.02 & 28494 & 16105 & 11346 \\
0.04 & 7116 & 4018 & 2816 \\
0.06 & 3198 & 1790 & 1258 \\
0.08 & 1779 & 1004 & 698 \\
0.10 & 1137 & 636 & 442 \\
0.12 & 788 & 442 & 307 \\
0.14 & 577 & 322 & 225 \\
0.16 & 433 & 243 & 166 \\
0.18 & 342 & 189 & 130 \\
0.20 & 275 & 150 & 100 \\
\hline
\end{tabular}
\end{table}
Table 3: ALC based minimum sample size to estimate the NPS through the HPD with \(\alpha_{1}=2\), \(\alpha_{2}=5\) and \(\alpha_{3}=8\) for the prior distribution of \(\boldsymbol{\theta}\).

Given the difficulty of obtaining a real NPS dataset from a company, because such information is very sensitive, we consider a hypothetical dataset on financial services in three markets in the year 2021 (see Supplementary Material) to mimic the application of the methods in a real situation. To illustrate the methodology and the Bayesian updating, we consider the data from the first and second quarters of the Mexico market. For the first quarter we have no prior knowledge, so we set \(\alpha_{1}=\alpha_{2}=\alpha_{3}=1\). For this quarter the numbers of detractors, passives and promoters are 136, 82 and 188, respectively, which implies a posterior Dirichlet distribution with vector parameter \(\boldsymbol{\alpha}^{*}=(137,83,189)^{\top}\) for \(\boldsymbol{\theta}\). Drawing a sample from this posterior distribution and computing its summaries, we have that a point estimate for the NPS is 0.127 and the 95% HPD interval is [0.038, 0.206]. For the second quarter, we may use the posterior parameter of the first quarter as the prior parameter for the current quarter, _i.e._, \(\alpha_{1}=137\), \(\alpha_{2}=83\) and \(\alpha_{3}=189\). For the second quarter the numbers of detractors, passives and promoters are 136, 82 and 188, respectively. In this case, a point estimate for the NPS is 0.131 and the 95% HPD interval is [0.072, 0.192]. All these results were obtained via the R package.
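A small Python sketch of this sequential updating (our illustration; the HPD helper follows the shortest-window rule of Chen & Shao, 1999):

```python
import numpy as np

def hpd_interval(samples, rho=0.05):
    """Endpoints of the (1 - rho) HPD interval from posterior draws."""
    s = np.sort(samples)
    m = int(np.ceil((1 - rho) * len(s)))
    widths = s[m - 1:] - s[:len(s) - m + 1]
    j = widths.argmin()
    return s[j], s[j + m - 1]

rng = np.random.default_rng(0)

# Quarter 1: uniform prior plus the observed counts (detractors, passives, promoters).
alpha_q1 = np.array([1, 1, 1]) + np.array([136, 82, 188])  # -> (137, 83, 189)
theta = rng.dirichlet(alpha_q1, size=10_000)
delta = theta[:, 2] - theta[:, 0]
print(delta.mean(), hpd_interval(delta))  # roughly 0.127 and [0.038, 0.206]

# Quarter 2: the posterior of quarter 1 becomes the prior.
alpha_q2 = alpha_q1 + np.array([136, 82, 188])
```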
Another simple way to obtain point and interval estimates for the NPS is to compute (2) and (3) for these data, as discussed in Section 2.

## 5 Concluding remarks

For the first time in the literature, sample size determination for estimating the NPS is discussed. To approach this problem we consider a Bayesian approach via a multinomial/Dirichlet model and the average length criterion. We provide point and interval estimators for the NPS in closed form or by drawing a sample from the posterior distribution of the NPS and computing its summaries. Also, the Bayesian approach makes updating the inference straightforward, as illustrated in Section 4 through a sequential procedure to estimate the NPS. Computational tools were developed to use these methodologies in practice.

\begin{table}
\begin{tabular}{c|rrr}
\hline
 & \multicolumn{3}{c}{\(\rho\)} \\
\(\ell_{\max}\) & 0.01 & 0.05 & 0.10 \\
\hline
0.02 & 28360 & 16040 & 11380 \\
0.04 & 7160 & 4028 & 2861 \\
0.06 & 3190 & 1794 & 1263 \\
0.08 & 1795 & 1003 & 702 \\
0.10 & 1151 & 640 & 443 \\
0.12 & 781 & 440 & 308 \\
0.14 & 579 & 322 & 219 \\
0.16 & 438 & 240 & 164 \\
0.18 & 345 & 186 & 129 \\
0.20 & 278 & 152 & 100 \\
\hline
\end{tabular}
\end{table}
Table 4: ALC based minimum sample size to estimate the NPS through the HPD with \(\alpha_{1}=8\), \(\alpha_{2}=5\) and \(\alpha_{3}=2\) for the prior distribution of \(\boldsymbol{\theta}\).

## Supplementary Material

The Excel spreadsheet is available at [https://doi.org/10.5281/zenodo.7679211](https://doi.org/10.5281/zenodo.7679211). The R package is available at [https://github.com/eliardocosta/BayesNPS](https://github.com/eliardocosta/BayesNPS) (DOI: 10.5281/zenodo.7617770). The data used in the illustrative example is available at [https://www.kaggle.com/code/charlottetu/net-promoter-score/](https://www.kaggle.com/code/charlottetu/net-promoter-score/).
2307.03613
Modified masses and parallaxes of close binary system: HD39438
We present the detailed fundamental stellar parameters of the close visual binary system HD39438 for the first time. We used Al-Wardat's method for analyzing binary and multiple stellar systems (BMSSs). The method implements Kurucz's plane-parallel model atmospheres to construct synthetic spectral energy distributions for both components of the system. It then combines the results of the spectroscopic analysis with the photometric analysis and compares them with the observed ones to construct the best synthetic spectral energy distributions for the combined system. The analysis gives the precise fundamental parameters of the individual components of the system. Based on the positions of the components of HD39438 on the H-R diagram, and on evolutionary tracks and isochrones, we found that the system belongs to the main sequence, with masses of 1.24 and 0.98 solar masses for components A and B, respectively, and an age of 1.995 Gyr for both components. The main result for HD39438 is a new dynamical parallax, which is estimated to be 16.689 ± 0.03 mas.
Suhail Masda, Z. T. Yousef, Mashhoor Al-Wardat, Awni Al-Khasawneh
2023-07-07T14:10:38Z
http://arxiv.org/abs/2307.03613v1
# Modified masses and parallaxes of close binary system: HD 39438

###### Abstract

We present the detailed fundamental stellar parameters of the close visual binary system HD 39438 for the first time. We used Al-Wardat's method for analyzing binary and multiple stellar systems (BMSSs). The method implements Kurucz's plane-parallel model atmospheres to construct synthetic spectral energy distributions for both components of the system. It then combines the results of the spectroscopic analysis with the photometric analysis and compares them with the observed ones to construct the best synthetic spectral energy distributions for the combined system. The analysis gives the precise fundamental parameters of the individual components of the system. Based on the positions of the components of HD 39438 on the H-R diagram, and on evolutionary tracks and isochrones, we found that the system belongs to the main sequence, with masses of 1.24 and 0.98 solar masses for the components A and B, respectively, and an age of 1.995 Gyr for both components. The main result for HD 39438 is a new dynamical parallax, which is estimated to be \(16.689\pm 0.03\) mas.

Binaries: close binary system, Stars: fundamental parameters, Methods: Analytical: Al-Wardat's Method, Techniques: photometric, Individual: HD 39438

Al-Wardat's method merges the magnitude difference measurements of speckle interferometry, the combined spectral energy distributions (SEDs) of the spectrophotometric analysis with the aid of the grids of Atlas9 models (Kurucz, 1994), and radial velocity measurements (once available) to estimate the individual fundamental stellar parameters of binary systems, thereby determining their precise spectrophotometric masses. The method has been utilized to compare the synthetic stellar photometry with the observed stellar photometry to estimate the fundamental parameters of solar-type stars (Al-Wardat, 2009, 2012; Al-Wardat et al., 2014, 2016, 2017; Masda et al., 2018, 2019).
Table 1 lists the fundamental data and observed photometry of the system, and Table 2 contains the observed magnitude differences between the primary and secondary components of the binary system.

## 3 Method and Analysis

The spectrophotometric analysis is the most important step in Al-Wardat's method, which depends on two solutions to estimate the astrophysical parameters. The solutions are as follows:

### Spectroscopic solution

The spectroscopic solution is the most important key to reaching the fundamental stellar parameters. In this solution, we need to construct the combined and individual synthetic SEDs of the system.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
Property & HD 39438 (HIP 27758) & Ref. \\
\hline
\(\alpha_{2000}\) & \(05^{\rm h}52^{\rm m}29.411^{\rm s}\) & 1 \\
\(\delta_{2000}\) & \(-02^{\circ}17^{\prime}07.62^{\prime\prime}\) & 1 \\
Sp. Typ. & G0V & 2 \\
Gaia DR2 & 3025640770239673216 & 1 \\
Gaia DR3 & 3025640770239673216 & 1 \\
\(E(B-V)\) & 0.017 & 3 \\
\(A_{\rm v}\) (mag) & 0.053 & \(\ast\) \\
\(\pi_{\rm H07}\) (mas) & \(20.15\pm 1.19\) & 4 \\
\(\pi_{\rm DR2}\) (mas) & \(11.906\pm 0.37\) & 5 \\
\(\pi_{\rm DR3}\) (mas) & \(16.051\pm 0.26\) & 5 \\
\(\rm[Fe/H]\) & \(-0.12\pm 0.08\) & 6 \\
\(V_{\rm J}\) (mag) & 7.26 & 7 \\
\(B_{\rm J}\) (mag) & \(7.79\pm 0.02\) & 8 \\
\((B-V)_{\rm J}\) (mag) & \(0.56\pm 0.015\) & 7 \\
\((b-y)_{\rm S}\) (mag) & 0.35 & 9 \\
\((v-b)_{\rm S}\) (mag) & 0.51 & 9 \\
\((u-v)_{\rm S}\) (mag) & 0.88 & 9 \\
\(B_{\rm T}\) (mag) & \(7.89\pm 0.012\) & 7 \\
\(V_{\rm T}\) (mag) & \(7.32\pm 0.010\) & 7 \\
\hline
\end{tabular}
\end{table}
Table 1: Fundamental data and observed photometry of HD 39438.

\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
HD & \(\triangle m\) & \(\sigma_{\Delta m}\) & Filter (\(\lambda/\Delta\lambda\)) & Ref. \\
\\ \hline 39438 & 0.87 & 0.89 & \(V_{H}:511\)nm/222 & ESA (1997) \\ & \(1.34\) & 0.03 & \(545\)nm/30 & Pluzhnik (2005) \\ & 1.31 & 0.03 & \(545\)nm/30 & Balega et al. (2002b) \\ & 1.65 & & \(550\)nm/40 & Horch et al. (2008) \\ & 1.26 & & \(541\)nm/88 & Horch et al. (2008) \\ & 1.33 & & \(550\)nm/40 & Horch et al. (2010) \\ & 1.40 & & \(551\)nm/22 & Tokovinin et al. (2010) \\ & 2.30 & & \(543\)nm/22 & Hartkopf et al. (2012) \\ & 1.60 & & \(543\)nm/22 & Tokovinin et al. (2014) \\ & 1.50 & & \(543\)nm/22 & Tokovinin et al. (2014) \\ & 1.60 & & \(543\)nm/22 & Tokovinin et al. (2015) \\ & 1.62 & \(0.28\) & \(543\)nm/22 & Tokovinin (2017) \\ & 1.60 & & \(543\)nm/22 & Tokovinin (2016) \\ \hline \end{tabular} \end{table} Table 2: The observed magnitude difference between the components of HD 39438 (HIP 27758). of the system. First of all, we need the observed magnitude difference of the system, \(\triangle m=1.49\pm 0.01\); this is the average of all \(\triangle m\) measurements given in Table 2 taken under V-band filters. The observed magnitude difference, combined with the visual magnitude, leads to the individual apparent and absolute magnitudes of the components: \(m_{v}^{A}=7^{\rm m}.51\pm 0.002\), \({\rm M_{V}^{A}}=4^{\rm m}.03\pm 0.12\), and \(m_{v}^{B}=9^{\rm m}.00\pm 0.12\), \({\rm M_{V}^{B}}=5^{\rm m}.52\pm 0.13\) for the primary and secondary components, respectively, obtained from the following simple relationships: \[{\rm m_{v}^{A}}={\rm m_{v}}+2.5\log(1+10^{-0.4\triangle{\rm m}}), \tag{1}\] \[{\rm m_{v}^{B}}={\rm m_{v}^{A}}+\triangle{\rm m}, \tag{2}\] \[{\rm M_{V}}-{\rm m_{v}}=5-5\log({\rm d})-{\rm A_{V}}, \tag{3}\] where the distance of the system from Earth (\(d\)) is measured in parsecs (pc). Furthermore, since HD 39438 is a nearby system, the interstellar extinction is neglected. The absolute magnitudes of the HD 39438 components are employed for estimating the input parameters, together with some parameters taken as introductory values from the tables of Lang (1992) and Gray (2005). In addition, the following equations for main sequence stars are used: \[\log\frac{R}{R_{\odot}}=\frac{M_{bol}^{\odot}-M_{bol}}{5}-2\log \frac{T_{\rm eff}}{T_{\odot}}, \tag{4}\] \[\log g=\log\frac{M}{M_{\odot}}-2\log\frac{R}{R_{\odot}}+\log g_{ \odot}, \tag{5}\] where \(T_{\odot}=5777\,{\rm K}\), \(\log g_{\odot}=4.44\) and \(M_{bol}^{\odot}=4^{\rm m}.75\); \(M_{bol}={\rm M_{V}}+{\rm BC}\), where \({\rm BC}\) is the bolometric correction. The individual synthetic SEDs for each single star are built based on the input parameters of the binary system. For this, the Kurucz Atlas9 models, which are plane-parallel model atmospheres developed by Kurucz in 1994, are employed. These models are used to generate the synthetic fluxes for the individual components of the system and, when combined with the parallax, they produce the synthetic SED of the combined close binary system. For this purpose, the specialized subroutines of Al-Wardat's method for analyzing BMSSs must be utilized. The combined synthetic SED of the binary system is determined using the following equation: \[F_{\lambda,s}=\left(\frac{R_{A}}{d}\right)^{2}\!\left(H_{\lambda}^{A}+H_{ \lambda}^{B}\!\left(\frac{R_{B}}{R_{A}}\right)^{2}\right) \tag{6}\] where \(F_{\lambda,s}\) is the combined synthetic SED of the binary system, and \(R_{A}\) and \(R_{B}\) are the radii of components A and B of the system, respectively, in solar units.
\(H_{\lambda}^{A}\) and \(H_{\lambda}^{B}\) are the corresponding fluxes of components A and B, respectively, in units of ergs cm\({}^{-2}\) s\({}^{-1}\) Å\({}^{-1}\). These individual fluxes depend on \(T_{\rm eff}\) and \(\log g\). This equation accounts for the energy flux of the individual components located at a distance \(d\) (in parsecs) from Earth, ensuring a reliable estimation. In Eq. (6), the values of the radii depend mainly on the accuracy of the parallax measurements. Tokovinin et al. (2000) showed that the parallax measurements of binary systems are probably distorted by the orbital motion. For this reason, Al-Wardat's method pays particular attention to the problems appearing in the parallax measurements of binary systems and provides a new dynamical parallax (Al-Wardat et al., 2021; Masda & Al-Wardat, 2023). The resulting fundamental stellar parameters should be consistent with the observed properties of the binary system; this is one of the best ways to verify the accuracy of the parallax of the system. Hence, a synthetic photometric solution should be carried out to determine the best stellar parameters of the close visual binary system. ### Photometric solution The synthetic photometric solution is the perfect complement to the spectroscopic solution, and is in turn instrumental in estimating the fundamental stellar parameters of close binary systems. The main aim of this solution is to calculate the magnitudes and color indices of the combined and individual synthetic SEDs and then compare them with the observed ones in any photometric system. The synthetic magnitudes and color indices in different photometric systems, such as Johnson: \(U\), \(B\), \(V\), \(R\), \(U-B\), \(B-V\), \(V-R\); Stromgren: \(u\), \(v\), \(b\), \(y\), \(u-v\), \(v-b\), \(b-y\); and Tycho: \(B_{T}\), \(V_{T}\), \(B_{T}-V_{T}\), are calculated using the following equation (Al-Wardat 2012): \[m_{p}[F_{\lambda,s}(\lambda)]=-2.5\log\frac{\int P_{p}(\lambda)F_{\lambda,s}( \lambda)\lambda\mathrm{d}\lambda}{\int P_{p}(\lambda)F_{\lambda,r}(\lambda) \lambda\mathrm{d}\lambda}+\mathrm{ZP}_{p} \tag{7}\] where \(m_{p}\) is the synthetic magnitude of the passband \(p\), \(P_{p}(\lambda)\) is the dimensionless sensitivity function of the passband \(p\), \(F_{\lambda,s}(\lambda)\) is the synthetic SED of the object and \(F_{\lambda,r}(\lambda)\) is the SED of the reference star (Vega). Zero points (ZP\({}_{p}\)) from Maiz Apellaniz (2007) are adopted. ## 4 Mass and dynamical parallax The stellar mass plays a vital role in understanding the formation and evolution of binary systems; thus, its estimation should be accurate. There are two types of masses: the spectrophotometric mass \(\mathcal{M}_{Sph}\) and the dynamical stellar mass \(\mathcal{M}_{d}\).
The former is estimated based on evolutionary tracks using Al-Wardat's method for analyzing BMSSs, while the latter is estimated using the orbital solution of the system, based on Kepler's third law, as follows: \[\mathcal{M}_{d}=\mathcal{M}_{A}+\mathcal{M}_{B}=\Big{(}\frac{a^{3}}{\pi^{3}P^ {2}}\Big{)}\ \mathcal{M}_{\odot}, \tag{8}\] The error in the dynamical mass is estimated as follows: \[\frac{\sigma_{\mathcal{M}}}{\mathcal{M}}=\sqrt{9\Big{(}\frac{\sigma_{\pi}}{ \pi}\Big{)}^{2}+9\Big{(}\frac{\sigma_{a}}{a}\Big{)}^{2}+4\Big{(}\frac{\sigma_{ P}}{P}\Big{)}^{2}} \tag{9}\] where \(a\) and \(\pi\) are the semi-major axis and the parallax (both in arcsec), respectively, \(P\) is the orbital period (in years), and \(\mathcal{M}_{A}\) and \(\mathcal{M}_{B}\) are the masses (in solar masses). The dynamical masses depend mainly on the grades of the orbits; the best orbits are those graded 1 (definitive), 2 (good), or 3 (reliable). When the spectrophotometric mass is in keeping with the dynamical mass, the parallax of the system is adopted; otherwise it should be estimated using Al-Wardat's method as follows: \[\pi_{dyn}=\frac{a}{P^{2/3}(\sum\mathcal{M}_{Sph})^{1/3}} \tag{10}\] where \(\sum\mathcal{M}_{Sph}\) is the mass sum estimated with Al-Wardat's method for analyzing BMSSs (in solar masses) and \(\pi_{dyn}\) is in arcsec. Its error is estimated as follows: \[\frac{\sigma_{\pi_{dyn}}}{\pi_{dyn}}=\sqrt{\frac{4}{9}\Big{(}\frac{\sigma_{P }}{P}\Big{)}^{2}+\Big{(}\frac{\sigma_{a}}{a}\Big{)}^{2}+\frac{1}{9}\Big{(} \frac{\sigma_{\sum\mathcal{M}_{Sph}}}{\sum\mathcal{M}_{Sph}}\Big{)}^{2}} \tag{11}\] ## 5 Results and Discussions The fundamental stellar properties of the close binary system HD 39438 were estimated using the complex analytical method (Al-Wardat's method for analyzing BMSSs) of Al-Wardat (2002). The method combines the spectroscopic solution with the photometric solution to estimate the physical and geometrical stellar parameters of the system. These led to a new value for the parallax of HD 39438. The calculated synthetic magnitudes and colour indices of the individual components and of the combined synthetic SEDs of the binary system HD 39438 are listed in Table 3. These are presented in different photometric systems (Johnson: \(U\), \(B\), \(V\), \(R\), \(U-B\), \(B-V\), \(V-R\); Stromgren: \(u\), \(v\), \(b\), \(y\), \(u-v\), \(v-b\), \(b-y\); and Tycho: \(B_{T}\), \(V_{T}\), \(B_{T}-V_{T}\)). Table 4 shows the best agreement between the synthetic and observed photometry of the binary system HD 39438. This agreement demonstrates that the fundamental stellar characteristics of each component of the system, listed in Table 5, are reliable. Table 3 indicates that the synthetic apparent magnitudes are fully consistent with the observed apparent magnitudes of the system. The stellar luminosities of the individual components of HD 39438 are estimated to be \(L_{A}=2.65\pm 0.08\,{\rm L}_{\odot}\) and \(L_{B}=0.76\pm 0.09\,{\rm L}_{\odot}\) for the primary and secondary components of the system, while their spectral types are F5.5V and G8V, respectively, which are in line with the spectral types of Mason et al. (2010) and Tokovinin (2017), and with the spectral type F5V given in the WDS and SIMBAD catalogues. According to the results of the analysis, Fig.
1 shows the adopted combined synthetic SED and the synthetic SEDs of the individual components of the binary system for the first time, based on the best agreement between the observed and synthetic stellar photometry of the system. Fig. 2 shows the spectrophotometric stellar masses of HD 39438, determined using Al-Wardat's complex method for analyzing BMSSs based on the synthetic evolutionary tracks of Girardi et al. (2000b) and the fundamental stellar parameters of the system. These are found to be \(1.24\pm 0.11\,{\rm M}_{\odot}\) and \(0.98\pm 0.09\,{\rm M}_{\odot}\) for the primary and secondary components of HD 39438. According to Tokovinin (2017), the total mass of the system was \(2.26{\cal M}_{\odot}\) based on the spectral types, which is in keeping with our result (\(2.22{\cal M}_{\odot}\)). The total dynamical mass obtained from the orbital solution of Mason et al. (2010) (\(2.56\pm 0.32\)) is consistent with the total mass achieved in this study within the error margins, while there is no agreement between the results of Al-Wardat's method and the dynamical mass obtained using the orbit of Tokovinin et al. (2014). \begin{table} \begin{tabular}{c c c c c} \hline \hline Sys. & Filter & Combined Synth. & HD 39438 & HD 39438 \\ & & \(\sigma=\pm 0.03\) & A & B \\ \hline Joh- & \(U\) & 7.90 & 8.06 & 10.06 \\ Cou. & \(B\) & 7.82 & 8.02 & 9.76 \\ & \(V\) & 7.26 & 7.51 & 9.00 \\ & \(R\) & 6.95 & 7.22 & 8.59 \\ & \(U-B\) & 0.08 & 0.04 & 0.31 \\ & \(B-V\) & 0.56 & 0.51 & 0.76 \\ & \(V-R\) & 0.31 & 0.29 & 0.40 \\ \hline Ström. & \(u\) & 9.07 & 9.23 & 11.21 \\ & \(v\) & 8.14 & 8.32 & 10.16 \\ & \(b\) & 7.58 & 7.80 & 9.41 \\ & \(y\) & 7.23 & 7.48 & 8.96 \\ & \(u-v\) & 0.93 & 0.91 & 1.05 \\ & \(v-b\) & 0.56 & 0.52 & 0.675 \\ & \(b-y\) & 0.35 & 0.32 & 0.45 \\ \hline Tycho & \(B_{T}\) & 7.96 & 8.14 & 9.95 \\ & \(V_{T}\) & 7.33 & 7.57 & 9.08 \\ & \(B_{T}-V_{T}\) & 0.63 & 0.57 & 0.87 \\ \hline \hline \end{tabular} \end{table} Table 3: The synthetic stellar photometry of HD 39438. However, Tokovinin (2017) revised the orbit and presented new orbital parameters; the new orbital solution was graded 2 (good), which is more accurate than the previous one. In his study, Tokovinin (2017) presented a new parallax of \(\pi_{dyn}=17.6\) mas based on the new orbital solution. This further supports our conclusion that the measured parallax for this system is not accurate enough and needs to be revised by observations. In our analysis, we used the dynamical parallax and the good orbital solution of Tokovinin (2017) (\(P=11.963\pm 0.036\) yr and \(a=0.1207\pm 0.0007\) arcsec) to calculate the dynamical mass sum as \(\Sigma\mathcal{M}=2.25\pm 0.05\mathcal{M}_{\odot}\), which is well in line with the spectrophotometric mass sum (\(\Sigma\mathcal{M}=2.22\mathcal{M}_{\odot}\)) obtained with Al-Wardat's method. Based on our results, the suggested dynamical parallax should be slightly larger than the dynamical parallax of Tokovinin (2017). As a result, we used the good orbital solution of Tokovinin (2017) and our spectrophotometric mass sum (\(\Sigma\mathcal{M}=2.22\mathcal{M}_{\odot}\)) to compute the new dynamical parallax, \(\pi_{dyn}=17.689\pm 0.03\) mas, which is the closest estimate to the dynamical parallax of Tokovinin (2017). We therefore expect Gaia to provide an improved trigonometric parallax in the near future.
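A quick numerical cross-check of these numbers is straightforward. The short Python sketch below is our own illustration, not the authors' pipeline: it applies Eqs. (1)-(3) to the combined magnitude, the magnitude difference, and the H07 parallax, and Eqs. (8) and (10) to the Tokovinin (2017) orbit; all input values are quoted above or in Tables 1 and 5.

```python
import math

# Individual magnitudes from the combined magnitude and Delta m (Eqs. 1-2):
m_v, dm = 7.26, 1.49
m_A = m_v + 2.5 * math.log10(1.0 + 10.0**(-0.4 * dm))   # ~7.51 (cf. Table 3)
m_B = m_A + dm                                           # ~9.00 (cf. Table 3)

# Absolute magnitude of the primary (Eq. 3 with A_V neglected, H07 parallax):
d_pc = 1.0 / 0.02015                                     # 20.15 mas -> ~49.6 pc
M_A = m_A + 5.0 - 5.0 * math.log10(d_pc)                 # ~4.03 (cf. Table 5)

# Dynamical mass sum from Kepler's third law (Eq. 8), Tokovinin (2017) orbit:
P, a, pi_t17 = 11.963, 0.1207, 0.0176                    # yr, arcsec, arcsec
M_dyn = a**3 / (pi_t17**3 * P**2)                        # ~2.25 Msun

# New dynamical parallax from the spectrophotometric mass sum (Eq. 10):
M_sph = 2.22                                             # Msun, Al-Wardat's method
pi_dyn = a / (P**(2.0 / 3.0) * M_sph**(1.0 / 3.0))       # arcsec

print(f"m_A = {m_A:.2f}, m_B = {m_B:.2f}, M_V^A = {M_A:.2f}")
print(f"dynamical mass sum = {M_dyn:.2f} Msun")
print(f"new dynamical parallax = {1000 * pi_dyn:.2f} mas")  # ~17.69
```

Running this reproduces, to within rounding of the inputs, the values quoted above: individual magnitudes of 7.51 and 9.00, \({\rm M_{V}^{A}}\approx 4.03\), a dynamical mass sum of \(\approx 2.25\,\mathcal{M}_{\odot}\), and a dynamical parallax of \(\approx 17.69\) mas.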
\begin{table} \begin{tabular}{c c c} \hline \hline & \multicolumn{2}{c}{HD 39438} \\ \cline{2-3} Filter & Observed \({}^{a}\) & Synthetic\({}^{b}\) (This work) \\ & (mag) & (mag) \\ \hline \(V_{J}\) & \(7.26\) & \(7.26\pm 0.03\) \\ \(B_{J}\) & \(7.79\pm 0.02\) & \(7.82\pm 0.03\) \\ \(B_{T}\) & \(7.93\pm 0.012\) & \(7.96\pm 0.03\) \\ \(V_{T}\) & \(7.32\pm 0.01\) & \(7.33\pm 0.03\) \\ \((B-V)_{J}\) & \(0.56\pm 0.02\) & \(0.56\pm 0.03\) \\ \((u-v)_{S}\) & \(0.88\) & \(0.93\pm 0.03\) \\ \((v-b)_{S}\) & \(0.51\) & \(0.56\pm 0.03\) \\ \((b-y)_{S}\) & \(0.35\) & \(0.35\pm 0.03\) \\ \(\triangle\)m & \(1.49^{c}\pm 0.01\) & \(1.49^{d}\pm 0.05\) \\ \hline \hline \end{tabular} Notes: \({}^{a}\) the observational data of HD 39438 (see Table 1), \({}^{b}\) the synthetic photometry of HD 39438 (see Table 3), \({}^{c}\) the observed magnitude difference of HD 39438 (see Table 2), and \({}^{d}\) the synthetic magnitude difference of HD 39438 (see Table 3). \end{table} Table 4: The best agreement between the observed photometry from catalogues and the synthetic photometry from this study of HD 39438. \begin{table} \begin{tabular}{c c c c} \hline \hline & & \multicolumn{2}{c}{HD 39438} \\ \cline{3-4} Parameters & Units & A & B \\ \hline T\({}_{\rm eff}\) & [K] & \(6370\pm 100\) & \(5580\pm 100\) \\ R & [R\({}_{\odot}\)] & \(1.34\pm 0.09\) & \(0.935\pm 0.09\) \\ log g & [cgs] & \(4.35\pm 0.07\) & \(4.50\pm 0.07\) \\ L & [L\({}_{\odot}\)] & \(2.65\pm 0.08\) & \(0.76\pm 0.09\) \\ M\({}_{\rm bol}\) & [mag] & \(3.69\pm 0.08\) & \(5.05\pm 0.09\) \\ M\({}_{V}\) & [mag] & \(4.03\pm 0.12\) & \(5.52\pm 0.13\) \\ M \({}^{a}\) & [M\({}_{\odot}\)] & \(1.24\pm 0.08\) & \(0.98\pm 0.07\) \\ Sp. Type & & F5.5V & G8V \\ \hline Parallax \({}^{b}\) & [mas] & \multicolumn{2}{c}{\(20.15\pm 1.19\)} \\ Age \({}^{c}\) & [Gyr] & \multicolumn{2}{c}{1.995} \\ \hline \hline \end{tabular} \({}^{a}\) Based on Al-Wardat’s method. \({}^{b}\) Based on H07’s parallax. \({}^{c}\) Based on the isochrone tracks. \end{table} Table 5: The fundamental stellar parameters of the individual components of HD 39438. Fig. 2 shows the positions of both components on the isochrone tracks of Girardi et al. (2000a), from which the metallicity of HD 39438 is found to be [Z=0.019, Y=0.27]. Based on Fig. 2, the age of the system is found to be 1.995 Gyr. The combined metallicity of the system was 0.015 based on the observed data (Gaspar et al. 2016), which corresponds well with the synthetic metallicity of 0.019, as shown in Fig. 2. ## 6 Conclusions We have presented the fundamental stellar parameters of the close binary system HD 39438 using Al-Wardat's method for analyzing BMSSs. The method implements Kurucz's plane-parallel model atmospheres to construct synthetic SEDs for both components of the system. It then combines the results of the spectroscopic analysis with the photometric analysis and compares them with the observed ones to construct the best synthetic SEDs for the combined system. The best match between the synthetic and observed magnitudes and colour indices of the system is demonstrated for various photometric systems, including Johnson: \(U\), \(B\), \(V\), \(R\), \(U-B\), \(B-V\), \(V-R\); Stromgren: \(u\), \(v\), \(b\), \(y\), \(u-v\), \(v-b\), \(b-y\); and Tycho: \(B_{T}\), \(V_{T}\), \(B_{T}-V_{T}\). The results show that HD 39438 consists of two main sequence stars: a 1.24 solar-mass F5.5V primary and a 0.98 solar-mass G8V secondary, both with an age of about 2 Gyr.
We revised the dynamical parallax of the system, which is estimated to be \(17.689\pm 0.03\) mas. This study utilized several resources and tools, including SAO/NASA, the SIMBAD database, the Fourth Catalog of Interferometric Measurements of Binary Stars, IPAC data systems, the ORBIT code, and the CHORIZOS code for photometric and spectrophotometric data analysis, as well as the codes of Al-Wardat's method for analyzing binary and multiple stellar systems (BMSSs).
2310.19284
Cosmological LTB Black Hole in a Quintom Universe
We study cosmological Lemaitre-Tolman-Bondi (LTB) black hole thermodynamics immersed in a quintom universe. We investigate some thermodynamic aspects of such a black hole in detail. We apply two methods of treating particles' tunneling from the apparent horizons and calculate the black hole's temperature with each method; the results are the same. In addition, by considering specific time slices in cosmic history, we study the thermodynamic features of this black hole in these specific cosmic epochs. Also, we discuss the information loss problem and the remnant content of the cosmological black hole in different cosmic epochs in this context. We show that approximately in all cosmic history, the temperature of the black hole's apparent horizon is higher than the temperature of the cosmological apparent horizon.
Sareh Eslamzadeh, Kourosh Nozari, J. T. Firouzjaee
2023-10-30T05:57:41Z
http://arxiv.org/abs/2310.19284v1
# Cosmological LTB Black Hole in a Quintom Universe ###### Abstract We study cosmological Lemaitre-Tolman-Bondi (LTB) black hole thermodynamics immersed in a quintom universe. We investigate some thermodynamic aspects of such a black hole in detail. We apply two methods of treating particles' tunneling from the apparent horizons and calculate the black hole's temperature with each method; the results are the same. In addition, by considering specific time slices in the cosmic history, we study the thermodynamic features of this black hole in these specific cosmic epochs. Also, we discuss the information loss problem and the remnant content of the cosmological black hole in different cosmic epochs in this context. We show that approximately in all the cosmic history, the temperature of the black hole's apparent horizon is higher than the temperature of the cosmological apparent horizon. **Keywords:** Cosmological Black Hole, LTB Black Hole, Tunneling Process, Hawking Temperature, Quintom Universe. ###### Contents * I Introduction * II Cosmological LTB Black Hole in a Quintom Universe * III Thermodynamics of Cosmological LTB Black Hole in a Quintom Universe * III.1 The Hamilton-Jacobi Method * III.2 The Parikh-Wilczek Method * III.3 Non-Thermal Spectrum * IV Evolution of Thermodynamic Features of Cosmological LTB Black Hole * V Summary and Conclusion ## I Introduction Dark Energy was introduced as a mysterious component responsible for the observed positively accelerated expansion of the universe. The first suggested candidate for this weird component was the cosmological constant [15]. But problems of the cosmological constant [16], such as fine-tuning, coincidence, and the essence of being constant, led particle physicists to propose new alternatives. Therefore, fields like Quintessence [17], K-essence [18], Tachyon [19], Phantom [20], and Quintom [21] were some of the most important subsequent suggestions. If we pay attention to the equation of state parameter, \(w_{{}_{field}}=\frac{p}{\rho}\), as an important quantity for a cosmological component, the Quintom field has a fascinating aspect: it is actually a combination of two fields, a Quintessence field with \(w>-1\) plus a Phantom field with \(w<-1\). Since the observational data are in favor of a transition from the quintessence phase to a phantom phase at late times, a mechanism for crossing the cosmological constant equation of state parameter, \(w=-1\), is required. In Ref. [22], one can find observational and theoretical evidence for the necessity of the Quintom field as a suitable candidate for the Dark Energy. The connection between thermodynamic variables and black hole geometry was first introduced by Bekenstein [23]. Afterward, the four laws of thermodynamics for black holes were established [24] and, then, Hawking initiated the research on the possibility of black hole evaporation [25]. There are two straightforward approaches to calculate the particle tunneling rate from the black hole horizon: one based on the Hamilton-Jacobi method [26], and the other based on the null geodesics method [27; 28]. In Ref. [11] and references therein, one can find an elegant review on the topic of tunneling methods and Hawking radiation from both stationary and dynamical black holes. Besides, thermodynamic features of cosmological black holes have been of interest in some research works [31; 32; 33; 34; 35; 36; 37; 38].
The present study aims to probe the tunneling process from the horizons of the cosmological LTB black hole surrounded by a quintom field. In this regard, in section II, we illustrate the spacetime which contains the cosmological LTB black hole with the Quintom field as the background dark energy. We characterize the initial conditions required to construct both the cosmological and the black hole apparent horizons. Also, we discuss the effects that the existence of the Quintom field has on these horizons over the entire cosmic history. In section III, we apply the Parikh-Wilczek method to calculate the entropy and temperature of the cosmological and black hole apparent horizons. Besides, we investigate the correlation between radiative modes and the black hole remnant. In section IV, we study the time evolution of the cosmological black hole surrounded by Quintom matter; precisely, the time evolution of its horizons and thermodynamics over the entire cosmic history. Finally, we summarize our results in section V. ## II Cosmological LTB Black Hole in a Quintom Universe To construct the metric of the cosmological LTB black hole in the Quintom-dominated universe, we use the results of Ref. [39]. In this regard, we assume the line element to be as follows \[ds^{2}=-dt^{2}+e^{\bar{\phi}}dr^{2}+e^{\phi}d\Omega^{2}, \tag{1}\] where \(t\) is the cosmic time parameter and \((r,\theta,\varphi)\) are comoving coordinates with \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\); \(\phi\) and \(\bar{\phi}\) are functions of \(t\) and \(r\). We consider the energy-momentum tensor of the Quintom field in the perfect fluid form \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{2}\] where \(\rho\) and \(p\) are the density and pressure of the Quintom field, respectively, and \(u^{\mu}=(1,0,0,0)\) is the four-velocity. Assuming there is no accretion, \(G_{1}^{0}=0\) (see [39]), the other components of Einstein's field equations are \[G_{0}^{0} =8\pi\rho, \tag{3}\] \[G_{1}^{1}=G_{2}^{2}=G_{3}^{3} =-8\pi p.\] As explained in Ref. [39], taking the source to be a single perfect fluid and the background to be spatially flat, the comoving observer measures a spatially homogeneous pressure. Therefore, the Einstein equations give \[\ddot{\phi}+\frac{3}{4}\dot{\phi}^{2} = -8\pi p(t), \tag{4}\] \[\frac{\dot{\phi}^{\prime}\dot{\phi}}{\phi^{\prime}}+\frac{3}{4} \dot{\phi}^{2} = 8\pi\rho(r,t),\] where an overdot and a prime denote differentiation with respect to \(t\) and \(r\), respectively. Following Ref. [39], we set the pressure in the form \[p=-\frac{p_{0}}{(t_{0}-t)^{2}}, \tag{5}\] where \(p_{0}\) is a positive constant and \(t_{0}\) is recognized as the Big Rip singularity time. The solution of Eqs. (4) is given by \[e^{\phi}=\big{[}P(r)(t_{0}-t)^{\frac{1-k}{2}}+S(r)(t_{0}-t)^{\frac{1+k}{2}} \big{]}^{\frac{4}{3}}, \tag{6}\] where \(k\equiv\sqrt{1+24\pi p_{0}}\) is a constant in terms of \(p_{0}\); \(P\) and \(S\) are arbitrary functions of \(r\). By choosing \(P=r^{3/2}\), \(S\) is determined in such a way that the boundary conditions are recovered correctly. Finally, the metric functions of the cosmological black hole in a Quintom-dominated universe are found as follows [39] \[e^{\phi}=\left[r^{\frac{3}{2}}(t_{0}-t)^{\frac{1-k}{2}}-\left(\frac{3}{2}\sqrt{2M }+\sqrt{6\pi\rho_{0}}r^{\frac{3}{2}}\right)(t_{0}-t)^{\frac{1+k}{2}}\right]^{ \frac{4}{3}}, \tag{7}\] and \[e^{\bar{\phi}}=\frac{\phi^{\prime 2}}{4}e^{\phi}. \tag{8}\] To compare and check the boundary conditions, one can find in Ref.
[40] the cosmological LTB black hole described by the line element \[ds^{2}=-dt^{2}+\frac{R^{\prime 2}(r,t)}{1+2E(r)}dr^{2}+R^{2}(r,t)(d\theta^{2}+ \sin^{2}\theta d\varphi^{2}), \tag{9}\] where \(R(r,t)\) is the physical radius and \(E(r)=\frac{1}{2}\dot{R}^{2}(r,t)-\frac{M(r)}{R(r,t)}\) has the meaning of the total energy per unit mass, while \(M(r)\) is the mass in the sphere of comoving radius \(r\). If a collapsing metric is built from this metric, one can show that the apparent horizon (trapping horizon or dynamical horizon) forms at the surface \(R=2M\). The quantity \(E(r)\) acts as a curvature function which includes contributions from the kinetic energy and the gravitational potential energy. To investigate the boundary conditions of the metric, Eqs. (7) and (8), we compare Eqs. (1), (7), and (9), and rewrite the metric in terms of \(R\) as follows \[R\equiv e^{\phi/2}=\bigg{[}r^{\frac{3}{2}}(t_{0}-t)^{\frac{1-k}{2}}-\bigg{(} \frac{3}{2}\sqrt{2M}+\sqrt{6\pi\rho_{0}}r^{\frac{3}{2}}\bigg{)}(t_{0}-t)^{ \frac{1+k}{2}}\bigg{]}^{\frac{2}{3}}. \tag{10}\] In this regard, there are some special cases of Eqs. (9) and (10): * \(p_{0}\neq 0\), \(\rho_{0}\neq 0\) and \(M\neq 0\): black hole solution in the Quintom dominated universe; * \(p_{0}\neq 0\), \(\rho_{0}\neq 0\) and \(M=0\): Quintom dominated cosmology; * \(p_{0}=0\), \(\rho_{0}\neq 0\) and \(M\neq 0\): black hole solution in a dust dominated universe with \(\rho_{0}=\rho_{d}a^{3}\), where \(\rho_{d}\) and \(a\) are the dust density and the scale factor of the universe, respectively. Therefore, the metric function of Eq. (10) turns into \[R=\bigg{[}r^{\frac{3}{2}}+\bigg{(}\frac{3}{2}\sqrt{2M}+\sqrt{6\pi\rho_{0}}r^{ 3/2}\bigg{)}t\bigg{]}^{\frac{2}{3}}; \tag{11}\] * \(p_{0}=0\), \(\rho_{d}=0\) and \(M\neq 0\): Schwarzschild solution; * \(p_{0}=0\), \(\rho_{d}\neq 0\) and \(M=0\): dust dominated cosmology. To investigate the apparent horizons of the cosmological LTB black hole immersed in a Quintom dominated universe, we rewrite Eq. (1) in the Schwarzschild-like notation \[ds^{2}=-(1-X^{2})dt^{2}+dx^{2}+2Xdtdx+x^{2}d\Omega^{2}, \tag{12}\] where \[x\equiv e^{\phi/2}\quad\text{and}\quad X\equiv\frac{\partial x}{\partial t}. \tag{13}\] To find the apparent horizons, we introduce a new time coordinate via \[dT=\bigg{(}dt+\frac{X}{1-X^{2}}dx\bigg{)}L^{-1}, \tag{14}\] where \(L\), a function of time and the coordinate (and therefore not a constant), is the integrating factor that makes \(dT\) a total differential. As a result, the metric of the cosmological LTB black hole in a Quintom dominated universe turns into \[ds^{2}=-(1-X^{2})L^{2}dT^{2}+\frac{1}{1-X^{2}}dx^{2}+x^{2}d\Omega^{2}. \tag{15}\] To calculate the apparent horizons, \(x_{{}_{H}}\), we should find the roots of \(\chi\equiv 1-X^{2}=0\), which is equivalent to the following expression \[1-\frac{4\left(\sqrt{\frac{3\pi\rho}{2}}r^{3/2}(1+k)(1-t)^{\frac{k-1}{2}}- \frac{1}{2}r^{3/2}(1-k)(1-t)^{-\frac{1+k}{2}}+\frac{3(k+1)\sqrt{M}(1-t)^{\frac {k-1}{2}}}{2\sqrt{2}}\right)^{2}}{9\left(-\sqrt{6\pi\rho}r^{3/2}(1-t)^{\frac{ k+1}{2}}+r^{3/2}(1-t)^{\frac{1-k}{2}}-\frac{3\sqrt{M}(1-t)^{\frac{k+1}{2}}}{ \sqrt{2}}\right)^{2/3}}=0, \tag{16}\] where we have put \(t_{0}=1\). Therefore, substituting \(r^{3/2}\) in terms of \(x\), Eq. (16) becomes an equation with six roots, some of which give the locations of the apparent horizons in this cosmological background. Setting \(M=1\) and finding a numerical solution, we conclude that the second and third roots of Eq.
(16) are real and match the boundary conditions, as illustrated in Fig. 1. The second root is the black hole apparent horizon, \(x_{BH}\), and the third one is the cosmological apparent horizon, \(x_{CH}\). There is a certain time in the past when the two horizons coincided. Also, there is a certain time before the Big Rip when the two horizons will coincide again, and a naked singularity will be left over. After the creation of the horizons, with passing time, the sizes of the horizons of the cosmological LTB black hole in the Quintom universe evolve in such a way that the cosmological apparent horizon size (blue dashed curve) first increases and then decreases, while the black hole apparent horizon size (red solid curve) first decreases and then increases. It seems that the black hole horizon shrinking is due to the phantom component in this setup. ## III Thermodynamics of cosmological LTB black hole in a quintom universe First, a brief description of how Hawking radiation works is in order. According to quantum field theory, the vacuum is a complex entity of virtual particles that are continuously created, interact, and then annihilate. In general, the vacuum is stable, but the presence of external fields makes it possible for the particles to become real. We suppose a static gravitational field with the Killing vector field \(\xi^{\alpha}\). The energy of particles created in this field is \(\omega=-p_{\alpha}\xi^{\alpha}\), where \(p^{\alpha}\) is the four-momentum of the particle, which is null for a massless particle. Whenever a virtual particle pair is created inside the horizon, the member with positive energy can tunnel out through the horizon. Also, whenever a virtual pair is created outside the horizon, the member with negative energy can tunnel into the horizon. In both cases, the black hole absorbs the particle with negative energy, so the mass of the black hole decreases, while the particle with positive energy escapes to infinity and the observer detects it as Hawking radiation. Because a particle can classically fall through the black hole horizon, its action is real. For a particle that goes out through the horizon of the black hole, the action becomes complex and the tunneling rate is determined by the imaginary part of the action. The transmission rate, \(\Gamma\), which is equal to the probability of emission divided by the probability of absorption of particles, is related to the imaginary part of the action on one side and to the temperature on the other side, as follows \[\Gamma=\frac{P_{em}}{P_{abs}}\sim\exp(-\beta\omega)\sim\exp(-2\mathrm{Im}S) \tag{17}\] where \(\beta^{-1}\) is known as the temperature of the black hole. This explanation obliges us to calculate the imaginary part of the action to obtain the temperature of the black hole via quantum tunneling of particles. There are two methods to calculate the imaginary part of the action: the Hamilton-Jacobi method [26] and the Parikh-Wilczek method [27; 28]. The only noteworthy point remaining is that we are dealing with dynamical black holes instead of stationary ones. In the cosmological context, a spherically symmetric black hole with a dynamical horizon cannot produce pure Hawking particle-antiparticle pairs, as this would break the principle of energy conservation and cause the apparent horizon to become spacelike [29].
In other words, the apparent horizon of any dynamical spacetime must lie inside the event horizon, and any virtual particle pairs created by the vacuum cannot escape and must fall back into the primordial black holes (PBHs). Figure 1: The behavior of the cosmological and black hole apparent horizons versus time, shown by the blue (dashed) and red (solid) curves, respectively. The plot is depicted for fixed mass \(M=1\), with \(\rho_{0}=0.0002\), \(p_{0}=0.001\), and \(t_{0}=1\). When we deal with a fully dynamical metric, Hawking's quantum field theory approach to black hole radiation [30] cannot be applied, as it is only suitable for late-time stationary black holes and cannot capture the thermal aspect of Hawking radiation in this setting. Alternatively, new approaches [26; 27; 28] have been developed to calculate Hawking radiation in dynamical backgrounds. These approaches are based on the semiclassical approximation using an adiabatic vacuum in quantum field theory in curved spacetime, and suggest that radiation is likely emitted from the neighborhood of the apparent horizons rather than near the event horizon. In the case of dynamical black holes, universal definitions such as the black hole horizon and its surface gravity must be redefined on the basis of local physics. The most important such definitions are the trapping horizon, introduced by Hayward [7], and the Kodama vector [41]. We are not going to explain these definitions here, but one can find some useful information on them in Refs. [11; 12; 42]. Our strategy in what follows is to apply the Hamilton-Jacobi and Parikh-Wilczek methods separately to the cosmological LTB black hole in a Quintom dominated universe, with the related definitions for dynamical black holes. ### The Hamilton-Jacobi Method The Hamilton-Jacobi equation for the cosmological LTB black hole in a Quintom universe, based on the metric (12), is \[\chi(\partial_{r}S)^{2}-2X\omega(\partial_{r}S)-\omega^{2}=0, \tag{18}\] where \(S\) is the action and \(\omega\) is the energy of a tunneling particle. We note that, as before, \(\chi\equiv 1-X^{2}\), where \(X\equiv\frac{\partial x}{\partial t}\) and \(r\) is the comoving radial coordinate. The invariant particle energy is determined from the Kodama vector, \(K=(1,0,0,0)\), as follows \[\omega=-K^{i}\partial_{i}S=-\partial_{t}S. \tag{19}\] It is important to note that Eq. (18) contains both \(r\) and \(t\), since \(\omega\), the particle's energy, is defined via the Kodama vector through the time derivative of the action. Choosing the solution of Eq. (18) with positive radial momentum, we have \[\partial_{r}S=\frac{\omega X}{\chi}(1+O(\chi)). \tag{20}\] Therefore, \(\partial_{r}S\) has a pole at the horizon. On the other hand, the action can be written as the sum of a real term and an imaginary term as follows \[S=\int{(dr\partial_{r}S+dt\partial_{t}S)}=\int{(dr\partial_{r}S+\frac{1}{2} \omega)}. \tag{21}\] The imaginary part of the action is contained in the first term; to calculate it, we expand \(\chi\) at the horizon as follows \[\chi\simeq\dot{\chi}\partial t+\chi^{\prime}\partial x, \tag{22}\] where \(\simeq\) denotes the near-horizon approximation and \(\partial x=x-x_{H}\). Also, from the metric (12), a null radial path crossing the horizon gives \[\partial t=-\Big{(}\frac{1}{2X}\Big{)}\Big{|}_{{}_{H}}\partial x. \tag{23}\] Substituting Eq. (23) into Eq.
(22), we conclude \[\chi=\big{(}\chi^{\prime}-\frac{1}{2X}\dot{\chi}\big{)}\Bigg{|}_{{}_{H}}(x-x_{{} _{H}})+...=2\kappa_{{}_{H}}(x-x_{{}_{H}})+O((x-x_{{}_{H}})^{2}), \tag{24}\] where \[\kappa_{{}_{H}}=\frac{1}{2}\Box\,r|_{{}_{H}}=\frac{1}{2X^{2}}\big{(}\chi^{ \prime}-\frac{1}{2X}\dot{\chi}\big{)}\Bigg{|}_{{}_{H}}, \tag{25}\] is the dynamical surface gravity. Substituting Eq. (24) into Eq. (20) and then into Eq. (21), it is possible to calculate the imaginary part of the action using Feynman's \(i\epsilon\) prescription as follows \[\text{Im}S=\text{Im}\int\partial_{r}Sdr=\text{Im}\int\frac{\omega X}{2\kappa_ {{}_{H}}(x-x_{{}_{H}}-i\epsilon)}dx=\frac{\pi\omega_{{}_{H}}}{\kappa_{{}_{H}}}. \tag{26}\] Finally, using Eq. (17) we find the temperature of the cosmological LTB black hole immersed in a Quintom universe as follows \[T=\beta^{-1}=\frac{\kappa_{{}_{H}}}{2\pi}. \tag{27}\] ### The Parikh-Wilczek Method Our approach here is based on the quantum tunneling of particles from the apparent horizon. We apply the null geodesics method, well known as the Parikh-Wilczek method [27]. The method describes Hawking radiation as particle-antiparticle pair production near the horizon, with the particle escaping to infinity through a quantum tunneling process. The tunneling rate is related to both the imaginary part of the action and the inverse temperature. Therefore, we start by calculating the imaginary part of the action for a particle that moves from an initial state at \(x_{in}\) to a final state at \(x_{out}\) as follows \[{\rm Im}S\equiv{\rm Im}\int E\:dt={\rm Im}\int_{x_{in}}^{x_{out}}p_{x}\:dx={\rm Im }\int_{x_{in}}^{x_{out}}\int_{0}^{p_{x}}\:d\tilde{p_{x}}\:dx, \tag{28}\] where \(x_{in}=x_{{}_{H}}-\epsilon\) and \(x_{out}=x_{{}_{H}}+\epsilon\). Also, in what follows \(\tilde{\omega}\) is the energy of the particle, and we take the self-gravitation of the emitted particle into account. Using Hamilton's equation, \(dp_{x}=\frac{dH}{\dot{x}}\), Eq. (28) takes the form \[{\rm Im}S={\rm Im}\int_{x_{in}}^{x_{out}}\int_{M}^{M-\tilde{\omega}}\frac{dH} {\dot{x}}\:dx=-{\rm Im}\int_{0}^{\tilde{\omega}}\int_{x_{in}}^{x_{out}}\frac{ dx}{\dot{x}}\:d\omega. \tag{29}\] Considering the lightlike geodesics of massless tunneling particles in the metric of Eq. (12) (a Painleve-Gullstrand-like coordinate system), we have \[\dot{x}^{2}+2\sqrt{1-\chi}\:\dot{x}-\chi=0. \tag{30}\] As a result, we find the outgoing and ingoing trajectories as follows \[\dot{x}=\pm 1-\sqrt{1-\chi}\,, \tag{31}\] which gives \(\dot{x}\simeq\frac{\chi}{2}\) for the plus sign (outgoing trajectories). Substituting Eq. (31) into Eq. (29), the imaginary part of the action for massless outgoing particles is given by \[{\rm Im}S=-{\rm Im}\int_{0}^{\omega}\int_{x_{in}}^{x_{out}}\frac{2dx\:d\tilde{ \omega}}{\chi}\,. \tag{32}\] Inserting \(\chi\) from Eq. (24) into Eq. (32), we can calculate the imaginary part of the action by the Parikh-Wilczek method as follows \[{\rm Im}S=\int_{0}^{\omega}\frac{2\pi\:d\tilde{\omega}}{2\kappa_{{}_{H}}}= \frac{\pi\omega_{{}_{H}}}{\kappa_{{}_{H}}}\,. \tag{33}\] As a result, the temperature obtained with the null geodesics approach is the same as the one obtained with the Hamilton-Jacobi method in Eq. (27). We expected the same outcome regardless of the calculation method, since the observer at infinity should detect a definite temperature. ### Non-Thermal Spectrum After the discovery of thermal Hawking radiation, the information paradox has been widely discussed [43; 44].
Afterward, a criterion for quantifying the correlation between radiation modes was proposed as follows [45; 46] \[\zeta(\omega_{1}+\omega_{2};\omega_{1},\omega_{2})=\ln\left[\Gamma(\omega_{1} +\omega_{2})\right]-\ln\left[\Gamma(\omega_{1})\Gamma(\omega_{2})\right], \tag{34}\] where \(\zeta\) is the correlation function and \(\omega_{1,2}\) are the energies of the tunneling particles. Eq. (34) lets us know whether the probability of tunneling of two particles with energies \(\omega_{1}\) and \(\omega_{2}\) is the same as the probability of tunneling of one particle with energy \(\omega_{1}+\omega_{2}\). If the correlation between emitted modes is nonzero, the radiation deviates from pure thermal radiation. From Eq. (17), the transmission rate is related to the imaginary part of the action, and from Eq. (34), the existence of a correlation between the emitted modes then follows. We regard this correlation between the emitted modes as an important effect of the presence of the Quintom field in the environment of the black hole. ## IV Evolution of thermodynamic features of cosmological LTB black hole We probed the time evolution of the horizons in the previous sections. In this section, we investigate the effect of time evolution on the thermodynamics of the cosmological LTB black hole immersed in a Quintom universe. In other words, we first obtain the apparent horizons in terms of the mass and derive the equation for the temperature versus the mass of the black hole. Then, we evaluate the black hole temperature behavior in several cosmic epochs. This allows us to answer the question of whether the LTB black hole in a Quintom universe evaporates in the same way in all cosmic epochs, or whether time is an essential component affecting Hawking radiation and the black hole remnant. We have to find the apparent horizons from Eq. (16), but contrary to the previous section, here we fix the time and obtain an explicit expression in terms of the mass of the black hole. To be precise, if we consider a fixed time, there is a critical mass at which the two apparent horizons coincide. As we illustrate in Fig. 2, whenever the mass of the black hole is less than the critical mass, the two horizons move away from each other: the black hole horizon becomes smaller and the cosmological horizon becomes larger. Figure 2: The behavior of the cosmological and black hole apparent horizons versus the mass. The plot is depicted at fixed time: \(t=-0.4\) for the green curve and \(t=+0.4\) for the purple curve. Solid lines show the black hole apparent horizons and dashed lines show the cosmological apparent horizons, with \(\rho_{0}=0.0002\), \(p_{0}=0.001\), and \(t_{0}=1\). In order to obtain an explicit equation for the temperature in terms of the mass, we first need explicit expressions for the cosmological and black hole apparent horizon radii. These radii can be obtained via Eq. (16). The third root of Eq. (16) is the cosmological apparent horizon, \(x_{CH}\). Applying the self-gravitating shells picture [47], we put \(M-\omega\) instead of \(M\) in \(x_{CH}\). In this manner, we obtain the cosmological apparent horizon after the particle tunneling, \(x_{out}\) in Eq. (29). Selecting the outgoing trajectories from Eq.
(31), expanding \(\dot{x}\) on the horizon, applying the residue calculus, and expanding the result in terms of \(\omega\), we finally obtain the imaginary part of the action as follows \[\text{Im}S=\int_{0}^{\omega}\Big{[}\frac{320.1x_{CH}^{2}}{x_{CH}^{3}+22.9x_{CH }^{3/2}\sqrt{M}-208.1M}+O(\omega,\omega^{2},...)\Big{]}d\omega. \tag{35}\] The existence of the higher-order terms in \(\omega\) proves the non-thermal nature of the radiation, as explained previously. Regarding Eq. (17), to calculate the temperature we need to keep the coefficient of \(\omega\) in the result of Eq. (35). We therefore neglect the higher-order terms in \(\omega\) at this step and calculate the imaginary part of the action for massless particle tunneling. After that, based on Eq. (17), we find the temperature of the cosmological apparent horizon of the cosmological LTB black hole immersed in a Quintom universe as follows \[T_{CH}\bigg{|}_{t=-0.4}=\frac{1}{4\pi\beta}=\frac{0.000248569\left(22.9x_{CH} ^{3/2}\sqrt{M}+x_{CH}^{3}-208.1M\right)}{x_{CH}^{2}}. \tag{36}\] In the same way, the temperature of the black hole apparent horizon of the cosmological LTB black hole immersed in a Quintom universe is \[T_{BH}\bigg{|}_{t=-0.4}=\frac{1}{4\pi\beta}=\frac{0.000237356\left(22.1x_{BH} ^{3/2}\sqrt{M}+x_{BH}^{3}-237.6M\right)}{x_{BH}^{2}}. \tag{37}\] We repeat the same calculations for the black hole horizon and also for these two horizons at other times. Eventually, we find the temperatures of the cosmological and black hole horizons of the cosmological LTB black hole in a Quintom universe as shown in Fig. 3. At the critical mass, when the two horizons are created, the temperature starts to rise from zero. Approximately in all of the cosmic history, the temperature of the black hole's apparent horizon is higher than the temperature of the cosmological apparent horizon for the cosmological LTB black hole in a Quintom universe. Actually, the word _approximately_ is a keyword here, especially for the beginning of the Hawking radiation. The three panels of Fig. 3 are qualitative in essence, since they are drawn with some approximations and with all constants set to unity. The apparent horizon of the black hole is always smaller than that of the universe, which is the main reason for the temperature of the black hole being _approximately_ always higher than that of the universe. On the other hand, by comparing Eqs. (36) and (37), we see that the smaller coefficient of the first term and the larger coefficient of the black hole mass with a minus sign may cause the temperature of the black hole horizon to be lower than the temperature of the cosmological horizon in some subspaces of the model parameter space, especially in the initial moments of the Hawking radiation. Conceptually, this may reflect the non-equilibrium situation in the first steps of the Hawking radiation emission. In other words, in the first stages of the formation of the two horizons and the Hawking radiation, the temperature of the cosmological horizon may be higher than the black hole temperature. But, after a short time, through the flow of energy between the two horizons via Hawking radiation, the two horizons attain the same temperature. Continuing to radiate via Hawking radiation, the temperature of the black hole horizon becomes higher than the cosmological one, as expected. Also, there is a certain mass at which the two temperatures are the same.
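As a rough illustration of how Fig. 1 and Eqs. (36)-(37) can be reproduced, the following Python sketch (our own, not the authors' code) locates the two apparent horizons at a fixed time by root-finding \(1-X^{2}=0\), with \(x=e^{\phi/2}\) taken from Eq. (10) and \(X=\partial x/\partial t\) evaluated by a central finite difference, and then plugs the horizon radii into the fitted temperature expressions of Eqs. (36)-(37). The parameter values (\(M=1\), \(\rho_{0}=0.0002\), \(p_{0}=0.001\), \(t_{0}=1\), \(t=-0.4\)) follow Figs. 1-3; the scan range for \(r\) and the difference step are our own choices.

```python
import numpy as np
from scipy.optimize import brentq

M, rho0, p0, t0 = 1.0, 2e-4, 1e-3, 1.0
k = np.sqrt(1.0 + 24.0 * np.pi * p0)       # k = sqrt(1 + 24 pi p0)

def x_of(r, t):
    """Physical radius x = e^{phi/2} from Eq. (10)."""
    B = (r**1.5 * (t0 - t)**((1 - k) / 2)
         - (1.5 * np.sqrt(2 * M) + np.sqrt(6 * np.pi * rho0) * r**1.5)
           * (t0 - t)**((1 + k) / 2))
    return B**(2.0 / 3.0) if B > 0 else np.nan

def chi(r, t, dt=1e-6):
    """chi = 1 - X^2 with X = dx/dt from a central difference."""
    X = (x_of(r, t + dt) - x_of(r, t - dt)) / (2.0 * dt)
    return 1.0 - X**2

t = -0.4
rs = np.linspace(2.3, 50.0, 4000)          # scan comoving radius for sign changes
vals = np.array([chi(r, t) for r in rs])
roots = [brentq(chi, rs[i], rs[i + 1], args=(t,))
         for i in range(len(rs) - 1)
         if np.isfinite(vals[i]) and np.isfinite(vals[i + 1])
         and vals[i] * vals[i + 1] < 0]
x_bh, x_ch = sorted(x_of(r, t) for r in roots)[:2]   # inner = BH, outer = cosmological
print(f"x_BH ~ {x_bh:.2f}, x_CH ~ {x_ch:.2f}")

# Fitted horizon temperatures at t = -0.4, Eqs. (36)-(37):
T_ch = 0.000248569 * (22.9 * x_ch**1.5 * np.sqrt(M) + x_ch**3 - 208.1 * M) / x_ch**2
T_bh = 0.000237356 * (22.1 * x_bh**1.5 * np.sqrt(M) + x_bh**3 - 237.6 * M) / x_bh**2
print(f"T_BH ~ {T_bh:.2e}, T_CH ~ {T_ch:.2e}")
```

With the stated parameters this finds exactly two sign changes of \(\chi\), i.e., the black hole and cosmological apparent horizons, in qualitative agreement with Fig. 1. The printed temperatures are illustrative only, since Eqs. (36)-(37) are themselves approximate fits at \(t=-0.4\).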
Comparing different epochs, at times far from the Big Rip, the temperature of the cosmological LTB black hole immersed in a Quintom universe is predicted to stop at a lower value. In other words, in epochs closer to the Big Rip, higher Hawking temperatures are expected for the cosmological LTB black hole in a Quintom universe at the final stage of the evaporation. Moreover, we have illustrated the Hawking temperature of the black hole apparent horizon and of the cosmological apparent horizon in several cosmic epochs in Figs. 4 and 5, respectively. In these figures, the left panels represent the overall behavior of the temperature, while the right panels show the final stage of the evaporation in more detail. Actually, the results for the final stage of evaporation are interesting in several respects. In cosmic epochs far from the Big Rip, as the mass decreases, the cosmological horizon's temperature is expected to be constant while the black hole horizon's temperature first increases and then suddenly falls to zero. Conversely, in cosmic epochs close to the Big Rip, as the mass decreases, the cosmological horizon's temperature suddenly falls to zero and the black hole horizon's temperature is expected to increase slightly. The interesting point is the possibility of remnant formation. Indeed, we conclude that if the cosmological LTB black hole in a Quintom universe evaporates in the early universe, the final remnant's content would be baryonic matter, while if it evaporates in the epochs close to the Big Rip, the final remnant's content would probably be a dark energy content like Quintom matter. Figure 3: The behavior of the black hole and cosmological apparent horizons' temperatures versus the mass in three cosmic epochs. We consider fixed times \(t=-0.4,0,+0.4\) from left to right. The Hawking temperature of the black hole apparent horizon is higher than the Hawking temperature of the cosmological apparent horizon in approximately all epochs. If the cosmological LTB black hole in a Quintom universe evaporates in the early universe, its final temperature is expected to be lower. Regarding the sudden and sharp drop in the right panels of Figs. 4 and 5, as we have mentioned previously, this is a trace of an existing non-zero-mass remnant with zero temperature. If the black hole evaporates in the early universe, evaporation continues until the temperature of the black hole horizon reaches zero, and a stable remnant remains. These remnants may be candidates for primordial black holes and even cold dark matter. On the other hand, if the black hole evaporates in the late universe, Phantom domination causes the Big Crunch or Big Chill. Therefore, we can relate the zero temperature of the outer horizon of the black hole to the Phantom dominance of the universe, the growth of the cosmological horizon size, and the Big Crunch/Big Chill. The existence of a non-vanishing mass remnant has been observed in black hole evaporation in contexts such as noncommutative black holes, quantum-corrected black holes, and especially black holes embedded in a scalar field. Therefore, the observation of a sudden drop here is a trace of a non-zero-mass remnant with vanishing temperature [48]. Figure 4: The behavior of the black hole apparent horizon temperature versus the mass in several cosmic epochs. The left panel shows the overall behavior, while the right panel shows the final stage of the evaporation in more detail. We put fixed times \(t=-0.4,-0.2,0,+0.2,+0.4\) from bottom to top.
The temperature of the black hole horizon in the early universe falls to zero and the remnant with baryonic or dark energy content remains. Figure 5: The behavior of the cosmological apparent horizon temperature versus the mass in several cosmic epochs. The left panel shows the overall behavior, while the right panel shows the final stage of the evaporation in more detail. We put fixed times \(t=-0.4,-0.2,0,+0.2,+0.4\) from bottom to top. The temperature of the cosmological horizon in the early universe is expected to reach a finite value, and in the late universe it is expected to fall to zero. Finally, we note that the calculation of temperatures in this setup should make sense in some adiabatic approximation, when the concept of temperature itself makes sense. Indeed, the correlation between the \(\omega_{1}\) and \(\omega_{2}\) modes in Eq. (34) can give a measure of the deviation from equilibrium: if the evolution of the apparent horizons is fast, one does not expect a notion of equilibrium temperature to exist. ## V Summary and conclusion In this work we have probed the cosmological LTB black hole immersed in a Quintom universe. First, we have introduced the related metric and illustrated the time evolution of the black hole and cosmological horizons. We have shown that there is a certain time in the past when the two horizons coincided, and a certain time before the Big Rip when the two horizons will coincide again. In this respect, we have noticed that the black hole horizon shrinking is due to the phantom component in this quintom model universe. Afterwards, we have applied two methods of particle tunneling from the horizons. Precisely speaking, we have calculated the Kodama vector and the surface gravity based on the dynamical black hole definitions. Then, we calculated the temperature of the cosmological LTB black hole in a Quintom universe. We concluded that both the Hamilton-Jacobi and Parikh-Wilczek methods give the same result for the temperature of this black hole, as expected, since the observer at infinity should detect a specified temperature. Besides, we have shown the existence of a correlation between the emitted modes and the non-thermal nature of the spectrum, which may bear on the information loss problem. Then we have investigated the temperature of the black hole and cosmological horizons of the LTB black hole immersed in a Quintom universe at several cosmic time slices. We have concluded that for both horizons at all cosmic times, there is a critical mass at which the two horizons are created, and the temperatures start to rise from zero. Also, approximately in all of cosmic history, the temperature of the black hole's apparent horizon is higher than the temperature of the cosmological apparent horizon. On the other hand, in epochs closer to the Big Rip, higher Hawking temperatures are expected for the cosmological LTB black hole in the Quintom universe at the final stage of evaporation. Moreover, we have illustrated the final stage of evaporation for both horizons at several cosmic epochs in more detail. The remarkable result concerns the final remnant's content of the black hole at cosmic times close to or far from the Big Rip. We have concluded that the remnant of the LTB black hole would be baryonic matter in the early universe, and would be dark-energy-like Quintom matter in the epochs close to the Big Rip. **Acknowledgement:** We would like to thank Valerio Faraoni for insightful comments on the original draft of this manuscript.
Also, the authors thank the referee for carefully reading the manuscript and for insightful comments which considerably improved the quality of the paper. **Data Availability Statement:** This manuscript has no associated data or the data will not be deposited. [Authors' comment: We have no further data related to this work to deposit, since this is a purely theoretical study. All relevant data are included in the present paper.] **Conflict of Interest:** There is no conflict of interest regarding this manuscript.
2306.13729
On the Two-sided Permutation Inversion Problem
In the permutation inversion problem, the task is to find the preimage of some challenge value, given oracle access to the permutation. This is a fundamental problem in query complexity, and appears in many contexts, particularly cryptography. In this work, we examine the setting in which the oracle allows for quantum queries to both the forward and the inverse direction of the permutation -- except that the challenge value cannot be submitted to the latter. Within that setting, we consider two options for the inversion algorithm: whether it can get quantum advice about the permutation, and whether it must produce the entire preimage (search) or only the first bit (decision). We prove several theorems connecting the hardness of the resulting variations of the inversion problem, and establish a number of lower bounds. Our results indicate that, perhaps surprisingly, the inversion problem does not become significantly easier when the adversary is granted oracle access to the inverse, provided it cannot query the challenge itself.
Gorjan Alagic, Chen Bai, Alexander Poremba, Kaiyan Shi
2023-06-23T18:31:48Z
http://arxiv.org/abs/2306.13729v2
# On the Two-sided Permutation Inversion Problem ###### Abstract In the permutation inversion problem, the task is to find the preimage of some challenge value, given oracle access to the permutation. This is a fundamental problem in query complexity, and appears in many contexts, particularly cryptography. In this work, we examine the setting in which the oracle allows for quantum queries to both the forward and the inverse direction of the permutation--except that the challenge value cannot be submitted to the latter. Within that setting, we consider two options for the inversion algorithm: whether it can get quantum advice about the permutation, and whether it must produce the entire preimage (search) or only the first bit (decision). We prove several theorems connecting the hardness of the resulting variations of the inversion problem, and establish a number of lower bounds. Our results indicate that, perhaps surprisingly, the inversion problem does not become significantly easier when the adversary is granted oracle access to the inverse, provided it cannot query the challenge itself. ## 1 Introduction ### The permutation inversion problem The permutation inversion problem is defined as follows: given a permutation \(\pi:[N]\to[N]\) and an image \(y\in[N]\), output the correct preimage \(x:=\pi^{-1}(y)\). In the decision version of the problem, it is sufficient to output only the first bit of \(x\). If the algorithm can only access \(\pi\) by making classical queries, then making \(T=\Omega(N)\) queries is necessary and sufficient for both problems. If quantum queries are allowed, then Grover's algorithm can be used to solve both problems with \(T=O(\sqrt{N})\) queries [1, 1], which is worst-case asymptotically optimal [1, 1]. In this work, we consider the permutation inversion problem in a setting where the algorithm is granted both forward and inverse quantum query access to the permutation \(\pi\). In order to make the problem nontrivial, we modify the inverse oracle so that it outputs a reject symbol when queried on the challenge image \(y\). We call this the _two-sided permutation inversion problem_. This variant appears naturally in the context of chosen-ciphertext security for encryption schemes based on (pseudorandom) permutations [13], as well as in the context of sponge hashing (SHA3) [1]. We consider two options for this problem: 1. _(Auxiliary information.)_ With this option enabled, the inversion algorithm now consists of two phases. The first phase is given a full description of \(\pi\) and allowed to prepare an arbitrary quantum state \(\rho_{\pi}\) consisting of \(S\) qubits. This state is called _auxiliary information_ or _advice_. The second phase of the inversion algorithm is granted only the state \(\rho_{\pi}\) and query access to \(\pi\), and asked to invert an image \(y\). The two phases can also share an arbitrarily long uniformly random string; we refer to this string as _shared randomness_. The complexity of the algorithm is measured in terms of the number of qubits \(S\) of the advice state (generated by the first phase) and the total number of queries \(T\) (made during the second phase.) 2. _(Search vs Decision.)_ Here the two options simply determine whether the inversion algorithm is tasked with producing the entire preimage \(x=\pi^{-1}(y)\) of the challenge \(y\) (search version), or only the first bit \(x_{0}\) (decision version.) If the algorithm is solving the search problem, we refer to it as a search permutation inverter, or \(\mathsf{SPI}\). 
If it is solving the decision problem, we refer to it as a decision permutation inverter, or \(\mathsf{DPI}\). If an \(\mathsf{SPI}\) succeeds with probability at least \(\epsilon\) in the search inversion experiment, we say it is an \(\epsilon\)-\(\mathsf{SPI}\). If a \(\mathsf{DPI}\) succeeds with probability at least \(1/2+\delta\) in the decision inversion experiment, we say it is a \(\delta\)-\(\mathsf{DPI}\). When we want to emphasize the number of queries \(T\) and the number of qubits \(S\) (in the advice state), we will also write, e.g., \((S,T,\epsilon)\)-\(\mathsf{SPI}\). In this work, we are mainly interested in the _average-case_ setting. This means that both the permutation \(\pi\) and the challenge image \(y\) are selected uniformly at random. Moreover, the success probability is taken over all the randomness in the inversion experiment, i.e., over the selection of \(\pi\) and \(y\) along with all internal randomness and measurements of the inversion algorithm. ### Summary of results Much is known about the standard (i.e., one-sided) inversion problem; we will review some of these results further below. The two-sided variant has received much less attention. Our results establish a series of basic facts about this variant. For the remainder of the paper, unless stated otherwise, we will refer to the two-sided permutation inversion problem as "the inversion problem" or simply "the problem." Amplification. We consider a simple form of amplification: the inversion algorithm \(\mathcal{A}\) is run \(\ell\) times; once the \(\ell\) executions are complete, the outputs \(x_{i}\) are tested to see if \(\pi(x_{i})=y\) (in the search case) or the majority bit is output (in the decision case). To ensure that each execution behaves independently, the shared randomness is used to randomize the problem instance given to each execution of \(\mathcal{A}\). The total advice state of the amplified algorithm then consists of the \(\ell\) advice states generated by each execution of \(\mathcal{A}\). We refer to the amplified algorithm as \(\mathcal{A}[\ell]\). We show that this amplification boosts an \((S,T,\epsilon)\)-\(\mathsf{SPI}\) to an \((\ell S,\ell(T+1),1-(1-\epsilon)^{\ell})\)-\(\mathsf{SPI}\), and show a similar result for decision. This is formalized in Lemma 4.1 and Lemma 4.3. Search-to-decision reduction. Clearly, the search version of any variant of the inversion problem is no easier than the corresponding decision version. We establish a simple reduction showing that search is in fact also not much harder than decision. Specifically, we show that an \((S,T,\delta)\)-\(\mathsf{DPI}\) can be used to construct a \((n\ell S,n\ell T,1-ne^{-\ell\delta})\)-\(\mathsf{SPI}\) (here and throughout, \(n=\lceil\log N\rceil\)). This is formalized in Theorem 5.1. Lower bounds, search version. We establish a lower bound for the search version of the inversion problem with advice, showing that \(ST^{2}\geq\widetilde{\Omega}(\epsilon^{3}N)\) for any \((S,T,\epsilon)\)-\(\mathsf{SPI}\). While this bound is not tight, we do establish a tighter bound of \(ST^{2}\geq\widetilde{\Omega}(\epsilon N)\) for a restricted class of inverters (similarly to a result of [10]). These results are formalized in Theorem 6.2 and Theorem 6.1. Lower bounds, decision version. For the decision version with advice, we combine two results above (search lower bound and search-to-decision reduction) to yield a (non-tight) bound of \(ST^{2}\geq\widetilde{\Omega}(\delta^{6}N)\) for any \(\delta\)-DPI.
In the case of no advice, we get a tight lower bound via a reduction from the unstructured search problem; this shows that \(\widetilde{\Omega}(\sqrt{N})\) queries are required. Our reduction is similar to that of Nayak [20]. These results are formalized in Corollary 6.3 and Corollary 6.4. Applications. We observe that the two-sided version of the permutation inversion problem can be viewed as the main task of an adversary in a natural cryptographic experiment. In this experiment, the adversary is tasked with decrypting an encryption of a random message, while having oracle access to both the encryption map and the decryption map. This is a standard security notion called \(\mathsf{OW-CCA}\) (one-way security against chosen ciphertext attack). In our setting, we grant the attacker even more power: they can query quantumly, they can control the randomness of the encryption map, and they can deduce the randomness used to encrypt when applying the decryption map. We call this \(\mathsf{QCCRA}\) (quantum chosen ciphertext with randomness-access attack). We apply our lower bounds above to show that a natural encryption scheme constructed from random permutations is secure even against these powerful attacks. In the computational security setting, such a scheme can be instantiated efficiently using quantum-query-secure pseudorandom permutations [14]. These results are formalized in Theorem 7.2 and Theorem 7.4. Future work. The two-sided permutation inversion problem appears naturally in the context of sponge hashing [1], which is used by the international hash function standard SHA3 [13]. Previous work [1, 12] studied the post-quantum security of the sponge construction where the block function is either a random function or a (non-invertible) random permutation. However, as the core permutation in SHA3 is public and efficiently invertible, the "right setting" of theoretical study is one in which the block function consists of an invertible permutation. This setting is far less understood, and establishing the security of the sponge in this setting is a major open problem in post-quantum cryptography. Our results on two-sided permutation inversion may serve as a stepping stone towards this goal. ### Technical overview Our main technical result is the lower bound for the search variant of the two-sided permutation inversion problem in Section 6.1. At a high level, our proof uses a similar _compression argument_ as in previous works on one-sided permutation inversion with advice [1, 10, 1]. We use information-theoretic lower bounds on the length of quantum random access codes [11, 12], which are a means of encoding classical bits in terms of (potentially fewer) qubits. In other words, we construct an encoder that compresses the truth table of a permutation by using the power of the search inverter, which then allows us to obtain the desired space-time lower bound \(ST^{2}=\widetilde{\Omega}(\epsilon^{3}N)\) in Theorem 6.2. Along the way, we show how to amplify search inverters for the two-sided permutation inversion problem; this can be done via a careful _averaging argument_ (which we prove in Lemma 2.3). Our approach allows us to obtain a simpler amplification analysis as compared to previous work [1], which used _quantum rewinding_ for the one-sided case. To obtain a space-time trade-off \(ST^{2}\geq\tilde{\Omega}(\delta^{6}N)\) for decision inverters that succeed with bias \(\delta>0\), we give a _search-to-decision_ reduction in Theorem 5.1.
Specifically, we show that a decision inverter can be used to solve the (search) permutation inversion problem by recovering one bit of the preimage at a time. Here, we invoke a self-reduction that _re-randomizes_ the decision inverter in each execution while guaranteeing independence. ### Related work Previous works have considered the quantum-query _function_ inversion problem [11, 12, 13, 14, 15]. A number of recent papers gave lower bounds for the (one-sided) quantum-query permutation inversion problem, with and without advice [1, 2, 1, 16, 17, 18, 19, 20]. The highlights among these are summarized in Table 1. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & [17] & [12] & [18] & Ours \\ \hline Advice & classical & quantum & quantum & quantum \\ \hline Access Type & one-sided & one-sided & one-sided & two-sided \\ \hline Inverter & restricted & restricted & general & general \\ \hline Space-time & \(ST^{2}=\widetilde{\Omega}(N)\) & \(ST^{2}=\widetilde{\Omega}(\epsilon N)\) & \(ST^{2}=\widetilde{\Omega}(\epsilon^{3}N)\) & \(ST^{2}=\widetilde{\Omega}(\epsilon^{3}N)\) \\ trade-off & & & & \\ \hline \end{tabular} \end{table} Table 1: Summary of previous work on permutation inversion with advice. Success probability is denoted by \(\epsilon\). Note that \(\epsilon=O(1)\) for computing the space-time trade-off in [17]. Note that the lower bound for restricted adversaries described in [1, 12] can be translated to the more general lower bound in a black-box way, for example by applying the amplification procedure described in Lemma 4.2. To our knowledge, the two-sided variant of the inversion problem has only been considered in two other works. First, [13] gives a lower bound for inverting random injective functions in the case of two-way access without advice; their bound shows that \(T>N^{1/5}\) queries are required to achieve non-negligible success probability. Second, [1] briefly considers inverse access for the permutation inversion problem, but only in the trivial setting where a query on the challenge is allowed. Another novelty of our work is that we give a lower bound for the average-case decision problem. While prior work by Chung et al. [13] also considered the general decision game, their generic framework crucially relies on compressed oracles [16], which are only known to support random _functions_. Consequently, their techniques cannot readily be applied in the context of permutation inversion due to a lack of "compressed permutation oracles". We remark that the notion of two-way quantum accessibility to a random permutation has been considered in other works; for example, [1, 2] studied the hardness of detecting certain modifications to the permutation in this model. By contrast, we are concerned with the problem of finding the inverse of a random image. ## Acknowledgements We thank Christian Majenz for useful discussions. AP is partially supported by AFOSR YIP (award number FA9550-16-1-0495), the Institute for Quantum Information and Matter (an NSF Physics Frontiers Center; NSF Grant PHY-1733907) and by a grant from the Simons Foundation (828076, TV). Gorjan Alagic and Kaiyan Shi acknowledge support from the U.S. Army Research Office under Grant Number W911NF-20-1-0015. GA and Chen Bai acknowledge support from the U.S. Department of Energy under Award Number DE-SC0020312. GA acknowledges support from the AFOSR under Award Number FA9550-20-1-0108, and from the NSF under Award Number CNS-2154705.
## 2 Technical preliminaries In this section we collect a series of known technical results, which we will need for our main proofs. ### Some basic probabilistic lemmas We first record some basic lemmas about the behavior of certain types of random variables. **Lemma 2.1** (Multiplicative Chernoff Bound).: _Let \(X_{1},\ldots,X_{n}\) be independent random variables taking values in \(\{0,1\}\). Let \(X=\sum_{i\in[n]}X_{i}\) denote their sum and let \(\mu=\mathbb{E}[X]\) denote its expected value. Then for any \(\delta>0\),_ \[\Pr[X<(1-\delta)\mu]\leq\left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^{\mu}.\] _Specifically, for a binomial distribution with \(\mu=np\) and \(p>\frac{1}{2}\), we have_ \[\Pr[X\leq n/2]\leq e^{-n(p-\frac{1}{2})^{2}/(2p)}\] _and correspondingly,_ \[\Pr\left[X>\frac{n}{2}\right]\geq 1-e^{-n(p-\frac{1}{2})^{2}/(2p)}.\] **Lemma 2.2** (Reverse Markov's inequality).: _Let \(X\) be a random variable taking values in \([0,1]\). Let \(\theta\in(0,1)\) be arbitrary. Then, it holds that_ \[\Pr[X\geq\theta]\geq\frac{\mathbb{E}[X]-\theta}{1-\theta}.\] Proof.: Fix \(\theta\in(0,1)\). We first show that \[(1-\theta)\cdot\mathbb{I}_{[X\geq\theta]}\geq X-\theta. \tag{1}\] Suppose that \(X\geq\theta\). Then, Eq. (1) amounts to \(1-\theta\geq X-\theta\), which is satisfied because \(X\leq 1\). Now suppose that \(X<\theta\). In this case Eq. (1) amounts to \(0\geq X-\theta\), which is satisfied whenever \(X\geq 0\). Taking the expectation over Eq. (1) and noting that \(\mathbb{E}[\mathbb{I}_{[X\geq\theta]}]=\Pr[X\geq\theta]\), we get \[(1-\theta)\cdot\Pr[X\geq\theta]\geq\mathbb{E}[X]-\theta.\] This proves the claim. **Lemma 2.3** (Averaging argument).: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be any finite sets and let \(\Omega:\mathcal{X}\times\mathcal{Y}\to\{0,1\}\) be a predicate. Suppose that \(\Pr_{x,y}[\Omega(x,y)=1]\geq\epsilon\), for some \(\epsilon\in[0,1]\), where \(x\) and \(y\) are chosen uniformly at random in \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. Let \(\theta\in(0,1)\). Then, there exists a subset \(\mathcal{X}_{\theta}\subseteq\mathcal{X}\) of size \(|\mathcal{X}_{\theta}|\geq(1-\theta)\cdot\epsilon|\mathcal{X}|\) such that_ \[\Pr_{y}[\Omega(x,y)=1]\geq\theta\cdot\epsilon,\quad\forall x\in\mathcal{X}_{\theta}.\] Proof.: Define \(p_{x}=\Pr_{y}[\Omega(x,y)=1]\), for \(x\in\mathcal{X}\). Then, for \(\epsilon\in[0,1]\), we have \[\mathbb{E}_{x}[p_{x}]=\Pr_{x,y}[\Omega(x,y)=1]=|\mathcal{X}|^{-1}\sum_{x\in\mathcal{X}}\Pr_{y}[\Omega(x,y)=1]\geq\epsilon.\] Fix \(\theta\in(0,1)\) and define \(\mathcal{X}_{\theta}=\{x\in\mathcal{X}:p_{x}\geq\theta\cdot\epsilon\}\), so that by construction \[p_{x}=\Pr_{y}[\Omega(x,y)=1]\geq\theta\cdot\epsilon,\quad\forall x\in\mathcal{X}_{\theta}.\] Recall that \(x\) is chosen uniformly at random in \(\mathcal{X}\). Applying the reverse Markov's inequality of Lemma 2.2 to the random variable \(p_{x}\) with threshold \(\theta\cdot\epsilon\), it follows that \[\frac{|\mathcal{X}_{\theta}|}{|\mathcal{X}|}=\Pr[p_{x}\geq\theta\cdot\epsilon]\geq\frac{\mathbb{E}[p_{x}]-\theta\cdot\epsilon}{1-\theta\cdot\epsilon}\geq\frac{\epsilon\cdot(1-\theta)}{1-\theta\cdot\epsilon}>\epsilon\cdot(1-\theta).\] In other words, the subset \(\mathcal{X}_{\theta}\subseteq\mathcal{X}\) has size at least \(|\mathcal{X}_{\theta}|\geq(1-\theta)\cdot\epsilon|\mathcal{X}|\).
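As a quick numerical sanity check of the averaging argument, the following self-contained Python snippet (a sketch: the predicate \(\Omega\), the set sizes, and the density are illustrative choices, not taken from the paper) verifies the bound \(|\mathcal{X}_{\theta}|\geq(1-\theta)\cdot\epsilon|\mathcal{X}|\) of Lemma 2.3 on a random instance.

```python
# Monte Carlo sanity check of Lemma 2.3 (averaging argument).
# All parameters below are illustrative assumptions, not from the paper.
import random

random.seed(0)
X, Y = range(200), range(200)

# Hypothetical predicate Omega: a sparse random relation on X x Y.
good = {(x, y) for x in X for y in Y if random.random() < 0.05}

# Empirical epsilon = Pr_{x,y}[Omega(x,y) = 1] under the uniform distribution.
eps = len(good) / (len(X) * len(Y))
theta = 0.5

# p_x = Pr_y[Omega(x,y) = 1]; X_theta collects the x with p_x >= theta * eps.
p = {x: sum((x, y) in good for y in Y) / len(Y) for x in X}
X_theta = [x for x in X if p[x] >= theta * eps]

# Lemma 2.3 guarantees |X_theta| >= (1 - theta) * eps * |X|.
bound = (1 - theta) * eps * len(X)
assert len(X_theta) >= bound
print(f"eps={eps:.3f}, |X_theta|={len(X_theta)}, guaranteed bound={bound:.1f}")
```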
### Swapping Lemma The following lemma controls the ability of a query algorithm to distinguish two oracles, in terms of the "total query magnitude" placed on locations at which the oracles take differing values. **Definition 2.4** (Query magnitude).: _Let \(|\psi\rangle=\sum_{x\in\mathcal{X}}\alpha_{x}\,|x\rangle\) be a state and let \(\mathcal{S}\subseteq\mathcal{X}\) be a subset, for some finite set \(\mathcal{X}\). Then, the query magnitude with respect to \(\mathcal{S}\) is given by_ \[q_{\mathcal{S}}(|\psi\rangle)=\sum_{x\in\mathcal{S}}|\alpha_{x}|^{2}.\] **Definition 2.5** (Total query magnitude).: _Let \(\mathcal{A}^{f}\) be a quantum algorithm with quantum oracle access to a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\), for some finite sets \(\mathcal{X}\) and \(\mathcal{Y}\). Let \(\mathcal{S}\subseteq\mathcal{X}\) be a subset. Then, the total query magnitude of \(\mathcal{A}^{f}\) on the set \(\mathcal{S}\) is defined as_ \[q(\mathcal{A}^{f},\mathcal{S}):=\sum_{t=0}^{T-1}q_{\mathcal{S}}(|\psi_{t}\rangle)=\sum_{t=0}^{T-1}\|\Pi_{\mathcal{S}}\,|\psi_{t}\rangle\|^{2},\] _where we represent \(\mathcal{A}\) as a sequence \(|\psi_{0}\rangle,\ldots,|\psi_{T-1}\rangle\) of intermediate states, where \(\Pi_{\mathcal{S}}\) is a projector onto a query register of \(\mathcal{A}\), and where \(|\psi_{t}\rangle\) represents the state of \(\mathcal{A}\) just before the \((t+1)\)-st query._ We use the following elementary properties of the total query magnitude: **Lemma 2.6**.: _Let \(f:\mathcal{X}\rightarrow\mathcal{Y}\) be a function for some finite sets \(\mathcal{X}\) and \(\mathcal{Y}\), and let \(\mathcal{A}^{f}\) be a quantum algorithm with quantum oracle access to \(f\). Then,_ * _For any subset_ \(\mathcal{S}\subseteq\mathcal{X}\)_, it holds that_ \[q(\mathcal{A}^{f},\mathcal{S})\leq T,\] _where_ \(T\) _is an upper bound on the number of queries made by_ \(\mathcal{A}\)_._ * _For any disjoint subsets_ \(\mathcal{S}_{0},\mathcal{S}_{1}\subseteq\mathcal{X}\) _it holds that_ \[q(\mathcal{A}^{f},\mathcal{S}_{0}\cup\mathcal{S}_{1})=q(\mathcal{A}^{f},\mathcal{S}_{0})+q(\mathcal{A}^{f},\mathcal{S}_{1}).\] * _For any subsets_ \(\mathcal{S}_{0}\subseteq\mathcal{S}_{1}\subseteq\mathcal{X}\) _it holds that_ \[q(\mathcal{A}^{f},\mathcal{S}_{0})\leq q(\mathcal{A}^{f},\mathcal{S}_{1}).\] **Lemma 2.7** (Swapping Lemma, [20]).: _Let \(f,g:\mathcal{X}\to\mathcal{Y}\) be functions such that \(f(x)=g(x)\) for all \(x\notin\mathcal{S}\), where \(\mathcal{S}\subseteq\mathcal{X}\). Let \(|\Psi_{f}\rangle\) and \(|\Psi_{g}\rangle\) denote the final states of a quantum algorithm \(\mathcal{A}\) with quantum oracle access to the functions \(f\) and \(g\), respectively. Then, it holds that_ \[\|\,|\Psi_{f}\rangle-|\Psi_{g}\rangle\,\|\leq\sqrt{T\cdot q(\mathcal{A}^{f},\mathcal{S})},\] _where \(\|\,|\Psi_{f}\rangle-|\Psi_{g}\rangle\,\|\) denotes the Euclidean distance and where \(T\) is an upper bound on the number of quantum oracle queries made by \(\mathcal{A}\)._
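The following toy snippet (a sketch with made-up amplitude vectors, not a simulation of any inverter in this paper) illustrates how the quantities in Definitions 2.4 and 2.5 are computed, together with the resulting distance bound from the swapping lemma.

```python
# Toy computation of query magnitude (Def. 2.4) and total query
# magnitude (Def. 2.5); the amplitudes below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, T = 8, 3  # domain size and number of queries (illustrative)

# Hypothetical query-register amplitudes just before each of the T queries.
states = [rng.normal(size=N) + 1j * rng.normal(size=N) for _ in range(T)]
states = [psi / np.linalg.norm(psi) for psi in states]

S = {2, 5}  # a subset of the domain on which two oracles differ

def q_S(psi, subset):
    # q_S(|psi>) = sum over x in S of |alpha_x|^2   (Definition 2.4)
    return sum(abs(psi[x]) ** 2 for x in subset)

total = sum(q_S(psi, S) for psi in states)  # Definition 2.5
print(f"q(A, S) = {total:.4f}  (<= T = {T}, cf. Lemma 2.6)")
# Lemma 2.7 bounds || |Psi_f> - |Psi_g> || by sqrt(T * q(A, S)):
print(f"swapping-lemma distance bound: {np.sqrt(T * total):.4f}")
```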
### Lower bounds for quantum random access codes Quantum random access codes [21, 1, 13] are a means of encoding classical bits into (potentially fewer) qubits. We use the following variant from [11]. **Definition 2.8** (Quantum random access codes with variable length).: _Let \(N\) be an integer and let \(\mathcal{F}_{N}=\{f:[N]\to\mathcal{X}_{N}\}\) be an ensemble of functions over some finite set \(\mathcal{X}_{N}\). A quantum random access code with variable length \((\mathsf{QRAC}\text{-}\mathsf{VL})\) for \(\mathcal{F}_{N}\) is a pair \((\mathsf{Enc},\mathsf{Dec})\) consisting of a quantum encoding algorithm \(\mathsf{Enc}\) and a quantum decoding algorithm \(\mathsf{Dec}\):_ * \(\mathsf{Enc}(f;R)\)_: The encoding algorithm takes as input a function_ \(f\in\mathcal{F}_{N}\) _together with a set of random coins_ \(R\in\{0,1\}^{*}\)_, and outputs a quantum state_ \(\rho\) _on_ \(\ell=\ell(f)\) _many qubits (where_ \(\ell\) _may depend on_ \(f\)_)._ * \(\mathsf{Dec}(\rho,x;R)\)_: The decoding algorithm takes as input a state_ \(\rho\)_, an element_ \(x\in[N]\) _and random coins_ \(R\in\{0,1\}^{*}\) _(the same randomness used for the encoding), and seeks to output_ \(f(x)\)_._ _The performance of a \(\mathsf{QRAC}\text{-}\mathsf{VL}\) is characterized by parameters \(L\) and \(\delta\). Let_ \[L:=\operatorname*{\mathbb{E}}_{f}[\ell(f)]\] _be the average length of the encoding over the uniform distribution on \(f\in\mathcal{F}_{N}\), and let_ \[\delta=\Pr_{f,x,R}\left[\mathsf{Dec}(\mathsf{Enc}(f;R),x;R)=f(x)\right]\] _be the probability that the scheme correctly reconstructs the image of the function, where \(f\in\mathcal{F}_{N}\), \(x\in[N]\) and \(R\) are chosen uniformly at random._ We use the following information-theoretic lower bound on the expected length of any \(\mathsf{QRAC}\text{-}\mathsf{VL}\) scheme for permutations, which is a consequence of [11, Theorem 5]. **Theorem 2.9** ([11], Corollary 1).: _For any \(\mathsf{QRAC}\text{-}\mathsf{VL}\) for permutations \(\mathcal{S}_{N}\) with decoding advantage \(\delta=1-k/N\) and for any \(k=\Omega(1/N)\), we have_ \[L\geq\log N!-O(k\log N).\] ## 3 The permutation inversion problem We begin by formalizing the search version of the problem of inverting a permutation. We let \([N]=\{1,...,N\}\); typically we choose \(N=2^{n}\) for some positive integer \(n\). For \(f:\mathcal{X}\to\mathcal{Y}\) a function from a set \(\mathcal{X}\) to an additive group \(\mathcal{Y}\), the quantum oracle \(\mathcal{O}_{f}\) is the unitary operator \[\mathcal{O}_{f}:\left|x\right\rangle\left|y\right\rangle\to\left|x\right\rangle\left|y\oplus f(x)\right\rangle.\] We use \(\mathcal{A}^{\mathcal{O}_{f}}\) (or sometimes simply \(\mathcal{A}^{f}\)) to denote that algorithm \(\mathcal{A}\) has quantum oracle access to \(f\). **Definition 3.1**.: _Let \(N\in\mathbb{N}\). A search-version permutation inverter (SPI) is a pair \(\mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) of quantum algorithms, where_ * \(\mathsf{S}_{0}\) _is an algorithm which receives as input a truth table for a permutation over_ \([N]\) _and a random string_ \(r\)_, and outputs a quantum state;_ * \(\mathsf{S}_{1}\) _is an oracle algorithm which receives a quantum state, an image_ \(y\in[N]\)_, and a random string_ \(r\)_, and outputs_ \(x\in[N]\)_._ We will consider the execution of a SPI \(\mathsf{S}\) in the following experiment, which we call \(\mathsf{SearchInvert}_{\mathsf{S}}\). 1. _(sample coins)_ a uniformly random permutation \(\pi:[N]\to[N]\) and a uniformly random string \(r\leftarrow\{0,1\}^{*}\) are sampled; 2. _(prepare advice)_ \(\mathsf{S}_{0}\) is run, producing a quantum state \(\rho_{\pi,r}\leftarrow\mathsf{S}_{0}(\pi,r)\); 3. _(sample instance)_ a uniformly random image \(y\in[N]\) is generated; 4. _(invert)_ \(\mathsf{S}_{1}\) is run with the two oracles below, and produces a candidate preimage \(x^{*}\).
\[\mathcal{O}_{\pi}(\left|w\right\rangle\left|z\right\rangle)=\left|w\right\rangle\left|z\oplus\pi(w)\right\rangle\qquad\mathcal{O}_{\pi_{\perp y}^{-1}}(\left|w\right\rangle\left|z\right\rangle)=\left|w\right\rangle\left|z\oplus\pi_{\perp y}^{-1}(w)\right\rangle,\tag{2}\] where \(\pi_{\perp y}^{-1}:[N]\times\{0,1\}\to[N]\times\{0,1\}\) is defined by \[\pi_{\perp y}^{-1}(w\|b)=\begin{cases}\pi^{-1}(w)\|0&\text{ if }b=0\text{ and }w\neq y\\ 1\|1&\text{ otherwise.}\end{cases}\] To keep notation simple, we write this process as \(x^{*}\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}(\rho_{\pi,r},y,r)\). We will use \(\pi_{\perp y}\) to denote simultaneous access to the two oracles in (2) throughout the paper. 5. _(check)_ If \(\pi(x^{*})=y\), output 1; otherwise output 0. Note that the two oracles allow for the evaluation of the permutation \(\pi\) in both the forward and inverse direction. To disallow trivial solutions, the oracle outputs a fixed "reject" element \(1\|1\in[N]\times\{0,1\}\) if queried on \(y\) in the inverse direction. If the probability that \(\mathsf{S}\) successfully inverts (i.e., that the experiment outputs 1) is at least \(\epsilon\), we say that \(\mathsf{S}\) is an \(\epsilon\)-SPI. **Definition 3.2**.: _An \(\epsilon\)-SPI is a search-version permutation inverter \(\mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) satisfying_ \[\Pr\left[\pi^{-1}(y)\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}(\rho,y,r)\ :\ \rho\leftarrow\mathsf{S}_{0}(\pi,r)\right]\geq\epsilon,\] _where the probability is taken over \(\pi\leftarrow\mathcal{S}_{N}\), \(r\leftarrow\{0,1\}^{*}\) and \(y\leftarrow[N]\), along with all internal randomness and measurements of \(\mathsf{S}\)._ We will measure the computational resources required by a \(\mathsf{SPI}\ \mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) in terms of only two quantities. The first is an upper bound on the number of qubits of the state produced by \(\mathsf{S}_{0}\), denoted by \(S(\mathsf{S})\) (or simply \(S\), when the context is clear). The second is an upper bound on the number of oracle queries made by \(\mathsf{S}_{1}\), denoted by \(T(\mathsf{S})\) (or simply \(T\)). We emphasize that the running time of \(\mathsf{S}\) and the length of the shared randomness \(r\) are only required to be finite. We will assume that both \(S\) and \(T\) depend only on the parameter \(N\); in particular, they will not vary with \(\pi\), \(y\), \(r\), or any measurements. To further simplify things, we will assume without loss of generality that \(\mathsf{S}_{0}\) outputs _exactly_ \(S\) qubits and \(\mathsf{S}_{1}\) makes _exactly_ \(T\) queries whenever \(\mathsf{S}\) is run in the experiment described above. We denote an \(\epsilon\)-\(\mathsf{SPI}\) with parameters \(S\) and \(T\) as an \((S,T,\epsilon)\)-\(\mathsf{SPI}\); in particular, a \((0,T,\epsilon)\)-\(\mathsf{SPI}\) is one with no advice (\(S=0\)).
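For concreteness, the following classical sketch models the two oracles of Eq. (2) on basis inputs, together with the final check step of the \(\mathsf{SearchInvert}_{\mathsf{S}}\) experiment. The inverter below is a hypothetical placeholder, and indices run over \(\{0,\ldots,N-1\}\) for convenience.

```python
# Classical stand-ins for the forward oracle and the punctured inverse
# oracle of Eq. (2); toy_inverter is a hypothetical placeholder for S_1.
import random

random.seed(2)
N = 16
pi = list(range(N)); random.shuffle(pi)      # a random permutation
pi_inv = [0] * N
for x, fx in enumerate(pi):
    pi_inv[fx] = x

y = random.randrange(N)                      # the challenge image

def forward(w):
    # O_pi on a basis input
    return pi[w]

def inverse_punctured(w, b=0):
    # pi^{-1}_{perp y}: answers pi^{-1}(w)||0, except it rejects on w = y
    if b == 0 and w != y:
        return (pi_inv[w], 0)
    return (1, 1)                            # the fixed reject element 1||1

def toy_inverter(challenge):
    # placeholder S_1: a uniformly random guess
    return random.randrange(N)

x_star = toy_inverter(y)
print("experiment output:", int(forward(x_star) == y))   # the check step
```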
Relationship to previous notion. In [13, 14, 1], the success of a \(\mathsf{SPI}\) is measured in an alternative way. First, \(\mathcal{A}\) is said to "invert \(y\) for \(\pi\)" if \(\mathcal{A}\) succeeds in the inversion experiment for the pair \((\pi,y)\) with probability (over all remaining randomness and all measurements) at least \(2/3\). Second, \(\mathcal{A}\) is said to "invert a \(\delta\)-fraction of inputs" if \(\Pr_{\pi,y}[\mathcal{A}\text{ inverts }y\text{ for }\pi]\geq\delta\). This type of inverter is clearly captured by Definition 3.1: it is an \(\epsilon\)-\(\mathsf{SPI}\) with \(\epsilon=2\delta/3\). However, there are inverters of interest which are captured by Definition 3.1, but not by the previous definition. For example, in a cryptographic context, one would certainly be concerned about adversaries which can invert every \((\pi,y)\) with probability exactly \(1/n\). Such an adversary is clearly a \((1/n)\)-\(\mathsf{SPI}\), but is not a valid inverter under the previous definition for any value of \(\delta\). Other works also consider the general average case captured by Definition 3.1 (e.g., [13, 14, 15]), but without two-way oracle access. Decision version. The decision version of the permutation inversion problem is defined similarly to the search version above, with the modifications listed here: * A decision-version permutation inverter (\(\mathsf{DPI}\)) is denoted \(\mathsf{D}=(\mathsf{D}_{0},\mathsf{D}_{1})\), and outputs one bit \(b\) rather than a full candidate preimage; * In the "check" phase of the experiment, the single-bit output \(b\) of \(\mathsf{D}_{1}\) is compared to the first bit \(\pi^{-1}(y)|_{0}\) of the preimage of the challenge \(y\); * A \(\delta\)-\(\mathsf{DPI}\) is a decision permutation inverter which succeeds at the decision inversion experiment with probability at least \(\frac{1}{2}+\delta\). ## 4 Amplification In this section, we show how to amplify the success probability of search and decision inverters. The construction for the search case is shown in Protocol 1. **Protocol 1** ("\(\ell\)-time repetition" of an \(\epsilon\)-\(\mathsf{SPI}\)).: _Given an \(\epsilon\)-\(\mathsf{SPI}\ \mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) and an integer \(\ell>0\), define a \(\mathsf{SPI}\ \mathsf{S}[\ell]=(\mathsf{S}[\ell]_{0},\mathsf{S}[\ell]_{1})\) as follows._ 1. _(Advice Preparation)_ \(\mathsf{S}[\ell]_{0}\) _proceeds as follows:_ 1. _receives as input a random permutation_ \(\pi:[N]\to[N]\) _and randomness_ \(r\leftarrow\{0,1\}^{*}\)_, and parses the string_ \(r\) _into_ \(2\ell\) _substrings_ \(r=r_{0}\|...\|r_{\ell-1}\|r_{\ell}\|...\|r_{2\ell-1}\) _(with lengths as needed for the next step)._ 2. uses \(r_{0},...,r_{\ell-1}\) to generate \(\ell\) permutations \(\{\sigma_{i}\}_{i=0}^{\ell-1}\) in \(\mathcal{S}_{N}\), and then runs \(\mathsf{S}_{0}(\sigma_{i}\circ\pi,r_{i+\ell})\) to get a quantum state \(\rho_{i}:=\rho_{\sigma_{i}\circ\pi,r_{i+\ell}}\) for all \(i\in[0,\ell-1]\). Finally, \(\mathsf{S}[\ell]_{0}\) outputs the quantum state \(\bigotimes_{i=0}^{\ell-1}\rho_{i}\). 2. _(Oracle Algorithm)_ \(\mathsf{S}[\ell]_{1}^{\pi_{\perp y}}\) _proceeds as follows:_ 1. _receives_ \(\bigotimes_{i=0}^{\ell-1}\rho_{i}\)_, randomness_ \(r\) _and an image_ \(y\in[N]\) _as input._ 2. _parses_ \(r=r_{0}\|...\|r_{\ell-1}\|r_{\ell}\|...\|r_{2\ell-1}\) _and uses the coins_ \(r_{0}\|...\|r_{\ell-1}\) _to reconstruct the permutations_ \(\{\sigma_{i}\}_{i=0}^{\ell-1}\) _in_ \(\mathcal{S}_{N}\)_._ 3. _runs the following routine for all_ \(i\in[0,\ell-1]\)_:_ 1. _runs_ \(\mathsf{S}_{1}\) _with oracle access to_ \((\sigma_{i}\circ\pi)_{\perp\sigma_{i}(y)}\)_, which implements the permutation_ \(\sigma_{i}\circ\pi\) _and its inverse (with output_ \(\perp\) _on input_ \(\sigma_{i}(y)\)_)._ 2. _gets back_ \(x_{i}\leftarrow\mathsf{S}_{1}^{(\sigma_{i}\circ\pi)_{\perp\sigma_{i}(y)}}(\rho_{i},\sigma_{i}(y),r_{i+\ell})\)_._ 4. _queries the oracle_ \(\pi_{\perp y}\) _(in the forward direction) on each_ \(x_{i}\) _to see if_ \(\pi(x_{i})=y\)_.
If such an \(x_{i}\) is found, output it; otherwise output \(0\). We remark that other works considered different approaches to amplification, e.g., via quantum rewinding [1] and the gentle measurement lemma [1]. **Lemma 4.1** (Amplification, search).: _Let \(\mathsf{S}\) be an \((S,T,\epsilon)\)-SPI for some \(\epsilon>0\). Then \(\mathsf{S}[\ell]\) is an \((\ell S,\ell(T+1),1-(1-\epsilon)^{\ell})\)-SPI._ Proof.: We consider the execution of the "\(\ell\)-time repetition" \(\mathsf{S}[\ell]\) of the \(\epsilon\)-SPI \(\mathsf{S}\) in the search permutation inversion experiment, as defined in Protocol 1. By construction, \(\mathsf{S}[\ell]\) runs \(\ell\)-many \(\mathsf{SPI}\) procedures \((\mathsf{S}_{0},\mathsf{S}_{1})\). Because \(\mathsf{S}\) is assumed to be an \(\epsilon\)-SPI, it follows that for each iteration \(i\in[0,\ell-1]\), \[\Pr\Big[(\sigma_{i}\circ\pi)^{-1}(\sigma_{i}(y))\leftarrow\mathsf{S}_{1}^{(\sigma_{i}\circ\pi)_{\perp\sigma_{i}(y)}}\big(\rho_{i},\sigma_{i}(y),r_{i+\ell}\big):\rho_{i}\leftarrow\mathsf{S}_{0}(\sigma_{i}\circ\pi,r_{i+\ell})\Big]\equiv\Pr\big[\pi^{-1}(y)\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}\big(\rho_{\pi,r_{i+\ell}},y,r_{i+\ell}\big):\rho_{\pi,r_{i+\ell}}\leftarrow\mathsf{S}_{0}(\pi,r_{i+\ell})\big]\ \geq\ \epsilon,\] where the probability is taken over \(\pi\leftarrow\mathcal{S}_{N}\), \(r\leftarrow\{0,1\}^{*}\) (which is used to sample the permutations \(\sigma_{i}\)) and \(y\leftarrow[N]\), along with all internal measurements of \(\mathsf{S}\). Then, by the fact that all \(\ell\) trials are completely independent from one another, \[\Pr\big[\pi^{-1}(y)\leftarrow\mathsf{S}[\ell]_{1}^{\pi_{\perp y}}(\rho,y,r):\rho\leftarrow\mathsf{S}[\ell]_{0}(\pi,r)\big]=1-\prod_{i=0}^{\ell-1}\Pr\Big[x\leftarrow\mathsf{S}_{1}^{(\sigma_{i}\circ\pi)_{\perp\sigma_{i}(y)}}\big(\rho_{i},\sigma_{i}(y),r_{i+\ell}\big)\,\wedge\,x\neq(\sigma_{i}\circ\pi)^{-1}(\sigma_{i}(y)):\rho_{i}\leftarrow\mathsf{S}_{0}(\sigma_{i}\circ\pi,r_{i+\ell})\Big]\geq 1-(1-\epsilon)^{\ell}.\] Given that the \(\mathsf{SPI}\) \((\mathsf{S}_{0},\mathsf{S}_{1})\) requires space \(S\) and a number of queries \(T\), we have that \((\mathsf{S}[\ell]_{0},\mathsf{S}[\ell]_{1})\) requires space \(S(\mathsf{S}[\ell])=\ell\cdot S\) and a number of queries \(T(\mathsf{S}[\ell])=\ell\cdot(T+1)\), as both of these algorithms need to run either \(\mathsf{S}_{0}\) or \(\mathsf{S}_{1}\) \(\ell\)-many times as subroutines. This proves the claim. We also need a variant of the above, due to the requirements of our search lower bound technique. **Lemma 4.2**.: _Let \(\mathsf{S}\) be an \((S,T,\epsilon)\)-SPI for some \(\epsilon>0\). Then, we can construct an \(\mathsf{SPI}\ \mathsf{S}[\ell]=(\mathsf{S}[\ell]_{0},\mathsf{S}[\ell]_{1})\) using \(S(\mathsf{S}[\ell])\) qubits of advice and making \(T(\mathsf{S}[\ell])\) queries, with_ \[S(\mathsf{S}[\ell])=\left\lceil\frac{\ln(10)}{\epsilon}\right\rceil\cdot S\quad\text{ and }\quad T(\mathsf{S}[\ell])=\left\lceil\frac{\ln(10)}{\epsilon}\right\rceil\cdot(T+1)\] _such that_ \[\Pr_{\pi,y}\left[\Pr_{r}\left[\pi^{-1}(y)\leftarrow\mathsf{S}[\ell]_{1}^{\pi_{\perp y}}(\rho,y,r):\rho\leftarrow\mathsf{S}[\ell]_{0}(\pi,r)\right]\geq\frac{2}{3}\right]\geq\frac{1}{5}.\] The proof is given in Appendix A.2. We also consider amplification for the decision version; the construction is essentially the same, except that the final "check" step is replaced by simply outputting the majority bit, as the toy illustration below shows.
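The following classical Monte Carlo sketch models each amplified run as an independent coin that outputs the correct bit with probability \(1/2+\delta\) (an assumption made purely for illustration; the parameters are arbitrary choices), and compares the empirical success rate of the majority vote with the success probability implied by Lemma 4.3.

```python
# Monte Carlo sketch of majority-vote amplification for a decision inverter.
# Each run is modeled as an independent classical coin (an illustrative
# assumption); delta, ell and trials are arbitrary choices.
import math
import random

random.seed(3)
delta, ell, trials = 0.1, 51, 10_000   # ell odd, so no ties in the vote

def one_run(correct_bit):
    # a single run: correct with probability 1/2 + delta
    return correct_bit if random.random() < 0.5 + delta else 1 - correct_bit

wins = 0
for _ in range(trials):
    bit = random.randrange(2)                      # the true first bit
    ones = sum(one_run(bit) for _ in range(ell))   # number of 1-votes
    majority = int(ones > ell / 2)
    wins += int(majority == bit)

print(f"empirical success of D[ell]: {wins / trials:.3f}")
print(f"success implied by Lemma 4.3: "
      f"{1 - math.exp(-delta**2 / (1 + 2 * delta) * ell):.3f}")
```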
The construction is given explicitly in Protocol 3 in Appendix A.3. **Lemma 4.3** (Amplification, decision).: _Let \(\mathsf{D}\) be a \(\delta\)-DPI for some \(\delta>0\). Then \(\mathsf{D}[\ell]\) is a \((\ell S,\ell T,1/2-\exp\bigl(-\delta^{2}/(1+2\delta)\cdot\ell\bigr))\)-DPI._ The proof is given in Appendix A.3. ## 5 Reductions We give two reductions related to the inversion problem: a search-to-decision reduction (for the case of advice), and a reduction from unstructured search to the decision inversion problem (for the case of no advice). ### A search-to-decision reduction First, to construct a search inverter from a decision inverter, we take the following approach. We first amplify the decision inverter so that it correctly computes the first bit of the preimage with probability close to one. We then repeat this amplified inverter \(n\) times (once for each bit position), randomizing the instance in such a way that the \(j\)-th bit of the preimage is permuted to the first position. We then output the string of resulting bits as the candidate preimage. **Theorem 5.1**.: _Let \(\mathsf{D}\) be a \((S,T,\delta)\)-DPI. Then for any \(\ell\in\mathbb{N}\), we can construct a \((n\ell S,n\ell T,\eta)\)-SPI with_ \[\eta\geq 1-\lceil\log N\rceil\cdot\exp\biggl(-\frac{\delta^{2}}{(1+2\delta)}\cdot\ell\biggr)\,.\] Proof.: Given a \(\delta\)-DPI \((\mathsf{D}_{0},\mathsf{D}_{1})\) with storage size \(S\) and query size \(T\), we can construct a \(\delta_{\ell}\)-DPI \((\mathsf{D}[\ell]_{0},\mathsf{D}[\ell]_{1})\) with storage size \(\ell S\) and query size \(\ell T\) through "\(\ell\)-time repetition". By Lemma 4.3, we have that \(\delta_{\ell}\geq\frac{1}{2}-\exp\Bigl(-\frac{\delta^{2}}{(1+2\delta)}\cdot\ell\Bigr)\). Note that the algorithm \((\mathsf{D}[\ell]_{0},\mathsf{D}[\ell]_{1})\) runs \((\mathsf{D}_{0},\mathsf{D}_{1})\) as a subroutine. In the following, we represent elements of \([N]\) using a binary decomposition of length \(\lceil\log N\rceil\). To state our search-to-decision reduction, we introduce a generalized swap operation, denoted by \(\mathtt{swap}_{a,b}\), which acts as follows on any \(m\)-qubit basis state: \[\mathtt{swap}_{a,b}\left|w\right\rangle=\mathtt{swap}_{a,b}\left|w_{m-1}\ldots w_{b}\ldots w_{a}\ldots w_{1}w_{0}\right\rangle=\left|w_{m-1}\ldots w_{a}\ldots w_{b}\ldots w_{1}w_{0}\right\rangle.\] Note that \(\mathsf{swap}_{k,k}\) is equal to the identity, i.e. \(\mathsf{swap}_{k,k}\left|x\right\rangle=\left|x\right\rangle\) for \(x\in[N]\) and \(k\in[0,\lceil\log N\rceil-1]\). We construct a \(\mathsf{SPI}\left(\mathsf{S}_{0},\mathsf{S}_{1}\right)\) as follows. 1. The algorithm \(\mathsf{S}_{0}\) proceeds as follows: 1. \(\mathsf{S}_{0}\) receives a random permutation \(\pi:[N]\to[N]\) and a random string \(r\leftarrow\{0,1\}^{*}\) as inputs. We parse \(r\) into \(\lceil\log N\rceil\) individual substrings, i.e. \(r=r_{0}\|...\|r_{\lceil\log N\rceil-1}\); the length of each substring is clear from context. 2. \(\mathsf{S}_{0}\) runs the algorithm \(\mathsf{D}[\ell]_{0}(\pi\circ\mathsf{swap}_{0,j},r_{j})\) to obtain quantum advice \(\rho_{\pi\circ\mathsf{swap}_{0,j},r_{j}}\) for each \(j\in[0,\lceil\log N\rceil-1]\). Finally, \(\mathsf{S}_{0}\) outputs a quantum state \(\rho=\bigotimes_{j=0}^{\lceil\log N\rceil-1}\rho_{\pi\circ\mathsf{swap}_{0,j},r_{j}}\). (Note: We let \(\rho_{j}=\rho_{\pi\circ\mathsf{swap}_{0,j},r_{j}}\) for the rest of the proof.) 2.
The oracle algorithm \(\mathsf{S}_{1}^{\mathcal{O}_{\pi},\mathcal{O}_{\pi^{-1}_{\perp y}}}\) proceeds as follows:1 Footnote 1: Here, we borrow the notation for \(\mathcal{O}_{\pi}\) and \(\mathcal{O}_{\pi^{-1}_{\perp y}}\) from the experiment described in Section 3. 1. \(\mathsf{S}_{1}\) receives \(\bigotimes_{j=0}^{n-1}\rho_{j}\), a random string \(r:=r_{0}\|...\|r_{n-1}\) and an image \(y\in[N]\) as inputs. 2. \(\mathsf{S}_{1}\) then runs the following routine for each \(j\in[0,\lceil\log N\rceil-1]\): i. Run \(\mathsf{D}[\ell]_{1}\) with oracle access to \(\mathcal{O}_{\pi\circ\mathsf{swap}_{0,j}}\) and \(\mathcal{O}_{(\pi\circ\mathsf{swap}_{0,j})^{-1}_{\perp y}}\), where \[\mathcal{O}_{\pi\circ\mathsf{swap}_{0,j}}(\left|w\right\rangle_{1}\left|z\right\rangle_{2})=\left(\mathsf{swap}_{0,j}\otimes I\right)\mathcal{O}_{\pi}\left(\mathsf{swap}_{0,j}\otimes I\right)\left|w\right\rangle_{1}\left|z\right\rangle_{2}\] \[\mathcal{O}_{(\pi\circ\mathsf{swap}_{0,j})^{-1}_{\perp y}}(\left|w\right\rangle_{1}\left|z\right\rangle_{2})=(I\otimes\mathsf{swap}_{0,j})\mathcal{O}_{\pi^{-1}_{\perp y}}\left|w\right\rangle_{1}\left|z\right\rangle_{2}\] ii. Let \(b_{j}\leftarrow\mathsf{D}[\ell]_{1}^{(\pi\circ\mathsf{swap}_{0,j})_{\perp y}}(\rho_{j},y,r_{j})\) denote the output. 3. \(\mathsf{S}_{1}\) outputs \(x^{*}\in[N]\) with respect to the binary decomposition \(x^{*}=\sum_{j=0}^{\lceil\log N\rceil-1}2^{j}\cdot b_{j}\). We now argue that the events that \(\mathsf{D}[\ell]_{1}\) correctly recovers the pre-image bits \(b_{i}\) and \(b_{j}\) are independent for each \(i\neq j\). From Lemma 4.3, we know that \(\mathsf{D}[\ell]_{1}\) runs \(\mathsf{D}_{1}\) as a subroutine, i.e., it decides the first bit of the pre-image of \(y\) by running \(\mathsf{D}_{1}\) \(\ell\) times with different random coins. In each iteration \(k\in[0,\ell-1]\) of this amplification, the permutation actually in use is \(\sigma_{i,k}\circ\pi\circ\mathsf{swap}_{0,i}\) and the image is \(\sigma_{i,k}(y)\). Similarly, for index \(j\), the permutation \(\sigma_{j,k}\circ\pi\circ\mathsf{swap}_{0,j}\) and the image \(\sigma_{j,k}(y)\) are used. Since the random coins \(r_{i}\) and \(r_{j}\), which are used to re-randomize the target permutation \(\pi\), are independent, the random permutations \(\sigma_{i,k}\) and \(\sigma_{j,k}\) generated from them are independent, and so are the resulting composed permutations, images and advice states. Analyzing the success probability of \((\mathsf{S}_{0},\mathsf{S}_{1})\), we find that \[\Pr\left[\pi^{-1}(y)\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}(\rho,y,r)\ :\ \rho\leftarrow\mathsf{S}_{0}(\pi,r)\right]=\Pr\left[\bigwedge_{j=0}^{\lceil\log N\rceil-1}\pi^{-1}(y)|_{j}\leftarrow\mathsf{D}[\ell]_{1}^{(\pi\circ\mathsf{swap}_{0,j})_{\perp y}}(\rho_{j},y,r_{j})\right]=\prod_{j=0}^{\lceil\log N\rceil-1}\Pr\left[\pi^{-1}(y)|_{j}\leftarrow\mathsf{D}[\ell]_{1}^{(\pi\circ\mathsf{swap}_{0,j})_{\perp y}}(\rho_{j},y,r_{j})\right]\geq\left(1-\exp\biggl(-\frac{\delta^{2}}{(1+2\delta)}\cdot\ell\biggr)\right)^{\lceil\log N\rceil}\geq 1-\lceil\log N\rceil\cdot\exp\biggl(-\frac{\delta^{2}}{(1+2\delta)}\cdot\ell\biggr),\] where the last inequality follows from Bernoulli's inequality. Finally, we compute the resources needed for \((\mathsf{S}_{0},\mathsf{S}_{1})\). By Lemma 4.3, \((\mathsf{D}[\ell]_{0},\mathsf{D}[\ell]_{1})\) requires space \(\ell S\) and query size \(\ell T\).
For \(j\in[0,\lceil\log N\rceil-1]\), \(\mathsf{S}_{0}\) stores \(\mathsf{D}[\ell]_{0}\)'s outputs, and thus \(\mathsf{S}\) requires storage size \(\lceil\log N\rceil\ell S\). Similarly, \(\mathsf{S}_{1}\) runs \(\mathsf{D}[\ell]_{1}\) to obtain each \(b_{j}\), and thus it requires \(\lceil\log N\rceil\ell T\) queries in total.
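As a classical sanity check of the bit-extraction trick used in this proof, the sketch below swaps bits \(0\) and \(j\) of an integer and recovers the preimage bit by bit; the decision inverter is replaced by a perfect, hypothetical stand-in that simply reads off the first bit of the inverse.

```python
# Classical sketch of the swap_{0,j} bit-extraction loop from Theorem 5.1.
# dpi_first_bit is a perfect hypothetical stand-in for D[ell]_1.
import random

random.seed(4)
n = 4; N = 1 << n
pi = list(range(N)); random.shuffle(pi)
pi_inv = [0] * N
for x, fx in enumerate(pi):
    pi_inv[fx] = x

def swap_bits(w, a, b):
    # swap_{a,b}: exchange bits a and b of the integer w
    if ((w >> a) & 1) != ((w >> b) & 1):
        w ^= (1 << a) | (1 << b)
    return w

def dpi_first_bit(perm_inv, y):
    # stand-in decision inverter: returns the first bit of perm^{-1}(y)
    return perm_inv[y] & 1

y = random.randrange(N)
x_star = 0
for j in range(n):
    # (pi o swap_{0,j})^{-1} = swap_{0,j} o pi^{-1}, so its first bit
    # is exactly bit j of pi^{-1}(y)
    perm_j_inv = [swap_bits(pi_inv[w], 0, j) for w in range(N)]
    x_star |= dpi_first_bit(perm_j_inv, y) << j

assert pi[x_star] == y
print(f"recovered preimage {x_star} of challenge {y}")
```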
### A reduction from unstructured search Second, we generalize the method used in [20] to give a lower bound for decision inversion without advice. Unlike in Nayak's original reduction, here we grant two-way access to the permutation. Recall that, in the unique search problem, one is granted quantum oracle access to a function \(f:[N]\rightarrow\{0,1\}\) which is promised to satisfy either \(|f^{-1}(1)|=0\) or \(|f^{-1}(1)|=1\); the goal is to decide which is the case. The problem is formally defined below. **Definition 5.2**.: \((\textsc{UNIQUESEARCH}_{N})\) _Given a function \(f:[N]\rightarrow\{0,1\}\) which maps at most one element to \(1\), output "yes" if \(f^{-1}(1)\) is non-empty and "no" otherwise._ **Definition 5.3**.: _(Distributional error) Suppose an algorithm solves a decision problem with error probability at most \(p_{0}\) for "no" instances and \(p_{1}\) for "yes" instances. Then we say this algorithm has distributional error \((p_{0},p_{1})\)._ **Theorem 5.4**.: _Let \(\mathcal{A}\) be a \((0,T,\delta)\)-DPI. Then there exists a quantum algorithm \(\mathcal{B}\) that solves \(\textsc{UNIQUESEARCH}_{N-1}\) with at most \(2T\) quantum queries and distributional error \(\left(\frac{1}{2}-\delta,\frac{1}{2}\right)\)._ Proof.: Our proof is similar to that of Nayak [20]: given a \((0,T,\delta)\)-DPI \(\mathcal{A}\), we construct another algorithm \(\mathcal{B}\) which solves the \(\textsc{UNIQUESEARCH}_{N-1}\) problem. For any uniform image \(t\in[N]\), define the "no" and "yes" instance sets (corresponding to the image \(t\)) of the permutation inversion problem of input size \(N\): \[\pi_{t,0}=\{\pi:\pi\text{ is a permutation on }[N]\text{, the first bit of }\pi^{-1}(t)\text{ is }0\},\] \[\pi_{t,1}=\{\pi:\pi\text{ is a permutation on }[N]\text{, the first bit of }\pi^{-1}(t)\text{ is }1\}.\] Note that for a random permutation \(\pi\), whether \(\pi\in\pi_{t,0}\) or \(\pi\in\pi_{t,1}\) depends only on the choice of \(t\). Since \(t\) is uniform, \(\Pr[\pi\in\pi_{t,0}]=\Pr[\pi\in\pi_{t,1}]=1/2\). We also consider functions \(h:[N]\rightarrow[N]\) with a unique collision at \(t\), where one element of the colliding pair has first bit \(0\) and the other has first bit \(1\). Formally speaking, \(h(0\|i)=h(1\|j)=t\), where \(i,j\in\{0,1\}^{\log N-1}\). Let \(Q_{t}\) denote the set of all such functions. Furthermore, given a permutation \(\pi\) on \([N]\), consider the functions in \(Q_{t}\) that differ from \(\pi\) at exactly one point. These are functions \(h\) with a unique collision, located at \(t\). If \(\pi\in\pi_{t,0}\), then \(\pi(0\|i)=h(0\|i)=t\) and \(1\|j\) is the unique point where \(\pi\) and \(h\) differ; if \(\pi\in\pi_{t,1}\), then \(\pi(1\|j)=h(1\|j)=t\) and \(0\|i\) is the unique point where \(\pi\) and \(h\) differ. Let \(Q_{\pi,t}\) denote the set of such functions \(h\); clearly \(Q_{\pi,t}\subseteq Q_{t}\). Note that if we pick a random permutation \(\pi\in\mathcal{S}_{N}\) and choose a uniformly random \(h\in Q_{\pi,t}\), then \(h\) is also uniform in \(Q_{t}\). Next, we construct an algorithm \(\mathcal{B}\) that tries to solve \(\textsc{UNIQUESEARCH}_{N-1}\) as follows, with quantum oracle access to \(f\): 1. \(\mathcal{B}\) first samples a uniformly random \(t\in[N]\) and some randomness \(r\in\{0,1\}^{*}\). Then, with probability \(1/2\), it picks a uniformly random permutation \(\pi\in\pi_{t,0}\); with probability \(1/2\), it picks \(\pi\in\pi_{t,1}\). 2. \(\mathcal{B}\) constructs functions \(h_{f,\pi,t}\) and \(h_{f,\pi,t}^{-1*}\) as follows. If \(\pi\in\pi_{t,0}\), for any \(i\in\{0,1\}\) and \(j\in\{0,1\}^{\log N-1}\), \[h_{f,\pi,t}(i\|j)=\begin{cases}t&\text{ if }i=1\text{ and }f(j)=1,\\ \pi(i\|j)&\text{ otherwise.}\end{cases}\tag{3}\] If \(\pi\in\pi_{t,1}\), for any \(i\in\{0,1\}\) and \(j\in\{0,1\}^{\log N-1}\), \[h_{f,\pi,t}(i\|j)=\begin{cases}t&\text{ if }i=0\text{ and }f(j)=1,\\ \pi(i\|j)&\text{ otherwise.}\end{cases}\tag{4}\] Regardless of which instance set \(\pi\) belongs to, the corresponding "inverse" function is defined as \[h_{f,\pi,t}^{-1*}(k\|b)=\begin{cases}\pi^{-1}(k)\|0&\text{ if }b=0\text{ and }k\neq t,\\ 1\|1&\text{ otherwise.}\end{cases}\tag{5}\] 3. \(\mathcal{B}\) then sends \(t\) and \(r\) to \(\mathcal{A}\), runs it with quantum oracle access to \(h_{f,\pi,t}\) and \(h_{f,\pi,t}^{-1*}\), and finally gets back \(b^{\prime}\). For simplicity, we write this process as \(b^{\prime}\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\). 2 Footnote 2: Note that these functions are defined classically above; their implementation as quantum oracles is discussed in Appendix B, which accounts for the \(2T\) queries in the theorem statement. 4. \(\mathcal{B}\) outputs \(b^{\prime}\) if \(\pi\in\pi_{t,0}\), and \(1-b^{\prime}\) if \(\pi\in\pi_{t,1}\). Let \(\delta_{1}\) be the error probability of the \((0,T,\delta)\)-DPI \(\mathcal{A}\) on YES instances and \(\delta_{0}\) that on NO instances. Since \(\Pr[\pi\in\pi_{t,0}]=\Pr[\pi\in\pi_{t,1}]=1/2\), it follows that \[\Pr[\text{error of }\mathcal{A}]=1-\left(\frac{1}{2}+\delta\right)=\frac{1}{2}(\delta_{0}+\delta_{1})\Rightarrow\delta=\frac{1}{2}-\frac{1}{2}(\delta_{0}+\delta_{1}).\] We now analyze the error probability of \(\mathcal{B}\) in the YES and NO cases. In the NO case, \(f^{-1}(1)\) is empty, so no matter whether \(\pi\in\pi_{t,0}\) or \(\pi\in\pi_{t,1}\), we have \(h_{f,\pi,t}=\pi\). It follows that \(\mathcal{A}^{h_{\perp t}}(t,r)=\mathcal{A}^{\pi_{\perp t}}(t,r)\). Therefore, \[\Pr[\text{error of }\mathcal{B}\text{ in NO case}]=\Pr\bigl[1\leftarrow\mathcal{B}^{\mathcal{O}_{f}}(\cdot)\bigr]=\Pr\Bigl[1\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\,\big|\,\pi\in\pi_{t,0}\Bigr]\Pr[\pi\in\pi_{t,0}]+\Pr\Bigl[0\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\,\big|\,\pi\in\pi_{t,1}\Bigr]\Pr[\pi\in\pi_{t,1}]\] \[=\frac{1}{2}\left(\Pr[1\leftarrow\mathcal{A}^{\pi_{\perp t}}(t;r)\,|\,\pi\in\pi_{t,0}]+\Pr[0\leftarrow\mathcal{A}^{\pi_{\perp t}}(t;r)\,|\,\pi\in\pi_{t,1}]\right)=\frac{1}{2}\left(\Pr[\text{error of }\mathcal{A}\text{ in NO case}]+\Pr[\text{error of }\mathcal{A}\text{ in YES case}]\right)=\frac{1}{2}\left(\delta_{0}+\delta_{1}\right)=\frac{1}{2}-\delta.\] In the YES case, \(f^{-1}(1)\) is not empty, so the function \(h_{f,\pi,t}\) has a unique collision at \(t\), with one element of the colliding pair having first bit \(0\) and the other having first bit \(1\), regardless of whether \(\pi\in\pi_{t,0}\) or \(\pi\in\pi_{t,1}\). As \(f\) is a black-box function, the position \(j\) where \(f(j)=1\) is uniform, and so \(h_{f,\pi,t}\) is uniform in \(Q_{\pi,t}\).
By the arguments at the beginning of this proof, since \(\pi\) is uniform, the function \(h_{f,\pi,t}\) is also uniform in \(Q_{t}\). Let \(p:=\Pr_{h_{f,\pi,t}\gets Q_{t}}[0\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)]\). Therefore, \[\Pr[\text{error of }\mathcal{B}\text{ in YES case}]=\Pr\Bigl[0\leftarrow\mathcal{B}^{f}(\cdot)\Bigr]=\Pr\Bigl[0\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\,\big|\,\pi\in\pi_{t,0}\Bigr]\Pr[\pi\in\pi_{t,0}]+\Pr\Bigl[1\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\,\big|\,\pi\in\pi_{t,1}\Bigr]\Pr[\pi\in\pi_{t,1}]\] \[=\frac{1}{2}\left(\Pr\Bigl[0\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\,\big|\,h_{f,\pi,t}\in Q_{t}\Bigr]+\Pr\Bigl[1\leftarrow\mathcal{A}^{h_{\perp t}}(t;r)\,\big|\,h_{f,\pi,t}\in Q_{t}\Bigr]\right)=\frac{1}{2}\left(p+(1-p)\right)=\frac{1}{2}.\] ## 6 Lower bounds ### Search version We now give lower bounds for the search version of the permutation inversion problem over \([N]\). We begin with a lower bound for a restricted class of inverters: these inverters succeed on an \(\epsilon\)-fraction of inputs with constant probability (say, \(2/3\)). The proof uses a similar approach as in previous works on one-sided permutation inversion with advice [1, 1, 2]. We give an overview of the proof below. Suppose we are given an \(\epsilon\)-SPI \(\mathsf{S}\) that uses \(S\)-many qubits of advice and \(T\)-many queries (either in the forward or the inverse direction) to a random permutation \(\pi:[N]\rightarrow[N]\) in order to output \(x=\pi^{-1}(y)\) with advantage \(\epsilon>0\) for a random image \(y\in[N]\). Using a careful _averaging argument_ (Lemma 2.3), we show that we can amplify \(\mathsf{S}\) to obtain an inverter \(\mathsf{S}^{\prime}\) (with only \(O(1/\epsilon)\) space-time overhead) such that, with probability at least \(1/5\) over the choice of \(\pi\) and \(y\), \(\mathsf{S}^{\prime}\) succeeds at outputting \(x\) with probability at least \(2/3\) (over the choice of random coins). Similarly to previous works [11, 12], we use information-theoretic lower bounds on the length of quantum random access codes [10, 13, 1], which are a means of encoding classical bits in terms of (potentially fewer) qubits. In other words, we construct an encoder that compresses the truth table of \(\pi\) by using the power of the search inverter \(\mathsf{S}^{\prime}\), which then allows us to obtain the desired space-time lower bound \(ST^{2}=\widetilde{\Omega}(\epsilon^{3}N)\) in Theorem 6.2. To define a suitable quantum random access code with respect to \(\pi\), we choose a random subset \(\mathcal{R}\subseteq[N]\) (known to the encoder and decoder by means of shared randomness) such that each element is contained in \(\mathcal{R}\) with a certain probability. We then define a so-called _good subset_ \(\mathcal{G}\) of elements \(x\in\mathcal{R}\) with the following two properties: \(\mathsf{S}^{\prime}\) succeeds at inverting \(\pi(x)\) with probability at least \(2/3\), and the _query magnitude_ of \(\mathsf{S}^{\prime}\) on any element in \(\mathcal{R}\setminus\{x\}\) (in the forward direction) and \(\pi(\mathcal{R})\setminus\{\pi(x)\}\) (in the inverse direction) is small when given \(y=\pi(x)\) as input.3 Using an appropriate choice of parameters, we can show that our choice of \(\mathcal{G}\) is sufficiently large with high probability. The encoding then consists of the following items: a _partial_ truth table of \(\pi\) on \([N]\setminus\mathcal{G}\), the entire image \(\pi(\mathcal{G})\), as well as the auxiliary state used by \(\mathsf{S}^{\prime}\).
To recover the pre-image of the challenge input \(y\), the decoder simply runs the inverter \(\mathsf{S}^{\prime}\) on the auxiliary state by simulating the oracle access to the permutation (and its inverse). Note, however, that the decoder only has access to a partial truth table for \(\pi\), and thus has no means of answering queries on \(\mathcal{G}\) (in the forward direction) and \(\pi(\mathcal{G})\) (in the inverse direction). Because the query magnitude with respect to the two sets is small, we can use a standard _swapping lemma_ trick to show that the state prepared by \(\mathsf{S}^{\prime}\) with access to the simulated oracle (which answers incorrectly on \(\mathcal{G}\setminus\{x\}\) and \(\pi(\mathcal{G})\setminus\{\pi(x)\}\)) is sufficiently close to the state prepared with access to the real oracle. Therefore, the (simulated) inverter \(\mathsf{S}^{\prime}\) still succeeds at inverting \(y\) with good enough probability. Footnote 3: Here it is crucial that the oracle for the inverse direction rejects if queried on the challenge input \(y=\pi(x)\). The statement and proof of the search lower bound for _restricted inverters_ is formally described below. **Theorem 6.1**.: _Let \(N\in\mathbb{N}\) and let \(\mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) be an \((S,T,2\epsilon/3)\)-SPI that satisfies_ \[\Pr_{\pi,y}\left[\Pr_{r}\left[\pi^{-1}(y)\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}(\rho,y,r):\rho\leftarrow\mathsf{S}_{0}(\pi,r)\right]\geq\frac{2}{3}\right]\geq\epsilon.\] _Suppose that \(\epsilon=\omega(1/N)\), \(T=o(\epsilon\sqrt{N})\) and \(S\geq 1\). Then, for sufficiently large \(N\) we have_ \[ST^{2}\geq\widetilde{\Omega}(\epsilon N).\] Proof.: To prove the claim, we construct a QRAC-VL scheme that encodes the function \(\pi^{-1}\) and then derive the desired space-time trade-off via Theorem 2.9. Let \(\mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) be a \(2\epsilon/3\)-SPI that succeeds on an \(\epsilon\)-fraction of inputs with probability at least \(2/3\). In other words, \(\mathsf{S}\) satisfies \[\Pr_{\pi,y}\left[\Pr_{r}\left[\pi^{-1}(y)\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}(\rho,y,r):\rho\leftarrow\mathsf{S}_{0}(\pi,r)\right]\geq\frac{2}{3}\right]\geq\epsilon.\] By the averaging argument in Lemma 2.3 with parameter \(\theta=1/2\), it follows that there exists a large subset \(\mathcal{X}\subseteq\mathcal{S}_{N}\) of permutations of size at least \(\epsilon N!/2\) such that for any permutation \(\pi\in\mathcal{X}\), we have that \[\Pr_{y}\left[\Pr_{r}\left[\pi^{-1}(y)\leftarrow\mathsf{S}_{1}^{\pi_{\perp y}}(\rho,y,r):\rho\leftarrow\mathsf{S}_{0}(\pi,r)\right]\geq\frac{2}{3}\right]\geq\frac{\epsilon}{2}.\] For a given permutation \(\pi\in\mathcal{X}\) we let \(\mathcal{I}\) be the set of indices \(x\in[N]\) such that \(\mathsf{S}\) correctly inverts \(\pi(x)\) with probability at least \(2/3\) over the choice of \(r\). By the definition of the set \(\mathcal{X}\), we have that \(|\mathcal{I}|\geq\epsilon/2\cdot N\). Our QRAC-VL scheme \((\mathsf{Enc},\mathsf{Dec})\) for encoding permutations is described in detail in Protocol 2. Below, we introduce some additional notation which will be relevant for the scheme. For convenience, we model the two-way accessible oracle given to \(\mathsf{S}_{1}\) in terms of a single oracle for the _merged_ function of the form4 Footnote 4: The (reversible) quantum oracle implementation is similar to the one in Definition 3.1.
We use the function \(\pi_{\perp y}\) for ease of presentation, since the same proof carries over with minor modifications in the quantum oracle case.

\[\pi_{\perp y}(w,a)\stackrel{{\mathrm{def}}}{{=}}\begin{cases}\pi(w)&\text{ if }a=0\\ \pi^{-1}(w)&\text{ if }w\neq y\,\wedge\,a=1\\ \perp&\text{ if }w=y\,\wedge\,a=1.\end{cases}\]

Let \(c,\gamma\in(0,1)\) be parameters. As part of the encoding, we use the shared randomness \(R\in\left\{0,1\right\}^{*}\) to sample a subset \(\mathcal{R}\subseteq[N]\) such that each element of \([N]\) is contained in \(\mathcal{R}\) with probability \(\gamma/T(\mathsf{S})^{2}\). Moreover, we define the following two disjoint subsets of \([N]\times\left\{0,1\right\}\):

\[\Sigma_{0}^{\mathcal{R}} =(\mathcal{R}\setminus\left\{x\right\})\times\left\{0\right\}\] \[\Sigma_{1}^{\mathcal{R}} =(\pi(\mathcal{R})\setminus\left\{\pi(x)\right\})\times\left\{1\right\}.\]

Let \(\mathcal{G}\subseteq\mathcal{I}\) be the set of \(x\in\mathcal{I}\) which satisfy the following two properties:

1. The element \(x\) is contained in the set \(\mathcal{R}\), i.e. \[x\in\mathcal{R};\] (6)
2. The total query magnitude of \(\mathsf{S}_{1}^{\pi_{\perp y}}\) with input \((\mathsf{S}_{0}(\pi,r),y,r)\) on the set \(\Sigma_{0}^{\mathcal{R}}\cup\Sigma_{1}^{\mathcal{R}}\) is bounded by \(c/T(\mathsf{S})\). In other words, we have \[q(\mathsf{S}_{1}^{\pi_{\perp y}},\Sigma_{0}^{\mathcal{R}}\cup\Sigma_{1}^{\mathcal{R}})\,\leq\,c/T(\mathsf{S}).\] (7)

**Claim 1.** Let \(\mathcal{G}\subseteq[N]\) be the set of \(x\) which satisfy the conditions in (6) and (7). Then, there exist constants \(\gamma,c\in(0,1)\) such that

\[\Pr_{\mathcal{R}}\left[|\mathcal{G}|\geq\frac{\epsilon\gamma N}{4\,T(\mathsf{S})^{2}}\left(1-\frac{5\gamma^{2}}{c}\right)\right]\geq 0.8.\]

In other words, we have \(|\mathcal{G}|=\Omega(\epsilon N/T(\mathsf{S})^{2})\) with high probability.

Proof.: Let \(\mathcal{H}=\mathcal{R}\cap\mathcal{I}\) denote the set of \(x\in\mathcal{R}\) for which \(\mathsf{S}\) correctly inverts \(\pi(x)\) with probability at least \(2/3\) over the choice of \(r\). By the definition of the set \(\mathcal{R}\), \(|\mathcal{H}|\) has a binomial distribution with expectation \(\gamma|\mathcal{I}|/T(\mathsf{S})^{2}\). Using the multiplicative Chernoff bound in Lemma 2.1 and the fact that \(T(\mathsf{S})=o(\epsilon\sqrt{N})\), we get

\[\Pr_{\mathcal{R}}\left[|\mathcal{H}|\geq\frac{\gamma|\mathcal{I}|}{2\,T(\mathsf{S})^{2}}\right]\geq 0.9, \tag{8}\]

for all sufficiently large \(N\). Because each query made by \(\mathsf{S}_{1}\) has unit length and because \(\mathsf{S}_{1}\) makes at most \(T(\mathsf{S})\) queries, it follows that

\[q(\mathsf{S}_{1}^{\pi_{\perp y}},[N]\times\{0,1\})\leq T(\mathsf{S}). \tag{9}\]
We obtain the following upper bound for the average total query magnitude:

\[\begin{split}\mathop{\mathbb{E}}_{\mathcal{R}}\left[q(\mathsf{S}_{1}^{\pi_{\perp y}},\Sigma_{0}^{\mathcal{R}}\cup\Sigma_{1}^{\mathcal{R}})\right]&=\mathop{\mathbb{E}}_{\mathcal{R}}\left[q(\mathsf{S}_{1}^{\pi_{\perp y}},\Sigma_{0}^{\mathcal{R}})\right]+\mathop{\mathbb{E}}_{\mathcal{R}}\left[q(\mathsf{S}_{1}^{\pi_{\perp y}},\Sigma_{1}^{\mathcal{R}})\right]\\ &=\frac{\gamma}{T(\mathsf{S})^{2}}\cdot q(\mathsf{S}_{1}^{\pi_{\perp y}},([N]\setminus\{x\})\times\{0\})+\frac{\gamma}{T(\mathsf{S})^{2}}\cdot q(\mathsf{S}_{1}^{\pi_{\perp y}},(\pi([N])\setminus\{\pi(x)\})\times\{1\})\\ &\leq\frac{2\gamma}{T(\mathsf{S})},\end{split}\]

where the last inequality follows from (9). By Markov's inequality, for each \(x\) the probability (over the choice of \(\mathcal{R}\)) that the query magnitude in (7) exceeds \(c/T(\mathsf{S})\) is small; combining this with the concentration bound in (8) yields the claimed lower bound on \(|\mathcal{G}|\) with probability at least \(0.8\), for a suitable choice of the constants \(\gamma\) and \(c\).

**Protocol 2** (Quantum Random Access Code For Inverting Permutations).: _Let \(c,\gamma\in(0,1)\) be parameters. Consider the following (variable-length) quantum random-access code given by \(\mathsf{QRAC\mbox{-}VL}=(\mathsf{Enc},\mathsf{Dec})\) defined as follows:_

* \(\mathsf{Enc}(\pi^{-1};R)\)_: On input_ \(\pi^{-1}\in\mathcal{S}_{N}\) _and randomness_ \(R\in\{0,1\}^{*}\)_, first uses_ \(R\) _to extract random coins_ \(r\) _and then proceeds as follows:_ **Case 1:**__\(\pi\notin\mathcal{X}\) _or_ \(|\mathcal{G}|<\frac{\epsilon\gamma N}{4\,T(\mathsf{S})^{2}}\left(1-\frac{5\gamma^{2}}{c}\right)\)_. Use the classical flag_ \(\mathsf{case}=1\) _(taking one additional bit) and output the entire permutation table of_ \(\pi^{-1}\)_._ **Case 2:**__\(|\mathcal{G}|\geq\frac{\epsilon\gamma N}{4\,T(\mathsf{S})^{2}}\left(1-\frac{5\gamma^{2}}{c}\right)\)_. Use the classical flag_ \(\mathsf{case}=2\) _(taking one additional bit) and output the following:_ 1. _the size of_ \(\mathcal{G}\)_, encoded using_ \(\log N\) _bits;_ 2. _the set_ \(\mathcal{G}\subseteq\mathcal{R}\)_, encoded using_ \(\log\binom{|\mathcal{R}|}{|\mathcal{G}|}\) _bits;_ 3. _the permutation_ \(\pi\) _restricted to inputs outside of_ \(\mathcal{G}\)_, encoded using_ \(\log(N!/|\mathcal{G}|!)\) _bits;_ 4. _the quantum advice used by the algorithm, repeated_ \(\rho\) _times as_ \(\alpha^{\otimes\rho}\) _for_ \(\alpha\leftarrow\mathsf{S}_{0}(\pi,r)\) _and some_ \(\rho\) _that we will decide later. (The encoder can preprocess multiple copies of the same advice.
Note that this is the only part of our encoding that is not classical.)_

* \(\mathsf{Dec}(\beta,y;R)\)_: On input an encoding_ \(\beta\)_, an image_ \(y\in[N]\) _and randomness_ \(R\in\{0,1\}^{*}\)_, first uses_ \(R\) _to extract random coins_ \(r\) _and then proceeds as follows:_ **Case 1:**: _This corresponds to the flag_ \(\mathsf{case}=1\)_. Search the permutation table of_ \(\pi^{-1}\) _and output_ \(x\) _such that_ \(\pi^{-1}(y)=x\)_._ **Case 2:**: _This corresponds to the flag_ \(\mathsf{case}=2\)_. Recover_ \(\mathcal{G}\) _and_ \(\pi(x)\) _for every_ \(x\notin\mathcal{G}\)_. If_ \(y=\pi(x)\) _for some_ \(x\notin\mathcal{G}\)_, output_ \(x=\pi^{-1}(y)\)_. Otherwise, parse_ \(\alpha_{1},\alpha_{2},\ldots,\alpha_{\rho}\)_, run_ \(\mathsf{S}_{1}^{\bar{\pi}_{\perp y}}(\alpha_{i},y,r)\) _for each_ \(i\in[\rho]\) _and output their majority vote, where we let_ 1

Footnote 1: The (reversible) quantum oracle implementation for \(\bar{\pi}_{\perp y}\) is provided in Appendix C.

\[\bar{\pi}_{\perp y}(w,a)=\begin{cases}y&\text{ if }w\in\mathcal{G}\,\wedge\,a=0\\ \pi(w)&\text{ if }w\notin\mathcal{G}\,\wedge\,a=0\\ \pi^{-1}(w)&\text{ if }w\notin\pi(\mathcal{G})\,\wedge\,a=1\\ \perp&\text{ if }w\in\pi(\mathcal{G})\,\wedge\,a=1.\end{cases}\]

Let us now analyze the performance of our \(\mathsf{QRAC\mbox{-}VL}\) scheme \((\mathsf{Enc},\mathsf{Dec})\) in Protocol 2. Let \(|\Psi_{\pi_{\perp y}}\rangle\) and \(|\Psi_{\bar{\pi}_{\perp y}}\rangle\) denote the final states of \(\mathsf{S}_{1}\) when it is given the oracles \(\pi_{\perp y}\) and \(\bar{\pi}_{\perp y}\), respectively. By the Swapping Lemma (Lemma 2.7) and Lemma 2.6:

\[\begin{split}\|\,|\Psi_{\pi_{\perp y}}\rangle-|\Psi_{\bar{\pi}_{\perp y}}\rangle\,\|&\leq\sqrt{T(\mathsf{S})\cdot q\bigl(\mathsf{S}_{1}^{\pi_{\perp y}},(\mathcal{G}\setminus\{x\})\times\{0\}\,\cup\,(\pi(\mathcal{G})\setminus\{\pi(x)\})\times\{1\}\bigr)}\\ &\leq\sqrt{T(\mathsf{S})\cdot q(\mathsf{S}_{1}^{\pi_{\perp y}},\Sigma_{0}^{\mathcal{R}}\cup\Sigma_{1}^{\mathcal{R}})}\\ &\leq\sqrt{T(\mathsf{S})\cdot\frac{c}{T(\mathsf{S})}}=\sqrt{c}.\end{split}\]

Since \(x\in\mathcal{I}\), it follows from the definition of \(\mathcal{I}\) that measuring \(|\Psi_{\pi_{\perp y}}\rangle\) results in \(x\) with probability at least \(2/3\). Given a small enough positive constant \(c\), we can ensure that measuring \(|\Psi_{\bar{\pi}_{\perp y}}\rangle\) will result in \(x\) with probability at least \(0.6\).

We now examine the length of our encoding. With probability at most \(1-\epsilon/2\), we have \(\pi\notin\mathcal{X}\); with probability \(\epsilon(1-0.8)/2\), we have \(\pi\in\mathcal{X}\) but \(\mathcal{G}\) is small, i.e.,

\[|\mathcal{G}|<\frac{\epsilon\gamma N}{4\,T(\mathsf{S})^{2}}\left(1-\frac{5\gamma^{2}}{c}\right).\]

Therefore, with probability at most \(1-0.4\epsilon\), our encoding will result in the flag \(\mathsf{case}=1\), where the encoding consists of \(1+\log N!\) classical bits and the decoder succeeds with probability \(1\).
With probability \(0.4\epsilon\), our encoding has the flag \(\mathsf{case}=2\), and the size equals

\[1+\log N+\log\binom{|\mathcal{R}|}{|\mathcal{G}|}+\log(N!/|\mathcal{G}|!)+\rho S(\mathsf{S}).\]

By the assumption that \(T(\mathsf{S})=o(\epsilon\sqrt{N})\), we have

\[\begin{split}\log\binom{|\mathcal{R}|}{|\mathcal{G}|}&=\log\left(\frac{|\mathcal{R}|(|\mathcal{R}|-1)\dots(|\mathcal{R}|-|\mathcal{G}|+1)}{|\mathcal{G}|(|\mathcal{G}|-1)\dots 1}\right)\\ &=O\left(\log\left(\frac{|\mathcal{R}||\mathcal{R}|\dots|\mathcal{R}|}{|\mathcal{G}||\mathcal{G}|\dots|\mathcal{G}|}\right)\right)\\ &=O(|\mathcal{G}|\log(|\mathcal{R}|/|\mathcal{G}|))\\ &=O(|\mathcal{G}|\log 1/\epsilon)\\ &=o(|\mathcal{G}|\log|\mathcal{G}|),\end{split}\]

and we can rewrite the size of the encoding as

\[\log N+o(|\mathcal{G}|\log|\mathcal{G}|)+\log N!-\log|\mathcal{G}|!+\rho S(\mathsf{S}).\]

In the case when the decoder is queried on an input that is already known, that is \(y\notin\pi(\mathcal{G})\) (which occurs with probability \(1-|\mathcal{G}|/N\)), the decoder recovers the correct pre-image with probability \(1\). Otherwise, the analysis is the following: with just one copy of the advice, the decoder recovers the correct pre-image with probability at least \(0.6\) (as shown above), and hence with \(\rho\)-many copies, the decoder can take the majority vote and recover the correct pre-image with probability \(1-\exp(-\Omega(\rho))\). The latter follows from the Chernoff bound in Lemma 2.1. Overall, the average encoding length is

\[0.4\epsilon\cdot(\log N+o(|\mathcal{G}|\log|\mathcal{G}|)-\log|\mathcal{G}|!+\rho S(\mathsf{S}))+\log N!\]

where the average success probability is \(1-|\mathcal{G}|/N\cdot\exp(-\Omega(\rho))\). By setting \(\rho=\Omega(\log(N/\epsilon))=\Omega(\log N)\), the average success probability amounts to \(1-O(1/N^{2})\). Therefore, using the lower bound in Theorem 2.9, we have

\[\log N!+0.4\epsilon\cdot(\log N+o(|\mathcal{G}|\log|\mathcal{G}|)-\log|\mathcal{G}|!+\rho S(\mathsf{S})) \geq\log N!-O\left(\frac{1}{N}\log N\right)\] \[\log N+o(|\mathcal{G}|\log|\mathcal{G}|)-\log|\mathcal{G}|!+\rho S(\mathsf{S}) \geq-O\left(\log N\right)\] \[\rho S(\mathsf{S})+O\left(\log N\right) \geq\log|\mathcal{G}|!-o(|\mathcal{G}|\log|\mathcal{G}|)\] \[S(\mathsf{S})\log N \geq\Omega(\log|\mathcal{G}|!-o(|\mathcal{G}|\log|\mathcal{G}|))\]

where the second and the last inequalities follow from the facts that \(\epsilon=\omega(1/N)\) and \(\rho=\Omega(\log N)\), respectively. Since \(\log|\mathcal{G}|!=\Theta(|\mathcal{G}|\log|\mathcal{G}|)\), it follows that

\[S(\mathsf{S})\log N\geq\Omega(|\mathcal{G}|\log|\mathcal{G}|).\]

As we are conditioning on the event that \(\mathcal{G}\) is large, i.e.

\[|\mathcal{G}|\geq\frac{\epsilon\gamma N}{4\,T(\mathsf{S})^{2}}\left(1-\frac{5\gamma^{2}}{c}\right),\]

plugging in the lower bound on \(|\mathcal{G}|\), we have that for sufficiently large \(N\),

\[S(\mathsf{S}) \geq\widetilde{\Omega}(|\mathcal{G}|)\] \[S(\mathsf{S})\cdot T(\mathsf{S})^{2} \geq\widetilde{\Omega}(\epsilon N).\]

This gives the desired space-time trade-off.

We remark that the search inverter we consider in Theorem 6.1 succeeds on more than just a constant number of inputs, that is \(\epsilon=\omega(1/N)\), and beats the time complexity \(T=\Omega(\sqrt{\epsilon N})\) required for unstructured search using Grover's algorithm [1, 1, 2].
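As a quick numerical sanity check on the bookkeeping in Claim 1 and Theorem 6.1 (not part of the proof; the constants \(\gamma\) and \(c\) below are illustrative placeholders), one can verify that the lower bound on \(|\mathcal{G}|\) scales as \(\epsilon N/T^{2}\), so that \(S\cdot T^{2}=\widetilde{\Omega}(\epsilon N)\):

```python
def good_set_lower_bound(N, eps, T, gamma=0.1, c=0.5):
    """Claim 1: |G| >= (eps*gamma*N / (4*T^2)) * (1 - 5*gamma^2/c)
    holds with probability 0.8 over R (illustrative constants only)."""
    return (eps * gamma * N / (4 * T ** 2)) * (1 - 5 * gamma ** 2 / c)

N, eps, T = 2 ** 30, 2 ** -5, 2 ** 8   # chosen so that T = o(eps * sqrt(N))
G = good_set_lower_bound(N, eps, T)
# Theorem 6.1 gives S = Omega~(|G|), hence S*T^2 = Omega~(eps*N):
print(G, G * T ** 2, eps * N)          # G*T^2 tracks eps*N up to constants
```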
Next, we remove the restriction on the inverter by applying amplification (specifically, Corollary 4.2). This yields a lower bound for the full average-case version of the search inversion problem.

**Theorem 6.2**.: _Let \(\mathsf{S}\) be a \((S,T,\epsilon)\)-SPI for some \(\epsilon>0\). Suppose that \(\epsilon=\omega(1/N)\), \(T=o(\epsilon^{2}\sqrt{N})\), and \(S\geq 1\). Then, for sufficiently large \(N\) we have_

\[S(\mathsf{S})\cdot T(\mathsf{S})^{2}\geq\widetilde{\Omega}(\epsilon^{3}N).\]

Proof.: Let \(\mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) be an \(\epsilon\)-SPI, for some \(\epsilon>0\). Using Corollary 4.2, we can construct an SPI \(\mathsf{S}[\ell]=(\mathsf{S}[\ell]_{0},\mathsf{S}[\ell]_{1})\) with space and time complexities

\[S(\mathsf{S}[\ell])=\left\lceil\frac{\ln(10)}{\epsilon}\right\rceil\cdot S(\mathsf{S})\quad\text{ and }\quad T(\mathsf{S}[\ell])=\left(\left\lceil\frac{\ln(10)}{\epsilon}\right\rceil+1\right)\cdot T(\mathsf{S})\]

such that

\[\Pr_{\pi,y}\left[\Pr_{r}\left[\pi^{-1}(y)\leftarrow\mathsf{S}[\ell]_{1}^{\pi_{\perp y}}(\mathsf{S}[\ell]_{0}(\pi,r),y,r)\right]\geq\frac{2}{3}\right]\geq\frac{1}{5}.\]

From Theorem 6.1 it follows that for sufficiently large \(N\geq 1\),

\[S(\mathsf{S}[\ell])\cdot T(\mathsf{S}[\ell])^{2}\geq\widetilde{\Omega}(N).\]

Plugging in the expressions for \(S(\mathsf{S}[\ell])\) and \(T(\mathsf{S}[\ell])\), we get that under the assumptions

\[\epsilon=\omega(1/N),\quad T(\mathsf{S})=o(\epsilon^{2}\sqrt{N})\quad\text{ and }\quad S(\mathsf{S})\geq 1,\]

the trade-off between space and time complexities is

\[S(\mathsf{S})\cdot T(\mathsf{S})^{2}\geq\widetilde{\Omega}(\epsilon^{3}N).\]

Note that we incur a loss (\(\epsilon^{3}\) versus \(\epsilon\)) in our search lower bound due to the fact that we need to amplify the _restricted_ search inverter in Theorem 6.1. This results in a multiplicative overhead of \(\Theta(1/\epsilon)\) in terms of space and time complexity, as compared to the restricted inverter. We remark that a similar loss as a result of amplification is also inherent in [1].

### Decision version

The search lower bound of Theorem 6.2, when combined with the search-to-decision reduction of Theorem 5.1, yields a lower bound for the decision version.

**Corollary 6.3**.: _Let \(\mathsf{D}\) be a \((S,T,\delta)\)-DPI for some \(\delta>0\). Suppose that \(\delta=\omega(1/N)\), \(T=\tilde{o}\left(\delta^{2}\sqrt{N}\right)\) and \(S\geq 1\). Then, for sufficiently large \(N\) we have_

\[S(\mathsf{D})\cdot T(\mathsf{D})^{2}\gtrapprox\widetilde{\Omega}\left(\delta^{6}N\right).\]

Proof.: Let \(N=2^{n}\). Given a \(\delta\)-DPI \(\mathsf{D}=(\mathsf{D}_{0},\mathsf{D}_{1})\), where \(\mathsf{D}_{0}\) outputs an \(S\)-qubit state and \(\mathsf{D}_{1}\) makes \(T\) queries, one can construct an \(\eta\)-SPI \(\mathsf{S}=(\mathsf{S}_{0},\mathsf{S}_{1})\) by Theorem 5.1 with \(\eta\geq 1-\mathsf{negl}(n)\), and with space and time complexities

\[S(\mathsf{S})=n\ell S(\mathsf{D})\quad\text{ and }\quad T(\mathsf{S})=n\ell T(\mathsf{D})\]

where \(\ell=\Omega\left(\frac{n(1+2\delta)}{\delta^{2}}\right)\).
It directly follows from Theorem 6.2 that, under the conditions

\[\delta =\omega(1/N),\] \[T(\mathsf{D}) =\frac{1}{n\ell}\cdot o(\eta\sqrt{N})=o\left(\frac{\delta^{2}}{n^{2}(1+2\delta)}\sqrt{N}\right)=\tilde{o}\left(\delta^{2}\sqrt{N}\right),\] \[S(\mathsf{D}) \geq 1,\]

\(\mathsf{S}\) satisfies the space-time trade-off lower bound

\[n^{3}\left(\frac{n(1+2\delta)}{\delta^{2}}\right)^{3}S(\mathsf{D})\cdot T(\mathsf{D})^{2} \geq\widetilde{\Omega}(\eta^{3}N)\approx\widetilde{\Omega}(N)\] \[S(\mathsf{D})\cdot T(\mathsf{D})^{2} \gtrapprox\widetilde{\Omega}\left(\delta^{6}N\right)\]

for sufficiently large \(N\).

Similar to the search lower bound from before, we incur a loss that amounts to a factor \(\delta^{6}\). This results from our specific approach, which is based on the search-to-decision reduction in Theorem 5.1. We believe that our lower bound could potentially be improved even further. In the case of no advice, we can get a tight bound by means of the reduction from the unique search problem (Theorem 5.4), combined with well-known lower bounds on the average-case unique search problem.

**Theorem 6.4**.: _Let \(\mathsf{D}\) be a \((0,T,\delta)\)-DPI. Then \(T^{2}\geq\widetilde{\Omega}(\delta N)\)._

Proof.: Since \(\mathsf{D}\) is a \((0,T,\delta)\)-DPI, by Theorem 5.4 we get a \(2T\)-query algorithm for the unique search problem with distributional error \((\frac{1}{2}-\delta,\,\frac{1}{2})\). Since the "yes" and "no" cases are uniformly distributed, we can write the distributional error as

\[\delta^{\prime}=\frac{1}{2}\left(\frac{1}{2}-\delta\right)+\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{2}-\frac{\delta}{2}.\]

For the average-case unique search problem, previous work [14, 15, 16] gave an optimal bound \(T^{2}\geq\widetilde{\Omega}(pN)\), where \(p\) is the success probability of the unique search problem. This concludes our proof.

## 7 Applications

In this section, we give a plausible security model for symmetric-key encryption, and a scheme whose security in that model is based on the hardness of our two-sided permutation inversion problem. Recall that a symmetric-key encryption scheme consists of three algorithms:

* key generation \(\mathsf{Gen}\): given randomness \(s\), security parameter \(n\); outputs key \(k:=\mathsf{Gen}(1^{n};s)\);
* encryption \(\mathsf{Enc}\): given key \(k\), plaintext \(m\), randomness \(r\); outputs ciphertext \(c:=\mathsf{Enc}_{k}(m;r)\);
* decryption \(\mathsf{Dec}\): given key \(k\), ciphertext \(c\); outputs plaintext \(m:=\mathsf{Dec}_{k}(c)\).

When the randomness is to be selected uniformly, we suppress it, e.g., we write \(\mathsf{Gen}(1^{n})\). Consider the following security definition.

**Definition 7.1**.: _(OW-QCCRA) Let \(\mathsf{SKE}=(\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})\) be a private-key encryption scheme. We say that \(\mathsf{SKE}\) is OW-QCCRA-secure if the advantage of any quantum polynomial-time adversary \(\mathcal{A}\) in the following experiment is at most negligible:_

1. _A key_ \(k\) _is generated by running_ \(\mathsf{Gen}(1^{n})\)_;_
2. \(\mathcal{A}\) _gets quantum oracle access to_ \(\mathsf{Enc}_{k}(\,\cdot\,;\,\cdot\,)\) _and_ \(\mathsf{Dec}_{k}(\cdot)\)_;_
3. _Uniform_ \(m\in\mathcal{M}\) _and_ \(r\in\mathcal{R}\) _are chosen, and a challenge ciphertext_ \(c=\mathsf{Enc}_{k}(m;r)\) _is computed and given to_ \(\mathcal{A}\)_;_
4. \(\mathcal{A}\) _gets quantum oracle access to_ \(\mathsf{Enc}_{k}(\,\cdot\,;\,\cdot\,)\) _and_ \(\mathsf{Dec}_{k}^{\perp c}(\cdot)\)_. Eventually, it outputs a bit_ \(b\)_._
5.
_The experiment outputs 1 if_ \(b=m|_{0}\)_, and 0 otherwise._

We remark that, unlike in most definitions of security, here the adversary is allowed to choose both inputs to the encryption oracle: the plaintext as well as the randomness. The acronym \(\mathsf{OW}\) stands for "one-way" and \(\mathsf{QCCRA}\) stands for "quantum chosen-ciphertext randomness-access attack." Next, we define two simple encryption schemes.

**RP Scheme.** Consider the following (inefficient) scheme that uses uniformly random permutations (a toy rendition in code is sketched after the reduction below):

* \(\mathsf{Gen}\) is given \(1^{n}\) and outputs a description \(k\) of a uniformly random permutation \(\pi\) on \(\{0,1\}^{2n}\);
* \(\mathsf{Enc}\) is given \(k\), \(m\in\{0,1\}^{n}\) and \(r\in\{0,1\}^{n}\), and outputs \(c:=\pi(m||r)\);
* \(\mathsf{Dec}\) is given \(k\) and \(c\in\{0,1\}^{2n}\), and outputs the first \(n\) bits of \(\pi^{-1}(c)\).

**PRP Scheme.** Let \(\{P_{k}:\{0,1\}^{2n}\mapsto\{0,1\}^{2n}\}\) be a family of quantum-query-secure strong pseudorandom permutations (PRPs) [10, 11] and consider the following scheme:

* \(\mathsf{Gen}\) takes as input a security parameter \(1^{n}\) and returns a key \(k\in\{0,1\}^{n}\) for \(P_{k}\);
* \(\mathsf{Enc}\) is given key \(k\in\{0,1\}^{n}\), \(m\in\{0,1\}^{n}\) and \(r\in\{0,1\}^{n}\), and outputs \(c:=P_{k}(m||r)\);
* \(\mathsf{Dec}\) is given key \(k\in\{0,1\}^{n}\) and \(c\in\{0,1\}^{2n}\), and outputs the first \(n\) bits of \(P_{k}^{-1}(c)\).

Of course, any practical scheme should be efficient, and indeed we can show that the PRP scheme is \(\mathsf{OW}\)-\(\mathsf{QCCRA}\)-secure. Specifically, we apply our decision inversion lower bound (Corollary 6.3) to prove the security theorem below.

**Theorem 7.2**.: _The PRP scheme is \(\mathsf{OW}\)-\(\mathsf{QCCRA}\)-secure. In other words, for all quantum polynomial time (QPT) adversaries \(\mathcal{A}\), it holds that_

\[\Pr\Bigl{[}\mathsf{Exp}_{\mathcal{A},\mathrm{PRP}}^{\mathsf{OW}\text{-}\mathsf{QCCRA}}(1^{n})=1\Bigr{]}\leq\frac{1}{2}+\mathsf{negl}(n).\]

Proof.: Given an adversary \(\mathcal{A}\) that attacks the RP scheme in the \(\mathsf{OW}\)-\(\mathsf{QCCRA}\) experiment, we can construct a \(\delta\)-\(\mathsf{DPI}\) \(\mathsf{D}=(\mathsf{D}_{0},\mathsf{D}_{1})\), where the experiment \(\mathsf{DecisionInvert}_{\mathsf{D}}\) proceeds as follows:

1. **(sample instance and coins)** a random permutation \(\pi:\{0,1\}^{n}\to\{0,1\}^{n}\), a random image \(y\leftarrow\{0,1\}^{n}\), and a random string \(r\leftarrow\{0,1\}^{*}\) are sampled;
2. **(prepare advice)** \(\mathsf{D}_{0}\) is given the whole permutation table of \(\pi\). It then constructs the oracles \(\mathsf{Enc}(\cdot;\cdot)=\pi(\cdot\|\cdot)\) and \(\mathsf{Dec}(\cdot)=\pi^{-1}(\cdot)\) and gives \(\mathcal{A}\) poly-time quantum oracle access. \(\mathsf{D}_{0}\) gets back an output state \(\rho\) and then outputs it.
3. **(invert)** \(\mathsf{D}_{1}\) is run with a random instance \(y\), advice \(\rho\) and quantum oracle access to \(\mathcal{O}_{\pi}\) and \(\mathcal{O}_{\pi_{\perp y}^{-1}}\). It then directly passes \(y\) and the two oracles to \(\mathcal{A}\), gets back a bit \(b\), and outputs it.
4. **(check)** If \(b=\pi^{-1}(y)|_{0}\), output 1; otherwise output 0.
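For concreteness, here is a toy, purely classical rendition of the RP scheme referenced above; the permutation is stored as an explicit table, so the construction is exponential in \(n\) and serves only as an illustration (names and parameter sizes are ours, not from the original schemes).

```python
import secrets

def gen(n):
    """Key = explicit table of a uniformly random permutation on {0,1}^{2n}."""
    table = list(range(2 ** (2 * n)))
    for i in range(len(table) - 1, 0, -1):   # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        table[i], table[j] = table[j], table[i]
    return table

def enc(pi, m, r, n):
    """c = pi(m || r) for an n-bit message m and n-bit randomness r."""
    return pi[(m << n) | r]

def dec(pi, c, n):
    """First n bits of pi^{-1}(c)."""
    inv = {image: w for w, image in enumerate(pi)}
    return inv[c] >> n

n = 4                                        # toy security parameter
pi = gen(n)
m, r = 0b1010, 0b0111
assert dec(pi, enc(pi, m, r, n), n) == m
```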
It trivially follows that

\[\Pr\Bigl{[}\mathsf{Exp}_{\mathcal{A},\mathrm{RP}}^{\mathsf{OW}\text{-}\mathsf{QCCRA}}(1^{n})=1\Bigr{]}\leq\Pr[\mathsf{DecisionInvert}_{\mathsf{D}}=1].\]

By assumption we have that, for all QPT \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}\) such that

\[\left|\Pr\left[\mathcal{A}^{P_{k}(\cdot),P_{k}^{-1}(\cdot)}\left(1^{n}\right)=1\right]-\Pr\left[\mathcal{A}^{\pi(\cdot),\pi^{-1}(\cdot)}\left(1^{n}\right)=1\right]\right|\leq\mathsf{negl}(n),\]

where \(P_{k}\) is a quantum-query-secure strong pseudorandom permutation [11]. Therefore

\[\Pr\Bigl{[}\mathsf{Exp}_{\mathcal{A},\mathrm{PRP}}^{\mathsf{OW-QCCRA}}(1^{n})=1\Bigr{]} \leq\Pr\Bigl{[}\mathsf{Exp}_{\mathcal{A},\mathrm{RP}}^{\mathsf{OW-QCCRA}}(1^{n})=1\Bigr{]}+\mathsf{negl}(n)\] \[\leq\Pr[\mathsf{DecisionInvert}_{\mathsf{D}}=1]+\mathsf{negl}(n)\] \[=\frac{1}{2}+\delta+\mathsf{negl}(n).\]

If \(\delta\) is not \(\omega(1/N)\), the security statement directly follows. Otherwise, by Corollary 6.3,

\[\delta\leq\widetilde{O}\left(\frac{S^{1/6}T^{1/3}}{2^{n/6}}\right)\leq\mathsf{negl}(n),\]

as both \(S\) and \(T\) are of polynomial size.

We also show that the (idealized, inefficient) RP scheme actually satisfies an even stronger version of the \(\mathsf{OW-QCCRA}\) security notion. In this strengthening, \(\mathcal{A}\) is computationally unlimited, and also gets _unlimited_ quantum oracle access to \(\mathsf{Enc}_{k}(\cdot\,;\cdot)\) and \(\mathsf{Dec}_{k}(\cdot)\) in the pre-challenge phase. This is defined and proved formally below.

**Definition 7.3**.: _(strong \(\mathsf{OW-QCCRA}\)) Let \(\mathsf{SKE}=(\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})\) be a private-key encryption scheme. We say that \(\mathsf{SKE}\) is strong \(\mathsf{OW-QCCRA}\)-secure if the advantage of any unbounded quantum adversary \(\mathcal{A}\) in the following experiment is at most negligible:_

1. _A key_ \(k\) _is generated by running_ \(\mathsf{Gen}(1^{n})\)_;_
2. \(\mathcal{A}\) _gets_ _unlimited_ _quantum oracle access to_ \(\mathsf{Enc}_{k}(\,\cdot\,;\,\cdot\,)\) _and_ \(\mathsf{Dec}_{k}(\cdot)\)_, but can only write down a_ \(\mathsf{poly}(n)\)_-qubit state;_
3. _Uniform_ \(m\in\mathcal{M}\) _and_ \(r\in\mathcal{R}\) _are chosen, and a challenge ciphertext_ \(c=\mathsf{Enc}_{k}(m;r)\) _is computed and given to_ \(\mathcal{A}\)_;_
4. \(\mathcal{A}\) _now gets poly-time quantum oracle access to_ \(\mathsf{Enc}_{k}(\,\cdot\,;\,\cdot\,)\) _and_ \(\mathsf{Dec}_{k}^{\perp c}(\cdot)\)_. Eventually, it outputs a bit_ \(b\)_._
5. _The experiment outputs 1 if_ \(b=m|_{0}\)_, and 0 otherwise._

**Theorem 7.4**.: _The \(\mathrm{RP}\) scheme is strong \(\mathsf{OW-QCCRA}\)-secure. In other words, for all quantum adversaries \(\mathcal{A}\), it holds that_

\[\Pr\Bigl{[}\mathsf{Exp}_{\mathcal{A},\mathrm{RP}}^{\mathsf{strong\ OW\text{-}QCCRA}}(1^{n})=1\Bigr{]}\leq\frac{1}{2}+\mathsf{negl}(n).\]

Proof.: The proof closely follows that of Theorem 7.2: given an unbounded adversary \(\mathcal{A}\) against the RP scheme, we construct a \(\delta\)-\(\mathsf{DPI}\) \(\mathsf{D}=(\mathsf{D}_{0},\mathsf{D}_{1})\) as follows:

1. **(sample instance and coins)** a random permutation \(\pi\), a random image \(y\), and a random string \(r\) are sampled as in the proof of Theorem 7.2;
2. **(prepare advice)** \(\mathsf{D}_{0}\) is given the whole permutation table of \(\pi\) and grants \(\mathcal{A}\) **unlimited** oracle access to \(\pi\) and \(\pi^{-1}\). Then \(\mathsf{D}_{0}\) gets back an output state \(\rho\) and outputs it.
3. **(invert)** \(\mathsf{D}_{1}\) is run with a random instance \(y\), advice \(\rho\) and quantum oracle access to \(\mathcal{O}_{\pi}\) and \(\mathcal{O}_{\pi^{-1}_{\perp y}}\). It then directly passes \(y\) and the two oracles to \(\mathcal{A}\), gets back a bit \(b\), and outputs it.
4. **(check)** If \(b=\pi^{-1}(y)|_{0}\), output 1; otherwise output 0.
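Classically, the \(\mathsf{DecisionInvert}_{\mathsf{D}}\) harness targeted by both reductions can be sketched as follows, reusing the punctured-oracle helper from Section 6; the interface for \(\mathsf{D}=(\mathsf{D}_{0},\mathsf{D}_{1})\) is hypothetical and purely illustrative.

```python
import secrets

def decision_invert(D0, D1, n):
    """Toy classical stand-in for DecisionInvert_D over N = 2^n: sample
    pi and y, run the two-phase inverter, check the first bit of x."""
    N = 2 ** n
    pi = list(range(N))
    for i in range(N - 1, 0, -1):            # random permutation
        j = secrets.randbelow(i + 1)
        pi[i], pi[j] = pi[j], pi[i]
    y = secrets.randbelow(N)                 # random image
    advice = D0(pi)                          # prepare-advice phase
    oracle = make_two_way_oracle(pi, y)      # sketch from Section 6
    b = D1(advice, y, oracle)                # invert phase
    x = pi.index(y)                          # true preimage pi^{-1}(y)
    first_bit = (x >> (n - 1)) & 1
    return int(b == first_bit)               # 1 iff the check passes
```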
It trivially follows that

\[\Pr\Bigl{[}\mathsf{Exp}_{\mathcal{A},\text{RP}}^{\mathsf{strong}\ \mathsf{OW-QCCRA}}(1^{n})=1\Bigr{]}\leq\Pr[\mathsf{DecisionInvert}_{\mathsf{D}}=1]=\frac{1}{2}+\delta.\]

If \(\delta\) is not \(\omega(1/N)\), the statement directly follows. Otherwise, by Corollary 6.3, assuming storage size \(S\) and \(T=\widetilde{o}\left(\delta^{2}\sqrt{2^{n}}\right)\),

\[\delta\leq\widetilde{O}\left(\frac{S^{1/6}T^{1/3}}{2^{n/6}}\right).\]

The assumption can also be expressed as \(\delta\geq\widetilde{\omega}\left(\frac{T^{1/2}}{2^{n/4}}\right)\), which is smaller than \(\widetilde{O}\left(\frac{S^{1/6}T^{1/3}}{2^{n/6}}\right)\). Therefore, given that \(S=\mathsf{poly}(n)\) in the experiment, we can directly bound the advantage \(\delta\) by \(\widetilde{O}\left(\frac{S^{1/6}T^{1/3}}{2^{n/6}}\right)\), which is negligible in \(n\).

Note that the adversary \(\mathcal{A}\) in the above experiment has **unlimited** query access, while the adversary in Theorem 7.2 only has poly-time query access. Therefore the truly random permutation RP cannot be replaced by the pseudorandom permutation PRP while still satisfying strong \(\mathsf{OW-QCCRA}\).

Finally, we remark that the above results hold for the following strengthening of \(\mathsf{OW-QCCRA}\). Suppose that an encryption scheme satisfies the property that there exists an _alternative_ decryption algorithm which can both compute the plaintext, and also deduce the randomness that was initially used to encrypt. This property is true for the RP and PRP schemes, as well as some other standard encryption methods (e.g., Regev's secret-key LWE scheme, implicit in [10]). For schemes in this category, one can also grant access to such an alternative decryption algorithm, thus expanding the form of "randomness access" that the adversary has. Our proofs show that the RP and PRP schemes are secure (in their respective settings) even against this form of additional adversarial power.
2304.05732
Neutron scattering sum rules, symmetric exchanges, and helicoidal magnetism in MnSb$_2$O$_6$
MnSb$_{2}$O$_{6}$ is based on the noncentrosymmetric $P321$ space group with magnetic Mn$^{2+}$ ($S={5/2}$, $L\approx 0$) spins ordering below $T_{\mathrm{N}}=12$ K in a helicoidal structure. The ground state magnetic structure, expected to originate from 7 Heisenberg exchange constants, has been shown to be coupled to the underlying crystallographic chirality with polar domain switching being reported. We apply neutron spectroscopy to extract these symmetric exchange constants. Given the high complexity of the magnetic exchange network, crystallographic structure and complications fitting linear spin-wave models, we take advantage of multiplexed neutron instrumentation to use the first moment sum rule of neutron scattering to estimate the 7 exchange constants. We then use these parameters to calculate the low-energy spin-waves in the N\'eel state to reproduce the neutron response without strong antisymmetric coupling. Using Green's response functions, the stability of long-wavelength excitations in the context of proposed magnetic structures is then discussed. The results show the presence of strong exchange constants for the chiral exchange pathways and illustrate an underlying coupling between crystallographic and magnetic ``chirality" through predominantly symmetric exchange.
E. Chan, H. Lane, J. Pásztorová, M. Songvilay, R. D. Johnson, R. Downie, J-W. G. Bos, J. A. Rodriguez-Rivera, S. -W. Cheong, R. A. Ewings, N. Qureshi, C. Stock
2023-04-12T09:42:18Z
http://arxiv.org/abs/2304.05732v1
# Neutron scattering sum rules, symmetric exchanges, and helicoidal magnetism in MnSb\({}_{2}\)O\({}_{6}\)

###### Abstract

MnSb\({}_{2}\)O\({}_{6}\) is based on the noncentrosymmetric \(P321\) space group with magnetic Mn\({}^{2+}\) (\(S=5/2\), \(L\approx 0\)) spins ordering below \(T_{\rm N}=12\) K in a cycloidal structure. The spin rotation plane was found to be tilted away from the \(c\)-axis [M. Kinoshita _et al._ Phys. Rev. Lett. **117**, 047201 (2016)], resulting in a helicoidal ground state, which we refer to as the tilted structure. In our previous diffraction work [E. Chan _et al._ Phys. Rev. B **106**, 064403 (2022)] we found no evidence that this tilted structure is favored over the pure cycloidal order (referred to as the untilted structure). The ground state magnetic structure, expected to originate from 7 nearest-neighbor Heisenberg exchange constants, has been shown to be coupled to the underlying crystallographic chirality, with polar domain switching being reported. We apply neutron spectroscopy to extract these symmetric exchange constants. Given the high complexity of the magnetic exchange network, the crystallographic structure, and the complications of fitting many-parameter linear spin-wave models, we take advantage of multiplexed neutron instrumentation to use the first moment sum rule of neutron scattering to estimate these symmetric exchange constants. The first moment of neutron scattering provides a way of deriving the Heisenberg exchange constant between two neighboring spins if the relative angle and distance of the two ordered spins are known. We show that the first moment sum rule combined with the known magnetic ordering wavevector fixes 6 of the 7 exchange constants. The remaining exchange constant is not determined by this analysis because of the equal spatial bond distances present for different chiral exchange interactions. However, we find this parameter is fixed by the magnon dispersion near the magnetic zone boundary, which is not sensitive to the tilting of the global magnetic structure. We then use these parameters to calculate the low-energy spin-waves in the Néel state to reproduce the neutron response without strong antisymmetric coupling. Using Green's response functions, the stability of long-wavelength excitations in the context of our proposed untilted magnetic structures is then discussed. The results show the presence of strong symmetric exchange constants for the chiral exchange pathways and illustrate an underlying coupling between crystallographic and magnetic "chirality" through predominantly symmetric exchange. We further argue that the excitations can be consistently modelled in terms of an untilted magnetic structure in the presence of symmetric-only exchange constants.

## I Introduction

Magnetic materials that lack an inversion center potentially host coupled magnetic and ferroelectric order parameters while also providing a framework for unusual magnetic excitations like directionally anisotropic (or nonreciprocal) spin-waves.[1; 2] Such materials often consist of magnetic ions in a low-symmetry environment with a complex set of magnetic interactions causing the coupling between structural (e.g.
ferroelectricity) and magnetic orders.[3; 4; 5; 6; 7] Determining the magnetic interactions that provide the basis for coupled structural and magnetic properties is often complicated and based on many-parameter fits from complex magnetic ground states.[8; 9] In this paper we investigate the magnetic excitations in powder and in an array of single crystals of the helicoidal magnet MnSb\({}_{2}\)O\({}_{6}\) with the goal of extracting the symmetric exchange constants. Given the complexity of the excitation spectrum, the number of predicted exchange constants, and ambiguities of the magnetic structure (tilted versus untilted ground state), we apply a first moment sum rule[20] analysis to extract the symmetric exchange constants and compare the results to the excitation spectrum from mean-field linear spin-wave theory. This approach only depends on the relative orientation of neighboring magnetic moments and does not depend on whether the overall magnetic structure is tilted or untilted, as discussed below. We also demonstrate a generalized methodology for obtaining symmetric Heisenberg exchange constants from multiplexed neutron scattering, where extensive regions of momentum and energy transfers are sampled.

Iron-langasite (Ba\({}_{3}\)NbFe\({}_{3}\)Si\({}_{2}\)O\({}_{14}\)) [10; 11; 12; 13] and MnSb\({}_{2}\)O\({}_{6}\)[14; 15; 16; 17; 18] are two examples of magnetic compounds that are based on the noncentrosymmetric \(P321\) (\(\#150\)) space group. The magnetic order in these compounds is different, with iron-langasite being described by a simple helix that can be quantified by a time-even pseudoscalar. [15] The magnetic structure in MnSb\({}_{2}\)O\({}_{6}\), in contrast, was first found to be cycloidal and quantified by a time-even polar vector. [15; 18] Given that magnetic Mn\({}^{2+}\) (\(S=5/2\), \(L\approx 0\)) is not expected to have an orbital degeneracy that would enhance anisotropic terms in the magnetic Hamiltonian,[19] such as antisymmetric exchange, these terms are expected to be small compared to the symmetric exchange terms. Furthermore, diagonal symmetric exchange interactions are coupled to the chirality of the underlying lattice.

The nuclear structure of MnSb\({}_{2}\)O\({}_{6}\), based on interleaved MnO\({}_{6}\) and SbO\({}_{6}\) octahedra, is shown in Fig. 1(a). The only magnetic ions, Mn\({}^{2+}\), arrange in a triangular motif. Magnetic interactions occur between these isolated MnO\({}_{6}\) octahedra through super-super-exchange (SSE) pathways (Mn-O-O-Mn). In particular, chiral SSE pathways along the \(c\)-axis, shown in Fig. 1(b)-(c), define the structural chirality of the compound. Below \(T_{\rm N}\approx 12\) K, the magnetic ground state was found to follow a cycloidal order with a propagation vector \(\mathbf{k}=(0,0,0.182)\). [15] Within each triangle of Mn in the \((ab)\)-plane, shown in dashed gray lines in Fig. 1(d), the moments are dephased by \(120^{\circ}\). The sense of rotation of the spins along the \(c\)-axis and within a basal triangle can be described by magnetic parameters \(\eta_{\rm C}\) and \(\eta_{\rm T}\), often called magnetic "chiralities", which directly couple to the crystal chirality \(\sigma\) through an energy invariant. [18] Later on, the cycloids were reported to be tilted away from the \(c\)-axis, with one of the main axes of the spin envelope parallel to \([1\bar{1}0]\), as shown in Fig. 1(e).
This ground state was argued to be necessary to explain the electric polarization measured by pyroelectric current in the \((ab)\)-plane in Ref. [16]. The magnetic structure was further investigated by complementary neutron diffraction techniques in Ref. [18], showing no evidence of this tilted magnetic ground state. Furthermore, a mechanism based on the coupled structural and magnetic chiralities was proposed for the ferroelectric switching, which does not require a tilted cycloid ground state.

Figure 1: (a) Nuclear structure of lattice-chiral MnSb\({}_{2}\)O\({}_{6}\). The structural chirality can be defined as the helical winding of the Mn-O-O-Mn super-super-exchange pathway with respect to the \(c\)-axis: it is clockwise for the left-handed structure (b), and anti-clockwise for the right-handed structure (c). Figures made with Vesta. [21] (d) Cycloidal magnetic structure with magnetic parameters \(\eta_{\rm c}\) and \(\eta_{\rm T}\) describing the sense of rotation of the spins. (e) Tilted magnetic structure where the spin rotation plane is tilted from the \(c\)-axis by an angle \(\theta\). Our previous diffraction work found no evidence of this tilted magnetic structure over the cycloidal one; we discuss this below in the context of a model with symmetric-only exchange constants. Figures made with Mag2Pol. [22]

The magnetic interactions are described by a dominant Heisenberg Hamiltonian \(\hat{\mathcal{H}}=\sum_{ij}J_{ij}\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\) with the symmetric exchange constants corresponding to the seven SSE pathways in MnSb\({}_{2}\)O\({}_{6}\). [15] The nearest neighbor exchange paths are shown in Fig. 2, where the oxygen atoms are omitted for clarity. Each manganese and antimony atom is surrounded by six oxygen atoms forming edge-sharing octahedra. In a minimalist model considering only interactions between neighboring Mn\({}^{2+}\) ions, there are therefore 7 exchange constants which need to be considered. Intraplane interactions are shown in Fig. 2(a), where \(J_{1}\) connects a triangle of MnO\({}_{6}\) octahedra through an SbO\({}_{6}\) octahedron centered at the origin, and \(J_{2}\) connects MnO\({}_{6}\) octahedra between these triangles through an interplane SbO\({}_{6}\) octahedron shown in Fig. 2(c). Interplane interactions within a Mn triangle connected by \(J_{1}\) are shown in Fig. 2(b), where \(J_{4}\) is the straight interplane exchange interaction, and \(J_{3}\) and \(J_{5}\) are diagonal exchange interactions. Similarly, Figure 2(c) shows \(J_{6}\) and \(J_{7}\), the diagonal exchange interactions connecting a Mn triangle linked by \(J_{2}\). Interestingly, \(J_{3}\) and \(J_{6}\) are related to the right-handed helical winding of the Mn-O-O-Mn SSE pathways (shown in Fig. 1(c) for \(J_{3}\)), while \(J_{5}\) and \(J_{7}\) are related to left-handed SSE pathways (shown in Fig. 1(b) for \(J_{5}\)). Thus, these chiral exchange paths are interchanged by inversion symmetry between structurally left- and right-handed crystals.[15] We note that only the first five exchange constants were necessary to describe the SSE interactions in iron-langasite, due to structural differences with MnSb\({}_{2}\)O\({}_{6}\).
Indeed, in Ba\({}_{3}\)NbFe\({}_{3}\)Si\({}_{2}\)O\({}_{14}\), the bond distance \(d_{2}=5.652\,\mathrm{\SIUnitSymbolAngstrom}\) associated with the intertriangle interaction \(J_{2}\) is significantly larger than the bond distance \(d_{1}=3.692\,\mathrm{\SIUnitSymbolAngstrom}\) tied to the intratriangle interaction \(J_{1}\).[13] On the contrary, in MnSb\({}_{2}\)O\({}_{6}\), \(d_{2}=4.845\,\mathrm{\SIUnitSymbolAngstrom}\) is smaller than \(d_{1}=5.596\,\mathrm{\SIUnitSymbolAngstrom}\); as a result, the related interplane interactions \(J_{6}\) and \(J_{7}\) are expected to be more significant, as they link magnetic Mn\({}^{2+}\) ions through SSE pathways.

In this paper, we present our inelastic neutron scattering data from both powder and single crystals of MnSb\({}_{2}\)O\({}_{6}\). We apply the first moment (Hohenberg-Brinkman) sum rule of neutron scattering to extract the exchange constants of the Heisenberg model, thereby characterizing the magnetic Hamiltonian. Then, we apply Green's functions on a rotating frame to generate spin-wave spectra based on our derived exchange constants. Using the values of the symmetric exchange constants from sum rules of neutron scattering, we refine the parameters to obtain a good description of the neutron inelastic spectra. Based on the Green's function neutron response, the stability of spin-wave excitations is further tested for the proposed magnetic structures.

## II Experimental details

### Materials preparation

Materials preparation followed the procedure outlined in Ref. [23]. Powders of MnSb\({}_{2}\)O\({}_{6}\) were prepared by mixing stoichiometric amounts of pure MnCO\({}_{3}\) and Sb\({}_{2}\)O\({}_{3}\). After mixing through grinding, the powder was pressed into a pellet and heated up to 1000\({}^{\circ}\)C, with the process repeated with intermediate grinding. It was found that heating the pellet to higher temperatures introduced the impurity Mn\({}_{2}\)Sb\({}_{2}\)O\({}_{7}\). Single crystals of MnSb\({}_{2}\)O\({}_{6}\) were prepared using the flux method. Starting ratios for single-crystal growth were (by weight) 73% of V\({}_{2}\)O\({}_{5}\) flux, 20% of polycrystalline MnSb\({}_{2}\)O\({}_{6}\) and 7% of B\({}_{2}\)O\({}_{3}\). The powder was ground and pressed into a pellet and flame sealed in a quartz ampoule under vacuum (less than \(10^{-4}\) Torr). B\({}_{2}\)O\({}_{3}\) was used to lower the melting temperature of the V\({}_{2}\)O\({}_{5}\) flux. Back filling the ampoules with \(\approx\) 200 mTorr of argon gas was found to noticeably improve crystal sizes. Quartz ampoules were then heated to 1000\({}^{\circ}\)C at a rate of 60\({}^{\circ}\)C/hour and soaked at this temperature for 24 hours. The furnace was then cooled to 700\({}^{\circ}\)C at a rate of 2\({}^{\circ}\)C/hour and held for 24 hours, before it was switched off and allowed to cool to room temperature. Crystal sizes in the range from a few millimeters to nearly a centimeter were obtained through this procedure.

### Neutron spectroscopy

To investigate the magnetic dynamics, neutron spectroscopy was performed on the MACS (NIST, Gaithersburg) triple-axis spectrometer[24] on both single crystals and powder samples. 1.3 g of single crystals were aligned in the (\(HHL\)) scattering plane on both sides of four aluminium plates and coated with viscous hydrogen-free Fomblin oil, as shown in Fig. 3.

Figure 2: Drawing of the seven nearest-neighbor interactions in MnSb\({}_{2}\)O\({}_{6}\).
(a) Intraplane interactions \(J_{1}\) connecting triangles of Mn centered at the lattice origin, and \(J_{2}\) connecting between these triangles. (b) Interplane interactions based on the \(J_{1}\) triangle: \(J_{4}\) is the straight interplane interaction, while \(J_{3}\) and \(J_{5}\) are diagonal chiral interactions. (c) Interplane interactions based on the \(J_{2}\) triangle, with \(J_{6}\) and \(J_{7}\) as chiral exchange interactions. Oxygen atoms are omitted here for clarity. Figure made with Mag2Pol.[22]

Figure 3: 1.3 g of single crystals of MnSb\({}_{2}\)O\({}_{6}\) aligned on four Al plates, and coated with Fomblin oil for inelastic neutron scattering.

A select fraction of the crystals were aligned with Laue diffraction and the remainder were aligned using polarized optical microscopy based on the crystal morphology. These single crystals were synthesized in the same way as the samples measured in our previous studies in Ref. [18], where we performed Schwinger scattering and transmission polarized optical microscopy and found only a small imbalance of chiral structural domains in the single crystals. This small imbalance distinguishes MnSb\({}_{2}\)O\({}_{6}\) from the enantiopure single crystals of iron-based langasite previously studied.[10; 12; 22] During the coalignment of the single crystals used here for spectroscopy, great care was taken to align the relative \(a\) and \(b\) in-plane axes; the choice of what constituted \(\pm[001]\) was done at random. For the purposes here we consider the average crystal structure to be an equal mixture of the differing chiral domains. We will show in Section III.4 that our analysis holds no matter the proportion of chiral structural domains.

To probe the dynamics in our array of single crystals, the final energy was fixed to either \(E_{\mathrm{f}}\)=2.4 meV or 3.7 meV, with BeO and Be filters, respectively, being used on the scattered side to filter out higher-order neutrons from the monochromator. For all results presented here the pyrolytic graphite PG(002) monochromator was focused both horizontally and vertically. The lattice parameters were measured to be \(a=b=8.733\,\mathrm{\SIUnitSymbolAngstrom}\) and \(c=4.697\,\mathrm{\SIUnitSymbolAngstrom}\). For powder measurements, a 16.3 g sample was used with \(E_{\mathrm{f}}\)=3.7 meV and a BeO filter on the scattered side.

## III Results and discussion

In this section, we will first present the neutron scattering data for both powders and single crystals of MnSb\({}_{2}\)O\({}_{6}\), before detailing our absolute normalization process. Then, zeroth and first moment sum rules are applied to our inelastic data, allowing the extraction of the symmetric exchange constants. We will finally use Green's functions on a rotating frame to compare the resulting spin-wave spectra to the experimental ones and to test the stability of proposed magnetic structures.

### Excitation spectra

#### iii.1.1 Total excitation spectra

Figure 4: (a) Powder averaged inelastic neutron scattering spectrum taken on MACS at \(T=1.4\,\mathrm{K}\). (b)-(c) Single crystal inelastic neutron scattering spectrum from the \(E_{\mathrm{f}}=3.7\,\mathrm{meV}\) dataset at \(T=1.4\,\mathrm{K}\). The logarithmic intensity scales are chosen to show the two components to the scattering, and in particular the higher-energy weak scattering displayed at \(\sim 2\) meV.

The excitation spectra of both powders and single crystals of MnSb\({}_{2}\)O\({}_{6}\) at \(T=1.4\,\mathrm{K}\) are shown in Fig. 4, with the \(E_{\mathrm{f}}=3.7\,\mathrm{meV}\) MACS setup. The powder data in Fig. 4(a) display intense low-energy magnetic scattering extending from the elastic line to \(\sim 1\) meV, and a weaker
band of excitations at approximately twice this value, at \(\sim 2\) meV. The single crystal data displayed in Fig. 4(b)-(c) illustrate two different types of scattering: one with intense dispersive fluctuations that are well defined both in momentum and energy at low energies, and the other with a weaker momentum- and energy-broadened continuum of scattering extending to larger energy transfers. This continuum of scattering is most apparent at the zone boundaries in the single crystal data. Given the kinematics of these two types of scattering, we associate the lower-energy dispersive fluctuations with one-magnon scattering and the higher-energy continuum with two-magnon scattering. While two-magnon scattering is expected to be most prominent in \(S=1/2\) magnets,[25; 26; 27; 28; 29; 30; 31; 32; 33] it is a direct result of the uncertainty associated with non-commuting observables and has been studied extensively in other large-\(S\) magnets.[34; 35; 36] We discuss this cross section later in the paper in the context of the zeroth moment sum rule and show that these two components of scattering indeed originate from single- and multi-magnon processes.

#### iii.1.2 Powder low-energy spectrum

Results of the low-energy powder inelastic neutron scattering experiment performed on MACS, with fixed final energy \(E_{\mathrm{f}}=3.7\,\mathrm{meV}\), are shown in Fig. 5. The powder averaged spin-wave dispersion at \(T=1.4\,\mathrm{K}\), below the Néel magnetic ordering transition, is presented in Fig. 5(a), showing low-energy spin dynamics below \(E\approx 1.4\,\mathrm{meV}\). These dynamics are highly dispersive from the magnetic ordering wavevector and are gapless within experimental resolution (\(\Delta E\approx 0.15\,\mathrm{meV}\)). In contrast, above \(T_{\mathrm{N}}\approx 12\,\mathrm{K}\), the magnetic scattering is considerably broadened both in momentum and energy, indicative of spatially and temporally short-range correlations. This paramagnetic scattering is very strong due to the high spin \(S=5/2\) of the Mn\({}^{2+}\) magnetic ions, as shown in Fig. 5(b) with the spectrum measured at \(T=25\,\mathrm{K}\). Both experimental datasets below and above the magnetic ordering temperature also display a decay in intensity with increasing momentum transfer, characteristic of magnetic scattering. The powder averaged spectra establish the presence of dispersive magnetic dynamics and the energy scale of the spin excitations.

#### iii.1.3 Single crystal low-energy spectrum

Results of single crystal inelastic neutron scattering performed on MACS with a fixed final energy \(E_{\mathrm{f}}=2.4\,\mathrm{meV}\) are displayed in Fig. 6 and Fig. 7 at \(T=1.4\,\mathrm{K}\), below \(T_{\mathrm{N}}\). The data are illustrative of dispersive dynamics originating from the magnetic ordering wavevector. Constant energy slices at \(E=0.1\,\mathrm{meV}\) and \(E=1.25\,\mathrm{meV}\) are shown in Fig. 6(a) and (b). Spin-wave dispersions along \((-1,-1,L)\) and \((H,H,0)\) are shown in Fig. 7(a) and Fig. 7(b), respectively. Spin-wave branches emerging from the nuclear Bragg peak (-1,-1,0) and also its magnetic satellites (-1,-1,0)\(\pm\mathbf{k}\) are visible in Fig. 6(a) and Fig. 7(a).
Within the instrumental resolution (\(\Delta E\approx 0.1\,\mathrm{meV}\)), all modes appear gapless, which is consistent with the low anisotropy measured from electron spin resonance,[17] and observed from the tunability of the magnetic structure by small magnetic fields.[18; 16] As already presented in Fig. 4(b, c), inelastic neutron scattering data were also obtained on MACS with the same array of single crystals, but with a fixed final energy \(E_{\mathrm{f}}=3.7\,\mathrm{meV}\). In the remainder of the paper, the dataset used for each analysis will be specified.

### Absolute normalization of magnetic cross section

In order to directly compare the magnetic scattering intensities from the different datasets, they have to be converted into absolute units. This is particularly important given our goal of applying sum rules of neutron scattering to obtain the magnetic exchange constants in absolute units of energy. Through this we will apply the zeroth moment sum rule to demonstrate that all of the magnetic spectral weight is measured in the experiments discussed above. We then apply the first moment sum rule to obtain the symmetric exchange constants.

Figure 5: Powder inelastic neutron scattering spectrum of the one-magnon cross section at (a) \(T=1.4\,\mathrm{K}\) (below \(T_{\mathrm{N}}\)) and (b) \(T=25\,\mathrm{K}\) (above \(T_{\mathrm{N}}\)).

In this section, we describe our normalization process, adapted from Ref. [37] and Ref. [38], and introduce our definition of the dynamical structure factor \(S(\mathbf{Q},E)\). The intensity measured during the experiment, \(I(\mathbf{Q},E)\) (in counts), is related to the differential cross section via a convolution with an instrument-dependent resolution function \(R\):

\[I(\mathbf{Q},E)=\int\mathrm{d}\mathbf{Q}_{0}\,\mathrm{d}E_{0}\,\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}\Omega\,\mathrm{d}E_{\mathrm{f}}}(\mathbf{Q}_{0},E_{0})R(\mathbf{Q}_{0},E_{0},\mathbf{Q},E) \tag{1}\]

By assuming a slow variation of this resolution function over the narrow energy range probed in this study, it can be approximated by a constant \(R_{0}\), which allows us to decouple the intensity into:

\[I(\mathbf{Q},E)\approx R_{0}\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}\Omega\,\mathrm{d}E_{\mathrm{f}}}(\mathbf{Q},E) \tag{2}\]

During the data reduction, the intensity is normalized to the monitor counts from a low-efficiency detector placed in the incident beam, after the monochromator and before the sample. Its efficiency is inversely proportional to the speed of the incident neutrons, which is itself proportional to \(k_{\mathrm{i}}\), giving the normalized intensity (in counts/mon):

\[\bar{I}(\mathbf{Q},E)=k_{\mathrm{i}}I(\mathbf{Q},E)=k_{\mathrm{i}}R_{0}\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}\Omega\,\mathrm{d}E_{\mathrm{f}}}(\mathbf{Q},E) \tag{3}\]

Having related the measured scattering intensity to the cross section, we now focus on the magnetic differential cross section for unpolarized neutrons and identical magnetic ions. Assuming isotropic spin excitations, we can define the dynamic structure factor \(S(\mathbf{Q},E)=S^{xx}=S^{yy}=S^{zz}\), where \(S^{\alpha\beta}\) is the dynamic spin correlation function related to the Fourier transform of the spin-spin correlation function.
Neglecting the Debye-Waller factor gives the following double differential cross section:

\[\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}\Omega\,\mathrm{d}E_{\mathrm{f}}}(\mathbf{Q},E)=N\frac{k_{\mathrm{f}}}{k_{\mathrm{i}}}\left(\frac{\gamma r_{0}}{2}\right)^{2}(g|f(\mathbf{Q})|)^{2}2S(\mathbf{Q},E) \tag{4}\]

where \(N\) is the number of unit cells, \(\gamma r_{0}/2\approx 0.2695\times 10^{-12}\,\mathrm{cm}\) is the typical magnetic scattering length, \(g\) is the Landé factor and \(f(\mathbf{Q})\) the magnetic form factor.

Figure 6: MACS single crystal inelastic neutron scattering spectra at \(T=1.4\,\)K: constant energy slices for (a) \(E=0.1\,\)meV and (b) \(E=1.25\,\)meV. The weak scattering in (a) at \((H,H)\)\(\sim\) -0.5 and displaced at \((H,H)\)\(\sim\) -1.1 originates from crystals misaligned by \(\sim\) 60\({}^{\circ}\) in the multi-crystal mount.

Figure 7: MACS single crystal inelastic neutron scattering spectra at \(T=1.4\,\)K: spin-wave dispersion along (a) \((-1,-1,L)\) and (b) \((H,H,0)\).

Combining Eq. 3 and 4, we get the dynamical structure factor (in \(\mathrm{meV}^{-1}\)) from the measured intensity by:
With \(N_{\mathrm{V}}\) the number of Vanadium atoms and its incoherent scattering length \(b_{\mathrm{V}}^{\mathrm{inc}}=6.35\,\mathrm{fm}\),[40] we can write: \[N_{\mathrm{V}}k_{\mathrm{f}}R_{0}=\frac{\int_{-\epsilon}^{+\epsilon}\mathrm{ d}E\,\bar{I}_{\mathrm{V}}(\mathbf{Q},E)}{(b_{\mathrm{V}}^{\mathrm{inc}})^{2}} \tag{8}\] By writing \(N_{\mathrm{V}}=m_{\mathrm{V}}/(A_{\mathrm{r}}(V)m_{\mathrm{u}})\) with \(m_{\mathrm{V}}\) the mass of the Vanadium sample, \(A_{\mathrm{r}}(V)\) the relative atomic mass of Vanadium, and \(m_{\mathrm{u}}\) the atomic mass constant, we can write the ratio \(N/N_{\mathrm{v}}=\frac{m/A_{\mathrm{r}}(\mathrm{MnSb}_{2}\mathrm{O}_{6})_{ \mathrm{cell}}}{m_{\mathrm{V}}/A_{\mathrm{r}}(V)}\) with \(m\) the mass of the MnSb\({}_{2}\)O\({}_{6}\) sample, and \(A_{\mathrm{r}}(\mathrm{MnSb}_{2}\mathrm{O}_{6})_{\mathrm{cell}}\) the relative mass of a unit cell (three formula units of MnSb\({}_{2}\)O\({}_{6}\) per unit cell), the normalization factor becomes: \[Nk_{\mathrm{f}}R_{0}=\frac{m/A_{\mathrm{r}}(\mathrm{MnSb}_{2}\mathrm{O}_{6})_ {\mathrm{cell}}}{m_{\mathrm{V}}/A_{\mathrm{r}}(V)}\frac{\int_{-\epsilon}^{+ \epsilon}\mathrm{d}E\,\bar{I}_{\mathrm{V}}(\mathbf{Q},E)}{0.403\,\mathrm{b}} \tag{9}\] This equation allows us to obtain the instrumental calibration factor from the incoherent cross section centered at the elastic (\(E=0\)) position. We note that an alternate way to obtain this calibration constant is to measure the elastic incoherent cross section from the sample given Manganese has a comparatively large incoherent cross section. We did not take this approach in this experiment as we found the elastic line where incoherent scattering is present in our single crystal geometry was contaminated by scattering from hydrogen free (yet fluorine based) Fomblin oil. Fomblin, while having a comparatively small incoherent cross section in comparison to hydrogen, has a non-negligible coherent liquid-like cross section. This cross section is difficult to disentangle from the purely Mn\({}^{2+}\) incoherent cross section and therefore we relied on a separate Vanadium standard of known mass. ### Total moment sum rule Having established the procedure for calibration of the instrument, we now discuss the sum rules of neutron scattering. Magnetic neutron scattering is governed by sum rules which are satisfied by integrating the dynamical spin correlation function \(S^{\alpha\beta}(\mathbf{Q},E)\) over energy and momentum transfer.[20] In particular the energy moments, \(\int_{-\infty}^{+\infty}E^{n}S^{\alpha\beta}(\mathbf{Q},E)\,\mathrm{d}E\) are given theoretically,[41; 20; 42] with \(n=0,1\) the zeroth and first moment. The zeroth moment sum rule is often referred to as the total moment sum rule and corresponds to the integral of all the magnetic spectral weights:[43; 44; 37; 45] \[\frac{3\int\mathrm{d}^{3}\mathbf{Q}\int\mathrm{d}E\,S(\mathbf{Q},E)}{\int\mathrm{d}^{ 3}\mathbf{Q}}=N_{\mathrm{m}}S(S+1) \tag{10}\] where \(N_{\mathrm{m}}=3\) is the number of magnetic ions per unit cell. This quantity can be considered as a conservation rule and allows us to confirm whether we have experimentally measured all of the spectral weight. This rule has become particularly important in itinerant compounds near potential critical points.[46] We will apply this zeroth moment sum rule to our powder data, which was normalized using a vanadium standard sample, following the process described above. 
In this case, the total moment can be written as: \[I=\frac{\int\mathrm{d}Q\,Q^{2}\int\mathrm{d}E\,S(Q,E)}{\int\mathrm{d}Q\,Q^{2}}=S(S+1) \tag{11}\] with \(Q=|\mathbf{Q}|\). In order to estimate the spectral contributions from one-magnon and two-magnon scattering, we can introduce the momentum integrated intensity: \[\tilde{I}(E)=\frac{3\int\mathrm{d}Q\,Q^{2}S(Q,E)}{\int\mathrm{d}Q\,Q^{2}} \tag{12}\] which measures the magnetic density of states.[44; 45] Then the integral \(\int_{E_{\rm min}}^{E_{\rm max}}{\rm d}E\,\tilde{I}(E)\) gives the spectral weight for the energy interval \([E_{\rm min},E_{\rm max}]\). Figure 8 shows the momentum integrated intensities as a function of energy. As discussed above, the magnetic intensity consists of two components: a low-energy component of harmonic excitations well defined in momentum and energy, and a second, considerably weaker component which is broadened in momentum and energy transfer. These correspond to single [Fig. 8(a)] and two-magnon [Fig. 8(b)] dynamics and are separated in the powder averaged data. We can see that the one- and two-magnon contributions cross over around 1.6 meV (red dashed line), but since the intensities are quite low at this energy we consider 1.6 meV as the upper bound of the one-magnon scattering, and 0.3 meV as its lower bound (blue dashed line). To extract numerical values for the integrated zeroth moments from our powder data we average the data in momentum. Accounting for the powder average in momentum, the \(Q\)-dependence of the integrated intensity is given by:[43; 47] \[\mathcal{L}(Q_{\rm max})=\frac{\int_{0}^{Q_{\rm max}}{\rm d}Q\,Q^{2}\int{\rm d}E\,S(Q,E)}{\int_{0}^{Q_{\rm max}}{\rm d}Q\,Q^{2}} \tag{13}\] and is shown in Fig. 9 for both the (a) one-magnon and (b) two-magnon contributions discussed above. The momentum average in this plot allows us to account for the limited kinematic coverage of the detectors at low momentum transfers (see low momentum transfers in Fig. 5). From Fig. 9, we can see that \(\mathcal{L}(Q_{\rm max})\) saturates close to \(2\,{\rm\AA}^{-1}\), illustrating that essentially all of the spectral weight has been sampled. Based on this momentum average of the powder data, the spectral weight \(I_{1}=2.7(2)\) for one-magnon scattering is then calculated by integrating the intensity between 0.3 meV [dashed blue line in Fig. 8(a)] and 1.6 meV (dashed red line in Fig. 8). The two-magnon spectral weight is obtained by integrating between 1.6 and 4 meV, leading to \(I_{2}=0.17(1)\). The elastic (static) scattering contribution to the total moment is \(\langle S_{z}\rangle^{2}\), where \(z\) indicates the direction of the Mn\({}^{2+}\) spin in the rotated local frame. From our neutron powder diffraction (previously outlined in Ref. [18]) the ordered moment is \(g\langle S_{z}\rangle=4.6\,\mu_{\rm B}\) at 2.6 K, leading to \(\langle S_{z}\rangle^{2}=5.3\) and a spin reduction from the expected fully saturated moment corresponding to \(S=5/2\) of \(\Delta S=S-\langle S_{z}\rangle=0.2\). By conservation of spectral weight, this component missing from the experimental \(\langle S_{z}\rangle\) is expected to reside in the multimagnon component of the neutron dynamics, corresponding to longitudinal fluctuations. Based on this elastic spectral weight, the theoretical total, one-magnon, and two-magnon contributions can be computed.[48; 35] They are compared with those obtained experimentally in Table 1. 
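As a worked check of the theoretical entries in Table 1 (the arithmetic here is ours): with \(g=2\), the ordered moment \(g\langle S_{z}\rangle=4.6\,\mu_{\rm B}\) gives \(\langle S_{z}\rangle=2.3\), so \[\langle S_{z}\rangle^{2}=2.3^{2}=5.29\approx 5.3,\qquad\Delta S=2.5-2.3=0.2,\] and therefore \[(S-\Delta S)(1+2\Delta S)=2.3\times 1.4=3.22\approx 3.2,\qquad\Delta S(\Delta S+1)=0.2\times 1.2=0.24\approx 0.2,\] which sum exactly to the total \(S(S+1)=8.75\), as they must. 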
The experimental total moment is 8.2(2), which is to be compared to the expected value of 8.75 for \(S=5/2\). The discrepancy can be attributed to the relatively small \(Q\)-range measured during this experiment and to experimental systematic issues, such as the use of an external Vanadium standard or small variations in the resolution function over the energy range probed here. Given the small energy and momentum ranges, and that we have integrated the intensity over all momentum and energy, we do not expect changes in the resolution to be important. The results are nevertheless in good agreement, illustrating the relative weights of one- and two-magnon cross sections and the energy range over which the magnetic dynamics are present in MnSb\({}_{2}\)O\({}_{6}\). This also confirms our assignment of the higher energy component to longitudinal two-magnon scattering, and illustrates that all of the spectral weight is sampled in the dynamic range of our experiments. \begin{table} \begin{tabular}{c c c} \hline \hline & Theory & Experiment \\ \hline Total & \(S(S+1)=8.75\) & 8.2(2) \\ Elastic & \(\langle S_{z}\rangle^{2}=5.3\) & \\ One-magnon & \((S-\Delta S)(1+2\Delta S)=3.2\) & 2.7(2) \\ Two-magnon & \(\Delta S(\Delta S+1)=0.2\) & 0.17(1) \\ \hline \hline \end{tabular} \end{table} Table 1: Contributions of the different components of the scattering for \(S=5/2\) and \(\Delta S=0.2\) deduced from neutron powder diffraction. Figure 8: Momentum integrated intensities as a function of the energy, for (a) \(E\in[0,1.9]\) meV, and (b) \(E\in[1.3,4]\) meV. The intensities are integrated between the dashed blue (0.4 meV) and red (1.6 meV) lines to get the one-magnon spectral weight \(I_{1}\), and above the red line up to 4 meV to get the two-magnon spectral weight \(I_{2}\). ### First moment sum rule The previous discussion of the zeroth moment sum rule has established several points relevant for the rest of the paper. First, we established the energy range of the magnetic dynamics in MnSb\({}_{2}\)O\({}_{6}\). Second, we have established the relative spectral weights of the single and two-magnon cross sections and found these to be in good agreement with the missing spectral weight observed in diffraction experiments. Third, we have established and verified a calibration procedure for the powder data. #### iii.4.1 Theory In this section, we discuss the first moment sum rule and how it can be applied to extract symmetric exchange constants. 
The first moment is defined for a general dynamic spin correlation function \(S^{\alpha\beta}(\mathbf{Q},E)\) as: \[\langle E\rangle(\mathbf{Q}) \equiv\int_{-\infty}^{\infty}\mathrm{d}E\,E\;S^{\alpha\beta}(\mathbf{Q},E) \tag{14}\] \[=\int_{-\infty}^{\infty}\mathrm{d}E\,\langle[\hat{S}^{\alpha}(\mathbf{Q},E),\hat{\mathcal{H}}]\hat{S}^{\beta}(-\mathbf{Q},0)\rangle\] \[=\langle[\hat{S}^{\alpha}(\mathbf{Q}),\hat{\mathcal{H}}]\hat{S}^{\beta}(-\mathbf{Q})\rangle\] For nuclear scattering from a monatomic system, this reduces to \(\frac{\hbar^{2}Q^{2}}{2M}\), where \(M\) is the mass of the scattering nucleus.[49; 50] For magnetic systems, in the case of symmetric-only exchange where the Hamiltonian has the form \(\hat{\mathcal{H}}=\sum_{i,j}J_{ij}\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\), the Hohenberg-Brinkman first moment sum rule is given by:[20; 37; 43; 44; 45; 46] \[\langle E\rangle(\mathbf{Q}) =\int\mathrm{d}E\,E\;S(\mathbf{Q},E) \tag{15}\] \[=-\frac{2}{3}\sum_{i,j}n_{ij}J_{ij}\langle\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\rangle[1-\cos(\mathbf{Q}\cdot\mathbf{d}_{ij})]\] where \(\langle\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\rangle\) is the ground-state equal-time correlation function of spins \(\hat{\mathbf{S}}_{i}\) and \(\hat{\mathbf{S}}_{j}\) at sites \(i\) and \(j\), and \(n_{ij}\) is the multiplicity of \(J_{ij}\), the exchange constant associated to the bond vector \(\mathbf{d}_{ij}\). This equation assumes symmetric-only exchange, which we anticipate to be dominant for \(3d\) magnetic transition metal ions in the absence of spin-orbit coupling. Anisotropic terms in the magnetic Hamiltonian appear as constants in this equation for the first moment; however, given the lack of an orbital degree of freedom for Mn\({}^{2+}\) in an octahedron, we expect such terms to be small in comparison to the symmetric Heisenberg exchange and therefore neglect them here. Knowing the nuclear and magnetic structure of a compound gives the bond vectors \(\mathbf{d}_{ij}\) and the correlators \(\langle\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\rangle\). Measuring the first moment for different \(\mathbf{Q}\) values then allows one to fit the exchange constants, which correspond to the amplitudes of the sinusoidal oscillations. We note that Eq. (15) only depends on the relative orientation of neighboring spins, which has been modelled previously using neutron diffraction. In terms of notation, in the following the spin component \(S(S+1)\) will be included in the exchange constants, and the exchange constants are given in units of meV. In MnSb\({}_{2}\)O\({}_{6}\), seven nearest-neighbor exchange interactions are considered and expected to be relevant, as shown in Fig. 2, corresponding to a total of 30 Mn-Mn bonds per unit cell. The first quantity to evaluate is the set of ground-state correlation functions \(\langle\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\rangle\) for each of the bonds. The magnetic ground state of MnSb\({}_{2}\)O\({}_{6}\) is debated, reported either as a pure cycloid in Ref. [15] or as tilted from the \(c\)-axis in Ref. [16]. In both cases, however, the spin structure is helicoidal with the spins co-rotating in the same plane.[18] The scalar product can thus be simply evaluated as \(\cos\Delta\theta_{ij}\), with \(\Delta\theta_{ij}\) the angle difference between the spins in the common rotation plane. 
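As a numerical check (ours) of the correlators listed in Table 2 below, using the propagation vector component \(k=0.182\) determined from diffraction:[18] \[c_{1}=\cos(2\pi/3)=-0.5,\qquad c_{4}=\cos(2\pi\times 0.182)\approx 0.414,\] \[c_{\mathrm{R}}=\cos\left(2\pi(0.182+\tfrac{1}{3})\right)\approx-0.995,\qquad c_{\mathrm{L}}=\cos\left(2\pi(0.182-\tfrac{1}{3})\right)\approx 0.580,\] consistent with the tabulated values. 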
The exchange interactions are listed in Table 2 with their associated multiplicities, bond distances, and ground-state correlators, with \(k=0.182\) the propagation vector component along the \(c\)-axis. We emphasize that this method depends only on the relative orientation of neighboring spins and not on the details of the tilted and non-tilted helicoidal structures; indeed, the \(\langle\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\rangle\) correlators are the same in both models. This method therefore gives us an independent means of measuring the exchange constants without the details of the long-range magnetic structure that are relevant for spin-wave calculations. We discuss this point later in the context of the stability of the long-wavelength excitations, once we have obtained the exchange constants from the first moment analysis. Figure 9: Integrated intensities as a function of \(Q_{\mathrm{max}}\), the momentum integration upper bound, for (a) one-magnon and (b) two-magnon scattering. The dashed lines indicate the final values for \(Q_{\mathrm{max}}=2.05\,\)Å\({}^{-1}\). Furthermore, we note that the correlators for diagonal paths actually depend on the sense of rotation of the spins, and thus on the magnetic parameters \(\eta_{\rm C}\) and \(\eta_{\rm T}\). From the energy invariant, these magnetic parameters are related to the structural chirality by \(\sigma=\eta_{\rm C}\eta_{\rm T}\).[18] Thus the correlators for the diagonal exchange paths are \(\cos(2\pi(\eta_{\rm C}k\pm\eta_{\rm T}/3))=\cos(2\pi(k\pm\sigma/3))\) for left-handed \(J_{5}\), \(J_{7}\) (\(+\)) and right-handed \(J_{3}\), \(J_{6}\) (\(-\)) exchange interactions. The diagonal exchange interactions are interchanged by inversion symmetry, which corresponds to an inversion of \(\sigma\); the ground-state correlators are therefore invariant for a given exchange constant, and the analysis holds independently of the structural and magnetic domain populations. This is convenient, as a mixture of structural and magnetic domains was previously measured in a single crystal of MnSb\({}_{2}\)O\({}_{6}\).[18] For a fixed scattering vector \(\mathbf{Q}\), the cosine frequency will only depend on the bond distances. We can therefore define parameters \(\gamma\) associated to each of the five distinct bond lengths, which are functions of the exchange constants and ground-state correlation functions: \[\gamma_{1}=J_{1}c_{1} \tag{16a}\] \[\gamma_{2}=J_{2}c_{1} \tag{16b}\] \[\gamma_{4}=J_{4}c_{4} \tag{16c}\] \[\gamma_{\rm i}=J_{3}c_{\rm R}+J_{5}c_{\rm L} \tag{16d}\] \[\gamma_{\rm e}=J_{6}c_{\rm R}+J_{7}c_{\rm L} \tag{16e}\] where the \(c_{i}\) are calculated from the co-rotating helicoidal magnetic structure[18] and displayed in Table 2. #### iii.2.2 Single-crystal data Having discussed the equations and theory for the first moment sum rule applied to MnSb\({}_{2}\)O\({}_{6}\), we now apply this to our single crystal sample aligned in the (\(HHL\)) scattering plane. We can simplify the calculation of the first moment by fixing \(H=H_{0}\) and varying \(L\) (\(L\)-scan), or fixing \(L=L_{0}\) and varying \(H\) (\(H\)-scan). This leads to two different analyses. The \(L\)-scan analysis will be detailed in the following section, while the \(H\)-scan analysis is presented in Appendix A.2. 
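Both analyses model the same quantity, Eq. (15) summed over the 30 bonds of the unit cell. As a minimal sketch (ours, not the analysis code used for the paper; the input layout is a hypothetical choice):

```python
import numpy as np

def first_moment(Q, bonds):
    """Hohenberg-Brinkman first moment of Eq. (15) for one unit cell.

    `Q` is the scattering vector and `bonds` is a list of (gamma, d)
    pairs, one entry per Mn-Mn bond, where gamma = J <S_i . S_j>
    (in meV, per the convention above) and d is the bond vector,
    expressed so that the product Q . d is in radians.
    """
    return sum(-2.0 / 3.0 * g * (1.0 - np.cos(np.dot(Q, d)))
               for g, d in bonds)
```

Fitting the measured first moments then amounts to adjusting the five \(\gamma\) parameters inside such a model. 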
The data are extracted along an \(L\)-scan, considering \(\mathbf{Q}=(H_{0},H_{0},L)\) with \(L\) varying and a given \(H_{0}\). In the following we consider the \(E_{\rm f}=2.4\,\)meV dataset and, as an example, take \(H_{0}=0.2\). The spin-wave dispersion along \((0.2,0.2,L)\) is shown in Fig. 10. For each interaction indexed by spins \(i\) and \(j\), the corresponding term in the first moment cosine from Eq. (15) can be written as: \[\mathbf{Q}\cdot\mathbf{d}_{ij}=2\pi H_{0}(d_{ij,x}+d_{ij,y})+2\pi Ld_{ij,z} \tag{17}\] where the distances \(d_{ij}\) are expressed in lattice units, and the scattering vector in reciprocal lattice units. Using trigonometric identities to expand the cosine term, and summing Eq. (15) over the 30 bonds in the unit cell, a general formula for the first moment is derived, for a fixed \(H_{0}\): \[\langle E\rangle(H_{0},L)=A(H_{0})\cos(2\pi L)+C(H_{0}) \tag{18}\] where \(A\) and \(C\) are two \(H_{0}\)-dependent functions of the \(\gamma\) parameters, given by: \[A(H_{0})=\frac{2}{3}[(1+2c(H_{0}))\gamma_{\rm i}+3\gamma_{4}+2\Sigma_{c}(H_{0})\gamma_{\rm e}] \tag{19}\] \[C(H_{0})=-\frac{2}{3}[2(1-c(H_{0}))\gamma_{1}+2(3-\Sigma_{c}(H_{0}))\gamma_{2}+3\gamma_{\rm i}+3\gamma_{4}+6\gamma_{\rm e}] \tag{20}\] where \[c(H_{0})=\cos(2\pi H_{0}\delta_{1})\] \[\Sigma_{c}(H_{0})=\cos(2\pi H_{0}\delta_{2})+\cos(2\pi H_{0}\delta_{3})+\cos(2\pi H_{0}\delta_{4})\] are \(H_{0}\)-dependent harmonic oscillations, and \[\delta_{1}=3(1-r_{x})\] \[\delta_{2}=1\] \[\delta_{3}=2-3r_{x}\] \[\delta_{4}=3r_{x}-1\] are Mn-Mn interatomic distances (in r.l.u.) projected onto the \((ab)\)-plane. \(r_{x}=0.6329\) is the \(a\)-axis coordinate of the Mn atom at Wyckoff site \(3e\), taken from the single crystal neutron diffraction refinement at \(T=2\,\mathrm{K}\) in Ref. [18]. \begin{table} \begin{tabular}{c c c c c} \(J_{i}\) & \(n_{i}\) & \(d_{i}\) (Å) & \(\Delta\theta_{ij}\) & \(c_{ij}=\langle\mathbf{S}_{i}\cdot\mathbf{S}_{j}\rangle=\cos\Delta\theta_{ij}\) \\ \hline \(J_{1}\) & \(3\) & \(d_{1}=5.5961\) & \(2\pi/3\) & \(c_{1}=-0.5\) \\ \(J_{2}\) & \(6\) & \(d_{2}=4.8445\) & \(2\pi/3\) & \(c_{1}=-0.5\) \\ \(J_{3}\) & \(3\) & \(d_{\rm i}=7.3235\) & \(2\pi(k+\eta_{\rm T}/3)\) & \(c_{\rm R}=-0.995\) \\ \(J_{4}\) & \(3\) & \(d_{4}=4.7241\) & \(2\pi k\) & \(c_{4}=0.414\) \\ \(J_{5}\) & \(3\) & \(d_{\rm i}=7.3235\) & \(2\pi(k-\eta_{\rm T}/3)\) & \(c_{\rm L}=0.58\) \\ \(J_{6}\) & \(6\) & \(d_{\rm e}=6.7666\) & \(2\pi(k+\eta_{\rm T}/3)\) & \(c_{\rm R}=-0.995\) \\ \(J_{7}\) & \(6\) & \(d_{\rm e}=6.7666\) & \(2\pi(k-\eta_{\rm T}/3)\) & \(c_{\rm L}=0.58\) \\ \end{tabular} \end{table} Table 2: Summary of the exchange interactions \(J_{i}\), with their multiplicity in the unit cell \(n_{i}\), the related bond distance \(d_{i}\), the spin angle difference \(\Delta\theta_{ij}\) and the associated ground-state correlation functions \(c_{ij}\). Subindices i and e refer to the diagonal bond distances internal and external to the triangle of Mn interconnected by \(J_{1}\). Subindices L and R refer to left- and right-handed correlation functions. Figure 10: MACS single crystal inelastic neutron scattering spectrum: spin-wave dispersion along \((0.2,0.2,L)\). The red dashed lines indicate constant-\(\mathbf{Q}\) scans shown in Fig. 11(a)-(c). From Eq. (18), for a specific \(H_{0}\), we can compute the first moment as a function of \(L\), and fit the coefficients \(A(H_{0})\) and \(C(H_{0})\) for a scan along \((H_{0},H_{0},L)\). 
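For orientation (this arithmetic is ours), with \(r_{x}=0.6329\) the projected distances evaluate to \[\delta_{1}=3(1-0.6329)=1.1013,\qquad\delta_{3}=2-3\times 0.6329=0.1013,\qquad\delta_{4}=3\times 0.6329-1=0.8987\] in r.l.u., so \(c(H_{0})\) and \(\Sigma_{c}(H_{0})\) oscillate on distinctly different scales in \(H_{0}\), which is what allows the individual \(\gamma\) parameters to be separated. 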
The next step is to repeat the same process for several \(H_{0}\), and fit the \(\gamma\) parameters in the coefficients \(A\) and \(C\) with Eq. (19) and Eq. (20). Examples of calculations of the first moment for different \(L\), for \(\mathbf{Q}=(0.2,0.2,L)\), are shown in Fig. 11(a)-(c). Figure 11: (a)-(c) Constant-\(\mathbf{Q}\) scans for different \(\mathbf{Q}=(0.2,0.2,L)\), indicated with dashed red lines in Fig. 10. A fit to a double gaussian is shown in red, and the first moment is calculated from trapezoidal integration where the background is removed from the gaussian fit. (d) First moment as a function of \(L\) for \(H_{0}=0.2\), fitted to its theoretical expression (red curve). The red data points correspond to the first moments calculated in the cuts plotted in (a)-(c). (e)-(f) First moment as a function of \(L\) for (e) \(H_{0}=-0.4\) and (f) \(H_{0}=-0.8\), fitted to the theoretical expression in red. These constant-\(\mathbf{Q}\) scans are indicated with red dashed lines in Fig. 10. Most of the \(S(\mathbf{Q},E)\) cuts are well fitted by two gaussians, shown in red in the figures, but to take into account any deviation from a two-mode spectrum, the numerical integration of the first moment from Eq. (15) was performed using trapezoidal integration, with the background removed from these two-gaussian fits. The calculation is performed above 0.2 meV to remove any contribution from elastic scattering, and below 1.6 meV to only capture the contribution from one-magnon scattering. This criterion is somewhat arbitrary, and low-energy scattering can be miscounted; however, because of the energy weighting in Eq. (15), the lowest-energy points contribute little to the first moment (given the low magnetic intensity at low energy), so the differences are not significant within uncertainties. More information concerning the numerical integration and the differences between the methods of integration is given in Appendix A.1. These first moments are calculated for a range of \(L\), as shown in Fig. 11(d), where the first moments computed in Fig. 11(a)-(c) are highlighted in red. For this specific \(H_{0}=0.2\), the \(A\) and \(C\) parameters are obtained from the fit (red curve) to Eq. (18). The \(H_{0}\)-dependence of \(A\) and \(C\) is then obtained by repeating the same procedure for different \(H_{0}\), as illustrated in Fig. 11(e)-(f) for \(H_{0}=-0.4\) and \(H_{0}=-0.8\). A total of 969 first moments \(\langle E\rangle(\mathbf{Q})\) were calculated from the MACS \(E_{\rm f}=2.4\,\)meV dataset for this analysis and are shown as a function of the fitted first moment in Fig. 12(a). Finally, the \(\gamma\) parameters are obtained by fitting \(A\) and \(C\) to Eq. (19) and Eq. (20), as shown in Fig. 12(b)-(c), where the red data points are the coefficients calculated in Fig. 11(d)-(f). We note from Eq. (18) that some remaining background can be included in the computation of \(C\), as well as small contributions from anisotropic terms in the magnetic Hamiltonian, as discussed above. For this reason, the \(H_{0}\)-independent part of Eq. (20) is not fitted to get the parameters \(\gamma_{4}\), \(\gamma_{\rm i}\) and \(\gamma_{\rm e}\); these are rather fitted with Eq. (19), where \(A\) represents the amplitude of the first moment cosine variation. A similar analysis can be performed by considering a fixed \(L_{0}\) and varying along \(H\); it is detailed in Appendix A.2, giving another set of fitted \(\gamma\) parameters. 
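The extraction procedure just described can be summarized in a minimal numerical sketch (ours, not the analysis code used for the paper; the window limits and the flat-background lineshape are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(E, a1, e1, s1, a2, e2, s2, bkg):
    """Two gaussians on a flat background, the lineshape used to model
    the constant-Q scans."""
    g = lambda a, e0, s: a * np.exp(-0.5 * ((E - e0) / s) ** 2)
    return g(a1, e1, s1) + g(a2, e2, s2) + bkg

def first_moment(E, I, p0, window=(0.2, 1.6)):
    """Fit the scan, subtract the fitted background, then integrate
    E * S(E) with the trapezoidal rule over the one-magnon window,
    i.e. Eq. (15) evaluated numerically."""
    popt, _ = curve_fit(two_gauss, E, I, p0=p0)
    signal = I - popt[-1]              # remove fitted flat background
    m = (E >= window[0]) & (E <= window[1])
    return np.trapz(E[m] * signal[m], E[m])
```

Repeating this for every \((H_{0},L)\) point yields the \(\langle E\rangle(\mathbf{Q})\) values that are subsequently fitted to Eq. (18). 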
Then, these two analyses were performed again with the second single crystal dataset, with \(E_{\rm f}=3.7\,\)meV, giving two other sets of \(\gamma\) parameters. This is detailed in Appendix A.3. These fitted \(\gamma\) parameters are shown in Fig. 13, where they have been normalized to \(\gamma_{\rm e}\) obtained from the \(L\)-scan analysis for each dataset, in order to remove any scale issue coming from the absolute normalization process and to directly compare the fitted parameters. We discuss below how we obtain the overall scaling factor that takes the data to units of meV. #### iii.1.3 Powder data As described in Section III.1.2, powder inelastic neutron scattering was also performed on MACS, and the first moment sum rule can also be applied to these data. Figure 12: (a) Measured first moments versus fitted first moments for the \(L\)-scan analysis, for the \(E_{\rm f}=2.4\,\)meV dataset. A total of 969 \(\langle E\rangle(\mathbf{Q})\) were taken into account. (b)-(c) Fitting of coefficients (b) \(A\) and (c) \(C\) giving the \(\gamma\) parameters. The red data points show the values calculated in Fig. 11(d)-(f). For polycrystalline samples, the measured intensity is related to the powder average \(\int\mathrm{d}\Omega_{\mathbf{Q}}\,S(\mathbf{Q},E)/4\pi\) of the dynamic structure factor. This gives the powder averaged first moment sum rule:[43; 45] \[\langle E\rangle(|\mathbf{Q}|)=\int\mathrm{d}E\,ES(|\mathbf{Q}|,E)=-\frac{2}{3}\sum_{i,j}n_{ij}J_{ij}\langle\mathbf{\hat{S}}_{i}\cdot\mathbf{\hat{S}}_{j}\rangle\left\{1-\frac{\sin(|\mathbf{Q}||\mathbf{d}_{ij}|)}{|\mathbf{Q}||\mathbf{d}_{ij}|}\right\} \tag{21}\] As for the single crystal analysis, for a fixed \(Q=|\mathbf{Q}|\) the sine frequency only depends on the bond lengths, which are the same for diagonal exchange paths as listed in Table 2, resulting in five distinct bond distances. We can further simplify the first moment by summing over these distinct bond distances: \[\langle E\rangle(Q)=-\frac{2}{3}\sum_{i}n_{i}\gamma_{i}\left\{1-\frac{\sin(Q|\mathbf{d}_{i}|)}{Q|\mathbf{d}_{i}|}\right\} \tag{22}\] where \(i\in[1,5]\) labels the \(i\)-th bond length and the \(\gamma_{i}\) are defined in Eq. (16). Due to the very close bond distances (especially \(d_{2}=4.8445\,\)Å and \(d_{4}=4.7241\,\)Å) and the relatively small \(Q\)-range probed in the experiment (from \(0.3\) to \(2.05\,\)Å\({}^{-1}\)), we were not able to reliably fit the \(\gamma\) parameters, because of high correlations in the fitting process. However, we can compare the first moment extracted from the powder inelastic neutron scattering with the theoretical one calculated using the \(\gamma\) parameters obtained from the single crystal analysis described above. The first step for extracting the first moment from the experimental data is to define the region of integration for the energy. For the powder, the first moment was integrated for \(E\in[0.3,1.6]\) meV to remove the elastic and two-magnon scattering. This is justified by the spectral weight calculated in the total moment sum rule analysis described in Section III.3. Due to gapless modes in the one-magnon spectrum, around \(0.8\,\)Å\({}^{-1}\) and \(1.4\,\)Å\({}^{-1}\), as shown in Fig. 5(a), the contributions from elastic scattering and one-magnon scattering can mix. 
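Returning to the close bond distances noted above, a short expansion (ours) makes the fitting correlation explicit: for \(Qd\lesssim 1\), \[1-\frac{\sin(Qd)}{Qd}\approx\frac{(Qd)^{2}}{6}-\frac{(Qd)^{4}}{120},\] so two bond lengths as close as \(d_{2}\) and \(d_{4}\) produce nearly proportional contributions to Eq. (22) over a limited \(Q\)-range, leaving \(\gamma_{2}\) and \(\gamma_{4}\) strongly correlated in any fit. 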
This elastic/one-magnon mixing, however, happens at low energies and low intensities, so that the deviations from the actual first moment are small. As for the single crystal analysis, the data were integrated numerically using trapezoidal integration, and the background was removed by fitting with two gaussians. The theoretical \(\gamma\) parameters calculated from the single crystal first moment sum rule analysis were rescaled to match the scale of the first moment observed in the powder experiment, since the powder data have been properly normalized, capturing all the magnetic spectral weight as detailed in Section III.3. The magnetic form factor is also taken into account during this rescaling process. Figure 13: Fitted parameters for the different analyses and datasets, normalized to \(\gamma_{\mathrm{e}}\) obtained in the \(L\)-scan analysis from the \(E_{\mathrm{f}}=2.4\,\mathrm{meV}\) dataset. Mean values (green bars) are calculated by averaging over the four analyses. The theoretical first moment calculated from the \(\gamma\) parameters obtained from the single crystal sum rules analysis is shown in red in Fig. 14 and matches well the first moment computed from the powder experiment. The contribution from each exchange constant, associated to its bond distance, is shown with thin lines (normalized to the powder computed first moment). From this, we can see that the contributions from \(J_{2}\) and \(J_{4}\) to the first moment are close, which makes the fit difficult within the small wavevector range probed during this experiment. ### Determination of exchange constants In the first moment sum rules analysis, we have used the five \(\gamma\) parameters, which are related to the seven exchange constants. \(\gamma_{1}\), \(\gamma_{2}\) and \(\gamma_{4}\) are uniquely related to \(J_{1}\), \(J_{2}\) and \(J_{4}\), which can be deduced from Eqs. (16a-c), leaving \(J_{3}\), \(J_{5}\), \(J_{6}\) and \(J_{7}\). \(\gamma_{\rm i}\) and \(\gamma_{\rm e}\) are related in Eqs. (16d) and (16e) to these four chiral exchange constants. Considering the energy minimization using the experimental propagation vector from diffraction,[18] these four unknown exchange constants are constrained by three linearly independent equations: \[\tan 2\pi k=\sqrt{3}\frac{J_{3}-J_{5}+2(J_{6}-J_{7})}{J_{3}+J_{5}+2(J_{6}+J_{7}-J_{4})} \tag{23a}\] \[\gamma_{\rm i}=J_{3}c_{\rm R}+J_{5}c_{\rm L} \tag{23b}\] \[\gamma_{\rm e}=J_{6}c_{\rm R}+J_{7}c_{\rm L} \tag{23c}\] This analysis presents an ambiguity, given the presence of three equations and four unknown exchange constants. This ambiguity is intrinsic, originating from several exchange parameters corresponding to the same bond distances, which is the basis of the first moment sum rule analysis discussed above. In particular, the exchange constants \(J_{3}\) (\(J_{6}\)) and \(J_{5}\) (\(J_{7}\)) correspond to the same bond distance and only differ by the SSE pathway defined by the crystal chirality. We therefore need further information to close this set of equations, and we seek this through a comparison between calculated and measured single crystal excitation spectra, focusing on the overall bandwidth and excitations near the zone boundary. By calculating the excitation spectra using the linear spin-wave theory software SpinW [51] with a simulated instrumental resolution \(\Delta E\approx 0.1\,\)meV, we can see that the upper magnon branch along \((H,H,0)\) is largely affected by a change of the \(J_{3}\) exchange parameter. 
We note that the calculation was done assuming an untilted structure [cycloidal ground state shown in Fig. 1(d)]; however, the scattering near the top of the single magnon branch was found not to be sensitive to the tilting of the magnetic moments. Analyzing the scattering near the top of the single magnon branch near the magnetic zone boundary therefore provides an independent means of fixing \(J_{3}\). The experimental spectrum from the MACS \(E_{\rm f}=2.4\,\)meV dataset is shown in Fig. 15(a), and compared to calculated spectra for different values of \(J_{3}\) in Fig. 15(b)-(d), where we can observe a significant change of the position and structure of the upper mode. In particular, tuning \(J_{3}\) affects the maximum energy of the one-magnon band and also the splitting of multiple bands at the maximum energy of the single magnon bands, as observed in the \(H\)-scans. Given our experimental data [Fig. 15(a)], and to close off the set of Eqs. (23), we assume no observable splitting of bands in the \(H\)-scans and a maximum single-magnon excitation energy given by experiment. These two observations fix both the absolute value of \(J_{3}\) and an overall scaling factor taking the data to absolute units of meV. For these calculations, \(J_{5}\), \(J_{6}\) and \(J_{7}\) are obtained by fixing \(J_{3}\) in Eq. (23), resulting in a system of three equations and three unknowns, with \(\gamma_{\rm i}\) and \(\gamma_{\rm e}\) the mean values obtained in the single crystal sum rules analysis shown in Fig. 13. We have chosen to fix \(J_{3}\) as it has the least influence on the ordering wavevector, as can be seen by partially differentiating Eq. (23a). Finally, the exchange constants obtained by fixing \(J_{3}\) to give the best agreement are listed in Table 3. The uncertainty associated to \(J_{3}\) is an estimate, based on the instrumental resolution, of how far from \(J_{3}=0.25\,\)meV the band splitting becomes observable. From this estimated error, and the least-squares refinement of \(\gamma_{\rm i}\) and \(\gamma_{\rm e}\), we subsequently compute the uncertainties associated to \(J_{5,6,7}\). The obtained exchange constants are compared with the values calculated from DFT in Ref. [15]. First, we can see that the interactions are overall lower in energy than expected from the DFT calculations. Second, the left-handed interactions \(J_{3}\) and \(J_{6}\) are dominant in comparison to the right-handed \(J_{5}\) and \(J_{7}\), as expected in order to impose the structural chirality of MnSb\({}_{2}\)O\({}_{6}\). Figure 14: (data points) First moment computed from the powder data, as a function of the scattering vector amplitude. (red thick curve) First moment calculated from the \(\gamma\) parameters fitted in the single crystal first moment sum rule analysis. (thin curves) Contributions to the first moment from the different exchange paths, normalized to the powder computed first moment. From mean field theory, the Curie-Weiss temperature can be estimated by summing the exchange constants over the nearest neighbors of a Mn\({}^{2+}\) ion:[52] \[\Theta_{\rm CW}=-\frac{S(S+1)}{3k_{\rm B}}\left[2(J_{1}+J_{3}+J_{4}+J_{5})+4(J_{2}+J_{6}+J_{7})\right] \tag{24}\] We note that this equation is not linearly independent from the system in Eq. (23), and thus cannot be used to uniquely determine the four chiral exchange constants \(J_{3}\), \(J_{5}\), \(J_{6}\), and \(J_{7}\). 
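As a consistency check (ours), inserting the sum-rule values of Table 3 into Eq. (24) — recalling that, by the convention adopted above, the tabulated \(J\) values already contain the factor \(S(S+1)\), so that the explicit prefactor is absorbed — gives \[2(0.10+0.25+0.35+0.07)+4(0.29+0.97+0.03)=6.70\,\mathrm{meV},\qquad\Theta_{\rm CW}\approx-\frac{6.70\,\mathrm{meV}}{3\times 0.0862\,\mathrm{meV/K}}\approx-26\,\mathrm{K},\] in line with the value quoted below. 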
Furthermore, the Curie-Weiss temperatures obtained from magnetic susceptibility on MnSb\({}_{2}\)O\({}_{6}\) powder, \(\Theta_{\rm CW}=-19.6\) K in Ref. [15] and \(\Theta_{\rm CW}=-23\) K in Ref. [17], differ by \(\Delta T=3.4\) K, corresponding to an energy difference of \(\Delta E\approx 0.3\) meV, which is significant given the low energy scale of the exchange constants in MnSb\({}_{2}\)O\({}_{6}\) (see Table 3). This variation in experimentally reported results is justifiable given the choice of the linear regime when fitting the mean-field Curie-Weiss law, and it reflects the experimental uncertainty. For these reasons, we have not used the experimental Curie-Weiss temperatures as a hard constraint on the exchange constants. Instead, we can compute a posteriori \(\Theta_{\rm CW}=-26(1)\) K, which agrees reasonably with the measured values, given the experimental variations. ### Comparison to spin-wave theory In the previous sections we have applied the first moment sum rule to extract the complex series of Heisenberg exchange constants in MnSb\({}_{2}\)O\({}_{6}\). In this section we compare these results to a mean-field linear spin-wave theory, both to compare results and to test for the stability of the ground state magnetic structure. We use the Green's function formalism for this. While this technique for calculating magnetic excitations is more versatile in cases where the low-energy response is determined by a series of single-ion states (such as the case in rare-earths, or in the presence of spin-orbit coupling like in, for example, Co\({}^{2+}\)[53] or V\({}^{3+}\)[54] based compounds), it is also useful to test for the stability of harmonic long-wavelength magnetic excitations with changes in the local magnetic environment. Figure 15: Spin-wave dispersion along \((H,H,0)\) for: (a) MACS single crystal inelastic neutron scattering spectrum. (b)-(d) Inelastic neutron scattering spectrum calculated from linear spin-wave theory by fixing different \(J_{3}\) values. The other parameters for these calculations are listed in Table 5. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \(J_{1}\) & \(J_{2}\) & \(J_{3}\) & \(J_{4}\) & \(J_{5}\) & \(J_{6}\) & \(J_{7}\) \\ \hline DFT[15] & 0.77 & 1.47 & 2.2 & 1.16 & 0.4 & 1.94 & 0.4 \\ Sum rules & 0.10(4) & 0.29(2) & 0.25(2) & 0.35(5) & 0.07(8) & 0.97(3) & 0.03(5) \\ Refined & 0.10 & 0.29 & 0.25 & **0.25** & 0.07 & 0.97 & **-0.023** \\ \hline \hline \end{tabular} \end{table} Table 3: Symmetric \(J\) exchange constants obtained by DFT calculations[15] and the mean values from the four single crystal sum rules analyses (normalized to \(\gamma_{\rm e}\) and then rescaled to experimental data, in meV; note that all values of \(J\) in the table are multiplied by \(S(S+1)\) with \(S=5/2\)). The refined parameters using the Green's function approach are highlighted in bold. In this section we first briefly outline the use of the Green's function technique and then apply it to calculate the spin excitation spectrum, comparing the sum rule results presented above to experiment, then refining the results. We then test the stability of the proposed magnetic structure and interactions based on the series of exchange constants extracted with the first moment sum rule and the refined values. In particular, we discuss the stability of long-wavelength magnetic fluctuations for tilted helicoidal structures. #### iv.2.1 Green's functions on a rotating frame The basic technique for applying the Green's function approach has been outlined in several of our previous papers. 
The technique has been applied to the collinear systems CoO,[53] in the presence of spin-orbit coupling with Co\({}^{2+}\) (\(S=\frac{3}{2}\), \(l_{\text{eff}}\)=1) ions, and CaFe\({}_{2}\)O\({}_{4}\),[55] based on a spin-only ground state of Fe\({}^{3+}\) (\(S=\frac{5}{2}\)) ions. We then recently extended this methodology to the noncollinear magnetic structure of RbFe\({}^{2+}\)Fe\({}^{3+}\)F\({}_{6}\), which involved coupled spin-only Fe\({}^{3+}\) (\(S=\frac{5}{2}\)) and orbitally degenerate Fe\({}^{2+}\) (\(S=2\), \(l_{\text{eff}}\)=1) ions. In terms of MnSb\({}_{2}\)O\({}_{6}\), where only a spin degree of freedom exists (Mn\({}^{2+}\) with \(S=\frac{5}{2}\)), we quote only the key results here and refer the reader to Ref. [56] for further details. The methodology is to take the Green's function results from the collinear cases and transform to a local rotating frame of reference, for use in incommensurate magnets like MnSb\({}_{2}\)O\({}_{6}\). The neutron scattering cross section is proportional to the dynamical structure factor \(S(\mathbf{Q},\omega)\), which is related to the Green's response function \(G(\mathbf{Q},\omega)\) via the fluctuation-dissipation theorem: \[S(\mathbf{Q},\omega)=-\frac{1}{\pi}[n(\omega)+1]\operatorname{Im}G(\mathbf{Q},\omega) \tag{25}\] where \(n(\omega)\) is the Bose factor. The Green's function, in the laboratory frame, is defined here as \[G^{\alpha\beta}_{\gamma\dot{\gamma}}(i^{\prime}j^{\prime},t)=-i\Theta(t)\langle[\hat{S}^{\alpha}_{i^{\prime}\dot{\gamma}}(t),\hat{S}^{\beta}_{j^{\prime}\dot{\gamma}^{\prime}}(0)]\rangle.\] The three sets of indices in this definition of the Green's function, used throughout the remaining discussion in this paper, are summarized in Table 4. Following previous methods applying the RPA (random phase approximation),[57; 58] we take an interaction Hamiltonian between Mn\({}^{2+}\) (\(S=\frac{5}{2}\)) ions of the form \(\mathcal{H}_{\text{int}}=\frac{1}{2}\sum_{ij}^{\gamma\gamma^{\prime}}\mathcal{J}_{ij}^{\gamma\gamma^{\prime}}\mathbf{S}_{i\gamma}\cdot\mathbf{S}_{j\gamma^{\prime}}\), where \(\mathcal{J}_{ij}^{\gamma\gamma^{\prime}}\) is a symmetric Heisenberg exchange parameter. Note that we have changed notation here from Eq. (15) and written the symmetric exchange \(J_{1\to 7}\), discussed above in the context of the first moment sum rule, as a diagonal matrix \(\mathcal{J}_{ij}^{\gamma\gamma^{\prime}}\), which we use below when moving to a rotating frame as required for incommensurate magnets. Note also that the factor of \(\frac{1}{2}\) in \(\mathcal{H}_{\text{int}}\) originates from the application of mean field theory, as discussed previously in Refs. [53; 55; 56; 57] and [59]. As shown in Ref. [56], we apply a mean field decoupling and convert to a local rotating frame by defining rotation matrices \[\mathbf{S}_{i\gamma}=R_{i\gamma}\mathbf{\tilde{S}}_{i\gamma},\] with \(\mathbf{\tilde{S}}_{i\gamma}\) the spin operators in the rotating frame. 
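As a small illustration of Eq. (25) (ours, assuming an imaginary response already evaluated on an energy grid), the structure factor follows from the imaginary part of the response via the Bose factor:

```python
import numpy as np

def s_qw(im_g, w, T):
    """Fluctuation-dissipation theorem, Eq. (25): turn Im G(Q, w) into
    S(Q, w). `w` is the energy transfer in meV (w > 0 assumed here) and
    `T` the temperature in K."""
    kB = 0.08617                        # Boltzmann constant in meV/K
    n = 1.0 / np.expm1(w / (kB * T))    # Bose occupation factor n(w)
    return -(n + 1.0) / np.pi * im_g
```

At the measurement temperature of 1.4 K and energies above a few tenths of a meV, \(n(\omega)\) is already small, so the measured intensity is essentially the \(T=0\) response. 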
As discussed in Ref. [56], after transforming to \(\mathbf{Q}\) and \(\omega\) space the Green's function equation of motion becomes \[\tilde{G}^{\alpha\beta}_{\gamma\dot{\gamma}^{\prime}}(\mathbf{Q},\omega)=g^{\alpha\beta}_{\gamma\dot{\gamma}^{\prime}}(\omega)\delta_{\dot{\gamma}\dot{\gamma}^{\prime}}+\sum_{\gamma^{\prime}}^{\mu\nu}g^{\alpha\mu}_{\gamma\dot{\gamma}}(\omega)\mathcal{\tilde{J}}^{\mu\nu}_{\dot{\gamma}\gamma^{\prime}}(\mathbf{Q})\mathcal{\tilde{G}}^{\nu\beta}_{\gamma^{\prime}\dot{\gamma}^{\prime}}(\mathbf{Q},\omega)\] where the Fourier transform of the exchange interaction in the rotating frame is \[\underline{\underline{\tilde{\mathcal{J}}}}(\mathbf{Q})=X^{\prime}\Big{[}\underline{\underline{\mathcal{J}}}(\mathbf{Q}+\mathbf{\tilde{q}})T_{3N}+\underline{\underline{\mathcal{J}}}(\mathbf{Q}-\mathbf{\tilde{q}})T_{3N}^{*}+\underline{\underline{\mathcal{J}}}(\mathbf{Q})(\mathbb{I}_{3}\otimes\mathbf{nn}^{T})\Big{]}X \tag{26a}\] with the matrices \(X\), \(T_{3N}\) and the rotation-axis vector \(\mathbf{n}\) defined in Ref. [56]. The single-site Green's function \(g(\omega)\) is evaluated as a sum over transitions to and from the ground state, as appropriate for magnon excitations at zero temperature. 
The rotation back to the lab frame can be achieved by \[\underline{\underline{G}}(\mathbf{Q},\omega)=D_{\mathbf{Q}}(\mathbb{I}_{3}\otimes\mathbf{nn}^{T})X\underline{\underline{\tilde{G}}}(\mathbf{Q},\omega)X^{\prime}(\mathbb{I}_{3}\otimes\mathbf{nn}^{T})D_{-\mathbf{Q}}+D_{\mathbf{Q}}T_{3N}^{*}X\underline{\underline{\tilde{G}}}(\mathbf{Q}+\mathbf{\tilde{q}},\omega)X^{\prime}T_{3N}^{\prime}D_{-\mathbf{Q}}+D_{\mathbf{Q}}T_{3N}X\underline{\underline{\tilde{G}}}(\mathbf{Q}-\mathbf{\tilde{q}},\omega)X^{\prime}T_{3N}^{*\prime}D_{-\mathbf{Q}}\] where the matrix \(D_{\mathbf{Q}}=\delta_{\gamma\gamma^{\prime}}e^{i\mathbf{Q}\cdot\delta_{\gamma}}\otimes\mathbb{I}_{3}\) accounts for the interference between ions in the unit cell. Finally, the neutron scattering cross section is \[S(\mathbf{Q},\omega)=g_{L}^{2}f^{2}(\mathbf{Q})\sum_{\alpha\beta}(\delta_{\alpha\beta}-\hat{q}_{\alpha}\hat{q}_{\beta})S^{\alpha\beta}(\mathbf{Q},\omega),\] where the partial dynamical structure factor \(S^{\alpha\beta}(\mathbf{Q},\omega)\) is proportional to the imaginary part of the Green's function [Eq. (25)], \(g_{L}\) is the Landé g-factor, \(f(\mathbf{Q})\) is the Mn\({}^{2+}\) magnetic form factor, and the polarization factor selects the components perpendicular to the momentum transfer. We now apply this theory to MnSb\({}_{2}\)O\({}_{6}\), which comprises a triangular motif of coupled Mn\({}^{2+}\) (\(3d^{5}\)) ions. In an intermediate octahedral field, the single-ion ground state of Mn\({}^{2+}\) is \({}^{6}\)S (\(S=5/2\), \(L\approx 0\)) and the orbital moment is quenched. As a result, the effects of spin-orbit coupling and crystallographic distortions are small and may be neglected. The single-ion Hamiltonian is thus remarkably simple and consists solely of the molecular mean field created by the magnetic coupling to neighboring ions, which breaks time reversal symmetry, \(\mathcal{H}_{\text{SI}}=h_{\text{MF}}\hat{S}_{z}\). This "Zeeman-like" term acts to split the 6-fold degenerate \(|S=5/2,m\rangle\) states. At low temperatures (as illustrated in Fig. 7 of Ref. [55]), when only the ground state is populated, only one transition is allowed under the constraints of the dipole selection rules of neutron scattering. We note that this approach is equivalent to semi-classical linear spin-wave theory. Applying the Green's function calculation with an untilted magnetic structure, we derive the predicted neutron scattering excitation spectrum in Fig. 16(a)-(b). This calculation is done with no anisotropic terms; symmetric exchange is expected to be dominant here owing to the lack of an orbital degree of freedom for Mn\({}^{2+}\). The general results are in good qualitative agreement with experiment; however, the calculated zone-boundary excitations clearly disagree with experiment, with the calculation predicting lower energy excitations than observed at the zone boundary. To address this, there are two noteworthy points in our first moment sum rule analysis. First, on inspection of Fig. 13, the value of \(\gamma_{4}\), which fixes \(J_{4}\), may be dominated by the \(H\)-scan analysis performed with \(E_{\rm f}=3.7\,\)meV. In comparison to the iron-based langasite, this value of \(J_{4}\) is also considerably larger in MnSb\({}_{2}\)O\({}_{6}\).[13] We therefore consider in Fig. 16(c)-(d) a case where this value is lowered. 
To ensure the same ordering wavevector, we correspondingly tune \(J_{7}\), given the relatively large error bar in our analysis and also the large sensitivity of the magnetic ordering wavevector to this exchange constant [Eq. (23a)]. After refining \(J_{4,7}\) (to within one to two sigma of the calculated error bars from the first moment sum rule analysis) we obtain a good description of the data (both along the \(L\) and \(H\) directions), with sum rule and refined exchange parameters listed in Table 3 (refined values from this step highlighted in bold). #### iv.2.3 Stability analysis Having derived a set of symmetric exchange constants from the first moment sum rule and written down a response function theory for the spin waves in terms of Green's functions, we discuss the stability of the ground state fixed by the magnetic structure. Two magnetic structures have been proposed in the literature: one involving a tilting of the plane of the helicoid at an angle away from the \(c\)-axis [Ref. [16] and Fig. 1(e)] and one without tilting [Ref. [15] and Fig. 1(d)]. While initially it was proposed that the observed polar domain switching in MnSb\({}_{2}\)O\({}_{6}\) requires a tilted structure, other work based on neutron diffraction has suggested that it is not a requirement. While in a previous paper we have argued for the existence of an untilted structure, the goodness of fit to the diffraction data was not markedly worse for the tilted case, making the results arguably ambiguous.[18] Here we evaluate the stability of the long-wavelength magnetic excitations as a function of tilting the vertical axis of the spin rotation plane, given our exchange constants derived from the first moment sum rule. We emphasize that the exchange constants derived above from the first moment sum rule depend only on the relative orientation of neighboring spins and are independent of whether the static magnetic structure is tilted or not. Given the good description of the data by a symmetric-only exchange model, we test here how stable these excitations are when the static magnetic structure is gradually tilted. The Green's function calculation predicts the energy and momentum values of stable harmonic excitations through the imaginary part of the response, given a magnetic ground state and a set of symmetric exchange constants. In the first moment analysis presented above, the exchange constants are derived based on the relative orientation of the magnetic moments and do not depend on global details like the tilting of the overall magnetic structure. Our Green's function analysis, however, does require this tilting, as the magnetic ground state determines the local molecular field on each site. Given that the Green's function approach predicts stable harmonic excitations as a function of momentum and energy, in this section we search for stable long-wavelength excitations as a function of the tilting of the spin rotation plane, given our exchange parameters derived from the first moment sum rule. We focus on \(L\)-scans, as calculations of the excitation spectrum along \(H\) were found not to change noticeably with tilting of the spin rotation plane away from the \(c\)-axis over the range 0-15\({}^{\circ}\). Figure 17: Calculations investigating the stability of long-wavelength spin-waves as a function of tilting the spin rotation plane away from the \(c\)-axis. Calculations of the neutron response for tilts of \(\theta=15^{\circ}\) (a), \(10^{\circ}\) (b), and \(0^{\circ}\) (c) are displayed, with low-energy, long-wavelength excitations only stable for tilts of \(\theta\sim 0^{\circ}\). This is further illustrated in panels (d)-(e), which display the response at low energies as a function of the tilt angle of the spin rotation plane away from the \(c\)-axis. We emphasize that these calculations are done for a magnetic Hamiltonian with _symmetric-only_ exchange constants. No anisotropic terms are included in the magnetic Hamiltonian, as discussed in the main text. 
We note that such \(H\)-scans were used above to fix one of the exchange parameters and the overall calibration constant taking the data to absolute units of meV. The two assumptions behind that step, namely the energy value of the top of the single-magnon band and the splitting, were not found to observably change with tilting in our calculations. In Fig. 17, we search for long-wavelength excitations, given our sum rule exchange constants, as a function of tilting of the vertical main axis of the spin rotation plane away from the \(c\)-axis at an angle \(\theta\). The long-wavelength excitations (\(q\to 0\)) are calculated for several tilt angles and shown in Fig. 17(a)-(c), based on the set of parameters derived from the sum rule analysis. Given that the sum rules and the fixing of the value of \(J_{3}\) described above are independent of the tilting of the static magnetic moments, in the stability calculations described here we fix the exchange constants to these determined values and vary the long-range static magnetic structure. On increased tilting, the exchange parameters derived from the sum rules show no stable long-wavelength excitations, indicating that the derived exchange parameters combined with a tilted helicoid are unstable. This is further displayed in Fig. 17(d)-(e), which plot calculated constant energy cuts (integrating calculated data below 0.02 meV) as a function of tilting of the cycloid away from the \(c\)-axis, for both the exchange constants derived from the sum rules and the refined values discussed above. In both cases, increased tilting of the helicoid results in unstable long-wavelength excitations. Based on this analysis, we suggest that the derived exchange constants are consistent with an untilted (\(\theta=0\)) magnetic structure. However, we emphasize that this analysis is based only on a Hamiltonian with _symmetric-only_ exchange constants, as expected given the high-spin value of Mn\({}^{2+}\). We cannot rule out the possibility of small anisotropic or more complex magnetic exchange terms that may arise from the distorted framework surrounding the magnetic ions. In Ref. [18] we have shown, with diffraction under a magnetic field, the possibility of manipulating the spin structure in MnSb\({}_{2}\)O\({}_{6}\), and that the appearance of electric polarization does not require a tilted structure as raised in Ref. [16]. The stability analysis above is therefore consistent with our neutron diffraction analysis. The elastic scattering outlined in our previous paper and the spin excitations can both be modeled and understood in terms of a symmetric-only exchange model on an untilted structure. ## IV Conclusions In this paper, we have studied the structurally chiral polar magnet MnSb\({}_{2}\)O\({}_{6}\), with magnetic interactions described by seven symmetric Heisenberg exchanges in the magnetic Hamiltonian. We have presented a method using the first moment sum rule, and have applied it to extract the exchange constants from multiplexed neutron data. 
This method only depends on the correlators (angles) between neighboring spins and not on the tilting of the overall spin rotation plane. Using Green's functions on a rotating frame, we have reproduced the spin-wave spectra, which are in good agreement with the measured ones, and discussed refined values. Finally, we investigated the stability of the magnetic structure in terms of the long-wavelength magnetic excitations present at low energies, and suggest that the pure cycloid is favored in terms of stability given the exchange constants derived from the first moment sum rule. ###### Acknowledgements. We would like to thank M. Georgopoulou for helpful discussions. We thank the Carnegie Trust for the Universities of Scotland, the EPSRC, and the STFC for financial support. S.W.C. was supported by the DOE under Grant No. DOE: DE-FG02-07ER46382. Access to MACS was provided by the Center for High Resolution Neutron Scattering, a partnership between the National Institute of Standards and Technology and the National Science Foundation under Agreement No. DMR-1508249. ## Appendix A Single-crystal sum rules analysis ### Integration methods for first moment As the first moments are computed by numerical integration, it is important to make sure that the integration methods do not have a significant impact on the results of the analysis. This section outlines five integration methods; the resulting \(\gamma\) parameters are compared in Fig. 18, following an \(L\)-scan analysis of the \(E_{\text{f}}=2.4\,\text{meV}\) dataset. In Section III.4.2, the constant-\(\mathbf{Q}\) scans were fitted to two gaussians as shown in Fig. 11, and the first moments were then calculated by numerical integration with a trapezoidal rule, with the background removed from the fit to a two-gaussian model. The results are shown with bars (C). The first moments can also be computed without removing the background, resulting in bars (B). Alternatively, they can be computed analytically using the fit parameters of the two-gaussian model, shown with bars (A) in Fig. 18. In order to avoid the mixing of elastic scattering and one-magnon scattering, the elastic line can be fitted with a third gaussian, while the data above \(E=0.2\,\text{meV}\) are fitted with two gaussians. The first moments can then again be calculated analytically with the fitted parameters of these two gaussians over the appropriate energy range, as shown in bars (D). Finally, the trapezoidal integration can be performed, removing the background from this three-gaussian model, as shown in bars (E). All the fitted parameters agree within uncertainties. We nevertheless chose to adopt the trapezoidal integration, removing the background from the two-gaussian fit, to deal with any deviation from a two-mode spin-wave spectrum. ### \(H\)-scan In Section III.4.2, we have described the first moment sum rule analysis of the single crystal data, fixing some \(H_{0}\) and calculating the first moment as a function of \(L\). We can perform the same analysis considering \(\mathbf{Q}=(H,H,L_{0})\), with \(H\) varying for a chosen \(L_{0}\) (\(H\)-scan). For each interaction indexed by spins \(i\) and \(j\), the corresponding term in the cosine in Eq. (15) can now be written: \[\mathbf{Q}\cdot\mathbf{d}_{ij}=2\pi H(d_{ij,x}+d_{ij,y})+2\pi L_{0}d_{ij,z} \tag{30}\] where the distances are expressed in lattice units, and the scattering vector in reciprocal lattice units. 
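To see explicitly how the \(L_{0}\)-dependence factors out (an intermediate step we add here for clarity), each cosine in Eq. (15) expands as \[\cos(\mathbf{Q}\cdot\mathbf{d}_{ij})=\cos\big{(}2\pi H(d_{ij,x}+d_{ij,y})\big{)}\cos(2\pi L_{0}d_{ij,z})-\sin\big{(}2\pi H(d_{ij,x}+d_{ij,y})\big{)}\sin(2\pi L_{0}d_{ij,z}),\] and summing over bonds related by inversion (\(\mathbf{d}_{ij}\to-\mathbf{d}_{ij}\)) cancels the sine terms, leaving only products of \(\cos(2\pi\delta H)\) and \(\cos(2\pi L_{0}d_{ij,z})\). 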
Similarly to Eq. (18), a general formula for the first moment can then be derived for a fixed \(L_{0}\), using trigonometric identities: \[\langle E\rangle(H,L_{0})=A_{\mathrm{i}}(L_{0})\cos(2\pi\delta_{1}H)+A_{\mathrm{e}}(L_{0})[\cos(2\pi\delta_{2}H)+\cos(2\pi\delta_{3}H)+\cos(2\pi\delta_{4}H)]+C(L_{0}) \tag{31}\] where we now have three \(L_{0}\)-dependent functions \(A_{\mathrm{i}}\), \(A_{\mathrm{e}}\) and \(C\), given by: \[A_{\mathrm{i}}(L_{0})=\frac{4}{3}[\gamma_{1}+\gamma_{\mathrm{i}}\cos(2\pi L_{0})] \tag{32}\] \[A_{\mathrm{e}}(L_{0})=\frac{4}{3}[\gamma_{2}+\gamma_{\mathrm{e}}\cos(2\pi L_{0})] \tag{33}\] \[C(L_{0})=-\frac{2}{3}[2\gamma_{1}+6\gamma_{2}+3\gamma_{\mathrm{i}}+3\gamma_{4}+6\gamma_{\mathrm{e}}]+\frac{2}{3}\cos(2\pi L_{0})(\gamma_{\mathrm{i}}+3\gamma_{4}) \tag{34}\] Fig. 19(a)-(c) shows some constant-\(\mathbf{Q}\) cuts for \((H,H,L_{0}=0.4)\) and their fits to two gaussians. The first moments are again calculated numerically using trapezoidal integration, with the background removed from the two-gaussian fit. These computed first moments are the red data points in Fig. 19(d), which also shows the \(H\)-dependence of the computed first moment and the fit to Eq. (31) used to extract \(A_{\mathrm{i}}\), \(A_{\mathrm{e}}\) and \(C\). This operation is repeated for several \(L_{0}\), as shown in Fig. 19(e)-(f). In total, 999 first moments \(\langle E\rangle(\mathbf{Q})\) are computed for this analysis of the \(E_{\mathrm{f}}=2.4\,\mathrm{meV}\) dataset, and plotted against the fitted first moments in Fig. 20(a). The \(\gamma\) parameters are then obtained by fitting the measured \(A_{\mathrm{i}}\), \(A_{\mathrm{e}}\) and \(C\) to their theoretical values, as shown in Fig. 20(b)-(d), where the red data points are the coefficients calculated in Fig. 19(d)-(f). As for the \(L\)-scan analysis, some remaining background can be included in the computation of \(C\). For this reason, the \(L_{0}\)-independent part of Eq. (34), which corresponds to an overall constant in the first moment sum rule, is not used to obtain the \(\gamma\) parameters and hence the exchange constants \(J_{i}\). ### Second dataset results The single crystal first moment sum rule analysis was repeated on the second dataset measured on MACS with \(E_{\rm f}=3.7\,\)meV. The results of the \(L\)-scan (469 computed first moments) and \(H\)-scan (487 computed first moments) analyses are shown in Fig. 21 and Fig. 22, respectively. ### Parameters for Figure 15 The sum rule analysis had an ambiguity in the set of equations, resulting from the fact that several exchange constants correspond to the same bond distance. We therefore needed to fix one exchange constant through a comparison to the single crystal dispersion, as discussed in the main text. This qualitative analysis is described in Fig. 15. The parameters for the calculations are listed in Table 5.
2305.12167
The Case Against Explainability
As artificial intelligence (AI) becomes more prevalent there is a growing demand from regulators to accompany decisions made by such systems with explanations. However, a persistent gap exists between the need to execute a meaningful right to explanation vs. the ability of Machine Learning systems to deliver on such a legal requirement. The regulatory appeal towards "a right to explanation" of AI systems can be attributed to the significant role of explanations, part of the notion called reason-giving, in law. Therefore, in this work we examine reason-giving's purposes in law to analyze whether reasons provided by end-user Explainability can adequately fulfill them. We find that reason-giving's legal purposes include: (a) making a better and more just decision, (b) facilitating due-process, (c) authenticating human agency, and (d) enhancing the decision makers' authority. Using this methodology, we demonstrate end-user Explainability's inadequacy to fulfil reason-giving's role in law, given reason-giving's functions rely on its impact over a human decision maker. Thus, end-user Explainability fails, or is unsuitable, to fulfil the first, second and third legal function. In contrast we find that end-user Explainability excels in the fourth function, a quality which raises serious risks considering recent end-user Explainability research trends, Large Language Models' capabilities, and the ability to manipulate end-users by both humans and machines. Hence, we suggest that in some cases the right to explanation of AI systems could bring more harm than good to end users. Accordingly, this study carries some important policy ramifications, as it calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability and a right to explanation of AI systems.
Hofit Wasserman Rozen, Niva Elkin-Koren, Ran Gilad-Bachrach
2023-05-20T10:56:19Z
http://arxiv.org/abs/2305.12167v1
# The Case Against Explainability

###### Abstract

As artificial intelligence (AI) becomes more prevalent, there is a growing demand from regulators to accompany decisions made by such systems with explanations. However, a persistent gap exists between the need to execute a meaningful right to explanation vs. the ability of Machine Learning systems to deliver on such a legal requirement. The regulatory appeal towards "a right to explanation" of AI systems can be attributed to the significant role of explanations, part of the notion called reason-giving, in law. Therefore, in this work we examine reason-giving's purposes in law to analyze whether reasons provided by end-user Explainability can adequately fulfil them. We find that reason-giving's legal purposes include: (a) making a better and more just decision, (b) facilitating due-process, (c) authenticating human agency, and (d) enhancing the decision makers' authority. Using this methodology, we demonstrate end-user Explainability's inadequacy to fulfil reason-giving's role in law, given that reason-giving's functions rely on its impact on a human decision maker. Thus, end-user Explainability fails, or is unsuitable, to fulfil the first, second and third legal function. In contrast, we find that end-user Explainability excels in the fourth function, a quality which raises serious risks considering recent end-user Explainability research trends, Large Language Models' capabilities, and the ability of both humans and machines to manipulate end-users. Hence, we suggest that in some cases the right to explanation of AI systems could bring more harm than good to end users. Accordingly, this study carries some important policy ramifications, as it calls upon regulators and Machine Learning practitioners to reconsider the widespread pursuit of end-user Explainability and a right to explanation of AI systems.

## 1 Introduction

As AI systems increasingly take up the role of assisting, and at times replacing, human decision makers (Kaminski and Urban, 2021), there is a growing public call for establishing a right to receive an explanation of outcomes generated by automated decision-making processes. Purportedly originating from the General Data Protection Regulation (GDPR) (Regulation, 2016) and known as the "legal right to explanation" (Goodman and Flaxman, 2017), this right is often portrayed as one tool in the regulatory toolkit for creating, deploying, and monitoring ethical and accountable AI systems, mitigating the potential breach of fundamental principles of the rule of law, such as transparency and accountability (Hildebrandt, 2020), and protecting human rights. Simultaneously, and in correlation with the introduction of more complex AI, Explainability has triggered a growing interest within the Machine Learning (ML) community. Tasked with providing explanations for complex predictions, and motivated by an incentive to cultivate trust in AI systems (Jacovi et al., 2021), ML developers have embraced Explainability (XAI) to develop end-user explanations. This work inquires whether, and to what extent, end-user Explainability can satisfy the requirements of the legal right to explanation of AI systems. Embracing a "court-like" setting, Section 2 makes the case in favor of Explainability, addressing its organic evolvement, its usefulness for ML professionals and its often-cited potential contribution to protecting human rights. Next, Section 3 and Section 4 lay down the case against end-user Explainability.
Accordingly, Section 3 sets the legal backdrop by providing a broad-brush analysis of the role of explanations in the legal domain. This analysis focuses on three questions: (1) _How_ is "explanation" defined in law, linking it to the notion of "reason-giving"; (2) _Where_ is reason-giving used in law, briefly surveying its appearances in public, private and international law; and, most importantly, (3) _Why_ is reason-giving used in law, meaning what are the underlying functions at the heart of the ubiquitous legal practice of reason-giving? The analysis of reason-giving's legal functions in Section 3 uncovers its four main purposes: (a) promoting the making of a better and more just decision, (b) facilitating due-process, (c) authenticating the human agency of both the decision subject and the decision maker, and (d) enhancing the decision makers' authority, by promoting legitimacy, accountability and providing guidance. As an interim conclusion, Section 3 highlights the fact that reason-giving is a mechanism aimed at influencing the human decision maker in various forms, subsequently restraining human rationale and human judgement.

Building upon this methodology, Section 4 continues to make the case against Explainability. It utilizes reason-giving's deconstruction in Section 3 to analyze the extent to which end-user Explainability is capable of serving the roles assigned to reason-giving in law. It first examines Explainability's potential to impact the decision-making process itself, finding it slim given that a human decision-maker has been replaced by a prediction-making machine. Thus, reason-giving's function to promote the making of a better and more just decision is largely unfulfilled. Next, the analysis questions the ability of machine-generated explanations to support human agency and respect human autonomy. Then, turning to the function of facilitating due-process rights, the analysis highlights Explainability's challenge to produce what is typically considered in law "an explanation". Having ruled out end-user Explainability's ability to serve three of reason-giving's functions in law, we do find that Explainability is compatible with fulfilling reason-giving's fourth function, i.e., enhancing the decision makers' authority. However, we observe that recent Explainable AI (XAI) research trajectories and Large Language Models' (LLMs) emerging capabilities raise serious challenges to the reliability of Explainability's outcomes and create a potential to manipulate end-users by humans and machines alike. As a final conclusion, the study outlines some policy implications. The gap between a legal right to explanation and the technological field called Explainability challenges the usefulness of Explainability as a reason-giving tool in an end-user AI context. Policymakers and ML practitioners should thus reconsider reliance on end-user Explainability for achieving the societal goals of reason-giving and explore alternatives.

## 2 Making the Case for Explainable Artificial Intelligence

Prior to the legal and regulative interest in a "right to explanation" of AI systems, Explainability was developed by the ML community as a means to contend with one of the most publicly known features of AI systems: its increasing opacity or, as more commonly known, the "black box" quality.
Although not all AI systems are opaque, and there are several "degrees" of opaqueness, this quality has nevertheless become a meaningful challenge due to the introduction of Deep Learning Networks, some of them using billions of parameters. In sync with the rising complexity of data science, efforts began to offer means of addressing the opacity challenge, culminating in several concepts, methods, and tools, Explainability (XAI) being one of them (Nicholas, 2019). The term "Explainability" dates back to the 80s and 90s (Miller et al., 2017). It was developed in order to produce good-quality (robust) systems, which requires an understanding of their inner workings, quality control, bug solving, and continuous learning and progress towards the next generation of technology. At its core, XAI "seeks to bring clarity to how specific ML models work" (Laato et al., 2022), and the use of Explainability is often linked to context and relevancy considerations (Rudin, 2019; Molnar, 2020; Arrieta et al., 2020). This fact highlights how the progress made in easing the opacity challenge evolved from real professional challenges: finding out how the system works in order to improve it, fix it, extract takeaways from mistakes and strive to simplify the process (Arrieta et al., 2020). This core necessity sets the tone for the various technical solutions which were offered, and are still being continuously developed, to put forward explanations for automated systems, housing a vast amount of research work at the cutting edge of AI today (Biran and Cotton, 2017). Additionally, industry has also acknowledged the problem opacity creates for the general public, asked to be subject to major life-changing and at times high-stakes decisions construed by machines. Clearing out some of the mist around AI systems is often regarded as a step towards creating public trust in this innovative technology (Jacovi et al., 2021). This approach was largely facilitated by the increased focus of the HCI (Human-Computer Interaction) field on extending the definition of human actors interacting with the machine. XAI was embraced by the HCI field at the intersection with the ML community, in a mission to make computational processes clearer to humans (Shniederman et al., 2016). Accordingly, explanation in the field of computer science has been understood as "making it possible for a human being (designer, user, affected person, etc.) to understand a result or the whole system" (Malgieri, 2021). This comprehensive definition represents perhaps the turning trajectory of XAI towards including the end user of AI systems, re-calibrated in correlation with the increasing deployment of these systems in domains already regulated by existing laws. But it also highlights the fact that explanations for AI systems are often mentioned in the context of the mission to promote trust, or trustworthiness, in AI (Laato et al., 2022). The absence of an ability to explain decisions and actions by AI black-boxes to human users has recently been referred to as a key limitation of today's intelligent systems, whereby the "...lack of Explainability hampers our capacity to fully trust AI systems" (Mehta et al., 2022). And trust, it has been argued, promotes users' utilization of models, both by relying on their predictions and by accepting their deployment (Ribeiro et al., 2016).
Against this technological backdrop, Explainability has been enlisted to secure a legal mechanism, the so-called "right to explanation", which regulators sought for the protection of society from potential AI harms. Regulation of AI systems grasped the vast potential of automated systems on the one hand, but expressed a genuine concern towards safeguarding human rights on the other (Council, 2019). Being preoccupied with the purported "black-box" quality of AI systems, regulation sought transparency-enhancing mechanisms to address those concerns. In the legal domain, transparency is often linked to fairness, as a means to assure the accountability of decision makers (Kaminski, 2021). Faithful to this transparency ethos, "...the majority of discourse around understanding machine learning models has seen the proper task as opening the black box and explaining what is inside" (Selbst and Barocas, 2018). Accordingly, explanations for AI systems are being promoted in service of multiple regulatory objectives aimed at enhancing transparency. Thus, explanation-giving for AI systems has been mentioned as a means for achieving AI accountability (Doshi-Velez et al., 2017; Smith-Renner et al., 2020; Gillis and Simons, 2019), detecting discrimination (Brikan, 2019), revealing bias issues (Melsion et al., 2021), and ensuring fairness in AI systems (Dodge et al., 2019). With regard to governmental use of AI, explanation-giving is presented as a way to accommodate due-process requirements and achieve good governance (Crawford and Schultz, 2014). Similarly, it is also considered essential in order to allow for a meaningful contestation right towards automated decisions (Kaminski and Urban, 2021). This extensive list highlights the diverse groups, interests, and contexts for which a right to explanation of AI systems is considered a desired feature, and demonstrates the large extent of reliance on transparency in general, and explanations in particular, by regulators, legal practitioners, and legal scholars. This insight prompts the following question: why did regulators and legal practitioners turn to the tool of explanation-giving in service of protecting humanity against AI harms? The answer lies in the role of explanations in law and law's ubiquitous use of explanations.

## 3 The Role of Explanation in Law - "What" is Explanation in Law?

"The business of law is the business of making decisions" (Hawkins, 1986). This eloquent statement captures the fact that decision-making resides at the heart of the legal system. In a democratic society, decision-making is often accompanied by explanations of those decisions (Rawls, 1997), making it a common practice both for law-making and law-applying (Raz, 2009). This form of "reason-explanations", typically used when humans try to understand and explain action and resolve disagreements (Baum et al., 2022), is usually referred to as "reason-giving". Its use is so ubiquitous that "the practice of providing reasons for decisions has long been considered an essential aspect of legal culture" (Schauer, 1994). To deconstruct the notion of reason-giving in law and to answer the question "_What_ is reason-giving in law?", this section will ask the following questions: _How_ is reasoning defined in law? _Where_ can we find the use of reason-giving in law? And, most importantly, _Why_ does law use reason-giving to begin with, meaning what are its underlying functions?
### Reason-Giving in Law - the "How" - Defining Key Terms

In order to alleviate some of the "fuzziness" around basic concepts, it is important to first define their meaning. The giving of reasons can be described as "the practice of engaging in the linguistic act of providing a reason to justify what we do or what we decide" (Schauer, 1994). The difference between explaining ("providing a reason") and justifying ("to justify") is not strictly semantic. While explanation in a general sense means "an act of spotting the main reasons or factors that led to a particular consequence, situation, or decision" (Malgieri, 2021), a justification takes on another layer, detailing why the decision at hand is the "right" and "just" one (Malgieri, 2021). Therefore, an explanation is part of the justification. Explanations and justifications will be collectively referred to here as reason-giving, the process whereby decision-makers elaborate the explanations and justifications supporting their decisions (Deeks, 2019). Indeed, reason-giving is particularly suitable for legal decisions since "[w]hen we provide a reason for a particular decision, we typically provide a rule, principle, standard, norm, or maxim broader than the decision itself..." (Schauer, 1994). It should be noted here that reason-giving has a multi-layered presence in law. For example, law demands reason-giving (e.g., courts requiring agencies to produce reasons for a decision), and law also manufactures reasons simultaneously (e.g., courts justifying their rulings regarding the agencies' actions). Moreover, reason-giving is relevant both as part of the decision-making process itself (adjudicating - the process of deliberating and deciding), and as a product accompanying the final decision if released publicly.

### Reason-Giving in Law - the "Where"

Reason-giving and explanations are ubiquitously used across the legal system. Some of the most dominant arenas where the legal system leverages reason-giving are public law, private law, and increasingly international law. In a nutshell, public law is perhaps the most widely recognized domain of reason-giving in the legal system, construed out of courts, agencies and legislators constantly manufacturing and reviewing explanations and justifications. Private law exemplifies the extent to which "regulatory transparency" has become the tool-of-choice for handling regulatory challenges (Wei et al., 2006), where perhaps the most prominent example is the requirement to obtain a patient's informed consent prior to undergoing medical procedures, itself contingent upon receiving an explanation from a physician (McLean, 2009). Civil law also entails numerous examples of explanation usage, such as in contractual relationships or in tort law. In addition, the newly emerging habit of nations to explain foreign policy as part of international law fortifies the importance of reason-giving to decision making as a legal and social phenomenon, transcending states and geolocations (Keitner, 2018; Kingsbury, 2009).

### Reason-Giving in Law - the "Why"

The answer to the question "what does society gain from this constant explanation-giving?" ought to spearhead the methodological framework of end-user Explainability. Accordingly, this section will detail why explanations and reason-giving are such a repetitive practice in law. What purposes do they serve, and what are their underlying functions?
1. **Making a Better and More Just Decision** - At the heart of reason-giving in law lies the non-instrumental purpose of securing a better and more just decision (Deeks, 2019). The "just" feature, which supports the act as right, desirable, or reasonable, authenticates the decision as a non-biased, non-discriminatory one (Gillis and Simons, 2019). It taps into the core objective of making sure that "justice was done" (Atkinson et al., 2020). The "better" feature is brought to fruition by triggering the mechanism of review, either internal, during the making of the decision, or external, as a means for appeal and contestation. Taken together, the decision possesses both a rational and a moral basis, making it a more righteous and fair result. In this sense, there is an inherent, non-instrumental value in reason-giving, since it impacts the decision itself. Reason-giving compels the decision maker to handle the decision process with extra care, in a thoughtful and slower manner. There might also be a psychological pressure on decision makers to make decisions worthy of reasonable reasoning (Cohen, 2010; Shapiro, 1992). In other words, the need to articulate reasonable reasoning for a decision nudges decision-makers, in a circular movement, to make decisions that support such reasoning. Therefore, the mere fact that reasoning may be required may impact decisions even prior to such a request materializing.

2. **Facilitating Due Process** - Understanding the decision-making system has been said to be instrumental for individuals to exercise their right to challenge decisions (Gillis and Simons, 2019). When focusing on its role as a protector of individual rights, a right to explanation is usually regarded as a parasitic right, in service of fulfilling other values (Cohen, 2011; Mashaw, 2007). Those include a right to due process, housing both a right to a hearing (Friendly, 1974) and a right to contestation (Kaminski and Urban, 2021). The due-process theory is a core principle of the rule of law itself (Kaminski and Urban, 2021), and the procedure of due process is referred to today mainly as the requirement that any infringement on core rights should be taken after a notice was given and an opportunity for a hearing was granted (Crawford and Schultz, 2014). Reason-giving plays multiple roles in the execution of due-process rights. Naturally, knowing the reasons for a decision assists in crafting better-informed arguments to rebut it (Cohen, 2010), thus supporting a robust defense against a rights-infringing decision or act. Of course, due process allows for a judicial review of the decision and is especially instrumental given its contribution to the conservation of records, which can be leveraged later for contestation and review of the aforementioned decision or action. Moreover, it allows the decision maker and contesting party to evaluate the chances of an appeal in advance. Finally, the giving of reasons can serve as a non-political, legitimate demand by the adjudicating body, in comparison to a more subjective requirement of decision "reasonableness".

3. **Acknowledging Human Agency of the Decision Subject and Decision Maker** - One of the core values underlying the existence of reasons for decisions is respecting human autonomy (Gillis and Simons, 2019).
In the case of the _decision-subject_, the reasons issued for a decision signal his or her sovereignty, since giving reasons respects the fact that humans are autonomous people who should be treated with dignity, while unreasoned coercion "denies our moral agency and our political standing" (Mashaw, 2007). Moreover, respect comes in the form of providing grounds for detailed criticism, not only when there is a right to contestation, but perhaps even more when there is no recourse for appeal (e.g., reason-giving accompanying Supreme Court decisions) and a decision subject is left with a right to public discourse. Additionally, reasons also respect the _decision-maker's_ human agency. In that capacity, the presence of reasons for one's actions and decisions stands at the heart of human morality and sense of judgment and autonomy (Mashaw, 2007). Plainly put, a human decision maker, as an autonomous person, needs there to be reasons for his or her actions. When actions are underlined with intent, the decision maker is acting as a rational agent, thus strengthening his or her autonomy in the process.

4. **Enhancing the Decision Makers' Authority** - The giving of reasons makes actions, decisions, rules, and regulations more tolerable and acceptable. This is because acknowledging them as binding is dependent upon there being sufficient rational explanations underlying those rules (Mashaw, 2001). Simply put, "the authority of all law relies on a set of complex reasons for believing that it should be authoritative" (Mashaw, 2001). Reason-giving contributes to this objective by supporting attributes that promote compliance and adherence to the deciding body. These attributes comprise enhancing the accountability and legitimacy of the deciding body, as well as the provision of guidance to numerous stakeholders (while simultaneously serving as a binding precedent on the decision maker itself). Those virtues jointly add to maintaining and boosting agreement, cooperation and acceptance of rules established by the decision-making body, thus bolstering the system's mandate. They also serve as a pressure system of socio-legal and relational considerations cast upon the human decision maker, who is often concerned with matters of reputation, colleagues' approval, avoidance of unpleasant repercussions when reviewed, and various other incentives to make the "right" decision and provide meaningful explanations for it (Mashaw, 2007).

From executing a right to due process, contributing to the making of a better and more just decision, respecting human agency, and promoting the decision makers' authority, reason-giving's central role in law serves purposes oriented towards the decision subject, but also, to a larger degree, towards the human decision maker. Leveraging the existence of societal and relational pressures upon the human decision maker, reason-giving is a legal tool aimed primarily at containing, restraining, and curbing human discretion and human judgement. This conclusion is also supported by instances where a requirement for explanations in law is absent, as in the case of jurors (Doshi-Velez et al., 2017), where the value of restraining human judgement is attained by other means, such as internal deliberations. Having outlined reason-giving's role in law, being the basis for the regulatory pursuit of a right to explanation towards AI systems, it is now possible to ask to what extent Explainability can execute reason-giving's role in law and society.
In other words, can Explainability successfully fulfil reason-giving's functions?

## 4 Can End-User Explainability Fulfil a "Right to Explanation"?

"Explainability" has really come to dominate debates about the ethics and regulation of machine learning (Bordt et al., 2022), largely framed as the tool to execute a right to explanation of AI systems. As several survey papers demonstrate (Adadi and Berrada, 2018; Carvalho et al., 2019; Guidotti et al., 2018), considerable effort has been employed in identifying a suitable framework or methodology for XAI in the context of end-users (Arrieta et al., 2020; Langer et al., 2021; Prakken, 2020; Tomsett et al., 2018). However, despite this formidable effort, scholars have pointed out that the tool of Explainability is mostly used for professional debugging purposes (Mittelstadt et al., 2019), and has not yet managed to translate into a user-friendly explanation-generating tool, despite regulatory calls for an individual, decision-subject right to explanation (Goodman and Trehu, 2022). Since "...much work in AI and ML communities tends to suffer from a lack of usability, practical interpretability and efficacy on real users" (Abdul et al., 2018), Explainability for end-users is proving to be a tough challenge. As scholars recently lamented, "...so far at least, aspirational Explainability cannot be relied upon either for effective communication about how algorithmic systems works or for holding them to account" (Goodman and Trehu, 2022). Leveraging the legal reason-giving methodology presented in Section 3, this section proposes to frame the persistent gap between a right to explanation and Explainability by examining to what extent end-user Explainability can fulfil the role law bestows upon explanations and reason-giving, and will accordingly ask: (a) can it contribute to the making of a better and more just decision? (b) can it facilitate due-process rights? (c) is it relevant for the authentication and respect of human agency? and (d) does it enhance the decision makers' authority?

### Can Explainability Contribute to a Better and More Just Decision?

If one of reason-giving's main roles is to impact the decision-making process itself by restraining human judgement, and thus contribute to a better and fairer decision, then it is hard to grasp in what form this purpose might be fulfilled given that a machine now replaces a human decision maker. The impact of reason-giving on humans, slowing down decision processes and leveraging relational pressures, is largely irrelevant when a machine's decision is involved. Unlike with a human decision maker, reason-giving does not serve to contain an algorithm's judgement or discretion. An algorithm does not possess a "rationale" (or logic) to begin with, nor does it produce a "decision" but rather a prediction. It is not impacted, nor impressed, by what other algorithmic colleagues may think of it, nor does it seek to minimize unpleasant consequences, or "feel" accountable to anyone or anything. Therefore, prediction algorithms currently make no use of the external explanation generated for their predictions. There might be some potential impact on humans in the "surroundings" of a model, e.g., designers, deployers etc., but this impact, if it exists, should be further explored, and is probably diminished. Therefore, it appears that one of the most important objectives of reason-giving cannot be attained using Explainability for end-users.

### Is Explainability Instrumental to Facilitating Due-Process Rights?
To facilitate reason-giving's decision-subject purposes, such as due-process rights, appeal and contestation, this work proposes that Explainability should deliver to decision-subjects what law considers to be "an explanation", and a reliable one at that.

#### 4.2.1 Can Explainability Generate "an Explanation"?

In a call to stay clear of black-box models, one of the more significant scholars in the field of ML, Rudin (2019), has opined that "[a]s the term is presently used in its most common form, an explanation is a separate model that is supposed to replicate most of the behavior of a black box...". In essence, the general concept dominating the XAI community is "to create a simple human-understandable approximation of a decision-making algorithm that accurately models the decision given the current inputs..." (Wachter et al., 2017). These insights frame the different methods that were developed over the years to provide explanations for models, such as LIME, SHAP, LRP, etc. (Linardatos et al., 2020), and hint at the inadequacy of calling their output "an explanation", as that nomenclature suggests a reliable knowledge of how the complex model works (Mittelstadt et al., 2019). In fact, those "explanation-generating" techniques should be regarded as producing a clue to the source of the issue explored, by providing vague approximations of how the algorithm generated its output, or some understanding of the features that need to be changed in order to alter the said output (Bordt et al., 2022). This output requires further inquiry and human deduction skills, given that causality may not be automatically inferred from the data an explanation has provided. It is up to ML experts to then leverage this clue and find the true cause of the decision/problem itself (Mittelstadt et al., 2019). If this is true for ML experts, it is doubly the case for a layperson lacking technological background. Even if Explainability techniques can produce an actual contextualized explanation rather than a clue, scholars argue it is still a long way from producing layperson-understandable explanations (Bhatt et al., 2020). In fact, most current Explainability techniques are inaccessible to a human lacking technological literacy (Wachter et al., 2017). As Figure 1 demonstrates, a run-of-the-mill person would have a slim understanding of a saliency map, a data-points analysis, or a feature-importance result. Some may struggle even to understand a bar chart. Therefore, some kind of brokerage work would be needed, where a trusted expert would have to translate Explainability technique results for a person seeking an actual meaningful explanation. In this case, users' trust will be built upon experts' opinions rather than end-user explanations, similar to many experiences in our lives, like trusting the functioning of a navigation compass or trusting an engineer while crossing a bridge, where trust is granted not based on an explanation but on other features (Gryz and Shahbazi, 2020). Based on this examination, it appears Explainability is currently not sufficient to deliver what regulators consider "an explanation". But even if it could deliver on such a requirement, can Explainability be trusted by decision-subjects to begin with?
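To make the preceding point concrete, the sketch below shows what a typical feature-attribution "explanation" looks like in practice, using the shap library on a toy model. This is a minimal illustration under assumed conditions: the data, the loan-decision framing, and the model are hypothetical, and nothing here is drawn from this paper or from Wood et al. (2019).

```python
# Minimal sketch: a feature-attribution "explanation" for a toy classifier.
# All data and feature semantics here are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # pretend applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # pretend loan outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer approximates the model's behaviour with per-feature,
# per-prediction attribution weights (Shapley values).
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:5])

# The "explanation" is a table of signed weights, one per feature per
# prediction: a clue to model behaviour, not a contextualized reason.
print(np.shape(attributions))
```

The raw output is a grid of signed numbers; turning such numbers into something a layperson can act on is precisely the expert "brokerage" work described above.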
#### 4.2.2 Can Decision Subjects Rely on Explainability-Generated "Explanations"?

If users and decision subjects cannot rely on the explanations generated by end-user Explainability, then a major obstacle hinders its adoption. Research in the field has highlighted a few potential problems in this regard. First, not all stakeholders tasked with generating explanations for automated decisions welcome this explanation-generating requirement. Some concerns include potential infringement of privacy rights, intellectual property and trade secrets, and genuine security concerns (Wachter et al., 2017; Powell, 2021; Milli et al., 2019; Tramer et al., 2016). Additionally, the potential to game the system when receiving explanations on the one hand, or a perceived inability of end-users to comprehend complex systems on the other hand, might also contribute to designers' resentment towards end-user Explainability (Powell, 2021; Zhang et al., 2019). And sometimes models are just so complex that it is claimed they simply cannot be explained in a meaningful way (Gryz and Shahbazi, 2020). One should also not overlook the inherently adversarial relationship between end-users and automated decision-maker stakeholders, given that end-users and automated decision subjects largely seek an explanation in order to change the machine's prediction (e.g., loan-seeker vs. credit score generator). Adversarial situations invite ambiguous and non-trustworthy explanations to begin with (Dimanov et al., 2020), and there are multiple techniques to possibly manipulate the "explanation" generated by Explainability methods (Bordt et al., 2022; Zhou and Joachims, 2022; Mothilal et al., 2020).

Figure 1: An example of SHAP Explainability technique summary plots from Wood et al. (2019).

### Can Explainability Authenticate Human Autonomy?

Naturally, the change in the decision maker's identity, meaning an autonomous decision-making system vs. a human decision maker, nullifies reason-giving's function as an acknowledger of the decision maker's human agency. However, we believe that end-user Explainability's potential to acknowledge the decision subject's humanity and autonomy should also be questioned. While residing outside the scope of this work, this function raises multiple important and fundamental moral and philosophical questions relevant to AI systems in general, and to XAI in particular (e.g., can human agency be acknowledged by a non-human agent to begin with?). So far, we have examined end-user Explainability's ability to fulfil three of reason-giving's functions in law and found it lacking. Turning to the final function, we at last find a function which Explainability is well suited to deliver, a fact which simultaneously raises serious concerns.

### Can Explainability Enhance the Decision-Makers' Authority?

Contributing to the decision-makers' authority and legitimacy is another function of reason-giving in law. In the case of end-user Explainability, we find this function can be successfully fulfilled, perhaps even better than by human decision makers, especially in the case of LLMs. As a recent paper exploring GPT-4's explanation abilities demonstrates, it "is remarkably good at generating reasonable and coherent explanations, even when the output is nonsensical or wrong" (Bubeck et al., 2023). However, we recognize several problems emerging from XAI's ability to promote the decision makers' authority. First, at the heart of this function lies reason-giving's impact on the human decision maker. This impact means he or she feels accountable, seeks legitimacy, and is bound by his or her previous decisions if they are to serve as guidance. Therefore, the replacement of a human with a machine nullifies most, if not all, of the aforementioned human effects.
Moreover, the "explanation" Explainability generates is limited in the sense that we inherently expect an explanation to be based on some knowledge of the world (contextualized), whereas an algorithm only "knows" (if one can even attribute such adjective to a machine) what it was shown or defined to "know" (Bordt et al., 2022; Lipton, 2018). In other words, and until Artificial General Intelligence proves otherwise, "[e]very AI system is the fabled tabula rasa; it "knows" only as much as it has been told" (Gryz and Shahbazi, 2020). Under these conditions, an explanation cannot function as a rule, nor as guidance. Equally disconcertingly perhaps, although a human decision maker is increasingly replaced by a machine, one fact has yet to change, and that is the human identity of the decision subject. In automated decision making, a human is still the client/target of the explanation, a matter which potentially gives rise to rather alarming consequences. Research has shown Explainability's potential to cause human over-reliance on the system (Smith-Renner et al., 2020), as well as the opportunity for wrongdoing and manipulation by promoting misguided trust. This phenomenon of nudging users to act according to others' interest is known as "Dark Patterns" in XAI (Gray et al., 2018), and benefits from humans' "automation bias" towards trusting machines (Eiband et al., 2019; Kaminski and Urban, 2021; Lyell and Coiera, 2017). For example, people are more eager to comply with a request simply by being presented a placebic justification by computerized systems (Eiband et al., 2019). Further research has suggested that user manipulation can occur even unintentionally, causing "Explainability pitfalls" merely by choosing to present people with one sort of explanation over another (Ehsan and Riedl, 2021). It should also be pointed out that end users XAI appears to drift away from its initial trust building objective. For example, (Laato et al., 2022) have shown that transparency is mostly evaluated in the literature according to the user's perception of transparency, rather than actual transparency attributes of the system. As a systematic review of papers in the field conveys, research in the field scarcely highlights the purpose for generating explanations to begin with (Nunes and Jannach, 2017). Moreover, it appears the research of end-users' XAI is increasingly shifting towards exploring which explanation practices will impact users' trust and increase perceived trustworthiness of the system, rather than produce a meaningful and reliable tool to scrutinize AI systems by users (Forster et al., 2020). As Figure 2 taken from (Nunes and Jannach, 2017) demonstrates, surveying hundreds of XAI papers in the last decades displays a plateau or even an overall decrease in the study of XAI for transparency purposes, and a big increase in researching explanations' effectiveness, explanation techniques to enhance user's trust, techniques to increase explanations' persuasiveness, and to elevate user's levels of satisfaction from the system. Two interesting examples of this trend include Weitz et al. (2021) demonstrating how the use of virtual agents for Explainability seems especially promising for the purpose of increasing users' perceived trust in the system, or Goldman and Bustin (2022) experimenting with explanations as a technique to elevate user's comfort level in automated driving maneuvers of a simulated autonomous vehicle, to avoid manual take-overs. 
Both examples showcase how end-user XAI might drift away from its original purpose of using explanations to promote "appropriate trust" (Gunning and Aha, 2019) and assist end-users in properly scrutinizing AI systems (Forster et al., 2020), towards the study of how XAI can be used to influence end-users according to third parties' incentives, well-intended as they may be. Finally, the emerging capabilities of LLMs demonstrate that machines might pose a risk when pursuing end-user Explainability. Recent models such as GPT-4 exhibit an increasing ability to generate convincing explanations for false decisions, a lack of a consistent link between the decision-making process and the explanation generated for it, and a growing capability to generate specially tailored explanations for a human client (Bubeck et al., 2023). As Turpin et al. (2023) recently demonstrated, LLMs can produce step-by-step reasoning which systematically misrepresents the real reason underlying the model's prediction. Therefore, there is real potential to contribute to the decision-making system's trustworthiness even when that trust is unwarranted, misleading, and even dangerous. These continuously improving capabilities should serve as a trigger warning for those promoting end-user explanations.

## 5 Conclusion

The deconstruction of reason-giving in the legal system this study presents offers a methodological framework to analyze the gap between a right to explanation and end-user Explainability. It highlights reason-giving's role in impacting the human decision maker, as well as in facilitating decision subjects' rights. Given the change in the identity of the decision maker from human to machine, current end-user Explainability struggles to deliver most of explanations' functions in law, which include promoting a better and more just decision, facilitating due process and acknowledging human agency. In contrast, end-user Explainability emerges as a successful mechanism to fulfil reason-giving's fourth function in law, i.e., enhancing the decision makers' authority. However, this ability raises a set of risks of manipulating decision subjects by humans and machines alike. A key limitation of the case against Explainability is that it does not yet provide an alternative solution to the risks stemming from recent AI advancements (Bubeck et al., 2023). Nevertheless, we fear the reliance on inadequate techniques, coupled with newly generated risks, is perilous. Hence, we hope our work will impact how Explainability is being developed and implemented and will serve as a warning against incompatible usage and unwarranted research directions.

Figure 2: Taken from Nunes and Jannach (2017). The figure shows that past decades demonstrate a plateau, or even a decrease, in researching Explainability techniques for the purpose of transparency ("explain how the system works") and a sharp increase in purposes such as enhancing explanations' effectiveness ("help users make good decisions"), enhancing trust ("increase users' confidence in the system") and enhancing persuasiveness ("convince users to try or buy"). Although this paper's survey dates to 2017, it is plausible to assume that a current overview would demonstrate an even stronger orientation towards user-influencing purposes. All purpose definitions are taken from Table 8 of the surveyed paper.
## Acknowledgement

This work has been supported by the Israeli Science Foundation research grant 1437/22 and a grant from the Tel Aviv University Center for AI and Data Science (TAD).
2303.15586
Billion-years old proteins show the importance of N-lobe orientation in Imatinib-kinase selectivity
The molecular origins of proteins' functions are a combinatorial search problem in the proteins' sequence space, which requires enormous resources to solve. However, evolution has already solved this optimization problem for us, leaving behind suboptimal solutions along the way. Comparing suboptimal proteins along the evolutionary pathway, or ancestors, with more optimal modern proteins can lead us to the exact molecular origins of a particular function. In this paper, we study the long-standing question of the selectivity of Imatinib, an anti-cancer kinase inhibitor drug. We study two related kinases, Src and Abl, and four of their common ancestors, to which Imatinib has significantly different affinities. Our results show that the orientation of the N-lobe with respect to the C-lobe varies between the kinases along their evolutionary pathway and is consistent with Imatinib's inhibition constants as measured experimentally. The conformation of the DFG-motif (Asp-Phe-Gly) and the structure of the P-loop also seem to have different stable conformations along the evolutionary pathway, which is aligned with Imatinib's affinity.
Zahra Shamsi, Diwakar Shukla
2023-03-27T20:31:07Z
http://arxiv.org/abs/2303.15586v1
Billion-years old proteins show the importance of N-lobe orientation in Imatinib-kinase selectivity.

###### Abstract

The molecular origins of proteins' functions are a combinatorial search problem in the proteins' sequence space, which requires enormous resources to solve. However, evolution has already solved this optimization problem for us, leaving behind suboptimal solutions along the way. Comparing suboptimal proteins along the evolutionary pathway, or ancestors, with more optimal modern proteins can lead us to the exact molecular origins of a particular function. In this paper, we study the long-standing question of the selectivity of Imatinib, an anti-cancer kinase inhibitor drug. We study two related kinases, Src and Abl, and four of their common ancestors, to which Imatinib has significantly different affinities. Our results show that the orientation of the N-lobe with respect to the C-lobe varies between the kinases along their evolutionary pathway and is consistent with Imatinib's inhibition constants as measured experimentally. The conformation of the DFG-motif (Asp-Phe-Gly) and the structure of the P-loop also seem to have different stable conformations along the evolutionary pathway, which is aligned with Imatinib's affinity.

Evolutionary pathway Kinase inhibitor Simulations

## 1 Introduction

Protein kinases are a class of enzymes that transfer phosphate groups from ATP to other proteins, thereby signaling growth and cell proliferation. Mutations in kinases can lead to uncontrolled cell growth and eventually cancer, making kinases a prime target for drug design, typically of small molecule inhibitors [1, 2]. Imatinib is one of the clinically successful drugs for the treatment of multiple cancers like chronic myelogenous leukemia [3]. It selectively inhibits Abl and not other structurally similar kinases like Src. Since the overactive Abl mutant only exists in cancer cells, Imatinib has a limited effect on healthy cells [4]. Why does the drug inhibit Abl and not Src, despite the high protein sequence identity between Src and Abl (\(\sim\)46%) [4]? After two decades of study of the basis of Imatinib selectivity between Abl and Src, the question still remains unanswered. During these years, extensive work has been done to elucidate this problem using different approaches, from NMR and fast kinetics [5], sequence swapping [4], and ancestral gene reconstruction [6] experiments, to free energy calculations and long time-scale molecular dynamics (MD) simulations [7, 8, 9]. Each of these studies gave insight into different aspects of the problem, but a full answer is still missing. The DFG-motif (Asp-Phe-Gly) is a highly conserved segment of the activation loop in kinase domains that is proposed to play a major role in the selection mechanism, since Imatinib only binds a specific configuration of DFG. The two conformations of the DFG motif are the inactive "DFG-out" and the active "DFG-in". Imatinib only binds to the DFG-out conformation of kinases. Multiple groups have argued that the kinetic basis of Imatinib selectivity for Abl, compared to Src kinase, is a pre-existing equilibrium between DFG-out and DFG-in, a so-called conformational selection mechanism [10, 7]. More recently, despite the general belief in the critical role of conformational selection, Agafonov et al. claimed that the Imatinib selectivity is rooted in conformational changes after drug binding, not before [5].
They support their hypothesis using NMR studies and showed the presence of an induced-fit mechanism in Abl kinase. The next step was to find a sequence-function relationship and clarify which set of residues is responsible for the accessibility of the induced-fit mechanism in Abl, but not in Src. Sequence swapping had been performed to make Src similar to Abl, but all studies had failed to illuminate the atomistic determinants of selectivity [4, 7]. On the other hand, phylogenetic studies have been used as powerful tools to study protein sequence-function relationships in different problems [11, 12, 13]. Therefore, following the NMR study, Wilson et al. recreated the evolutionary pathway between Src and Abl by resurrecting the common ancestors between them [6]. Using x-ray structures of an ancestral kinase and binding kinetics data from NMR, they showed the evolution of their proposed induced-fit mechanism and its effect on the selectivity of Imatinib [6]. However, proteins are not just sequences of amino acids; they can adopt thousands of different 3D conformations, among which only specific sets of conformations are functional. Although the aforementioned studies establish a plausible sequence-function relationship, the sequence-structure-function relationship is still missing. Even though these studies have shown the presence of an induced-fit mechanism, they do not give any details on what these mechanisms are in the protein's structure. In this study, we investigate the sequence-structure relationship by reconstructing the common ancestors of Src and Abl computationally. The study of evolutionary pathways is a natural way to identify the key amino acid changes that differentiate one family member from another [14]. Evolution generates functional proteins at every stage, despite the large sequence differences between them. It diversifies functions by altering their structure and the associated free-energy landscapes. The differences between Abl and Src have also evolved over a billion years from their common ancestor. Here, we borrow the sequences of the kinase ancestors from the literature [6] and reconstruct them computationally (Figure 1(a), adapted from ref. [15]). To elucidate the sequence-structure-function relationship, we simulated four ancestors, ANC-A1, ANC-A2, ANC-AS, and ANC-S1, plus ANC-AS with 15 suggested mutations which changed the affinity significantly [6]. We refer to ANC-AS+15 as ANC-AS(+15) in this paper to prevent any confusion. We compare the results of these simulations with the experimental values of free energies and inhibition constants of the ancestors. We also describe the evolution of the conformations that play a role in the conformational selection mechanism and the induced-fit mechanism.

Figure 1: (a) Phylogenetic tree of Abl and Src families showing the reconstructed nodes. (b) Crystal structure of Abl kinase (PDB ID: 2HYY [16]) shown with labels for important regions.

## 2 Results

### Evolution of conformational differences: N-lobe rotation

Comparison of crystal structures shows a significant difference between Src and Abl in the orientation of the N-lobe with respect to the C-lobe. This rotational angle is known to play a role in kinase activation, as the catalytic cleft between the lobes is relatively closed in inactive kinases [6]. The ancestral study by Wilson et al. suggested that altering only 15 residues in ANC-AS's N-lobe to the corresponding residues in Abl kinase drastically increased the Imatinib affinity to a level similar to Abl.
This indicates the importance of N-lobe residues in the drug binding mechanism [6]. Therefore, it is likely that lobe rotation is the dynamic effect of these 15 residue mutations in the N-lobe. Here, we want to calculate the orientation of the N-lobe along the evolutionary pathway and see if it is correlated with the Imatinib binding affinities. We studied the dynamics of four common ancestors of Src and Abl, an ancestor with the 15 mutations, and the two modern kinases. In these long-timescale unbiased MD simulations, no Imatinib molecule was present. The simulations were performed using an adaptive sampling technique and Markov State Model (MSM) analysis [17; 18; 19]. In total, we performed 0.545 ms of aggregated unbiased MD simulation for the seven apo kinases. In order to quantify the lobe rotation, we define vectors in the C-lobe (V1) and N-lobe (V2), and calculate the angle between them (named \(\theta\)), as shown in Figure 2(a). Our results show a gradual shift in the density distributions of \(\theta\) as we move from Abl to Src kinase on the phylogenetic tree (Figure 2(b)). The peak values of the \(\theta\) distributions are well correlated with the experimental values of Imatinib inhibition constants (\(K_{i}\)) at 25\({}^{\circ}\)C as reported in the literature [6]. This result suggests that the available area between the N-lobe and C-lobe is one of the major factors affecting the Imatinib binding affinity.

Figure 2: **N-lobe rotational angle of different kinases.** (a) The vectors defining the rotational angle are shown on the kinase structure. (b) The distribution of angles for different kinases reveals how the position of the N-lobe with respect to the C-lobe evolved along the evolutionary pathway, and how the angle shifted from Src to Abl. (c) The correlation between the most likely value of \(\theta\) calculated from the simulation and the experimental values of Imatinib inhibition constants (\(K_{i}\)) at 25\({}^{\circ}\)C as reported in the literature [6].

### Evolution of induced-fit mechanism: secondary structure of P-loop.

Changes in the secondary structure of the P-loop are critical reaction coordinates for the Imatinib binding process. To observe the evolution of the P-loop conformation, we calculated the root mean square deviation (RMSD) of the P-loop in the kinases from the crystal structures of Abl and Src (Figure 3). Based on our observations, the Src-like P-loop is stable in all seven kinases, whereas the ability to form the Abl-like P-loop has been lost in ANC-S1 and Src. This suggests that Imatinib binding from the P-loop side, and the subsequent induced-fit mechanism, is not possible in Src and ANC-S1, but could be feasible in the other five kinases. The exact residues responsible for the Abl-like kinked P-loop should be a subset of the residue differences between ANC-AS and ANC-S1 (ANC-AS and ANC-S1 are 82% identical, as shown in Figure S1). Looking at the simulations of Abl kinase and its crystal structure, we can observe that the two inter-residue interactions responsible for Abl's kinked P-loop conformation are Y253-N322 and Q252-N322. Both of these interactions are conserved in ANC-A1, ANC-A2, and ANC-AS(+15), which makes the P-loop more likely to form a helical structure, as shown in Figure 3(a). However, only one of the interactions is conserved in ANC-AS and ANC-S1, and none in Src. This partially explains the lower helical content of the P-loop in ANC-AS, ANC-S1, and Src. The corresponding residue pairs are F-S and Q-S in ANC-AS/ANC-S1, and F-S and C-S in Src kinase.
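To make the two quantities used above concrete, the lobe-rotation angle \(\theta\) of Figure 2(a) and the free-energy scale of the landscapes, the following minimal sketch computes both from a trajectory. It assumes the MDAnalysis package; the file names and residue ranges standing in for the V1/V2 anchor points are hypothetical, and the raw-histogram free energy shown here is a simplification of the MSM-weighted estimates used in this work.

```python
# Minimal sketch (hypothetical selections, not the paper's actual anchors).
import numpy as np
import MDAnalysis as mda

u = mda.Universe("kinase.pdb", "kinase_traj.xtc")  # placeholder inputs

# Each vector runs between the centroids of two C-alpha anchor groups.
c1 = u.select_atoms("name CA and resid 150:160")   # C-lobe anchors (V1)
c2 = u.select_atoms("name CA and resid 220:230")
n1 = u.select_atoms("name CA and resid 30:40")     # N-lobe anchors (V2)
n2 = u.select_atoms("name CA and resid 80:90")

def lobe_angle():
    v1 = c2.center_of_mass() - c1.center_of_mass()
    v2 = n2.center_of_mass() - n1.center_of_mass()
    cos_t = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

theta = np.array([lobe_angle() for ts in u.trajectory])

# Histogram -> free-energy profile F = -kT ln(rho / rho_max) in kcal/mol
# (kT ~ 0.593 kcal/mol at 298 K). The paper's landscapes use MSM-reweighted
# populations; a raw histogram is the unweighted analogue.
rho, edges = np.histogram(theta, bins=60, density=True)
with np.errstate(divide="ignore"):
    F = -0.593 * np.log(rho / rho.max())  # empty bins map to +inf
```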
### Evolution of conformational selection: DFG flip's mechanism.

To investigate the dynamics of the DFG-motif, distances between two pairs of residues were measured, as shown in Figure 4. Our results indicate that all ancestors are able to adopt both DFG-in and DFG-out conformations. In Src kinase, the DFG-out conformation is relatively less stable, whereas it is stable in Abl and all the ancestors. The difference in the stability of the DFG-out conformation between Src and the other kinases is \(\sim\)1 kcal/mol, which is not sufficient to justify the \(\sim\)3000-fold difference in their inhibition constants [4] (a 3000-fold ratio in \(K_{i}\) corresponds to \(RT\ln 3000\approx 4.7\) kcal/mol at 298 K). The stable DFG-out conformation in the ancestors and Abl indicates that the residues responsible for the less stable DFG-out conformation in Src are among the set of residue differences between ANC-S1 and Src (ANC-S1 and Src are 75% identical, as shown in Figure S1).

### Evolution of conformational differences: secondary structure of A-loop.

The activation loop (A-loop) in a kinase usually includes the highly conserved DFG motif and the 17 residues following it. This region adopts a closed or inactive conformation and an open or active conformation. In open conformations of the A-loop, substrate proteins are able to bind the kinases for phosphate transfer, whereas in closed configurations, ATP is not exposed to the substrate proteins and phosphate transfer is not feasible. Imatinib binds the closed conformation of Abl and is incapable of binding to its open configuration [20]. Analyzing the simulations of the seven apo kinases showed that the Abl-like inactive configuration of the A-loop is accessible in Abl's closer ancestors and disappears in ANC-S1. Therefore, residues responsible for the inactive Abl-like A-loop conformation should be among the sequence differences between ANC-S1 and ANC-AS (Figure 5). On the other hand, the Src-like inactive structure of the A-loop appears in ANC-AS, which shows that residues coding for the Src-like helical conformation of the A-loop are among the sequence differences between ANC-AS and ANC-AS(+15) (Figure 5). We also see in Figure 5(d) that the A-loop in Src and the ancestors closer to it tends to be less structured compared to Abl.

Figure 3: **Evolutionary pathway of P-loop's conformation.** Secondary structure of the P-loop in the crystal structures of (a) Abl [16] and (b) Src [4]. The configuration of the P-loop is the most distinct difference between the Abl and Src crystal structures. (c) Free energy landscapes of the P-loop conformation show how the ancestors lose their Abl-like P-loop structure as they get closer to Src. (d) Src and the ancestors closer to Src are less likely to form a helical P-loop. Residue numbering is based on the sequence of ANC-AS(+15), as presented in Figure S2. Colorbar shows free energies in kcal/mol.

## 3 Discussion

Protein-ligand selectivity remains a mysterious phenomenon in biology due to the lack of knowledge of the biophysical mechanism of ligand binding. Here, we studied the long-standing problem of Imatinib selectivity toward Abl kinase, using long MD simulations of Src, Abl, and their common ancestors. We compared the simulation results with experimental activity and binding free energy values to better understand the mechanism. We evaluated two of the best-known hypotheses that try to explain the selectivity mechanism by following their corresponding conformational changes along the evolutionary pathway connecting Abl and Src kinases. The first hypothesis assumes the high selectivity of Imatinib towards Abl is due to its particular conformation of the DFG motif.
## 3 Discussion

Protein-ligand selectivity remains a mysterious phenomenon in biology due to the lack of knowledge of the biophysical mechanism of ligand binding. Here, we studied the long-standing problem of Imatinib selectivity toward Abl kinase, using long MD simulations of Src, Abl, and their common ancestors. We compared the simulation results with experimental activity and binding free energy values to better understand the mechanism. We evaluated two of the best-known hypotheses that try to explain the selectivity mechanism by following their corresponding conformational changes along the evolutionary pathway connecting the Abl and Src kinases. The first hypothesis assumes the high selectivity of Imatinib towards Abl is due to its particular conformation of the DFG motif. We compared the dynamics of the DFG motif in Src, Abl, and their ancestors and observed that all of them can adopt the DFG-out conformation, which is crucial for Imatinib binding. However, the DFG-out conformation seemed to be less stable in Src kinase as compared to the other six. This suggests that Imatinib can potentially bind to all of these kinases, but it is less likely to find the desirable conformation in Src kinase and bind to it. The second hypothesis that we studied was the induced-fit mechanism. Even though we still do not know exactly what the induced-fit mechanism is, we believe the P-loop conformation plays a role in the process. Therefore, we studied the evolution of the 3D structure of the P-loop and observed that the \(\beta\)-sheet-rich P-loop gradually adopts a helical conformation as we go from Src to Abl kinase on their phylogenetic tree. This change seemed to be very gradual along the evolutionary pathway and aligns with the experimentally measured Imatinib affinities towards the kinases [6].

Figure 4: **Stability of DFG-in and DFG-out conformations.** (a) Free energy landscapes of the DFG flip for Abl, the ancestors, and Src are shown. When both distances are greater than 1 nm, the DFG motif is in the DFG-out conformation, and when both of them are less than 1 nm it is in the DFG-in conformation. Imatinib only binds the DFG-out conformation. The colorbar shows free energies in kcal/mol. The DFG-in and DFG-out conformations are shown in (b). Residue numbering is based on the sequence of ANC-AS(+15) as presented in the SI Figure S2.

Although Imatinib has achieved remarkable success in treating chronic myeloid leukemia, the emergence of resistance to this agent weakens the prospect of a cure for this leukemia. Imatinib resistance in most patients coincides with single point mutations in Abl kinase. In this study, we also rationalized the origin of some of these mutations. A set of mutations reported to be among the most frequent in Imatinib-resistant patients are Y253F, Y253H, and Q252H in Abl kinase [21]. We showed that these two residues (Y253 and Q252) are crucial for forming the helical conformation of the P-loop through the Y253-N322 and Q252-N322 interactions. In conclusion, our data suggest that future drug design efforts should focus more on understanding the binding pathways and their corresponding induced-fit steps. To this end, the full binding process of more drugs and drug targets needs to be studied at atomic resolution, with the hope that new insights will translate into improved next-generation compounds. As we learn more about the Imatinib selectivity mechanism, these insights could be transferred to understand other selectivity challenges.
2307.07475
Analysis of Unified Galaxy Power Spectrum Multipole Measurements
We present a series of full-shape analyses of galaxy power spectrum multipole measurements from the 6dFGS, BOSS, and eBOSS galaxy surveys. We use an emulated effective field theory of large-scale structure (EFTofLSS) model to conduct these analyses. We exploit the accelerated prediction speed of the neural-network-based emulator to explore various analysis setups for our cosmological inference pipeline. Via a set of mock full-shape analyses of synthetic power spectrum multipoles, designed to approximate measurements from the surveys above, we demonstrate that the use of alternative priors on nuisance parameters and restricted model complexity reduces many of the biases previously observed in marginalised cosmological constraints coming from EFTofLSS analyses. The alternative priors take the form of a Jeffreys prior; a non-informative prior that can mitigate against biases induced by marginalising over poorly constrained nuisance parameters. When performing a joint analysis of all synthetic multipoles, we see an improvement in the level of agreement between the marginalised $\ln{\left(10^{10}A_s\right)}$ constraints and the truth; from $\sim2.0\sigma$ to $\sim0.42\sigma$. Using our pipeline to analyse the measured multipoles, we find an improvement in the level of agreement with cosmic microwave background (CMB) results; from $\sim2.4\sigma$ to $\sim0.5\sigma$. Therefore, we conclude that the spectroscopic galaxy survey datasets listed above are consistent with constraints obtained from the CMB.
Jamie Donald-McCann, Rafaela Gsponer, Ruiyang Zhao, Kazuya Koyama, Florian Beutler
2023-07-14T16:55:25Z
http://arxiv.org/abs/2307.07475v3
# Analysis of Unified Galaxy Power Spectrum Multipole Measurements ###### Abstract We present a series of full-shape analyses of galaxy power spectrum multipole measurements from the 6dFGS, BOSS, and eBOSS galaxy surveys. We use an emulated effective field theory of large-scale structure (EFTofLSS) model to conduct these analyses. We exploit the accelerated prediction speed of the neural-network-based emulator to explore various analysis setups for our cosmological inference pipeline. Via a set of mock full-shape analyses of synthetic power spectrum multipoles, designed to approximate measurements from the surveys above, we demonstrate that the use of alternative priors on nuisance parameters and restricted model complexity reduces many of the biases previously observed in marginalised cosmological constraints coming from EFTofLSS analyses. The alternative priors take the form of a Jeffreys prior; a non-informative prior that can mitigate against biases induced by marginalising over poorly constrained nuisance parameters. When performing a joint analysis of all synthetic multipoles, we see an improvement in the level of agreement between the marginalised \(\ln\left(10^{10}A_{s}\right)\) constraints and the truth; from \(\sim 2.0\sigma\) to \(\sim 0.42\sigma\). Using our pipeline to analyse the measured multipoles, we find an improvement in the level of agreement with cosmic microwave background (CMB) results; from \(\sim 2.4\sigma\) to \(\sim 0.5\sigma\). Therefore, we conclude that the spectroscopic galaxy survey datasets listed above are consistent with constraints obtained from the CMB.

keywords: large-scale structure of the Universe - methods: data analysis - cosmology: cosmological parameters

## 1 Introduction

Conducting _full-shape_ analyses of galaxy clustering statistics (Satpathy et al., 2017; Kobayashi et al., 2021; Chen et al., 2022; Lange et al., 2023), such as the power spectrum, is becoming a standard approach to complement analyses that focus on specific features like the baryon acoustic oscillations (BAO). To run one of these full-shape analyses, we require a theoretical model that allows us to make a prediction for the clustering statistic of interest for a given set of cosmological parameters \(\mathbf{\theta}\). There are two possible routes here: 1.) use a simulation-based model, 2.) use an analytical model. A simulation-based model will likely be more accurate on small, nonlinear, scales. Comparisons of dark-matter-only N-body simulation codes have shown agreement in predictions of the dark matter power spectrum for scales \(k\lesssim 1\)\(h\) Mpc\({}^{-1}\) (Schneider et al., 2016; Grove et al., 2022). However, developing a simulation-based model requires many simulations with different sets of cosmological parameters sampling from the parameter space of interest. These suites of simulations (e.g. Heitmann et al., 2010; Maksimova et al., 2021) require huge computational cost to produce, and this cost can prohibit the use of such models. An analytic model may be less accurate on nonlinear scales (Foreman et al., 2016; Alkhanishvili et al., 2022), but using such a model will incur a significantly lower computational cost. One such analytical model that is gaining in popularity when conducting full-shape analyses is the _effective field theory of large-scale structure_ (EFTofLSS; Baumann et al., 2012; Carrasco et al., 2012; Senatore, 2015; de la Bella et al., 2017; Philcox et al., 2020; Ivanov, 2022; Mergulhio et al., 2023; Moretti et al., 2023).
This perturbation-theory-based model maps predictions for the dark matter clustering to that of galaxies via a series of nuisance parameters \(\mathbf{\phi}\) that are marginalised over when putting constraints on the cosmological parameters \(\mathbf{\theta}\). Two popular examples of EFTofLSS code implementations are PyBird (D'Amico et al., 2021) and CLASS-PT (Chudaykin et al., 2020). Predictions for the galaxy power spectrum multipoles can be made with PyBird in \(\mathcal{O}(1\ \mathrm{s})\)1. This is significantly faster than a numerical simulation, but running an MCMC with PyBird still requires a non-negligible amount of computational resources. This cost can limit the exploration of the analysis setup when using this model to carry out parameter inference.

Footnote 1: This is a processor-dependent statement. In Donald-McCann et al. (2022b) the prediction speed was reported as \(1.01\ \mathrm{s}\pm 13.1\) ms, based on 100 predictions made on a laptop with an Intel i5 2.50 GHz dual-core processor with four threads and 8 GB of RAM. Table 1 of Chudaykin et al. (2020) reports prediction speeds from CLASS-PT. In default mode, the performance appears similar to PyBird.

The idea of _emulation_ to reduce computational cost is being used more and more frequently in cosmology, and is now used to accelerate inference pipelines that are based on analytic theory models (Albers et al., 2019; Arico et al., 2022; DeRose et al., 2022; Mancini et al., 2022; Gunther et al., 2022; Egemeier et al., 2022; Gunther, 2023; Nygaard et al., 2023) as well as those with simulation-based models (Heitmann et al., 2006; Agarwal et al., 2014; Nishimichi et al., 2019; Euclid Collaboration et al., 2021; Storey-Fisher et al., 2022). These emulators consist of nonlinear interpolators that are fitted to (or trained with) a set of input and output pairs \(\{\mathbf{\theta},\mathbf{Y}(\mathbf{\theta})\}\), with \(\mathbf{Y}(\mathbf{\theta})\) being the function of interest (a schematic sketch is given below). The nonlinear interpolation scheme generally takes the form of a machine learning algorithm like a Gaussian process or neural network (NN). In Donald-McCann et al. (2022b), the NN-based EFTEMU was added to the matryoshka suite of emulators (Donald-McCann et al., 2022a). The EFTEMU was developed to reduce the cost of EFTofLSS model evaluations and increased the prediction speed of the galaxy power spectrum multipoles by over three orders of magnitude. This increase in prediction speed opens up the opportunity to test more analysis setup choices when using the EFTofLSS model. In this paper, we exploit the increased prediction speed of the emulated EFTofLSS model to perform full-shape analyses of galaxy power spectrum multipole measurements from several completed galaxy surveys. We also examine how the analysis setup impacts the inferred cosmology. Through a series of mock full-shape analyses, we validate our cosmological inference pipeline. We then demonstrate that using alternative priors and more restrictive sets of nuisance parameters can alleviate some of the biases in the inferred cosmological parameters that can be seen when conducting full-shape analyses with the EFTofLSS. We find that using these alternative priors can alleviate some of the slight tensions in the marginalised cosmological parameter constraints when comparing with results from cosmic microwave background (CMB) analyses.
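To illustrate the emulation idea in general terms (this is a schematic, not the EFTEMU architecture), a small regressor can be fitted to pairs \(\{\mathbf{\theta},\mathbf{Y}(\mathbf{\theta})\}\) and then queried at negligible cost inside an inference loop. The scikit-learn regressor and the shapes below are assumptions made for the sake of the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def train_emulator(theta, log_pk):
    """Fit a toy NN emulator to training pairs.

    theta: (n_samples, n_params) cosmological parameters.
    log_pk: (n_samples, n_k) log power spectrum on a fixed k-grid.
    """
    scaler = StandardScaler().fit(theta)
    net = MLPRegressor(hidden_layer_sizes=(128, 128),
                       activation="relu", max_iter=2000)
    net.fit(scaler.transform(theta), log_pk)
    # return a cheap prediction callable
    return lambda t: net.predict(scaler.transform(np.atleast_2d(t)))

# emulate = train_emulator(theta_train, np.log(pk_train))
# pk_fast = np.exp(emulate(theta_new))  # microseconds instead of seconds
```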
The paper is organised as follows. In Section 2, we introduce the galaxy surveys considered for this work, along with the multipole measurements used. In Section 3, we further introduce the EFTofLSS and discuss any changes made to the EFTEMU for this work. In Section 4, we present a series of mock analyses designed to test our inference pipeline. In Section 5, we present results from the analysis of the multipole measurements introduced in Section 2. We conclude in Section 6.

## 2 Data

There have now been several large-scale spectroscopic redshift surveys that have run to completion, combining to provide detailed maps of the universe covering a wide redshift range. For this work, we focus on three surveys that cover distinct redshift ranges: the _6dF galaxy survey_ (6dFGS, Jones et al., 2004, 2009), the _baryon oscillation spectroscopic survey_ (BOSS, Dawson et al., 2013; Alam et al., 2017), and the _extended baryon oscillation spectroscopic survey_ (eBOSS, Dawson et al., 2016; eBOSS Collaboration et al., 2021). The redshift catalogues from each of these surveys are now publicly available such that galaxy clustering measurements can be made for each of them. Beutler and McDonald (2021) presents measurements of the power spectrum multipoles from each of these surveys, along with wide-angle and window function matrices. These matrices allow wide-angle effects and the survey window function to be included in theory predictions of the galaxy power spectrum multipoles via two simple matrix multiplications. All measurements have 40 \(k\)-bins over the range \(0<k<0.4\)\(h\) Mpc\({}^{-1}\). The BOSS and eBOSS samples are split into subsamples for the northern and southern galactic cap (NGC and SGC) and, in the case of BOSS, two redshift bins (BOSSz1 and BOSSz3). This results in seven sets of multipoles with four effective redshifts \(z_{\rm eff}=[0.096,0.38,0.61,1.52]\). We refer the reader to Table 1 in Beutler and McDonald (2021) for more details about each sample.

### Mocks

When exploring analysis setups, we need to examine if a particular setup leads to more or less bias in the inferred cosmological parameters than another. Mock multipoles were published alongside the measurements in Beutler and McDonald (2021). These mocks are those used to calculate covariance matrices and contain survey geometry and systematics to match their associated measurements. Each of the galaxy surveys considered for this work has its own set of mocks. The number of mock realisations and specifics of simulations used to produce them are covered in Section 5 of Beutler and McDonald (2021); for the 6dFGS mocks see Koda et al. (2016); Carter et al. (2018), for BOSS see Klypin et al. (2016); Kitaura et al. (2016), and for eBOSS see Chuang et al. (2015); Zhao et al. (2021). It is helpful to have sets of mock multipoles for which we know the true cosmology as well as the "true" values for the nuisance parameters of the EFTofLSS model (bias parameters and counterterms, see Section 3). To that end, we produce a set of mock multipoles using PyBird with the cosmology set to the TT,TE,EE+lowE+lensing+BAO \(\Lambda\)CDM best-fit values from Table 2 in Planck Collaboration et al. (2020, henceforth Planck, 2018). The nuisance parameters are fit to the mean of the mock multipole measurements published in Beutler and McDonald (2021) for each sample. We refer to the resulting multipoles as the "PyBird mocks". The nuisance parameters for the PyBird mocks are determined by finding the maximum _a posteriori_ (MAP) estimate for four bias parameters and six counterterms.
This is done by finding the minimum of the negative log-likelihood (see Section 4.1 for likelihood definition) with a wide uniform prior on all bias parameters and counterterms. Except for the linear bias, this prior ranges from \(-50<b_{i}<50\). The linear bias prior is truncated at zero to allow for positive values only. The nuisance parameters are fit to the mean of the mock multipoles on scales \(0<k<0.2\)\(h\) Mpc\({}^{-1}\), and the covariance is rescaled by a factor of \(10\).2 Figure 1 shows the PyBird mock multipoles alongside the multipole measurements and mocks from Beutler and McDonald (2021) for the \(z=0.61\) NGC sample. The bottom panel shows the residuals normalised by the rescaled covariance \(\frac{\Delta(k)}{\{\sigma(k)/10\}}\). We can see that the agreement of the PyBird mock multipoles and the mocks of Beutler and McDonald (2021) is within \(1\sigma\). It should be noted that the agreement is better still when considering the unscaled covariance. Plots showing the PyBird mocks for the other samples all exhibit similar results.

Footnote 2: We rescale the covariance so that the nuisance parameters are well constrained for each sample. We could, in principle, rescale by a large factor that depends on the number of mock realisations for each sample. However, when we are producing the PyBird mocks, we are not looking to answer how well PyBird can recover different simulation methods with such large effective volumes. We are solely trying to produce synthetic multipoles that have the same functional form as the data for which all the true parameters are known.
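A minimal sketch of this MAP fit follows, assuming a `model` callable that returns the concatenated multipoles for a nuisance-parameter vector at the fixed Planck 2018 cosmology (this interface is an assumption made for illustration). The bounds mirror the wide uniform prior, and the rescaling of the covariance by a factor of 10 enters through the inverse covariance.

```python
import numpy as np
from scipy.optimize import minimize

def fit_nuisance_map(model, data, cov, x0, rescale=10.0):
    """Find the MAP estimate for the bias parameters and counterterms.

    Under a flat prior this is equivalent to minimising chi^2.
    model: callable mapping nuisance vector -> multipoles, with the
    cosmology held fixed (an assumed interface).
    """
    inv_cov = np.linalg.inv(cov / rescale)  # covariance rescaled by 1/10

    def neg_log_like(phi):
        r = data - model(phi)
        return 0.5 * r @ inv_cov @ r

    # flat prior: -50 < b_i < 50, with the linear bias truncated at zero
    bounds = [(0.0, 50.0)] + [(-50.0, 50.0)] * (len(x0) - 1)
    return minimize(neg_log_like, x0, bounds=bounds, method="L-BFGS-B")
```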
## 3 Model

As alluded to in Section 1, there are two general routes to modelling the galaxy power spectrum. The first is to use numerical simulations, providing accurate small-scale predictions but coming at a high computational cost. The second is to develop an analytic model, producing computationally efficient predictions (in comparison to numerical simulations) but being less accurate on small scales. Probing the small, nonlinear, scales of the galaxy power spectrum can improve the constraints on the cosmological parameters. For a given survey, we will have a larger number of galaxy-galaxy pairs with small separations than large separations; thus, the statistical error on small scales will be lower than on large scales. The EFTofLSS was developed to extend the scales of validity of analytic predictions, allowing us to probe smaller scales and exploit the reduced statistical error.

### EFTofLSS

Standard perturbation theory (SPT) models the dark matter overdensity field as a perfect fluid. Although successful on large scales, where the density perturbations are small, its description starts to break down when entering nonlinear scales (1-loop SPT breaks down at \(k\sim 0.1\)\(h\) Mpc\({}^{-1}\) for redshift \(z=0\), Carlson et al. 2009). In recent years, considerable effort has been put into an effective description which extends the range of SPT into the mildly nonlinear regime. The EFTofLSS introduces a cut-off scale which acts as an effective low-pass filter, leading to the fluid equations being solved in terms of long-wavelength overdensity and velocity fields. Furthermore, an effective stress-energy tensor is introduced, which captures the effects of small-scale physics on the larger scales. At a given order \(n\), the effect of these small scales and their backreaction onto the long-wavelength field can be captured by a finite number of so-called "counterterms" \(c_{i}\). These counterterms are free parameters that must be fitted to data or calibrated with simulations.
Including a nonlinear bias scheme, mapping the underlying dark matter field as described above to the observed galaxy densities, the 2D redshift-space galaxy power spectrum in terms of scale \(k\) and cosine of the angle to the line of sight \(\mu\) can be written as

\[P_{g}(k,\mu) =Z_{1}(\mu)^{2}P_{11}(k)\]
\[+2\int\frac{d^{3}q}{(2\pi)^{3}}Z_{2}(\mathbf{q},\mathbf{k}-\mathbf{q},\mu)^{2}P_{11}(|\mathbf{k}-\mathbf{q}|)P_{11}(q)\]
\[+6Z_{1}(\mu)P_{11}(k)\int\frac{d^{3}q}{(2\pi)^{3}}Z_{3}(\mathbf{q},-\mathbf{q},\mathbf{k},\mu)P_{11}(q)\]
\[+2Z_{1}(\mu)P_{11}(k)\left(c_{ct}\frac{k^{2}}{k_{M}^{2}}+c_{r,1}\mu^{2}\frac{k^{2}}{k_{M}^{2}}+c_{r,2}\mu^{4}\frac{k^{2}}{k_{M}^{2}}\right)\]
\[+\frac{1}{\tilde{n}_{g}}\left(c_{\epsilon,1}+c_{\mathrm{mono.}}\frac{k^{2}}{k_{M}^{2}}+\frac{3}{2}c_{\mathrm{quad.}}\left(\mu^{2}-\frac{1}{3}\right)\frac{k^{2}}{k_{M}^{2}}\right). \tag{1}\]

In the above, \(Z_{i}\) are the redshift-space galaxy density kernels (for their exact form, see D'Amico et al. 2020), \(\tilde{n}_{g}\) is the mean galaxy density3, and \(k_{M}^{-1}\) is a normalisation scale4. Overall, the 1-loop EFTofLSS introduces ten nuisance parameters. Four parameters (\(b_{1-4}\)) are introduced in the expansion of the galaxy density and velocity fields in terms of the underlying dark matter field. These parameters are found in the galaxy kernels \(Z_{i}\). It has been noted that \(b_{2}\) and \(b_{4}\) are highly degenerate (D'Amico et al. 2020). It is common to reparameterise such that

Footnote 3: For the analyses of this work we use values of \(4\times 10^{-4}\)\(h^{3}\) Mpc\({}^{-3}\) for the 6dFGS and BOSS samples. For the eBOSS QSO samples we use \(1.5\times 10^{-5}\)\(h^{3}\) Mpc\({}^{-3}\).

Footnote 4: More recent papers that use the PyBird EFTofLSS model have an additional normalisation scale \(k_{R}\). For this work, we neglect \(k_{R}\); as such \(k_{R}=k_{M}\). Throughout we set \(k_{M}=0.7\) Mpc\({}^{-1}\).

\[c_{2} =(b_{2}+b_{4})\ /\sqrt{2}\,\]
\[c_{4} =(b_{2}-b_{4})\ /\sqrt{2}. \tag{2}\]

There are three stochastic parameters (\(c_{\epsilon,1}\), \(c_{\mathrm{mono.}}\), \(c_{\mathrm{quad.}}\)) that are introduced to capture the difference between the actual observed galaxy field and its expected value. Finally, there are three counterterms that encapsulate the impact of UV physics: the effective sound speed of the dark matter field \(c_{ct}\), and \(c_{r,1}\) and \(c_{r,2}\), which control the impact of small scales on redshift-space distortions.
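Because most of these parameters enter Equation 1 linearly, a multipole prediction can be assembled as a dot product between parameter-dependent coefficients and precomputed kernels. A hedged sketch of that structure follows; the coefficient ordering is illustrative and is not PyBird's internal convention. The inverse of the Equation 2 reparameterisation is included for completeness.

```python
import numpy as np

def b2_b4_from_c2_c4(c2, c4):
    """Invert Equation 2: b2 = (c2 + c4)/sqrt(2), b4 = (c2 - c4)/sqrt(2)."""
    return (c2 + c4) / np.sqrt(2.0), (c2 - c4) / np.sqrt(2.0)

def assemble_multipoles(kernels, coeffs):
    """Multipoles as a linear combination of precomputed kernels.

    kernels: array (n_terms, n_k), e.g. the P_{n,l} of Section 3.3,
    tabulated at fixed cosmology. coeffs: array (n_terms,) built from
    the bias parameters and counterterms -- quadratic bias combinations
    such as b1**2, b1*b2, ... multiply the loop kernels, while the
    counterterms multiply the k^2/k_M^2 kernels (ordering illustrative).
    """
    return coeffs @ kernels
```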
### Alcock-Paczynski effect

A reference cosmology is required to measure the galaxy power spectrum from the redshift catalogues provided by surveys like those introduced in Section 2. Any differences between the true underlying cosmology and the reference cosmology lead to distortions of distances parallel and perpendicular to the line of sight. This is the so-called Alcock-Paczynski (AP) effect (Alcock and Paczynski, 1979). The distortion parallel and perpendicular to the line of sight is given by the distortion parameters \(q_{\parallel}\) and \(q_{\perp}\), respectively. These parameters are defined as

\[q_{\parallel} =\frac{H^{\text{ref.}}(z)H(z=0)}{H(z)H^{\text{ref.}}(z=0)}\,\]
\[q_{\perp} =\frac{D_{A}(z)H(z=0)}{D_{A}^{\text{ref.}}(z)H^{\text{ref.}}(z=0)}\, \tag{3}\]

with \(H(z)\) and \(D_{A}(z)\) being the Hubble parameter and angular-diameter distance as a function of redshift, respectively. The superscript ref. in the above equations indicates quantities calculated at the reference cosmology. The AP distortion is applied to the scales and angles as \(k^{\prime}=q_{\perp}^{-1}Bk^{\text{ref.}}\) and \(\mu^{\prime}=F^{-1}B^{-1}\mu^{\text{ref.}}\), with \(F=q_{\parallel}/q_{\perp}\), and \(B\) given by

\[B=\left[1+\left(\mu^{\text{ref.}}\right)^{2}\left(F^{-2}-1\right)\right]^{1/2}. \tag{4}\]

The 2D power spectrum can then be decomposed into multipoles via

\[P_{\ell}(k)=\frac{2\ell+1}{2q_{\parallel}q_{\perp}^{2}}\int_{-1}^{1}P\left(k^{\prime},\mu^{\prime}\right)\,\mathcal{L}_{\ell}\left(\mu^{\text{ref.}}\right)\mathrm{d}\mu^{\text{ref.}}\, \tag{5}\]

with \(\mathcal{L}_{\ell}\) being the \(\ell\)-th order Legendre polynomial. The EFTEMU (and PyBird) make predictions for the power spectrum multipoles rather than the 2D power spectrum. To include the AP effect via Equation 5, we need to reconstruct the 2D power spectrum from the multipoles. We do this via

\[P(k,\mu)=\sum_{\ell=0}P_{\ell}(k)\mathcal{L}_{\ell}(\mu). \tag{6}\]

The EFTEMU (as trained for this work) makes predictions for the first two even multipoles. Reconstructing the 2D power spectrum from only the first two even multipoles will result in systematic errors when including the AP effect via Equation 5. These errors are expected to be small compared to the error associated with the multipole measurements discussed in Section 2. It should be noted that the PyBird mocks introduced in Section 2.1 were constructed including the hexadecapole \(P_{4}(k)\). As such, the mock analyses of Section 4 will test whether these systematic errors from the 2D power spectrum reconstruction impact the inferred cosmology.

Figure 1: _Top_: With points and error bars, the mean of 1049 multipoles measured from the MD-Patchy mocks (Kitaura et al., 2016) for the NGC at \(z=0.61\). The error bars show the \(1\sigma\) error calculated from the 1049 measurements. The solid lines show the PyBird prediction for the Planck 2018 TT,TE,EE+lowE+lensing+BAO \(\Lambda\)CDM best-fit cosmology and the MAP estimate resulting from fitting bias parameters and counterterms to the mean multipoles from the MD-Patchy mocks. The crosses show the multipoles measured from BOSS NGC data, again with \(z=0.61\). _Bottom_: The residual of the mean multipole measurements and the PyBird prediction normalised by the \(1\sigma\) errors reduced by a factor of 10. The colours blue, orange, and green in both panels represent the monopole, quadrupole, and hexadecapole moments, respectively.
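A sketch of the AP pipeline of Equations 3-6, using Gauss-Legendre quadrature for the \(\mu\) integral; the multipole callables `p0` and `p2` (e.g. splines of emulator output) are assumed inputs, and only the monopole and quadrupole are used to rebuild \(P(k,\mu)\), matching the emulator output described below.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def ap_multipoles(k_ref, p0, p2, q_par, q_perp, ells=(0, 2), n_mu=40):
    """Apply the AP distortion (Eqs. 3-6) to the monopole/quadrupole.

    p0, p2: callables P_l(k) (e.g. cubic splines of emulator output).
    Returns the distorted multipoles on the reference k-grid.
    """
    nodes, w = leggauss(n_mu)                   # quadrature over mu_ref
    F = q_par / q_perp
    B = np.sqrt(1.0 + nodes**2 * (F**-2 - 1.0))  # Eq. 4
    k_true = np.outer(k_ref, B) / q_perp         # k'  = q_perp^-1 B k_ref
    mu_true = nodes / (F * B)                    # mu' = F^-1 B^-1 mu_ref
    # Eq. 6: rebuild the 2D spectrum from the first two even multipoles
    L2 = 0.5 * (3.0 * mu_true**2 - 1.0)
    p2d = p0(k_true) + p2(k_true) * L2
    # Eq. 5: project back onto Legendre polynomials in mu_ref
    out = []
    for ell in ells:
        leg = Legendre.basis(ell)(nodes)
        out.append((2 * ell + 1) / (2 * q_par * q_perp**2)
                   * np.sum(w * leg * p2d, axis=1))
    return out
```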
### Emulator

The EFTofLSS model described above (as implemented in PyBird) takes \(\mathcal{O}(1\ \text{s})\) to produce predictions for a given set of cosmological parameters at a given redshift. Although efficient enough for direct use when conducting cosmological inference, this prediction time does prohibit the exploration of analysis setups (such as prior choice, scale cuts, and fixed parameters). If running a typical MCMC using this model requires \(\mathcal{O}(10^{5}\text{--}10^{6})\) model evaluations, then \(\mathcal{O}(\text{days})\) would be required to reach convergence. In Donald-McCann et al. (2022b), the EFTEMU was added to the matryoshka (Donald-McCann et al. 2022a) suite of emulators. The EFTEMU was developed to accelerate EFTofLSS predictions by several orders of magnitude by replacing the direct calculation of the kernels \(P_{n,l}\) of the EFTofLSS model with predictions from simple NNs. The EFTEMU was originally trained with data drawn from a five-dimensional \(\Lambda\)CDM parameter space, approximately centred on the Planck 2018 best-fit cosmology. Despite being wide, this training space is restrictive enough that the posteriors on some of the \(\Lambda\)CDM parameters would be truncated by its boundaries when analysing the large-scale structure data considered for this work. With this in mind, we re-train the EFTEMU for this work. The width of the prior on \(\omega_{c}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\) was increased significantly, and the spectral index \(n_{s}\) was fixed, as we do not expect to get any meaningful constraint on \(n_{s}\) from our analyses. Table 1 compares the prior for the original EFTEMU to that used in this work.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Donald-McCann et al. (2022b) & This Work \\ \hline \(\omega_{c}\) & \(\mathcal{U}(0.101,0.140)\) & \(\mathcal{U}(0.0900,0.160)\) \\ \(\omega_{b}\) & \(\mathcal{U}(0.0210,0.0240)\) & \(\mathcal{U}(0.0200,0.0240)\) \\ \(h\) & \(\mathcal{U}(0.575,0.748)\) & \(\mathcal{U}(0.500,0.850)\) \\ \(\ln\left(10^{10}A_{s}\right)\) & \(\mathcal{U}(2.78,3.32)\) & \(\mathcal{U}(1.50,3.75)\) \\ \(n_{s}\) & \(\mathcal{U}(0.901,1.03)\) & 0.965 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of priors on the cosmological parameters of the EFTEMU from Donald-McCann et al. (2022b) and this work. \(\mathcal{U}(a,b)\) denotes a uniform distribution with boundaries \(a\) and \(b\).

The larger training space required a change in the training procedure compared to that in Donald-McCann et al. (2022b). The increased width of the cosmological prior, particularly for \(\ln\left(10^{10}A_{s}\right)\), increases the dynamic range of the kernels \(P_{n,l}\). The original preprocessing procedure involved rescaling all \(P_{n,l}\) such that at every \(k\)-value their magnitude was in the range \([0,1]\). We modify this procedure by first taking the log of the \(P_{n,l}\) before rescaling into the range \([0,1]\). Figure 2 shows the kernels for the PyBird mocks at \(z=0.61\) for the first three even multipoles on scales \(0.001\leqslant k\leqslant 0.3\ h\ \text{Mpc}^{-1}\). There are 21 kernels for each multipole, and these 21 kernels can be split into three groups. The first group (\(P_{n,l}^{11}\)) contains the linear terms, the second group (\(P_{n,l}^{\text{loop}}\)) contains the loop terms, and the third group (\(P_{n,l}^{\text{ct.}}\)) contains the counterterms. These three groups also represent the grouping used for the EFTEMU; each component of the EFTEMU emulates a different group (see Section 3 of Donald-McCann et al. 2022b). It can be seen from Figure 2 that some of the \(P_{n,l}^{\text{loop}}\) and \(P_{n,l}^{\text{ct.}}\) kernels are exclusively negative or have a zero crossing. To allow us to take the log of these kernels, we include either a simple sign change or the addition of a constant to the kernel preprocessing. Taking the log results in a reduced dynamic range in the training data and leads to higher prediction accuracy. We also significantly increase the number of samples generated for training and testing from 10,000 to 50,000. Only 40,000 are used for training; the remaining 10,000 are used for testing.
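A schematic of this modified preprocessing is given below; the exact sign-flip and offset choices are illustrative rather than the values used in the released emulator.

```python
import numpy as np

def preprocess_kernel(pk):
    """Map a kernel P_{n,l}(k) (shape: n_samples x n_k) into [0, 1].

    Strictly negative kernels are sign-flipped; kernels with a zero
    crossing are offset to be positive before taking the log.
    """
    if np.all(pk < 0):
        pk = -pk                       # simple sign change
    elif np.any(pk <= 0):
        pk = pk - pk.min() + 1e-3      # additive constant (illustrative)
    log_pk = np.log(pk)                # reduce the dynamic range
    lo, hi = log_pk.min(axis=0), log_pk.max(axis=0)
    return (log_pk - lo) / (hi - lo)   # rescale per k-value into [0, 1]
```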
Figure 3 shows the prediction error on the monopole of the power spectrum when producing predictions with the re-trained EFTEMU. Each row shows the prediction error at a different redshift, and each column shows the prediction error computed with different sets of nuisance parameters. The orange shaded regions show the 68% and 95% credible intervals (CIs) of the prediction error as a function of \(k\). The solid coloured lines show the inverse signal-to-noise ratio (SNR) for the monopole measurements considered for this work at their respective redshifts. The shaded regions have been calculated from predictions for 10,000 unseen cosmologies. For the left column, the 10,000 cosmologies have been combined with sets of nuisance parameters that produce "reasonable" predictions for the monopole. We take random draws from a very wide uniform prior5 on the nuisance parameters and calculate the multipoles for each set of cosmological and nuisance parameters. We define "reasonable" predictions as those for which the monopole is strictly positive and which can be said to remain perturbative6. Any sets of parameters that do not meet these criteria are rejected, and the nuisance parameters resampled from the prior. This is repeated until we have nuisance parameters for all 10,000 cosmologies. For the right column, samples from the posterior resulting from full-shape analysis of the 6dFGS-like PyBird mock (see Section 4) are used to inform the nuisance parameters for the unseen cosmologies. For each unseen test cosmology, the posterior sample with the closest cosmology7 is selected, and its nuisance parameters are associated to that test cosmology. The two columns of Figure 3 show two different aspects of the prediction accuracy: the left column represents the prediction accuracy across the entire theoretically viable parameter space, while the right column represents the prediction accuracy for power spectra that look more similar to something that has been previously observed. We can see from the right column that for all redshifts considered and for all \(k<0.25\)\(h\) Mpc\({}^{-1}\), the prediction error from the emulator is less than the error on the data at the 68% level at each respective redshift. However, from the left column, we can see that for \(z=0.38,\ 0.61\), when considering the entire theoretically viable prior space, the prediction error can be greater than the error on the data on small scales (\(k\gtrsim 0.17\)\(h\) Mpc\({}^{-1}\)). In practice, we find that the level of prediction accuracy from the re-trained EFTEMU does not induce any significant bias in the cosmological parameters when performing inference, as shown in Section 4.

Footnote 5: \(0<b_{1}<10\), \(-10<\{b_{2},\ b_{4}\}<10\), \(-500<\{b_{3},\ c_{ct},\ c_{r,1},\ c_{r,2}\}<500\).

Footnote 6: See Appendix A for our perturbative condition.

Footnote 7: The nearest neighbour in the 4D cosmological parameter space, with the Euclidean distance as the distance metric.

## 4 Mock Analyses

In this section, we present the results from a series of analyses of the PyBird mocks (described in Section 2.1). These mock analyses aim to verify that our cosmological inference pipeline does not induce biases in the cosmological parameter constraints. In addition, we explore how various analysis setups impact the results. In all cases, to put constraints on cosmological parameters, we sample from the posterior distribution via _Preconditioned Monte Carlo_ (Karamanis et al., 2022); as implemented in pocoMC8 (Karamanis et al., 2022).
Preconditioned Monte Carlo utilises _Normalising Flows_ (Papamakarios et al., 2021) and _Sequential Monte Carlo_ (Del Moral et al., 2006) to efficiently sample from posterior distributions even when they have a very complex shape. We use a Gaussian likelihood of the form

Footnote 8: Various parameters control the efficiency of the sampling with pocoMC. We use the default values for all of these.

\[\ln\left[\mathcal{L}(P|\theta,\phi)\right]=-\frac{1}{2}(P-\bar{P})^{T}\mathbf{C}^{-1}(P-\bar{P})\, \tag{7}\]

with \(P\) being a concatenation of the multipole measurements considered \(P=[P_{0},P_{2}]\), \(\bar{P}\) being the multipole predictions from the model \(\bar{P}=[\bar{P}_{0},\bar{P}_{2}]\) for a given set of cosmological parameters \(\theta\) and nuisance parameters \(\phi\), and \(\mathbf{C}\) being the covariance matrix. Many of the nuisance parameters of the EFTofLSS model appear linearly as multiplicative factors for the kernels. This allows us to marginalise over these parameters analytically rather than sampling from them. This is standard practice when conducting parameter inference with the EFTofLSS (D'Amico et al., 2020, 2021; Glanville et al., 2022). Carrying out the analytic marginalisation reduces dimensionality and thus leads to a more efficient inference of the cosmological parameters. Although it is more efficient to analytically marginalise the linearly appearing parameters, the prediction speed of the EFTEMU means that fully sampling the parameter space is tractable. We refer to the likelihood with no analytic marginalisation as the "full" likelihood, and we explore the use of both the marginalised and full likelihood in the results below.

Figure 2: Redshift space kernels \(P_{n,l}\) calculated with PyBird for the Planck 2018 TT,TE,EE+lowE+lensing+BAO \(\Lambda\)CDM best-fit cosmology at \(z=0.61\).

### Fiducial Results

We start by presenting results from an analysis with a fiducial setup. For this fiducial setup, we analyse the power spectrum monopole and quadrupole on scales \(0.01<k<0.15\ h\ \text{Mpc}^{-1}\). Figure 3 shows that the nearest neighbour prediction error on these scales is considerably lower than the error associated with the mocks at all redshifts for which the EFTEMU is trained. We fix three out of the ten nuisance parameters to zero, those parameters being \(c_{4}\), \(c_{r,2}\), and \(c_{\text{mono.}}\). These parameters are commonly set to zero in analyses of the monopole and quadrupole with PyBird (D'Amico et al., 2020; Simon et al., 2022). The priors on \(\omega_{c}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\) are those that define the emulator training space (given in Table 1). For \(\omega_{b}\), we use a truncated normal distribution as the prior, with a mean of 0.02235 and a standard deviation of 0.000499. The hard bounds of this prior are given by the emulator training space as with the other cosmological parameters. The priors on the nuisance parameters are given in Table 2. We refer to the prior of Table 2 as the "classic" prior. A majority of the EFTofLSS works cited in this paper use a prior of a similar form. Note that the prior on \(c_{\epsilon,1}\) is defined independent of \(\tilde{n}_{g}\). For \(\tilde{n}_{g}=4\times 10^{-4}h^{3}\ \text{Mpc}^{-3}\) the prior width is 400, which is in line with other works that use the PyBird EFTofLSS model.

Footnote 9: This is motivated by BBN (Cooke et al., 2018); these are the same values as those used in Glanville et al. (2022).
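For reference, the Gaussian likelihood of Equation 7 can be written as the following callable; the `model` interface is an assumption, and the sampler-facing wrapper (e.g. for pocoMC, whose exact interface is version-dependent) is deliberately omitted.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def make_log_like(data, cov, model):
    """Return ln L(P | theta, phi) of Equation 7.

    data: concatenated multipoles [P0, P2]. model: callable mapping
    (theta, phi) -> concatenated theory multipoles (assumed interface).
    """
    factor = cho_factor(cov)  # Cholesky factorisation for stability

    def log_like(theta, phi):
        r = data - model(theta, phi)
        return -0.5 * r @ cho_solve(factor, r)

    return log_like
```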
Figure 4 shows the resulting marginalised 1D and 2D posteriors from the analysis of the PyBird mocks with the fiducial setup and using the full likelihood10. The two contour levels in the off-diagonal panels are \(1\sigma\) and \(2\sigma\), and the grey dashed lines indicate the location of the true values used to generate the mocks. Along with the sampled parameters \(\omega_{c}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\) we also plot the marginalised posterior distributions on two derived parameters: \(\Omega_{m}=(\omega_{c}+\omega_{b})\,h^{-2}\), and \(\tilde{A}=b_{1}^{2}A_{s}10^{8}\). For the purposes of this plot, the derived \(\tilde{A}\) posterior samples have had the truth subtracted, such that the 1D marginalised posterior should peak exactly at zero if unbiased. This normalisation of \(\tilde{A}\) allows us to compare the distributions calculated for each sample as they all have different \(b_{1}\) values. Looking at Figure 4, it is clear that for PyBird mocks with a higher SNR (BOSSz1 and BOSSz3 NGC), the agreement with the truth is very good for all parameters. For PyBird mocks with a lower SNR (6dFGS and eBOSS QSO SGC), we observe some significant shifts from the truth in many of the 1D and 2D projections. A likely cause for these shifts is the _volume effect_ (Carrilho et al., 2022; Simon et al., 2022; Hadzhiyska et al., 2023); these shifts are (at least partially) a result of marginalisation. In previous works, it has been shown that \(\ln\left(10^{10}A_{s}\right)\) is particularly susceptible to volume effects (Carrilho et al., 2022; Simon et al., 2022), and indeed it is the parameter in Figure 4 that shows the most significant observed shift. See Appendix B for more discussion on the volume effect with a toy example.

Figure 3: Prediction error of the re-trained EFTEMU used in this work. The orange shaded regions in each panel show the 68% and 95% credible intervals of the prediction error, respectively. The credible intervals are calculated by examining the prediction error on 10,000 test cosmologies not used for training. The prediction error is defined as the ratio of the EFTEMU prediction to the PyBird prediction for the same set of cosmological and nuisance parameters. The ratio is then normalised such that it is equal to zero for a perfect prediction. Each row represents a different redshift 0.096, 0.38, 0.61, and 1.52 from top to bottom. For the left column, the cosmological parameters are combined with random draws of nuisance parameters from the theoretically viable prior space. For the right column, each test cosmology is combined with a set of nuisance parameters that result in 6dFGS-like predictions. The coloured solid lines show the inverse signal-to-noise ratio on the monopole for the datasets considered for this work. Panels with both blue and green lines represent the NGC and SGC, respectively.

The shifts induced in marginalised posteriors are reduced when the constraining power from the data is higher. Figure 5 shows, with dashed coloured lines, the \(2\sigma\) region of the 2D marginalised posterior distributions on \(b_{1}\) and \(\ln\left(10^{10}A_{s}\right)\) resulting from analysis of the PyBird mocks for various samples with the fiducial setup described above. Also plotted in Figure 5, with coloured shaded regions, is the \(2\sigma\) region of the 2D marginalised posteriors obtained from analysis of the PyBird mocks with covariance matrices rescaled by a factor of 1 / 50.
It can be seen that although there is agreement with the truth (represented with dotted grey lines) at the \(2\sigma\) level in both cases for all the data samples plotted, the agreement is significantly better when the covariance has been rescaled. The posteriors have shrunk and remained consistent with the truth. If the biases observed in Figure 4 were the result of anything other than marginalisation, we would not see this behaviour. We also note from Figure 5 that the shift in posteriors and median values (shown with coloured squares and points) resulting from rescaling the covariance is along a line of constant \(\tilde{A}\) (shown with grey solid lines). This gives a compelling argument for using \(\tilde{A}\) as a diagnostic quantity when assessing whether observed biases in \(\ln\left(10^{10}A_{s}\right)\) are a result of a true systematic bias from the analysis pipeline or a result of volume effects. Finally, we note that rescaling the covariance in this way resolves the observed bias not only in \(\ln\left(10^{10}A_{s}\right)\) but in all parameters shown in Figure 4.

### Exploration of Analysis Setups

The results from the previous section have shown that the analysis pipeline developed for this work can return unbiased constraints on cosmological parameters of interest for a typical EFTofLSS analysis setup. We can exploit the increased prediction speed of the EFTEMU to explore various analysis setups and observe their impact on the constrained cosmology.

#### 4.2.1 Scale Cuts

We start by exploring different scale cuts. It can be seen from the solid coloured lines in Figure 3 that there is clear scale dependence in the inverse SNR for all the data samples considered for this work. There is also a clear scale dependence in the emulator prediction error. As mentioned in Section 3, when analysing LSS data, there is a general expectation that the SNR increases when pushing to smaller scales. However, this is only true if the scales are not dominated by shot noise. If we combine this with a higher modelling error on smaller scales, then although the expectation might be that including smaller scales will improve the constraints, this might not be the case. Figure 6 shows the peak posterior values and 68% CIs of 1D marginalised posteriors (with coloured squares and lines, respectively) on the cosmological parameters \(\Omega_{m}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\) resulting from analysis of the PyBird mocks with \(k_{\rm max.}=0.150,0.175,0.200~{}h~{}\text{Mpc}^{-1}\) and the full likelihood. The results from the analysis of the BOSS-like mocks all show the same general trend: including smaller scales shrinks the 68% CI, reduces the observed bias in the peak posterior value, or both. The results for 6dFGS show a slightly tighter constraint on \(\Omega_{m}\) and \(h\) when including smaller scales, but the constraint on \(\ln\left(10^{10}A_{s}\right)\) remains almost constant. This is likely because the constraint on \(\ln\left(10^{10}A_{s}\right)\) from 6dFGS is completely dominated by volume effects. We can also see that including smaller scales worsens the agreement with the truth for the eBOSS-like mocks; the 68% CI shrinks, the peak posterior shifts away from the truth, or both. As can be seen from Figure 3, the emulator error is always significantly lower than the error associated with the eBOSS-like mocks; thus, the behaviour of the eBOSS-like results is more likely to be a result of the worsening SNR rather than emulator error.
It can also be seen from Figure 3 that the smaller-scale modes have larger errors; thus, including them worsens the volume effect. Table 3 quantifies the level of agreement between the true cosmological parameters of the PyBird mocks and the 1D marginalised posteriors resulting from analysis of these mocks with \(k_{\rm max.}=0.15~{}h~{}\text{Mpc}^{-1}\) and \(k_{\rm max.}=0.2~{}h~{}\text{Mpc}^{-1}\). For the purposes of this paper, we quantify the agreement as the number of \(\sigma\) separating the peak posterior values of two given marginalised distributions. We define the agreement \(N_{\sigma}\) as

\[N_{\sigma}=\frac{|\mu_{0}-\mu_{i}|}{\sqrt{\sigma_{0}^{2}+\sigma_{i}^{2}}}~{}, \tag{8}\]

with \(\mu_{i}\) and \(\sigma_{i}\) being the mean and \(1\sigma\) error calculated from the 1D marginalised posterior, and \(\mu_{0}\) and \(\sigma_{0}\) being the mean and \(1\sigma\) error of the reference (when calculating \(N_{\sigma}\) for the PyBird mocks, \(\sigma_{0}=0\)). In the case of asymmetric distributions, if the residual \(\mu_{0}-\mu_{i}\) is positive, we use the \(1\sigma\) error to the right of the peak posterior. If the residual is negative, we use the \(1\sigma\) error to the left of the peak posterior.
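Equation 8, with the asymmetric-error convention just described, transcribes directly:

```python
import numpy as np

def n_sigma(mu0, sig0, mu_i, sig_i_left, sig_i_right):
    """Agreement N_sigma of Equation 8.

    For asymmetric posteriors, the 1-sigma width on the side of the
    reference value is used. For the PyBird mock truths, pass sig0 = 0.
    """
    sig_i = sig_i_right if (mu0 - mu_i) > 0 else sig_i_left
    return abs(mu0 - mu_i) / np.sqrt(sig0**2 + sig_i**2)
```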
We define the _full model_ as the PyBird EFTofLSS model with all nuisance parameters free, and a _sub-model_ as any model that results from fixing any single nuisance parameter or combination of parameters to zero. The first sub-model we consider (\(\mathcal{M}_{1}\)) is that of the fiducial setup; with \(c_{4}\), \(c_{r,\,2}\), and \(c_{\rm mono}\), all set to zero. Figure 7 shows the natural log of the Bayes factor in (\(B_{i}\)) resulting from analysis of the PyBird mocks with \(k_{\rm max.}=0.2\ h\) Mpc\({}^{-1}\) and the full likelihood. With Figure 4: 1D and 2D marginalised posterior distributions on the cosmological parameters of interest resulting from analysis of the PyBird mocks with the fiducial analysis setup (described in Section 4.1). The two contour levels in the off-diagonal panels represent the \(1\sigma\) and \(2\sigma\) regions, and the grey dashed lines in all panels show the true values of the PyBird mocks. The parameters \(\Omega_{m}\) and \(\bar{A}\) have been derived, whilst the other parameters were sampled (see Section 4.1 for details). In \((B_{i})\) given by \[\ln\left(B_{i}\right)=\ln\mathcal{Z}\left(\mathcal{M}_{i}\right)-\ln\mathcal{Z} \left(\mathcal{M}_{0}\right)\,. \tag{9}\] In the above equation, \(\mathcal{Z}\left(\mathcal{M}_{0}\right)\) is the evidence calculated for the sub-model full model, and \(\mathcal{Z}\left(\mathcal{M}_{i}\right)\) is the evidence calculated for the sub-model being tested. We can see that although \(\ln\left(B_{i}\right)\) is positive for all data samples, indicating that the sub-model is preferred, the preference is weak for all samples apart from the two eBOSS-like samples. The next sub-model we consider (\(\mathcal{M}_{3}\)) is chosen by observing the level constraint beyond the prior for each of the bias parameters and counterterms when analysing the PyBird mocks with \(\mathcal{M}_{0}\) and \(k_{\text{max.}}=0.2\)\(h\) Mpc\({}^{-1}\) and the full likelihood. Figure 8 shows the ratio of the prior standard deviation to the 1D marginalised posterior standard deviation for each bias parameter and counterterm. We can see that the only parameters to have a significant constraint beyond the prior (ratio > 1) are \(b_{1}\), \(c_{2}\), and \(c_{r,1}\). As such, we define sub-model \(\mathcal{M}_{3}\) to be that with \(b_{1}\), \(c_{2}\), and \(c_{r,1}\) as the only free nuisance parameters, and all others fixed to zero. The results of Figure 8 are clearly prior dependent; a reduction in the prior width for \(c_{r,1}\) will result in the ratio in Figure 8 being lower for this parameter. These results represent the case in which we are limited to the classic prior defined in Table 2. We calculate the Bayes factor for each sample in the same way as for sub-model \(\mathcal{M}_{1}\). These Bayes factors are also plotted in Figure 7. We can see that sub-model \(\mathcal{M}_{3}\) is preferred over the full model \(\mathcal{M}_{0}\) at a similar level to \(\mathcal{M}_{1}\) for all the BOSS-like samples and the 6dFGS-like sample. However, the preference for sub-model \(\mathcal{M}_{3}\) over the full model for the eBOSS-like samples is much stronger than sub-model \(\mathcal{M}_{1}\). This stronger preference for the more restrictive sub-model \(\mathcal{M}_{3}\) is likely because of the SNR of the eBOSS-like samples, as discussed in previous sections (shot noise leads to a worse SNR on small scales compared to other samples). 
As the parameters set to zero primarily impact small scales, and the small scales of the eBOSS-like samples are much noisier than those of the other samples, the data provides very little evidence for these parameters. Table 4 shows the same as Table 3 for analyses of the PyBird mocks with sub-model \(\mathcal{M}_{3}\). If we compare the results from the two tables, we can see that, generally, the agreement is of a similar level or better than that from the results obtained with sub-model \(\mathcal{M}_{1}\). For the eBOSS-like mocks, the level of agreement is significantly better, and the evolution with \(k_{\text{max.}}\) is now similar to that of the results from the BOSS-like mocks when considering \(\ln\left(10^{10}A_{s}\right)\). These results show that we can reduce the parameter space significantly without biasing the constrained cosmology and, in some cases, can alleviate biases likely caused by volume effects.

#### 4.2.3 Priors on Nuisance Parameters

The choice of prior for the nuisance parameters can have a significant impact on the constraint on the cosmological parameters (Carrilho et al., 2022; Simon et al., 2022); however, physically motivating priors on these parameters is challenging. The EFTofLSS is a perturbative model, and as such, if the contribution to the model from the loop corrections becomes too large, the model breaks down; this has led to priors on the nuisance parameters restricting values to be \(\mathcal{O}(1)\). In this section, we explore using a Jeffreys prior (Jeffreys, 1998) as an alternative to the zero-centred Gaussian priors commonly used in the literature. We explore the use of a Jeffreys prior because it is non-informative. This is a desirable property, as it means we are not favouring any particular region of the parameter space _a priori_. Hadzhiyska et al. (2023) shows that the use of the Jeffreys prior on nuisance parameters can resolve volume effects like those observed in the results presented in previous sections.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Sample & \multicolumn{2}{c}{\(\Omega_{m}\)} & \multicolumn{2}{c}{\(h\)} & \multicolumn{2}{c}{\(\ln\left(10^{10}A_{s}\right)\)} \\ \hline 6dFGS & **0.41** & 0.42 & 0.09 & **0.04** & **2.19** & 2.51 \\ BOSSz1 NGC & **0.24** & 0.25 & **0.09** & 0.23 & 0.66 & **0.03** \\ BOSSz1 SGC & 0.46 & **0.43** & **0.05** & 0.11 & 1.29 & **0.78** \\ BOSSz3 NGC & 0.22 & **0.02** & **0.06** & 0.1 & 0.63 & **0.07** \\ BOSSz3 SGC & **0.35** & 0.43 & **0.02** & 0.14 & 1.16 & **0.7** \\ eBOSS NGC & **0.25** & 0.39 & 0.12 & **0.04** & 0.78 & **0.72** \\ eBOSS SGC & **0.4** & 0.49 & 0.14 & **0.1** & 1.24 & **0.91** \\ \hline \hline \end{tabular} \end{table} Table 4: Same as Table 3 for analyses with sub-model \(\mathcal{M}_{3}\) defined in Section 4.2.2.

Figure 5: 2D marginalised posterior for \(b_{1}\) and \(\ln\left(10^{10}A_{s}\right)\) resulting from analysis of mocks representing various samples of interest for this work with the fiducial setup described in Section 4.1. The dashed coloured contours represent the \(2\sigma\) region calculated when analysing mocks with covariance representative of their respective datasets. The filled contours represent the \(2\sigma\) region calculated with the covariance rescaled by a factor of 50. The coloured squares show the median values of the posterior obtained from analysis with the standard covariance, and the circles from analysis with the rescaled covariance. The vertical dotted line shows the true \(\ln\left(10^{10}A_{s}\right)\) value of the mock, and the horizontal dashed lines show the true \(b_{1}\) values for each mock. The grey solid lines show lines of constant \(\tilde{A}\) with \(b_{1}\) values equal to the truth from the mocks.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Sample & \multicolumn{2}{c}{\(\Omega_{m}\)} & \multicolumn{2}{c}{\(h\)} & \multicolumn{2}{c}{\(\ln\left(10^{10}A_{s}\right)\)} \\ \hline 6dFGS & **0.36** & 0.46 & 0.14 & **0.11** & **2.23** & 2.71 \\ BOSSz1 NGC & **0.24** & 0.3 & **0.01** & 0.21 & 0.74 & **0.22** \\ BOSSz1 SGC & **0.35** & 0.35 & **0.07** & 0.12 & 1.31 & **0.86** \\ BOSSz3 NGC & 0.16 & **0.05** & **0.03** & 0.14 & 0.61 & **0.19** \\ BOSSz3 SGC & **0.19** & 0.27 & **0.02** & 0.12 & 1.17 & **0.75** \\ eBOSS NGC & **0.94** & 1.07 & **0.51** & 0.52 & **1.06** & 1.09 \\ eBOSS SGC & **1.08** & 1.33 & **0.78** & 0.95 & **1.17** & 1.45 \\ \hline \hline \end{tabular} \end{table} Table 3: Number of sigma between the true cosmology of the PyBird mocks and the 1D marginalised posteriors, resulting from analysis with \(k_{\text{max.}}=0.15\)\(h\) Mpc\({}^{-1}\) and \(k_{\text{max.}}=0.2\)\(h\) Mpc\({}^{-1}\). Left and right columns for each cosmological parameter correspond to \(k_{\text{max.}}=0.15\)\(h\) Mpc\({}^{-1}\) and \(k_{\text{max.}}=0.2\)\(h\) Mpc\({}^{-1}\), respectively. Lower values are indicated with bold font.
The Jeffreys prior is defined as

\[J(\theta)=\sqrt{\left|F(\theta)\right|}\, \tag{10}\]

with \(F(\theta)\) being the Fisher information matrix, which for a Gaussian likelihood with covariance independent of model parameters \(\theta\) can be written as

\[F_{ij}(\theta)=\frac{\partial M(\theta)}{\partial\theta_{i}}C^{-1}\frac{\partial M(\theta)}{\partial\theta_{j}}^{T}. \tag{11}\]

From the equations above, we can see that partial derivatives of the model with respect to the model parameters are needed to evaluate the Jeffreys prior. These partial derivatives are trivial for the nuisance parameters that appear linearly in the model. They are simple sums of the relevant kernels that are predicted by the EFTEMU (or PyBird) for a given set of cosmological parameters. For this work, we only impose the Jeffreys prior on these linearly appearing nuisance parameters. This means that volume effects related to these parameters should be mitigated. However, any volume effects related to marginalisation over the remaining nuisance parameters (\(b_{1}\), \(c_{2}\), and \(c_{4}\)) and the cosmological parameters will still remain. In practice, we impose hard bounds at -100 and 100 on the linear nuisance parameters in addition to the Jeffreys prior when using the Jeffreys prior with the full likelihood, and we impose additional Gaussian priors with \(\sigma=200\) when using the Jeffreys prior with the marginalised likelihood. These additional priors are chosen relatively arbitrarily and are motivated by the practicalities of our inference pipeline11. For the mock analyses presented below, the linearly appearing parameters are constrained well within the additional uniform prior when using the full likelihood. We also test setting \(\sigma=1000\) when using the Jeffreys prior and see no significant difference when comparing to posteriors calculated with \(\sigma=200\).

Footnote 11: pocoMC requires prior samples as starting positions for particles. This means we must define a prior that we can sample from when using the full likelihood, hence the imposition of the hard bounds at -100 and 100.
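For the linearly appearing parameters, the derivatives in Equation 11 are just (sums of) the corresponding kernels, so \(\ln J(\theta)\) reduces to half a log-determinant. A sketch under that assumption, with the kernel stacking following the earlier examples:

```python
import numpy as np

def log_jeffreys(kernel_derivs, inv_cov):
    """ln J(theta) of Eq. 10 for linearly appearing nuisance parameters.

    kernel_derivs: array (n_lin, n_data) whose rows are dM/dphi_i; for
    linear parameters these are sums of the relevant kernels, so the
    prior depends on theta only through the kernels themselves.
    """
    fisher = kernel_derivs @ inv_cov @ kernel_derivs.T  # Eq. 11
    sign, logdet = np.linalg.slogdet(fisher)
    return 0.5 * logdet  # J = sqrt(|F|), so ln J = 0.5 ln det F
```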
Figure 8: Ratio of the prior standard deviation to the posterior standard deviation for the marginalised 1D posteriors resulting from analysis of the BOSSz3 NGC PyBird mock with \(k_{\rm max.}=0.2\ h\ {\rm Mpc}^{-1}\) and the full likelihood. The black solid line indicates unity. Figure 6: Summary of the 1D marginalised posteriors on cosmological parameters of interest resulting from the analyses described in Section 4.2.1. Coloured squares show peak posterior values, dark horizontal coloured lines show the width of the 68% CI, light coloured lines with caps show the 95% CI, and vertical dashed lines show the true values of the mocks. Figure 7: Natural log of the Bayes factor comparing two EFTofLSS submodels, \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\), for each of the datasets considered for this work with a \(k_{\rm max.}=0.2\ h\ {\rm Mpc}^{-1}\). The grey dashed lines indicate two limits of the Jeffreys scale (Jeffreys, 1998); any models with a Bayes factor greater than \(\sim 2.5\) have definite evidence that the sub-model is preferred, and any models with a Bayes factor greater than \(\sim 5\) have very strong evidence that the sub-model is preferred. Figure 9 shows 1D marginalised posteriors for the cosmological parameters obtained from analysis of the PyBird models with sub-models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\) (defined in Section 4.2.2), the Jeffreys prior, and the full likelihood (these setups will henceforth be referred to as JP1 and JP3, respectively). Also plotted are the results obtained with \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\), the classic prior, and the full likelihood (henceforth be referred to as CP1 and CP3, respectively). We start by considering the results obtained with CP1 and JP1. We can see that for all samples, the agreement with the truth is better when using the Jeffreys prior; this is particularly noticeable for \(\ln\left(10^{10}A_{s}\right)\). When using the classic prior, the \(\ln\left(10^{10}A_{s}\right)\) peak posterior values shift significantly depending on the SNR of the sample. When using the Jeffreys prior, these peak posterior values are more consistently located around the true value. We expect consistency when examining the results obtained from analysis of the PyBird models as they are sample variance free. We can visualise the consistency of the results by calculating the agreement between the results obtained from each sample with Equation 8 and plotting this as a matrix in Figure 10. We can see that for the results obtained with the Jeffreys prior, with the exception of 6dFGS, there is good agreement between the results from each other sample; however, the results obtained with the classic prior show some inconsistency. We can quantify the level of consistency by averaging the lower triangle of the matrices in Figure 10. This results in \(0.30\sigma\) and \(0.94\sigma\) for the Jeffreys prior and classic prior, respectively. From Figure 9, we can also see that using the Jeffreys prior results in an increase in the width of the 68% CIs of the marginalised 1D posteriors. This should be expected, as many of the nuisance parameters converge to the prior when using the classic prior. These parameters have some degeneracy with the cosmological parameters; expanding the space that these parameters can explore inevitably leads to some degradation of the constraints on the cosmological parameters. 
If we examine the results obtained with JP3, we see that for the low SNR BOSS-like mocks (BOSSz1 and BOSSz3 SGC), we still have a reduction in bias in the \(\ln\left(10^{10}A_{s}\right)\) constraint whilst at the same time maintaining a CI that is competitive with the classic prior. We note that for the eBOSS-like mocks, although the \(\ln\left(10^{10}A_{s}\right)\) bias is reduced, it is not reduced to the same degree as with JP1. We also note that a greater bias observed in the \(\Omega_{m}\) constraints when using the JP3 compared to the CP3. ### Joint Analyses So far, we have considered each sample individually, which can give interesting insights into how the specifics of each sample (such as redshift and sample selection) impact the results. However, we would ultimately like to analyse multiple samples simultaneously to improve constraining power on the cosmological parameters. To do this we treat each sample as being independent, and as such define the joint likelihood as \(\ln\left[\mathcal{L}_{\rm joint}(\theta|\phi_{\rm joint})\right]=\sum_{i}\ln \left[\mathcal{L}(\theta|\phi_{i})\right]\), with \(\theta\) being the shared cosmological parameters, \(\phi_{\rm joint}\) being the complete set of nuisance parameters \(\phi_{\rm joint}=\left\{\phi_{1},\phi_{2},\ldots,\phi_{n}\right\}\), and \(\ln\left[\mathcal{L}(\theta|\phi_{i})\right]\) being defined in Equation 7. Unless explicitly stated, the joint analyses of mocks and data measurements in this work are done with the marginalised likelihood. We exclusively use the marginalised likelihood for these kinds of analyses as the joint parameter space can become very large when considering multiple samples. The analytic marginalisation keeps the dimensionality low, thus keeping the joint analyses tractable 12. Footnote 12: It is feasible to sample the parameter space for these joint analyses fully. However, we find that the number of particles for the sampler needs to be increased as suggested in the rocoMC documentation; [https://pocomc.readthedocs.io/en/latest/](https://pocomc.readthedocs.io/en/latest/). These extra particles mean extra likelihood evaluations are required for each iteration. This adds to the computational cost for each analysis that is already increased by expanding the dimensionality. Figure 11 shows the posterior distributions resulting from analysis of all the BOSS-like mocks (BOSSz1 NGC, BOSSz1 SGC, BOSSz3 NGC, BOSSz3 SGC) with sub-model \(\mathcal{M}_{1}\), the classic prior, \(k_{\rm max.}=0.2\ h\ {\rm Mpc^{-1}}\), and the marginalised likelihood. We note that biases can be observed in the marginalised posteriors. To verify that our joint inference pipeline does not cause these biases, we also analyse the BOSS-like mocks with the covariance rescaled by a factor of 50. These results are also plotted in Figure 11. We can see that the \(\sim 1\sigma\) shift from the truth when considering \(\ln\left(10^{10}A_{s}\right)\) peak has been completely resolved. It can be seen that there is still a slight shift when considering \(\omega_{c}\) and \(h\). These biases are now more likely a result of the analysis setup, emulator error, or both rather than volume effects. We do not explore this further, as in all projections of the posterior resulting from analysis with the rescaled covariance, the truth is contained within \(1\sigma\). Appendix C compares results obtained with the inference pipeline of this work with those obtained with the pipeline of Zhao et al. (2023). 
Figure 11 summarises the marginalised 1D posteriors for the cosmological parameters of interest resulting from analyses of various combinations of the PyBird mocks, with various analysis setups. All analyses were conducted with \(k_{\rm max.}=0.2\ h\ {\rm Mpc^{-1}}\) and the marginalised likelihood. Results obtained with sub-models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\) and the classic prior (as before referred to as CP1 and CP3) are represented with blue and orange points and lines, respectively. Results obtained with sub-models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\) and the Jeffreys prior (JP1 and JP3) are represented with green and red points and lines, respectively. Much of what can be seen from Figure 11 is in line with that from Figure 11. That being; when limited to the classic prior, the results obtained using CP3 are less biased than those obtained with CP1, and when considering alternative priors, the results obtained with JP1 are less biased compared to those from CP1 and CP3 at the cost of wider error bars, and although JP3 reduces the bias in the \(\ln\left(10^{10}A_{s}\right)\) constraints compared to CP1 these results are more biased than those from CP3 when considering \(\Omega_{m}\). As mentioned above, the 68%CIs are considerably wider when using JP1. This raises the question, is it even worth combing high SNR data with low SNR data if the Jeffreys prior is needed to mitigate against bias? To answer this, we look at the ratio of the 68% CIs resulting from the joint analysis of the BOSSz1 NGC and BOSSz3 NGC with CP1 to the 68% CIs resulting from joint analysis of all the PyBird mocks with JP1. For \(\Omega_{m}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\), this ratio is 0.81, 0.92, and 0.99, respectively. We can see that the use of the Jeffreys prior in JP1 has degraded the constraint in such a way that it is better to simply combine the two samples that have negligible volume effects rather than combine all samples. If we instead look at the ratio of the 68% CIs obtained from analysis of BOSSz1 NGC and BOSSz3 NGC with CP1 to those obtained from the analysis of all the mocks with JP3, it is 1.3, 1.3, and 1.7 for \(\Omega_{m}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\), respectively. In this case, there is a significant benefit from doing the joint analysis of all the samples even if the Jeffreys prior is required. It is important to note that when using JP3, we see a \(\sim 1\sigma\) shift from the truth when considering \(\Omega_{m}\). This is no worse than the bias in \(\Omega_{m}\) seen in the results of the joint analysis of all the PyBird mocks with CP1 but is worse than that from the joint analysis with JP1. ## 5 Main results In this section, we present the main results of this work; constraints on cosmological parameters from analysis of the unified power spectrum multipole measurements discussed in Section 2. We repeat many of the analyses discussed in Section 4, replacing the mock multipoles with those measured from the 6dFGS, BOSS, and eBOSS redshift surveys. ### Individual Constraints We start by presenting the cosmological parameter constraints obtained via analysis of each sample individually. Figure 13 shows the peak posterior values and 68% CIs for the cosmological parameters \(\Omega_{m}\), \(h\), and \(\ln\left(10^{10}A_{s}\right)\) resulting from analysis of the galaxy power spectrum multipole measurements with four different setups. 
The first (shown with blue points and lines) being sub-model \(\mathcal{M}_{1}\) (\(c_{4}\), \(c_{r,2}\), and \(c_{\rm mono.}\) set to zero; see Section 4.2.2) with the classic prior (see Table 2 and Section 4.2.3), the next (shown with green) being sub-model \(\mathcal{M}_{1}\) with the Jeffreys prior described in Section 4.2.3, the third (shown with orange) being sub-model \(\mathcal{M}_{3}\) (all nuisance parameters set to zero except \(b_{1}\), \(c_{2}\), and \(c_{r,1}\)) with the classic prior, and the last being sub-model \(\mathcal{M}_{3}\) with the Jeffreys prior. We refer to these four setups as CP1, JP1, CP3, and JP3, respectively. The black points and lines, and grey shaded regions, show the 99% CI of the Planck 2018 \(\Lambda\)CDM TT, TE, EE+low \(\ell\)+lowE+lensing+BAO results13. The results shown in Figure 13 are also summarised in Table D1. Footnote 13: The 99% CIs have been plotted for Planck to make them more visible for comparison. The first thing to note is the strange appearance of the CIs resulting from the analyses of BOSSz1 and BOSSz3 SGC with JP1. The marginalised 1D posteriors on \(\Omega_{m}\) and \(h\) are multimodal in these cases. The second modes of these distributions correspond to chain samples with extreme nuisance parameters. This could indicate a breakdown of the model. Further discussion on these results can be found in Appendix A. With the exception of these results, we see Figure 10: Matrices visualising agreement between constraints on \(\ln\left(10^{10}A_{s}\right)\) resulting from analysis of different datasets with the same setup. For both panels the data was analysed with sub-model \(\mathcal{M}_{1}\) (as defined in Section 4.2.2) with \(k_{\rm max.}=0.15\)\(h\) Mpc, and the colour indicates the magnitude of \(T_{ij}\) (Equation 8). _Left:_ results from analysis using the Jeffreys prior defined in Section 4.2.3. _Right:_ results from analysis with the fiducial prior. Figure 9: Same as Figure 6 but comparing the impact of prior choice rather than varying \(k_{\rm max.}\). The blue and orange lines and squares show results using the classic EFToflSS prior (defined in Table 2) and sub-models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\) (defined in Section 4.2.2) respectively. The green and red lines and squares show the results obtained using the Jeffreys prior (defined in Section 4.2.3) with sub-models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\) respectively. All analyses conducted with \(k_{\rm max.}=0.2\)\(h\) Mpc\({}^{-1}\). good agreement between the results obtained with both sub-models and prior choices. Each given sample and parameter has agreement within \(1\sigma\) for all analysis setups. However, we do note that although \(<1\sigma\), there are more differences between the analysis setups when considering \(\ln\left(10^{10}A_{s}\right)\). Table 5 quantifies the average level of agreement14 between the results presented in Figure 13 and the Planck 2018 results. When we compare the level of agreement between the results obtained with CP1 and CP3 and the Planck 2018 results, we find that they are similar for both setups. For \(\Omega_{m}\) and \(h\) there is very little difference between the results obtained with the two setups for a majority of the Figure 11: Same as Figure 4 for posteriors resulting from the joint analyses of the PyBird mocks (discussed in Section 4.3). Blue lines represent results from analysis of all BOSS-like PyBird mocks, orange represents the results from analysis of the BOSS-like mocks rescaled by a factor of 50. 
Both analyses done with sub-model \(\mathcal{M}_{1}\) (see Section 4.2.2), the classic style prior (see Table 2), and the marginalised likelihood. samples. When considering the results from the eBOSS samples, we see more differences in the \(\Omega_{m}\) and \(h\) constraints when comparing the two setups. However, as there is a shift from an \(\Omega_{m}\) that is lower than that from Planck 2018 to one that is higher, the average level of agreement does not change significantly. As mentioned above, the differences between the results obtained with CP1 and CP3 are clearer when considering \(\ln\left(10^{10}A_{s}\right)\). For a majority of the samples, there is a shift in the peak posterior \(\ln\left(10^{10}A_{s}\right)\) value towards the Planck result. This is combined with an \(\sim 10\%\) reduction in the width of the 68% CIs. However, Table 5 shows Figure 12: Same as Figure 9 for posteriors resulting from the joint analyses of the PyBmap mocks (discussed in Section 4.3) with various analysis setups. Blue and orange represent results from analyses with sub-models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{3}\), and the classic style prior, respectively. Green and red show results with the Jeffreys prior defined in Section 4.2.3, \(k_{\mathrm{max.}}=0.2\)\(h\) Mpc\({}^{-1}\) and the marginalised likelihood. Figure 13: Same as Figure 9 for posteriors resulting from analysis of the multipole measurements described in Section 2. Grey shaded regions and black points and lines show the peak posterior and 99% CI from the Planck 2018 \(\Lambda\)CDM results. The 99% CI has been plotted rather than 68% (as for the other results) to aid comparison, as the Planck 68% is much smaller than all others. a similar level of agreement when using CP1 and CP3. This is because of the results from the eBOSS QSO NGC analysis. We can see from Figure 13 that the \(\ln\left(10^{10}A_{s}\right)\) posterior when using CP1 is higher than that from Planck 2018. When using CP3, this shifts to even higher values. If we exclude this result, the average level of agreement in the \(\ln\left(10^{10}A_{s}\right)\) constraints between the results obtained with CP1 and CP3 and the Planck results is now 1.06 and 0.860, respectively. We now consider the average level of agreement between the results obtained with JP1 and JP3 and the Planck 2018 results. We see that there is better agreement with the Planck 2018 \(\ln\left(10^{10}A_{s}\right)\) constraint when compared to results obtained with CP1 for a majority of samples. As discussed in previous sections, using J1 widens the 68% CIs. Some of the improvement in agreement with the Planck results will be because of this. However, the shifts in the peak posterior values towards the Planck 2018 results that can be seen in Figure 13 will also result in better agreement. These results show that reducing model complexity going from CP1 to CP3 does not induce any statistically significant bias when considering analyses of the same sample with the two sub-models. They also show that using the reduced sub-model results in \(\ln\left(10^{10}A_{s}\right)\) peak posterior values that are closer to that from Planck 2018 for the majority of data samples. We can also see that using the Jeffreys prior allows us to obtain results consistent with those obtained with the classic prior whilst being more agnostic to the form of the nuisance parameter prior. The Jeffreys prior can also increase the level of agreement with CMB results for \(\ln\left(10^{10}A_{s}\right)\). 
However, this can come with the possible probing of unphysical regions of the parameter space. ### Joint Constraints Figure 14 is the same as Figure 13 but summarises the 1D marginalised posterior distributions on the cosmological parameters of interest resulting from joint analyses of the unified multipole measurements. These results are also summarised in D2. Also plotted are the Planck 2018 results and relevant results from Carrilho et al. (2022); Simon et al. (2022); Glanville et al. (2022). These works use the EFTofLSS to constrain \(\Lambda\)CDM parameters from analysis of galaxy power spectrum multipoles measured from different datasets. Simon et al. (2022) uses PyBird to analyse the same eBOSS QSO multipoles used for this work. As with sub-model \(\mathcal{M}_{1}\) of this work \(c_{4}\), \(c_{r,2}\), and \(c_{\text{mono}}\), are fixed to zero. Glanville et al. (2022) uses PyBird to perform joint analysis of 6dfGS, BOSS, eBOSS QSO multipole measurements. The BOSS samples used in Glanville et al. (2022) are slightly different from those used in this work; we refer the reader to Table 1 in Glanville et al. (2022) for details. The analysis of Glanville et al. (2022) also differs in that the hexadecapole \(P_{4}(k)\) is included in the data vector in addition to \(P_{0}(k)\) and \(P_{2}(k)\). Additionally, fewer nuisance parameters are fixed to zero than in either of the sub-models of this work. Glanville et al. (2022) only fixes \(c_{4}\) to zero. Carrilho et al. (2022) uses an independent modelling pipeline for the EFTofLSS to analyse BOSS multipole measurements. Again the BOSS measurements used in Carrilho et al. (2022) are slightly different from those used in this work; we refer the reader to Section 2.1 of Carrilho et al. (2022) for details. The Carrilho et al. (2022) analysis also differs in that \(n_{s}\) is free, and the data vector includes \(P_{4}(k)\). The form of the nuisance parameters in the Carrilho et al. (2022) pipeline differs from that of this work (for details, see Section 2.2.3 of Carrilho et al. 2022). None of these nuisance parameters are fixed in the Carrilho et al. (2022) analysis. Each of the three joint analysis results presented in Figure 14 approximates one of the EFTofLSS works above in the sense that the same kind of data is used. The eBOSS analysis is comparable to Simon et al. (2022), the BOSS analysis is comparable to Carrilho et al. (2022), and the ALL analysis is comparable to Glanville et al. (2022). Table 6 quantifies the level of agreement between the results of the joint analyses presented in Figure 14 and the EFTofLSS literature results and the Planck 2018 \(\Lambda\)CDM results. We first note the good agreement between the results of each joint analysis with their respective EFTofLSS literature results. With the exception of the constraint on \(h\) from the ALL analysis, we see the results of this work agree with the literature results within \(\lesssim 1\sigma\). The results of the joint eBOSS analysis show a more significant dependence on the analysis setup for all parameters compared to the BOSS and ALL analyses. Unlike with the analyses of the PyBird mocks, it is more difficult to determine if these shifts in the results are because of volume effects (resulting from a given analysis setup), sample variance, or errors in the modelling. From the mock analysis results presented in Figure 12, we see a slight shift towards the truth when using CP3. From Figure 14, we see that using CP3 shifts the results towards those of Simon et al. (2022). 
However, this shift is away from the other EFTofLSS literature results and the Planck 2018 results. If we look again at Figure 12, we see that using JP1 shifts the \(\ln\left(10^{10}A_{s}\right)\) results even closer to the truth. Comparing to the equivalent result in Figure 14, we see that using JP1 shifts the \(\ln\left(10^{10}A_{s}\right)\) back toward the results obtained with CP1. We note that the \(\tilde{A}\) posteriors obtained with both sub-models agree very well with each other and with those from Simon et al. (2022). The linear bias values obtained with CP1 are \(2.4\pm 0.3\) and \(2.3\pm 0.3\) for the NGC and SGC, respectively. This aligns with the linear bias obtained via analysis of the eBOSS QSO samples with non-EFTofLSS models (Hou et al., 2020). These linear bias values are significantly lower when using sub-model CP3 at \(2.1\pm 0.2\) for both the NGC and SGC. The results of the BOSS and ALL analyses show less dramatic shifts in the parameters compared to those from the eBOSS analysis and behave more like the results of the mock analyses. We see that for \(\Omega_{m}\) and \(h\), there is very little difference between the analysis setups. There is slightly better agreement with the EFTofLSS literature results and Planck 2018 for these parameters when using JP1 for both the BOSS and the ALL analysis. This results from \begin{table} \begin{tabular}{c c c c} \hline \hline Comparison & \(\Omega_{m}\) & \(h\) & \(\ln\left(10^{10}A_{s}\right)\) \\ \hline & 0.676 & 0.499 & 0.993 \\ Planck 2018 & 0.685 & 0.504 & 0.996 \\ & 0.636 & 0.687 & 0.629 \\ & 0.722 & 0.506 & 0.756 \\ \hline \hline \end{tabular} \end{table} Table 5: The average level of agreement between the 1D marginalised scatteriors resulting from the analyses (described in Section 5.1) of the unified multipole measurements and the Planck 2018 results. Each row corresponds to results obtained with different analysis setups. From top to bottom, those are: sub-model \(\mathcal{M}_{1}\) with the classic prior, sub-model \(\mathcal{M}_{3}\) with the Zelffeys prior, and model \(\mathcal{M}_{3}\) with the Jeffreys prior. the increased width of the 68% CI in addition to a slight shift in the peak posterior values. The width of the 68% and 95% CIs appear wider from the Carrillo et al. (2022) results than those from this work. This is most likely a result of the differences in the analysis setup mentioned above; for example, allowing \(n_{s}\) to vary. Glanville et al. (2022) shows an increase in the CIs of all relevant cosmological parameters when including \(n_{s}\) as a free parameter. When we examine \(\ln\left(10^{10}A_{s}\right)\), we observe that the BOSS and ALL joint analyses with CP1 display a level of agreement with the Planck 2018 results that is at the \(\sim 2.5\sigma\) level. For the results from CP3, this is at the \(\sim 2.4\sigma\) and \(\sim 1.9\sigma\) levels for the BOSS and ALL analyses, respectively. The results from JP1 improve the level of agreement with the Planck 2018 results for both the BOSS and ALL joint analyses to \(<1\sigma\). The results from JP3 also show improved agreement with the Planck 2018 results. However, this is still \(>1\sigma\) for the results from the BOSS analysis. Although the peak posterior agrees with that of the JP1 analysis, the 68% CI is tighter and results in a \(>1\sigma\) difference. 
## 6 Conclusions We have presented results from multiple cosmological inference analyses of mock galaxy power spectrum multipoles designed to determine how choices about the analysis setup impact the inferred cosmological parameters. To minimise the computational cost of these mock analyses, we use the neural-network-based EFTEMU to predict the power spectrum multiples. The training procedure of the EFTEMU has been improved beyond that in Donald-McCann et al. (2022) to allow for accurate predictions to be made on a much larger cosmological prior space. The main \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Sample & \(\Omega_{m}\) & \multicolumn{2}{c}{\(h\)} & \multicolumn{2}{c}{\(\ln\left(10^{10}A_{s}\right)\)} \\ \hline eBOSS & 0.336 & 0.372 & 0.259 & 0.293 & 0.841 & 0.103 \\ & 0.234 & 0.412 & 0.077 & 0.544 & 0.183 & 1.024 \\ & 0.265 & 0.43 & 0.346 & 0.257 & 0.57 & 0.269 \\ & 0.336 & 0.356 & 0.515 & 0.181 & 0.009 & 1.501 \\ \hline & 0.464 & 1.216 & 0.333 & 1.066 & 0.189 & 2.523 \\ & 0.471 & 1.249 & 0.404 & 1.086 & 0.081 & 2.375 \\ & 0.252 & 0.768 & 0.133 & 0.634 & 0.265 & 0.996 \\ & 0.52 & 1.258 & 0.258 & 0.959 & 0.355 & 1.664 \\ \hline & 0.361 & 1.349 & 1.199 & 0.944 & 0.544 & 2.412 \\ & 0.346 & 1.297 & 1.179 & 0.922 & 0.134 & 1.908 \\ & 0.368 & 0.374 & 0.827 & 0.454 & 0.58 & 0.539 \\ & 0.537 & 1.467 & 1.135 & 0.861 & 0.626 & 0.695 \\ \hline \hline \end{tabular} \end{table} Table 6: The level of agreement between the marginalised 1D posteriors on the cosmological parameters of interest resulting from the analyses described in Section 5.2, and the Planck 2018 results and appropriate EFTofLSS literature results. For each sample, there are four rows; these correspond to results with different analysis setups. From top to bottom, they are sub-model \(\mathcal{M}_{1}\) with the classic prior, sub-model \(\mathcal{M}_{3}\) with the Jeffreys prior, and sub-model \(\mathcal{M}_{3}\) with the Jeffreys prior. Each cosmological parameter has two columns. The left column of each corresponds to the comparison with the appropriate EFTofLSS literature results: for ALL this is Glanville et al. (2022), Carrillo et al. (2022) for BOSS, Simon et al. (2022) for eBOSS. The right column shows the comparison with Planck 2018. Figure 14: Same as Figure 12 for the analyses of the mock multipole measurements discussed in Section 2. In addition to the results from this work, the Planck 2018 results and results from Carrillo et al. (2022); Simon et al. (2022); Glanville et al. (2022) are plotted for comparison. As with Figure 13 the 99% CI of the Planck results have been plotted to aid comparison. All results from this work were obtained with \(k_{\text{max}}=0.2~{}h~{}\text{Mpc}^{-1}\) and the marginalised likelihood.
2306.16889
On some series involving the binomial coefficients $\binom{3n}{n}$
Using a simple transformation, we obtain much simpler forms for some series involving binomial coefficients $\binom{3n}n$ derived by Necdet Batir. New evaluations are given; and connections with Fibonacci numbers and the golden ratio are established. Finally, we derive some Fibonacci and Lucas series involving the reciprocals of $\binom{3n}{n}$.
Kunle Adegoke, Robert Frontczak, Taras Goy
2023-06-29T12:22:12Z
http://arxiv.org/abs/2306.16889v2
# On Some Series Involving the Binomial Coefficients \(\binom{3n}{n}\) ###### Abstract Using a simple transformation, we obtain much simpler forms for some series involving binomial coefficients \(\binom{3n}{n}\) derived by Necdet Batir. New evaluations are given; and connections with Fibonacci numbers and the golden ratio are established. Finally, we derive some Fibonacci and Lucas series involving the reciprocals of \(\binom{3n}{n}\). _2020 Mathematics Subject Classification_: 40A05, 11B65, 11B39. _Keywords_: series; binomial coefficient; binomial sum; Fibonacci number; Lucas number. Introduction In an article published in the year 2005, Batir [1], inspired by the results of Lehmer [6], studied the series \(\sum\limits_{k=1}^{\infty}\frac{z^{k}}{k^{n}\binom{3n}{n}}\), giving particular attention to the special cases \(n\in\mathbb{N}\cup\{0\}\), for which he derived explicit closed formulas. He obtained many interesting formulas by evaluating the closed forms at appropriate arguments. Some of his results had earlier been obtained experimentally by Borwein and Girgensohn [2]. In [4], D'Aurizio and Di Trani studied this kind of series using hypergeometric functions. In the recent paper [3], Chu evaluated many series having the form \(\sum\limits_{k=1}^{\infty}\frac{z^{k}}{k^{a+1}\binom{3n+b}{n}}\), where \(a\in\{0;\pm 1;\pm 2\}\) and \(b\in\{0;1;\pm 2\}\). The purpose of this note is to derive equivalent but much simpler expressions for the special cases and thereby obtain new evaluations. Batir [1, Identity (3.1)] showed, for \(|z|\leq 27/4\), that \[\sum\limits_{k=1}^{\infty}\frac{z^{k}}{k^{2}\binom{3k}{k}}=6\arctan^{2}\! \left(\frac{\sqrt{3}}{2\phi(z)-1}\right)-\frac{1}{2}\log^{2}\!\left(\frac{ \phi^{3}(z)+1}{\left(\phi(z)+1\right)^{3}}\right)\!, \tag{1.1}\] where \[\phi(z)=\sqrt[3]{\frac{27-2z+3\sqrt{81-12z}}{2z}}. 
\tag{1.2}\] At \(z=27/4\), \(z=20/3\), \(z=77/12\), \(z=6\), \(z=65/12\), \(z=14/3\), \(z=15/4\), \(z=8/3\), and \(z=17/12\) (then the expression \(81-12z\) in (1.2) will be a perfect square), from (1.1) we immediately obtain, respectively, such series: \[\sum\limits_{k=1}^{\infty}\frac{\left(\frac{27}{4}\right)^{k}}{k ^{2}\binom{3k}{k}} =\frac{2\pi^{2}}{3}-2\log^{2}2, \tag{1.3}\] \[\sum\limits_{k=1}^{\infty}\frac{\left(\frac{20}{3}\right)^{k}}{k ^{2}\binom{3k}{k}} =6\arctan^{2}\!\left(\frac{\sqrt{3}}{\sqrt[3]{10}-1}\right)-\frac {1}{2}\log^{2}\!\left(\frac{18}{\left(\sqrt[3]{10}+2\right)^{3}}\right)\!,\] \[\sum\limits_{k=1}^{\infty}\frac{\left(\frac{77}{12}\right)^{k}}{ k^{2}\binom{3k}{k}} =6\arctan^{2}\!\left(\frac{7\sqrt{3}}{2\sqrt[3]{539}-7}\right)-\frac {1}{2}\log^{2}\!\left(\frac{882}{\left(\sqrt[3]{539}+7\right)^{3}}\right)\!,\] \[\sum\limits_{k=1}^{\infty}\frac{6^{k}}{k^{2}\binom{3k}{k}} =6\arctan^{2}\!\left(\frac{\sqrt{3}}{2\sqrt[3]{2}-1}\right)-\frac {1}{2}\log^{2}\!\left(\frac{3}{\left(\sqrt[3]{2}+1\right)^{3}}\right)\!,\] (1.4) \[\sum\limits_{k=1}^{\infty}\frac{\left(\frac{65}{12}\right)^{k}}{ k^{2}\binom{3k}{k}} =6\arctan^{2}\!\left(\frac{5\sqrt{3}}{2\sqrt[3]{325}-5}\right)-\frac {1}{2}\log^{2}\!\left(\frac{450}{\left(\sqrt[3]{325}+5\right)^{3}}\right)\!,\] \[\sum\limits_{k=1}^{\infty}\frac{\left(\frac{14}{3}\right)^{k}}{ k^{2}\binom{3k}{k}} =6\arctan^{2}\!\left(\frac{\sqrt{3}}{\sqrt[3]{28}-1}\right)-\frac {1}{2}\log^{2}\!\left(\frac{36}{\left(\sqrt[3]{28}+2\right)^{3}}\right)\!,\] \[\sum\limits_{k=1}^{\infty}\frac{\left(\frac{15}{4}\right)^{k}}{ k^{2}\binom{3k}{k}} =6\arctan^{2}\!\left(\frac{\sqrt{3}}{2\sqrt[3]{5}-1}\right)-\frac {1}{2}\log^{2}\!\left(\frac{6}{\left(\sqrt[3]{5}+1\right)^{3}}\right)\!,\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{8}{3}\right)^{k}}{k^{2}{3k\choose k }}=\frac{\pi^{2}}{6}-\frac{\log^{2}3}{2}, \tag{1.5}\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{17}{12}\right)^{k}}{k^{2}{3k \choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{17}-1}\biggr{)}- \frac{1}{2}\log^{2}\biggl{(}\frac{18}{\left(\sqrt[3]{17}+1\right)^{3}}\biggr{)}.\] Series (1.3) and (1.4) one can find in [1, Identities (3.4) and (3.5)]. Series (1.5) was obtained by D'Aurizio and Di Trani using the hypergeometric function \({}_{4}F_{3}\)[4, Formula (8)]. 
Similarly, we find the corresponding alternating series: \[\sum_{k=1}^{\infty}\frac{\left(-\frac{27}{4}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{3+2\sqrt{2}}+1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{2+2\sqrt{2}}{\left(\sqrt[3]{3+2 \sqrt{2}}-1\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{20}{3}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{40}}{2\sqrt[3]{121+9 \sqrt{161}}+\sqrt[3]{40}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{81+9\sqrt {161}}{\left(\sqrt[3]{121+9\sqrt{161}}-\sqrt[3]{40}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{77}{12}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{77}}{2\sqrt[3]{239+36 \sqrt{158}}+\sqrt[3]{77}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{162+18 \sqrt{158}}{\left(\sqrt[3]{239+18\sqrt{158}}-\sqrt[3]{77}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-6\right)^{k}}{k^{2}{3k\choose k}} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{\sqrt[3]{26+6\sqrt{17}}+1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{9+3\sqrt{17}}{\left(\sqrt[3]{13+ 3\sqrt{17}}-\sqrt[3]{4}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{65}{12}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{65}}{2\sqrt[3]{227+18 \sqrt{146}}+\sqrt[3]{65}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{162+18 \sqrt{146}}{\left(\sqrt[3]{227+18\sqrt{146}}-\sqrt[3]{65}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{14}{3}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{7}}{\sqrt[3]{218 +18\sqrt{137}}+\sqrt[3]{7}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{81+9 \sqrt{137}}{\left(\sqrt[3]{109+9\sqrt{137}}-\sqrt[3]{28}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{15}{4}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{5}}{2\sqrt[3]{23+6 \sqrt{14}}+\sqrt[3]{5}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{18+6\sqrt{1 4}}{\left(\sqrt[3]{23+6\sqrt{14}}-\sqrt[3]{5}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{8}{3}\right)^{k}}{k^{2}{3k \choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{2}}{\sqrt[3]{97+9 \sqrt{113}}+\sqrt[3]{2}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{81+9\sqrt {113}}{\left(\sqrt[3]{97+9\sqrt{113}}-\sqrt[3]{16}\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{17}{12}\right)^{k}}{k^{2}{3 k\choose k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{17}}{2\sqrt[3]{179+12 6\sqrt{2}}+\sqrt[3]{17}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{162+126 \sqrt{2}}{\left(\sqrt[3]{179+126\sqrt{2}}-\sqrt[3]{17}\right)^{3}}\biggr{)}.\] The substitution \(z=\frac{27xy}{(x+y)^{2}}\) in (1.1) reduces function \(\phi(z(x,y))\) to \(\sqrt[3]{x/y}\), thereby yielding the following more manageable identity: \[\sum_{k=1}^{\infty}\frac{(27xy)^{k}}{k^{2}(x+y)^{2k}{3k\choose k}}=6\arctan^{2} \biggl{(}\frac{\sqrt{3}\sqrt[3]{y}}{2\sqrt[3]{x}-\sqrt[3]{y}}\biggr{)}-\frac{1}{ 2}\log^{2}\biggl{(}\frac{x+y}{(\sqrt[3]{x}+\sqrt[3]{y})^{3}}\biggr{)},\] (A) which is valid for \(x/y\geq 1\) or \(x/y\leq-(\sqrt{2}+1)^{2}=-\cot^{2}(\pi/8)\). 
Differentiating twice identity (A) with respect to \(x\), we obtain, for \(x/y>1\) or \(x/y\leq-(\sqrt{2}+1)^{2}\), the following identities: \[\begin{split}&\sum_{k=1}^{\infty}\frac{(27xy)^{k}}{k(x+y)^{2k}{3k \choose k}}\\ &\quad=\frac{\sqrt[3]{xy}}{x-y}\left(2\sqrt{3}\big{(}\sqrt[3]{x} +\sqrt[3]{y}\big{)}\arctan\!\left(\!\frac{\sqrt{3}\sqrt[3]{y}}{2\sqrt[3]{x}- \sqrt[3]{y}}\right)+\big{(}\sqrt[3]{x}-\sqrt[3]{y}\big{)}\log\!\left(\frac{x+y }{\big{(}\sqrt[3]{x}+\sqrt[3]{y}\big{)}^{3}}\right)\right)\end{split}\] (B) and \[\begin{split}&\sum_{k=1}^{\infty}\frac{(27xy)^{k}}{(x+y)^{2k}{3k \choose k}}=\frac{4xy}{(x-y)^{2}}\\ &\quad+\frac{\sqrt[3]{xy}}{3}\frac{x+y}{(x-y)^{3}}\left(2\sqrt{3 }\left(2\sqrt[3]{xy}\big{(}\sqrt[3]{x^{2}}+\sqrt[3]{y^{2}}\big{)}+\sqrt[3]{x^{ 4}}+\sqrt[3]{y^{4}}\right)\arctan\!\left(\!\frac{\sqrt{3}\sqrt[3]{y}}{2\sqrt[3]{ x}-\sqrt[3]{y}}\right)\right.\\ &\quad\left.-\left(2\sqrt[3]{xy}\big{(}\sqrt[3]{x^{2}}-\sqrt[3]{ y^{2}}\big{)}-\sqrt[3]{x^{4}}+\sqrt[3]{y^{4}}\right)\log\!\left(\frac{x+y}{ \big{(}\sqrt[3]{x}+\sqrt[3]{y}\big{)}^{3}}\right)\right).\end{split}\] (C) Setting \((x,y)=(8,1)\), \((x,y)=(8,-1)\), \((x,y)=(8,1/8)\), \((x,y)=(8,-1/8)\), \((x,y)=(1,1/27)\) and \((x,y)=(1,-1/27)\) in (A), (B), and (C) we have the following series list: \[\begin{split}&\sum_{k=1}^{\infty}\frac{\big{(}\frac{8}{3}\big{)}^{k}}{k {3k\choose k}}=\frac{2\sqrt{3}\,\pi}{7}-\frac{2}{7}\log 3,\\ &\sum_{k=1}^{\infty}\frac{\big{(}\frac{8}{3}\big{)}^{k}}{3k \choose k}=\frac{32}{49}+\frac{74\sqrt{3}\,\pi}{343}-\frac{18}{343}\log 3,\\ &\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{6\sqrt[6]{7}}{7} \big{)}^{2k}}{k^{2}{3k\choose k}}=6\arctan^{2}\!\left(\frac{\sqrt{3}}{5} \right)-\frac{1}{2}\log^{2}7,\\ &\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{6\sqrt[6]{7}}{7} \big{)}^{2k}}{k{3k\choose k}}=\frac{4\sqrt{3}}{9}\arctan\!\left(\frac{\sqrt{3}} {5}\right)-\frac{2}{3}\log 7,\\ &\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{6\sqrt[6]{7}}{7} \big{)}^{2k}}{{3k\choose k}}=-\frac{32}{81}-\frac{28\sqrt{3}}{729}\arctan\! \left(\frac{\sqrt{3}}{5}\right)-\frac{14}{81}\log 7,\\ &\sum_{k=1}^{\infty}\frac{\big{(}\frac{24\sqrt{3}}{65}\big{)}^{2k }}{k^{2}{3k\choose k}}=6\arctan^{2}\!\left(\frac{\sqrt{3}}{7}\right)-\frac{1} {2}\log^{2}\!\left(\frac{25}{13}\right)\!,\\ &\sum_{k=1}^{\infty}\frac{\big{(}\frac{24\sqrt{3}}{65}\big{)}^{2k }}{k^{2}{3k\choose k}}=\frac{40\sqrt{3}}{63}\arctan\!\left(\frac{\sqrt{3}}{7} \right)-\frac{4}{21}\log\!\left(\frac{25}{13}\right)\!,\\ &\sum_{k=1}^{\infty}\frac{\big{(}\frac{24\sqrt{3}}{65}\big{)}^{2k }}{{3k\choose k}}=\frac{256}{3969}+\frac{68120\sqrt{3}}{250047}\arctan\! 
\left(\frac{\sqrt{3}}{7}\right)-\frac{1300}{27783}\log\!\left(\frac{25}{13} \right)\!,\end{split}\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{8\sqrt{3}}{21}\big{)} ^{2k}}{k^{2}\binom{3k}{k}} =6\arctan^{2}\Big{(}\frac{\sqrt{3}}{9}\Big{)}-\frac{1}{2}\log^{2} \Big{(}\frac{7}{3}\Big{)},\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{8\sqrt{3}}{21} \big{)}^{2k}}{k^{2}\binom{3k}{k}} =6\arctan^{2}\Big{(}\frac{\sqrt{3}}{9}\Big{)}-\frac{1}{2}\log^{2} \Big{(}\frac{7}{3}\Big{)},\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{8\sqrt{3}}{21} \big{)}^{2k}}{k^{2}\binom{3k}{k}} =-\frac{256}{4225}+\frac{20328\sqrt{3}}{274625}\arctan\Big{(}\frac {\sqrt{3}}{9}\Big{)}-\frac{252}{2197}\log\Big{(}\frac{7}{3}\Big{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{28}\big{)}^{2k}}{k^{2} \binom{3k}{k}} =6\arctan^{2}\Big{(}\frac{\sqrt{3}}{5}\Big{)}-\frac{1}{2}\log^{2} \Big{(}\frac{16}{7}\Big{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{28}\big{)}^{2k}}{k^{2} \binom{3k}{k}} =\frac{12\sqrt{3}}{13}\arctan\Big{(}\frac{\sqrt{3}}{5}\Big{)}- \frac{3}{13}\log\Big{(}\frac{16}{7}\Big{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{28}\big{)}^{2k}}{ \binom{3k}{k}} =\frac{27}{169}+\frac{994\sqrt{3}}{2197}\arctan\Big{(}\frac{\sqrt {3}}{5}\Big{)}-\frac{112}{2197}\log\Big{(}\frac{16}{7}\Big{)},\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{27}{26}\big{)}^{2k} }{k^{2}\binom{3k}{k}} =6\arctan^{2}\Big{(}\frac{\sqrt{3}}{7}\Big{)}-\frac{1}{2}\log^{2 }\Big{(}\frac{13}{4}\Big{)},\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{27}{26}\big{)}^{2k} }{k\binom{3k}{k}} =\frac{3\sqrt{3}}{7}\arctan\Big{(}\frac{\sqrt{3}}{7}\Big{)}-\frac {3}{7}\log\Big{(}\frac{13}{4}\Big{)},\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{27}{26}\big{)}^{2k} }{k^{2}\binom{3k}{k}} =-\frac{27}{196}+\frac{143\sqrt{3}}{2744}\arctan\Big{(}\frac{ \sqrt{3}}{7}\Big{)}-\frac{52}{343}\log\Big{(}\frac{13}{4}\Big{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{54\sqrt{2}}{35}\big{)}^{2k} }{k^{2}\binom{3k}{k}} =6\arctan^{2}\Big{(}\frac{\sqrt{3}}{2}\Big{)}-\frac{1}{2}\log^{2 }\Big{(}\frac{25}{7}\Big{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{54\sqrt{2}}{35}\big{)}^{2k} }{k^{2}\binom{3k}{k}} =\frac{60\sqrt{3}}{19}\arctan\Big{(}\frac{\sqrt{3}}{2}\Big{)}- \frac{6}{19}\log\Big{(}\frac{25}{7}\Big{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{54\sqrt{2}}{35}\big{)}^{2k} }{\binom{3k}{k}} =\frac{864}{361}+\frac{35420\sqrt{3}}{6859}\arctan\Big{(}\frac{ \sqrt{3}}{2}\Big{)}-\frac{350}{6859}\log\Big{(}\frac{25}{7}\Big{)},\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{54\sqrt{2}}{19} \big{)}^{2k}}{k^{2}\binom{3k}{k}} =6\arctan^{2}\Big{(}\frac{\sqrt{3}}{4}\Big{)}-\frac{1}{2}\log^{2 }19,\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{54\sqrt{2}}{19} \big{)}^{2k}}{k\binom{3k}{k}} =\frac{12\sqrt{3}}{35}\arctan\Big{(}\frac{\sqrt{3}}{4}\Big{)}- \frac{6}{7}\log 19,\] \[\sum_{k=1}^{\infty}(-1)^{k}\frac{\big{(}\frac{54\sqrt{2}}{19} \big{)}^{2k}}{\binom{3k}{k}} =-\frac{864}{1225}-\frac{4484\sqrt{3}}{42875}\arctan\Big{(}\frac{ \sqrt{3}}{4}\Big{)}-\frac{38}{343}\log 19.\] The first two series from this list can be found in [3, Corollaries 2.3 and 3.3]. ## 2 Evaluations at selected arguments In this section we will evaluate identities (A), (B) and (C) at carefully selected values of \(x\) and \(y\). Some of the resulting summation identities will involve Fibonacci and Lucas numbers in the summand and possibly in the evaluations. 
Let \(F_{n}\) and \(L_{n}\) denote the \(n\)-th Fibonacci and Lucas numbers, both satisfying the recurrence relation \[X_{n}=X_{n-1}+X_{n-2},\quad n\geq 2,\] but with the initial conditions \(F_{0}=0\), \(F_{1}=1\) and \(L_{0}=2\), \(L_{1}=1\). Extending these numbers to negative subscripts gives \[F_{-j}=(-1)^{j-1}F_{j},\quad L_{-j}=(-1)^{j}L_{j}.\] Throughout the paper, we denote the golden ratio \(\alpha=(1+\sqrt{5})/2\) and write \(\beta=(1-\sqrt{5})/2\), so that \(\alpha\beta=-1\) and \(\alpha+\beta=1\). For any integer \(j\), the explicit formulas (Binet formulas) for Fibonacci and Lucas numbers are \[F_{j}=\frac{\alpha^{j}-\beta^{j}}{\alpha-\beta},\qquad L_{j}=\alpha^{j}+ \beta^{j}. \tag{2.1}\] We will often require the following identities, valid for any integer \(r\), which are straightforward consequences of (2.1): \[\alpha^{2r}+(-1)^{r+1} =\alpha^{r}F_{r}\sqrt{5}, \tag{2.2}\] \[\alpha^{2r}+(-1)^{r} =\alpha^{r}L_{r},\] (2.3) \[\beta^{2r}+(-1)^{r+1} =-\beta^{r}F_{r}\sqrt{5},\] (2.4) \[\beta^{2r}+(-1)^{r} =\beta^{r}L_{r}. \tag{2.5}\] We also require the following well-known identities [5, 7]: \[F_{n}^{2}+(-1)^{n+m-1}F_{m}^{2} =F_{n-m}F_{n+m}, \tag{2.6}\] \[F_{n+m}+(-1)^{m}F_{n-m} =L_{m}F_{n},\] (2.7) \[F_{n+m}+(-1)^{m-1}F_{n-m} =F_{m}L_{n},\] (2.8) \[L_{n}F_{m}+F_{n}L_{m} =2F_{n+m},\] (2.9) \[L_{n+m}+(-1)^{m}L_{n-m} =L_{m}L_{n},\] (2.10) \[L_{n+m}+(-1)^{m-1}L_{n-m} =5F_{m}F_{n}. \tag{2.11}\] ### Results from identity (A) **Theorem 1**.: _If \(r\) is a non-negative integer, then_ \[\sum_{k=1}^{\infty}\frac{(-1)^{k(r-1)}\big{(}\frac{27}{5}\big{)}^{k} }{k^{2}\binom{3k}{k}F_{r}^{2k}} \tag{2.12}\] \[\qquad\qquad=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{ \alpha^{2r}}+(-1)^{r}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\sqrt{5} \alpha^{r}F_{r}}{\big{(}\sqrt[3]{\alpha^{2r}}-(-1)^{r}\big{)}^{3}}\biggr{)}, \quad r\neq 0,\] \[\sum_{k=1}^{\infty}\frac{(-1)^{kr}27^{k}}{k^{2}\binom{3k}{k}L_{r}^ {2k}}\] (2.13) \[\qquad\qquad=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{ \alpha^{2r}}-(-1)^{r}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\alpha^{r}L _{r}}{\big{(}\sqrt[3]{\alpha^{2r}}+(-1)^{r}\big{)}^{3}}\biggr{)},\quad r\neq 1.\] Proof.: Identity (2.12) is proved by setting \(x=\alpha^{2r}\), \(y=(-1)^{r+1}\) in (A) and making use of identity (2.2). Identity (2.13) follows from setting \(x=\alpha^{2r}\), \(y=(-1)^{r}\) in (A) and using (2.3). 
**Example 1**.: _Evaluation at \(r=1\), \(r=2\) and \(r=3\) in (2.12) and (2.13), respectively, gives_ \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{5}\big{)}^{k}}{k^{2} \binom{3k}{k}} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{2}}-1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\alpha\sqrt{5}}{\big{(}\sqrt[3]{ \alpha^{2}}+1\big{)}^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{(-27)^{k}}{k^{2}\binom{3k}{k}} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{2}}+1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\alpha}{\big{(}\sqrt[3]{\alpha^{2 }}-1\big{)}^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{-27}{5}\big{)}^{k}}{k^{2} \binom{3k}{k}} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{4}}+1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\alpha^{2}\sqrt{5}}{\big{(}\sqrt[3] {\alpha^{4}}-1\big{)}^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{-27}{16}\big{)}^{k}}{k^{2} \binom{3k}{k}} =6\arctan^{2}\biggl{(}\frac{4\sqrt{3}-\sqrt{15}}{11}\biggr{)}-2 \log^{2}2.\] **Corollary 2**.: _If \(r\) is a non-negative integer, then_ \[\sum_{k=1}^{\infty}\frac{(-1)^{k(r-1)}\big{(}\frac{27}{5}\big{)}^ {k}}{k^{2}\binom{3k}{k}F_{3r}^{2k}} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{\alpha^{2r}+\alpha^{r}L_{r} }\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{F_{3r}}{5F_{r}^{3}}\biggr{)}, \quad r\neq 0,\] \[\sum_{k=1}^{\infty}\frac{(-1)^{kr}27^{k}}{k^{2}\binom{3k}{k}L_{3r }^{2k}} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{\alpha^{2r}+\sqrt{5}\alpha^{ r}F_{r}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{L_{3r}}{L_{r}^{3}}\biggr{)}.\] Proof.: Replace \(r\) with \(3r\) in (2.12) and (2.13) and use (2.2), (2.3). **Theorem 3**.: _Let \(m\) and \(n\) be positive integers such that \(n\geq m\) unless stated otherwise. Then_ \[\sum_{k=1}^{\infty} \frac{(-1)^{k(n-m-1)}}{k^{2}\binom{3k}{k}}\bigg{(}\frac{27F_{n}^{ 2}F_{m}^{2}}{F_{n-m}^{2}F_{n+m}^{2}}\bigg{)}^{k} \tag{2.14}\] \[=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{m}^{2}}}{2\sqrt[3] {F_{n}^{2}}+(-1)^{n-m}\sqrt[3]{F_{m}^{2}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(} \frac{F_{n-m}F_{n+m}}{\big{(}\sqrt[3]{F_{n}^{2}}-(-1)^{n-m}\sqrt[3]{F_{m}^{2}} \big{)}^{3}}\biggr{)},\quad n>m,\] \[\sum_{k=1}^{\infty} \frac{(-1)^{km}}{k^{2}\binom{3k}{k}}\left(\frac{27F_{n+m}F_{n-m} }{F_{m}^{2}F_{n}^{2}}\right)^{k}\] \[=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{n-m}}}{2\sqrt[3] {F_{n+m}}-(-1)^{m}\sqrt[3]{F_{n-m}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(} \frac{L_{m}F_{n}}{\big{(}\sqrt[3]{F_{n+m}}+(-1)^{m}\sqrt[3]{F_{n-m}}\big{)}^{ 3}}\biggr{)},\] \[\sum_{k=1}^{\infty} \frac{(-1)^{k(m-1)}}{k^{2}\binom{3k}{k}}\left(\frac{27F_{n+m}F_{n -m}}{F_{m}^{2}L_{n}^{2}}\right)^{k}\] \[=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{n-m}}}{2\sqrt[3] {F_{n+m}}+(-1)^{m}\sqrt[3]{F_{n-m}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(} \frac{2F_{n+m}}{\big{(}\sqrt[3]{L_{n}F_{m}}+\sqrt[3]{L_{m}F_{n}}\big{)}^{3}} \biggr{)},\quad\frac{L_{n}}{L_{m}}>\frac{F_{n}}{F_{m}},\] \[\sum_{k=1}^{\infty} \frac{(-1)^{km}}{k^{2}\binom{3k}{k}}\left(\frac{27L_{n+m}L_{n-m} }{L_{m}^{2}L_{n}^{2}}\right)^{k}\] \[=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{L_{n-m}}}{2\sqrt[3] {L_{n+m}}-(-1)^{m}\sqrt[3]{L_{n-m}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(} \frac{L_{m}L_{n}}{\big{(}\sqrt[3]{L_{n+m}}+(-1)^{m}\sqrt[3]{L_{n-m}}\big{)}^{ 3}}\biggr{)},\] \[\sum_{k=1}^{\infty} \frac{(-1)^{k(m-1)}}{k^{2}\binom{3k}{k}}\left(\frac{27L_{n+m}L_{n -m}}{25F_{m}^{2}F_{n}^{2}}\right)^{k}\] \[=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{L_{n-m}}}{2\sqrt[3] {L_{n+m}}+(-1)^{m}\sqrt[3]{L_{n-m}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(} 
\frac{5F_{m}F_{n}}{\big{(}\sqrt[3]{L_{n+m}}-(-1)^{m}\sqrt[3]{L_{n-m}}\big{)}^{ 3}}\biggr{)}.\] Proof.: Straightforward using identities (2.6) to (2.11) and identity (A). **Example 2**.: _Identities (2.14) and (2.15) yield_ \[\sum_{k=1}^{\infty}\frac{(-1)^{kn}}{k^{2}\binom{3k}{k}}\left( \frac{54L_{2n}}{L_{n}^{4}}\right)^{k} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{\sqrt[3]{4L_{2n}}-(-1)^{n}} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{L_{n}^{2}}{\big{(}\sqrt[3]{L_{2n}}+ (-1)^{n}\sqrt[3]{2}\big{)}^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{(-1)^{k(n-1)}}{k^{2}\binom{3k}{k}} \left(\frac{54L_{2n}}{25F_{n}^{4}}\right)^{k} =6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{\sqrt[3]{4L_{2n}}+(-1)^{n}} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{5F_{n}^{2}}{\big{(}\sqrt[3]{L_{2n}}- (-1)^{n}\sqrt[3]{2}\big{)}^{3}}\biggr{)}.\] By writing \(\cot^{2}x\) for \(x\) and setting \(y=1\), a useful trigonometric version of identity (A) is obtained, namely, \[\sum_{k=1}^{\infty}\frac{\left(\frac{27}{4}\right)^{k}}{k^{2}\binom{3k}{k}}\sin^ {2k}2x=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\cot^{2}x}-1}\biggr{)}- \frac{1}{2}\log^{2}\biggl{(}\frac{\csc^{2}x}{\left(\sqrt[3]{\cot^{2}x}+1 \right)^{3}}\biggr{)}. \tag{2.16}\] Identity (2.16) is valid for \(x\in(0,\pi/4]\). Evaluation of (2.16) at \(x=\pi/12\), \(x=\pi/8\) and \(x=\pi/6\), respectively, gives \[\sum_{k=1}^{\infty}\frac{\left(\frac{27}{16}\right)^{k}}{k^{2} \binom{3k}{k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{7+4\sqrt{3}}-1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{4+2\sqrt{3}}{\left(\sqrt[3]{7+4 \sqrt{3}}+1\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{27}{8}\right)^{k}}{k^{2} \binom{3k}{k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{3}-1}\biggr{)}- \frac{1}{2}\log^{2}\biggl{(}\frac{4+2\sqrt{2}}{\left(\sqrt[3]{3}+2\sqrt{2}+1 \right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{81}{16}\right)^{k}}{k^{2} \binom{3k}{k}}=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{3}-1}\biggr{)}- \frac{1}{2}\log^{2}\biggl{(}\frac{4}{\left(\sqrt[3]{3}+1\right)^{3}}\biggr{)}.\] Writing \(-\cot^{2}x\) for \(x\) and setting \(y=1\) in (A) and noting that \(1-\cot^{2}x=-\cos 2x\sin^{-2}x\), we obtain another useful trigonometric version of (A): \[\sum_{k=1}^{\infty}\frac{\left(-\frac{27}{4}\right)^{k}}{k^{2}\binom{3k}{k}} \tan^{2k}2x=6\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\cot^{2}x}+1} \biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\csc^{2}x\,\cos 2x}{\left(\sqrt[3]{\cot^{2}x}-1 \right)^{3}}\biggr{)}, \tag{2.17}\] valid for \(x\in(0,\pi/8]\). 
At \(x=\pi/12\) in (2.17) we obtain ### Results from identity (B) **Theorem 4**.: _If \(r\) is a positive integer, then_ \[\begin{split}\sum_{k=1}^{\infty}\frac{(-1)^{k(r-1)}\bigl{(}\frac{ 27}{5}\bigr{)}^{k}}{k\binom{3k}{k}F_{r}^{2k}}=\frac{1}{\sqrt[3]{\alpha^{r}}L_ {r}}\left(2\sqrt{3}\bigl{(}\sqrt[3]{\alpha^{2r}}-(-1)^{r}\bigr{)}\arctan \biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{2r}}+(-1)^{r}}\biggr{)}\right.\\ \left.-(-1)^{r}\bigl{(}\sqrt[3]{\alpha^{2r}}+(-1)^{r}\bigr{)} \log\biggl{(}\frac{\sqrt{5}\,\alpha^{r}F_{r}}{\left(\sqrt[3]{\alpha^{2r}}-(-1) ^{r}\right)^{3}}\biggr{)}\right)\end{split} \tag{2.18}\] _and_ \[\begin{split}\sum_{k=1}^{\infty}\frac{(-1)^{kr}27^{k}}{k\binom{3k }{k}L_{r}^{2k}}&=\frac{\sqrt{5}}{5\sqrt[3]{\alpha^{r}}F_{r}} \left(2\sqrt{3}\left(\sqrt[3]{\alpha^{2r}}+(-1)^{r}\right)\arctan\biggl{(} \frac{\sqrt{3}}{2\sqrt[3]{\alpha^{2r}}-(-1)^{r}}\biggr{)}\right.\\ &\left.+(-1)^{r}\bigl{(}\sqrt[3]{\alpha^{2r}}-(-1)^{r}\bigr{)} \log\biggl{(}\frac{\alpha^{r}L_{r}}{\left(\sqrt[3]{\alpha^{2r}}+(-1)^{r} \right)^{3}}\biggr{)}\right).\end{split} \tag{2.19}\] Proof.: Identity (2.18) is proved by setting \(x=\alpha^{2r}\), \(y=(-1)^{r+1}\) in (B) and making use of identity (2.2). Identity (2.19) follows from setting \(x=\alpha^{2r}\), \(y=(-1)^{r}\) in (B) and using (2.3). **Example 3**.: _Evaluation of (2.18) at \(r=1\), \(r=2\), \(r=3\) and (2.19) at \(r=2\) and \(r=3\), respectively, gives_ \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{5}\big{)}^{k}}{k\binom{ 3k}{k}} =2\sqrt{3}\,\frac{\sqrt[3]{\alpha^{2}}+1}{\sqrt[3]{\alpha}}\arctan \biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{2}}-1}\biggr{)}+\frac{\sqrt[3]{ \alpha^{2}}-1}{\sqrt[3]{\alpha}}\log\biggl{(}\frac{\alpha\sqrt{5}}{(\sqrt[3]{ \alpha^{2}}+1)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}-\frac{27}{5}\big{)}^{k}}{k \binom{3k}{k}} =\frac{2\sqrt{3}}{3}\frac{\sqrt[3]{\alpha^{4}}-1}{\sqrt[3]{\alpha ^{2}}}\arctan\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{4}}+1}\biggr{)}-\frac{ \sqrt[3]{\alpha^{4}}+1}{3\sqrt[3]{\alpha^{2}}}\log\biggl{(}\frac{\alpha^{2} \sqrt{5}}{(\sqrt[3]{\alpha^{4}}-1)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{20}\big{)}^{k}}{k \binom{3k}{k}} =\frac{\sqrt{15}}{2}\arctan\bigl{(}\sqrt{15}-\sqrt{12}\bigr{)}- \frac{1}{4}\log\biggl{(}\frac{5}{2}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{3^{k}}{k\binom{3k}{k}} =\frac{2\sqrt{15}}{5}\frac{\sqrt[3]{\alpha^{4}}+1}{\sqrt[3]{\alpha ^{2}}}\arctan\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{4}}-1}\biggr{)}+\frac{ \sqrt[3]{\alpha^{4}}-1}{\sqrt{5}\sqrt[3]{\alpha^{2}}}\log\biggl{(}\frac{3 \alpha^{2}}{(\sqrt[3]{\alpha^{4}}+1)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\big{(}-\frac{27}{16}\big{)}^{k}}{k \binom{3k}{k}} =\frac{\sqrt{15}}{5}\arctan\biggl{(}\frac{4\sqrt{3}-\sqrt{15}}{1 1}\biggr{)}-\log 2.\] **Corollary 5**.: _If \(r\) is a positive integer, then_ \[\sum_{k=1}^{\infty}\frac{(-1)^{(r-1)k}\,\big{(}\frac{27}{5}\big{)} ^{k}}{k\binom{3k}{k}F_{3r}^{2k}} =2\sqrt{15}\,\frac{F_{r}}{L_{3r}}\arctan\biggl{(}\frac{\sqrt{3}}{ \alpha^{r}(\alpha^{r}+L_{r})}\biggr{)}-(-1)^{r}\frac{L_{r}}{L_{3r}}\log\biggl{(} \frac{F_{3r}}{5F_{r}^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{(-1)^{rk}27^{k}}{k\binom{3k}{k}L_{3r}^{2 k}} =\frac{2\sqrt{15}}{5}\frac{L_{r}}{F_{3r}}\arctan\biggl{(}\frac{\sqrt{3}}{\alpha^{r}( \alpha^{r}+\sqrt{5}F_{r})}\biggr{)}+(-1)^{r}\frac{F_{r}}{F_{3r}}\log\biggl{(} \frac{L_{3r}}{L_{r}^{3}}\biggr{)}.\] Replacing \(x\) with \(\cot^{2}x\) and setting \(y=1\) in identity (B) gives \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{4}\big{)}^{k}}{k\binom{ 
3k}{k}}\sin^{2k}2 =\frac{2\sqrt{3}\sin^{2}x}{\sqrt[3]{\cot^{2}x}}\,\bigl{(}\sqrt[3]{ \cot^{2}x}+1\bigr{)}\arctan\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\cot^{2}x}-1} \biggr{)} \tag{2.20}\] \[\quad+\frac{\sin^{2}x}{\cos 2x}\sqrt[3]{\cot^{2}x}\, \bigl{(}\sqrt[3]{\cot^{2}x}-1\bigr{)}\log\biggl{(}\frac{\csc^{2}x}{\bigl{(} \sqrt[3]{\cot^{2}x}+1\bigr{)}^{3}}\biggr{)},\] valid for \(x\in(0,\pi/4)\). At \(x=\pi/12\), \(x=\pi/8\) and \(x=\pi/6\), from (2.20) we have \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{27}{16}\big{)}^{k}}{k\binom{ 3k}{k}} =\Bigl{(}\sqrt[3]{2+\sqrt{3}}+\sqrt[3]{2-\sqrt{3}}\Bigr{)}\arctan \biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{7+4\sqrt{3}}-1}\biggr{)}\] \[\quad+\frac{\sqrt{3}}{6}\Bigl{(}\sqrt[3]{2+\sqrt{3}}-\sqrt[3]{2- \sqrt{3}}\Bigr{)}\log\biggl{(}\frac{8+4\sqrt{3}}{\bigl{(}\sqrt[3]{7+4\sqrt{3} }+1\bigr{)}^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{27}{8}\right)^{k}}{k\binom{3k}{k }} =\sqrt{3}\left(\sqrt[3]{1+\sqrt{2}}-\sqrt[3]{1-\sqrt{2}}\right)\arctan \biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{3+2\sqrt{2}}-1}\biggr{)}\] \[\quad+\frac{1}{2}\Bigl{(}\sqrt[3]{1+\sqrt{2}}+\sqrt[3]{1-\sqrt{2 }}\Bigr{)}\log\biggl{(}\frac{4+2\sqrt{2}}{\left(\sqrt[3]{3+2\sqrt{2}}+1\right) ^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{81}{16}\right)^{k}}{k\binom{ 3k}{k}} =\sqrt[6]{3^{5}}\left(\sqrt[3]{3}+1\right)\arctan\biggl{(}\frac{\sqrt{3}}{2 \sqrt[3]{3}-1}\biggr{)}+\frac{\sqrt[3]{3}\left(\sqrt[3]{3}-1\right)}{2}\log \biggl{(}\frac{4}{\left(\sqrt[3]{3}+1\right)^{3}}\biggr{)}.\] ### Results from identity (C) **Theorem 6**.: _If \(r\) is a positive integer, then_ \[\sum_{k=1}^{\infty}\frac{(-1)^{(k-1)(r-1)}\bigl{(}\frac{27}{5} \bigr{)}^{k}}{\binom{3k}{k}F_{r}^{2k}}=\frac{4}{L_{r}^{2}} \tag{2.21}\] \[\quad+\frac{(-1)^{r}\sqrt{5}}{3}\frac{F_{r}}{L_{r}^{3}}\Bigl{(}2 \bigl{(}\sqrt[3]{\alpha^{2r}}-\sqrt[3]{\beta^{2r}}\bigr{)}+(-1)^{r}\bigl{(} \sqrt[3]{\alpha^{4r}}-\sqrt[3]{\beta^{4r}}\bigr{)}\Bigr{)}\log\biggl{(}\frac{ \sqrt{5}\alpha^{r}F_{r}}{\left(\sqrt[3]{\alpha^{2r}}-(-1)^{r}\right)^{3}} \biggr{)},\] \[\sum_{k=1}^{\infty}\frac{(-1)^{r(k-1)}27^{k}}{\binom{3k}{k}L_{r}^ {2k}}=\frac{4}{5F_{r}^{2}}\] (2.22) \[\quad+\frac{2\sqrt{15}}{75}\frac{L_{r}}{F_{r}^{3}}\left(2\bigl{(} \sqrt[3]{\alpha^{2r}}+\sqrt[3]{\beta^{2r}}\bigr{)}+(-1)^{r}\bigl{(}\sqrt[3]{ \alpha^{4r}}+\sqrt[3]{\beta^{4r}}\bigr{)}\right)\arctan\biggl{(}\frac{\sqrt{3 }}{2\sqrt[3]{\alpha^{2r}}-(-1)^{r}}\biggr{)}\] \[\quad-\frac{(-1)^{r}\sqrt{5}}{75}\frac{L_{r}}{F_{r}^{3}}\left(2 \bigl{(}\sqrt[3]{\alpha^{2r}}-\sqrt[3]{\beta^{2r}}\bigr{)}-(-1)^{r}\bigl{(} \sqrt[3]{\alpha^{4r}}-\sqrt[3]{\beta^{4r}}\bigr{)}\right)\log\biggl{(}\frac{ L_{r}}{\left(\sqrt[3]{\alpha^{r}}+\sqrt[3]{\beta^{r}}\right)^{3}}\biggr{)}.\] Proof.: Identities (2.21) and (2.22) are proved by setting, respectively, \(x=\alpha^{r}\), \(y=-\beta^{r}\) and \(x=\alpha^{2r}\), \(y=(-1)^{r}\) in (C) and making use of (2.2) and (2.3). 
**Example 4**.: _Evaluation of (2.21) at \(r=1,2,3\) and (2.22) at \(r=2,3,6\), respectively, gives_ \[\sum_{k=1}^{\infty}\frac{\left(\frac{27}{5}\right)^{k}}{\binom{3k }{k}} =4+\frac{2\sqrt{15}}{3}\Bigl{(}2\left(\sqrt[3]{\alpha^{2}}+\sqrt[3]{ \beta^{2}}\right)+\sqrt[3]{\alpha^{4}}+\sqrt[3]{\beta^{4}}\Bigr{)}\arctan \biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{2}}-1}\biggr{)}\] \[\quad-\frac{\sqrt{5}}{3}\Bigl{(}2\left(\sqrt[3]{\alpha^{2}}- \sqrt[3]{\beta^{2}}\right)-\bigl{(}\sqrt[3]{\alpha^{4}}-\sqrt[3]{\beta^{4}} \bigr{)}\Bigr{)}\log\biggl{(}\frac{\sqrt{5}\alpha}{\left(\sqrt[3]{\alpha^{2}}+ 1\right)^{3}}\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(-\frac{27}{5}\right)^{k}}{\binom{3 k}{k}} =-\frac{4}{9}-\frac{2\sqrt{15}}{81}\left(2\bigl{(}\sqrt[3]{\alpha^{4}}+\sqrt[3]{ \beta^{4}}\bigr{)}-\bigl{(}\sqrt[3]{\alpha^{8}}+\sqrt[3]{\beta^{8}}\bigr{)} \right)\arctan\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{4}}+1}\biggr{)}\] \[\quad-\frac{\sqrt{5}}{81}\left(2\bigl{(}\sqrt[3]{\alpha^{4}}- \sqrt[3]{\beta^{4}}\bigr{)}+\sqrt[3]{\alpha^{8}}-\sqrt[3]{\beta^{8}}\right) \log\biggl{(}\frac{\sqrt{5}\alpha^{2}}{\left(\sqrt[3]{\alpha^{4}}-1\right)^{3} }\biggr{)},\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{27}{20}\right)^{k}}{\binom{3k}{k}}=\frac{1}{4 }+\frac{13\sqrt{15}}{48}\arctan\bigl{(}\sqrt{15}-\sqrt{12}\bigr{)}-\frac{5}{96} \log\left(\frac{5}{2}\right)\] _and_ \[\sum_{k=1}^{\infty}\frac{3^{k}}{\binom{3k}{k}} =\frac{4}{5}+\frac{2\sqrt{15}}{25}\left(2\bigl{(}\sqrt[3]{\alpha^ {4}}+\sqrt[3]{\beta^{4}}\bigr{)}+\sqrt[3]{\alpha^{8}}+\sqrt[3]{\beta^{8}} \right)\arctan\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{4}}-1}\biggr{)}\] \[\quad+\frac{\sqrt{5}}{25}\left(2\bigl{(}\sqrt[3]{\alpha^{4}}- \sqrt[3]{\beta^{4}}\bigr{)}-\bigl{(}\sqrt[3]{\alpha^{8}}-\sqrt[3]{\beta^{8}} \bigr{)}\right)\log\bigl{(}1+\sqrt[3]{\alpha^{2}}+\sqrt[3]{\beta^{2}}\bigr{)},\] \[\sum_{k=1}^{\infty}\frac{\bigl{(}-\frac{27}{16}\bigr{)}^{k}}{ \binom{3k}{k}} =-\frac{1}{5}+\frac{\sqrt{15}}{75}\arctan\biggl{(}\frac{4\sqrt{3} -\sqrt{15}}{11}\biggr{)}-\frac{1}{6}\log 4,\] \[\sum_{k=1}^{\infty}\frac{\bigl{(}\frac{11}{12}\bigr{)}^{k}}{ \binom{3k}{k}} =\frac{1}{80}+\frac{183\sqrt{15}}{3200}\arctan\biggl{(}\frac{\sqrt {15}-\sqrt{12}}{3}\biggr{)}-\frac{9}{256}\log\Bigl{(}\frac{3}{2}\biggr{)}.\] ## 3 Fibonacci and Lucas series involving inverses of the binomial coefficients \(\binom{3n}{n}\) In this section we will derive Fibonacci and Lucas identities which contain reciprocals of the binomial coefficients \(\binom{3n}{n}\). **Lemma 7**.: [5] _If \(p\) and \(q\) are integers, then_ \[F_{p}\alpha^{q}-F_{p+q} =-\beta^{p}F_{q}, \tag{3.1}\] \[F_{p+q}-\beta^{q}F_{p} =\alpha^{p}F_{q}. \tag{3.2}\] ### Fibonacci series associated with identity (A) **Theorem 8**.: _Let \(p\) and \(q\) be integers such that \(p\leq-2\), \(q\geq 4\) with \(q>|p|+1\). 
_Then_ \[\sum_{k=1}^{\infty}\biggl{(}\frac{-27F_{p}F_{p+q}}{F_{q}^{2}}\biggr{)}^{k}\frac{F_{(2p+q)k}}{k^{2}\binom{3k}{k}}\] \[=\frac{6}{\sqrt{5}}\Biggl{(}\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}}}\biggr{)}-\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2\sqrt[3]{\alpha^{q}F_{p+q}}+(-1)^{q}\sqrt[3]{F_{p}}}\biggr{)}\Biggr{)} \tag{3.3}\] \[-\frac{\sqrt{5}}{10}\Biggl{(}\log^{2}\biggl{(}\frac{(-1)^{p}F_{q}}{\bigl{(}\sqrt[3]{\alpha^{p}F_{p+q}}-\sqrt[3]{\alpha^{p+q}F_{p}}\bigr{)}^{3}}\biggr{)}-\log^{2}\biggl{(}\frac{\alpha^{p+q}F_{q}}{\bigl{(}\sqrt[3]{\alpha^{q}F_{p+q}}-(-1)^{q}\sqrt[3]{F_{p}}\bigr{)}^{3}}\biggr{)}\Biggr{)},\] \[\sum_{k=1}^{\infty}\Bigl{(}\frac{-27F_{p}F_{p+q}}{F_{q}^{2}}\Bigr{)}^{k}\frac{L_{(2p+q)k}}{k^{2}\binom{3k}{k}} \tag{3.4}\] \[\qquad=6\Biggl{(}\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}}}\biggr{)}+\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2\sqrt[3]{\alpha^{q}F_{p+q}}+(-1)^{q}\sqrt[3]{F_{p}}}\biggr{)}\Biggr{)}\] \[\qquad-\frac{1}{2}\Biggl{(}\log^{2}\biggl{(}\frac{(-1)^{p}F_{q}}{\bigl{(}\sqrt[3]{\alpha^{p}F_{p+q}}-\sqrt[3]{\alpha^{p+q}F_{p}}\bigr{)}^{3}}\biggr{)}+\log^{2}\biggl{(}\frac{\alpha^{p+q}F_{q}}{\bigl{(}\sqrt[3]{\alpha^{q}F_{p+q}}-(-1)^{q}\sqrt[3]{F_{p}}\bigr{)}^{3}}\biggr{)}\Biggr{)}.\] Proof.: Set \((x,y)=(F_{p}\alpha^{q},-F_{p+q})\) in identity (A) and use (3.1) to obtain \[\sum_{k=1}^{\infty}\Bigl{(}\frac{-27F_{p}F_{p+q}}{F_{q}^{2}}\Bigr{)}^{k}\frac{\alpha^{(2p+q)k}}{k^{2}\binom{3k}{k}} \tag{3.5}\] \[\qquad=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{(-1)^{p}F_{q}}{\bigl{(}\sqrt[3]{\alpha^{p}F_{p+q}}-\sqrt[3]{\alpha^{p+q}F_{p}}\bigr{)}^{3}}\biggr{)}.\] Similarly, \((x,y)=(F_{p+q},-\beta^{q}F_{p})\) in identity (A) and the use of (3.2) gives \[\sum_{k=1}^{\infty}\Bigl{(}\frac{-27F_{p}F_{p+q}}{F_{q}^{2}}\Bigr{)}^{k}\frac{\beta^{(2p+q)k}}{k^{2}\binom{3k}{k}} \tag{3.6}\] \[\qquad=6\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2\sqrt[3]{\alpha^{q}F_{p+q}}+(-1)^{q}\sqrt[3]{F_{p}}}\biggr{)}-\frac{1}{2}\log^{2}\biggl{(}\frac{\alpha^{p+q}F_{q}}{\bigl{(}\sqrt[3]{\alpha^{q}F_{p+q}}-(-1)^{q}\sqrt[3]{F_{p}}\bigr{)}^{3}}\biggr{)}.\] Identities (3.3) and (3.4) follow from the subtraction and addition of (3.5) and (3.6) with the use of the Binet formulas (2.1).
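The proof rests on Lemma 7 together with the Binet formulas; a quick floating-point verification of identity (3.1), in the same illustrative Python style as the earlier script, is:

```python
from math import sqrt

alpha = (1 + sqrt(5)) / 2
beta = (1 - sqrt(5)) / 2

def fib(n):
    """Binet formula; valid for negative indices as well."""
    return (alpha**n - beta**n) / sqrt(5)

# Identity (3.1): F_p * alpha^q - F_{p+q} = -beta^p * F_q
for p in (-2, -3, -4):
    for q in (4, 5, 6, 7):
        lhs = fib(p) * alpha**q - fib(p + q)
        rhs = -(beta**p) * fib(q)
        assert abs(lhs - rhs) < 1e-9, (p, q, lhs, rhs)
print("identity (3.1) holds for the sampled (p, q)")
```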
**Example 5**.: _At \(p=-2\) and \(q=5\) from (3.3) and (3.4) we have the following series:_ \[\sum_{k=1}^{\infty}\frac{\bigl{(}\frac{54}{25}\bigr{)}^{k}F_{k}}{k^{2}\binom{3k}{k}} =\frac{6\sqrt{5}}{5}\left(\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{2}}{2\sqrt[3]{\alpha^{5}}-\sqrt[3]{2}}\biggr{)}-\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{2\alpha^{5}}+1}\biggr{)}\right)\] \[\qquad+\frac{\sqrt{5}}{10}\left(\log^{2}\biggl{(}\frac{5\alpha^{3}}{\bigl{(}\sqrt[3]{2\alpha^{5}}-1\bigr{)}^{3}}\biggr{)}-\log^{2}\biggl{(}\frac{5\alpha^{2}}{\bigl{(}\sqrt[3]{\alpha^{5}}+\sqrt[3]{2}\bigr{)}^{3}}\biggr{)}\right)\!,\] \[\sum_{k=1}^{\infty}\frac{\bigl{(}\frac{54}{25}\bigr{)}^{k}L_{k}}{k^{2}\binom{3k}{k}} =6\left(\arctan^{2}\biggl{(}\frac{\sqrt{3}\sqrt[3]{2}}{2\sqrt[3]{\alpha^{5}}-\sqrt[3]{2}}\biggr{)}+\arctan^{2}\biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{2\alpha^{5}}+1}\biggr{)}\right)\] \[\qquad-\frac{1}{2}\left(\log^{2}\biggl{(}\frac{5\alpha^{3}}{\bigl{(}\sqrt[3]{2\alpha^{5}}-1\bigr{)}^{3}}\biggr{)}+\log^{2}\biggl{(}\frac{5\alpha^{2}}{\bigl{(}\sqrt[3]{\alpha^{5}}+\sqrt[3]{2}\bigr{)}^{3}}\biggr{)}\right)\!.\] ### Fibonacci series associated with identity (B) **Theorem 9**.: _Let \(p\) and \(q\) be integers such that \(p\leq-2\), \(q\geq 4\), and \(q>|p|+1\). Then_ \[\frac{\sqrt{5}}{\sqrt[3]{F_{p}F_{p+q}}}\sum_{k=1}^{\infty}\Big{(}\frac{-27F_{p}F_{p+q}}{F_{q}^{2}}\Big{)}^{k}\frac{F_{(2p+q)k}}{k\binom{3k}{k}}\] \[\qquad=2\sqrt{3}\Biggl{(}A_{\alpha}^{-}\arctan\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}}}\biggr{)}+A_{\beta}^{-}\arctan\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2(-1)^{q}\sqrt[3]{\alpha^{q}F_{p+q}}+\sqrt[3]{F_{p}}}\biggr{)}\Biggr{)}\] \[\qquad\qquad-\Biggl{(}A_{\alpha}^{+}\log\biggl{(}\frac{\beta^{p}F_{q}}{\big{(}\sqrt[3]{F_{p+q}}-\sqrt[3]{\alpha^{q}F_{p}}\big{)}^{3}}\biggr{)}-A_{\beta}^{+}\log\biggl{(}\frac{\alpha^{p}F_{q}}{\big{(}\sqrt[3]{F_{p+q}}-\sqrt[3]{\beta^{q}F_{p}}\big{)}^{3}}\biggr{)}\Biggr{)},\] \[\frac{1}{\sqrt[3]{F_{p}F_{p+q}}}\sum_{k=1}^{\infty}\biggl{(}\frac{-27F_{p}F_{p+q}}{F_{q}^{2}}\biggr{)}^{k}\frac{L_{(2p+q)k}}{k\binom{3k}{k}}\] \[\qquad=2\sqrt{3}\,\Biggl{(}A_{\alpha}^{-}\arctan\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}}}\biggr{)}-A_{\beta}^{-}\arctan\biggl{(}\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2(-1)^{q}\sqrt[3]{\alpha^{q}F_{p+q}}+\sqrt[3]{F_{p}}}\biggr{)}\Biggr{)}\] \[\qquad\qquad-\Biggl{(}A_{\alpha}^{+}\log\biggl{(}\frac{\beta^{p}F_{q}}{\big{(}\sqrt[3]{F_{p+q}}-\sqrt[3]{\alpha^{q}F_{p}}\big{)}^{3}}\biggr{)}+A_{\beta}^{+}\log\biggl{(}\frac{\alpha^{p}F_{q}}{\big{(}\sqrt[3]{F_{p+q}}-\sqrt[3]{\beta^{q}F_{p}}\big{)}^{3}}\biggr{)}\Biggr{)},\] _where_ \[A_{s}^{\pm}=\sqrt[3]{s^{q}}\,\frac{\sqrt[3]{s^{q}}\,F_{p}\pm\sqrt[3]{F_{p+q}}}{s^{q}F_{p}+F_{p+q}}.\] Proof.: The proof is similar to the one given for Theorem 8 and omitted.
**Example 6**.: _At \(p=-2\) and \(q=5\) from Theorem 9 we obtain the following series:_ \[\frac{1}{\sqrt[3]{2\alpha^{5}}}\sum_{k=1}^{\infty}\frac{\big{(} \frac{54}{25}\big{)}^{k}\,F_{k}}{k\binom{3k}{k}}\] \[\qquad\qquad\qquad=\frac{2\sqrt{15}}{5}\,\Bigg{(}\frac{\sqrt[3]{2 }+\sqrt[3]{\alpha^{5}}}{2-\alpha^{5}}\arctan\biggl{(}\frac{\sqrt{3}}{1-\sqrt[ 3]{4\alpha^{5}}}\biggr{)}+\frac{1-\sqrt[3]{2\alpha^{5}}}{1+2\alpha^{5}}\arctan \biggl{(}\frac{\sqrt{3}}{2\sqrt[3]{2\alpha^{5}}+1}\biggr{)}\Bigg{)}\] \[\qquad\qquad\qquad+\frac{\sqrt{5}}{5}\,\Bigg{(}\frac{\sqrt[3]{2 }-\sqrt[3]{\alpha^{5}}}{2-\alpha^{5}}\log\biggl{(}\frac{5\alpha^{2}}{\big{(} \sqrt[3]{2}+\sqrt[3]{\alpha^{5}}\big{)}^{3}}\biggr{)}+\frac{1+\sqrt[3]{2 \alpha^{5}}}{1+2\alpha^{5}}\log\biggl{(}\frac{5\alpha^{3}}{\big{(}\sqrt[3]{2 \alpha^{5}}-1\big{)}^{3}}\biggr{)}\Biggr{)},\] \[\frac{1}{\sqrt[3]{2\alpha^{5}}}\sum_{k=1}^{\infty}\frac{\big{(} \frac{54}{25}\big{)}^{k}\,L_{k}}{k\binom{3k}{k}}\] \[\qquad\qquad\qquad=2\sqrt{3}\,\Bigg{(}\frac{\sqrt[3]{2}+\sqrt[3]{ \alpha^{5}}}{2-\alpha^{5}}\arctan\biggl{(}\frac{\sqrt{3}}{1-\sqrt[3]{4\alpha^{ 5}}}\biggr{)}-\frac{1-\sqrt[3]{2\alpha^{5}}}{1+2\alpha^{5}}\arctan\biggl{(} \frac{\sqrt{3}}{2\sqrt[3]{2\alpha^{5}}+1}\biggr{)}\Bigg{)}\] \[\qquad\qquad\qquad\qquad+\frac{\sqrt[3]{2}-\sqrt[3]{\alpha^{5}}}{2 -\alpha^{5}}\log\biggl{(}\frac{5\alpha^{2}}{\big{(}\sqrt[3]{2}+\sqrt[3]{ \alpha^{5}}\big{)}^{3}}\biggr{)}-\frac{1+\sqrt[3]{2\alpha^{5}}}{1+2\alpha^{5} }\log\biggl{(}\frac{5\alpha^{3}}{\big{(}\sqrt[3]{2\alpha^{5}}-1\big{)}^{3} }\biggr{)}.\] ### Fibonacci series associated with identity (C) **Theorem 10**.: _Let \(p\) and \(q\) be integers such that \(p\leq-2\), \(q\geq 4\), and \(|q|>|p|+1\). Then_ \[\frac{1}{F_{q}\sqrt[3]{F_{p}F_{p+q}}}\sum_{k=1}^{\infty}(-1)^{k-p} \Big{(}\frac{27F_{p}F_{p+q}}{F_{q}^{2}}\Big{)}^{\!k}\frac{F_{(2p+q)k}}{\binom{3 k}{k}}\] \[=\frac{4(-1)^{p-1}\sqrt[3]{F_{p+q}^{2}F_{p}^{2}}}{(F_{p+q}+\alpha ^{q}F_{p})^{2}(F_{p+q}+\beta^{q}F_{p})^{2}}\] \[\quad-\frac{2\sqrt{15}}{15}\Bigg{(}B_{\alpha}^{+}\arctan\!\left( \frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}} }\right)+B_{\beta}^{+}\arctan\!\left(\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2(-1)^{q} \sqrt[3]{\alpha^{q}F_{p+q}}+\sqrt[3]{F_{p}}}\right)\!\Bigg{)}\] \[\quad+\frac{\sqrt{5}}{15}\Bigg{(}B_{\alpha}^{-}\log\!\left(\frac {(-1)^{p}F_{q}}{\big{(}\sqrt[3]{\alpha^{p}F_{p+q}}-\sqrt[3]{\alpha^{p+q}F_{p} }\big{)}^{3}}\right)-B_{\beta}^{-}\log\!\left(\frac{\alpha^{p+q}F_{q}}{\big{(} \sqrt[3]{\alpha^{q}F_{p+q}}-(-1)^{q}\sqrt[3]{F_{p}}\big{)}^{3}}\right)\! \Bigg{)},\] \[\frac{1}{F_{q}\sqrt[3]{F_{p}F_{p+q}}}\sum_{k=1}^{\infty}(-1)^{k-p }\Big{(}\frac{27F_{p}F_{p+q}}{F_{q}^{2}}\Big{)}^{\!k}\frac{L_{(2p+q)k}}{ \binom{3k}{k}}\] \[\quad=\frac{4(-1)^{p-q-1}\sqrt[3]{F_{p+q}^{2}F_{p}^{2}}}{F_{q}} \cdot\frac{F_{p}^{2}L_{q}+(-1)^{q}F_{p+q}^{2}L_{q}+4F_{p}F_{p+q}}{(F_{p+q}+ \alpha^{q}F_{p})^{2}(F_{p+q}+\beta^{q}F_{p})^{2}}\] \[\quad-\frac{2}{\sqrt{3}}\Bigg{(}B_{\alpha}^{+}\arctan\!\left( \frac{\sqrt{3}\sqrt[3]{F_{p+q}}}{2\sqrt[3]{\alpha^{q}F_{p}}+\sqrt[3]{F_{p+q}} }\right)-B_{\beta}^{+}\arctan\!\left(\frac{\sqrt{3}\sqrt[3]{F_{p}}}{2(-1)^{q} \sqrt[3]{\alpha^{q}F_{p+q}}+\sqrt[3]{F_{p}}}\right)\!\Bigg{)}\] \[\quad+\frac{1}{3}\Bigg{(}B_{\alpha}^{-}\log\!\left(\frac{(-1)^{p} F_{q}}{\big{(}\sqrt[3]{\alpha^{p}F_{p+q}}-\sqrt[3]{\alpha^{p+q}F_{p}} \big{)}^{3}}\right)+B_{\beta}^{-}\log\!\left(\frac{\alpha^{p+q}F_{q}}{\big{(} \sqrt[3]{\alpha^{q}F_{p+q}}-(-1)^{q}\sqrt[3]{F_{p}}\big{)}^{3}}\right)\! 
\Bigg{)},\] _where_ \[B_{s}^{\pm}=\frac{\sqrt[3]{s^{q}-3p}}{(s^{q}F_{p}+F_{p+q})^{3}}\bigg{(}\sqrt[3 ]{s^{4q}F_{p}^{4}}\pm\sqrt[3]{F_{p+q}^{4}}\mp 2\sqrt[3]{s^{q}F_{p}F_{p+q}} \left(\sqrt[3]{s^{2q}F_{p}^{2}}\pm\sqrt[3]{F_{p+q}^{2}}\right)\!\bigg{)}.\] Proof.: The proof is similar to the previous two proofs. **Example 7**.: _At \(p=-2\) and \(q=5\) from Theorem 10 we obtain the following series:_ \[\sum_{k=1}^{\infty}\frac{\big{(}\frac{54}{25}\big{)}^{\!k}F_{k}}{ \binom{3k}{k}} =\frac{200}{361}+\frac{10\sqrt[3]{2\alpha^{11}}}{\sqrt{15}} \left(\frac{\sqrt[3]{16}(1+\alpha^{5})+\sqrt[3]{\alpha^{5}}(4+\alpha^{5})}{(5 \alpha+1)^{3}}\arctan\!\left(\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{5}}/2-1}\right)\right.\] \[\quad+\frac{\sqrt[3]{16\alpha^{5}}(\alpha^{5}-1)+1-4\alpha^{5}}{( \alpha-5)^{3}\alpha^{11}}\arctan\!\left(\frac{\sqrt{3}}{2\sqrt[3]{2\alpha^{5 }}+1}\right)\!\right)\] \[\quad-\frac{\sqrt{5}\sqrt[3]{2\alpha^{11}}}{3}\left(\frac{\big{(} \sqrt[3]{2}+\sqrt[3]{\alpha^{5}}\big{)}\big{(}\sqrt[3]{2}-\sqrt[3]{\alpha^{5 }}\big{)}^{3}}{(5\alpha+1)^{3}}\log\!\left(\frac{5\alpha^{2}}{\big{(}\sqrt[3] {\alpha^{5}}+\sqrt[3]{2}\big{)}^{3}}\right)\] \[\quad\left.+\frac{\big{(}\sqrt[3]{2\alpha^{5}}-1\big{)}\big{(} \sqrt[3]{2\alpha^{5}}+1\big{)}^{3}}{(\alpha-5)^{3}\alpha^{11}}\log\!\left( \frac{5\alpha^{3}}{\big{(}\sqrt[3]{2\alpha^{5}}-1\big{)}^{3}}\right)\! \right)\!,\] \[\sum_{k=1}^{\infty}\frac{\left(\frac{54}{25}\right)^{k}L_{k}}{k \binom{3k}{k}} =\frac{328}{361}+\frac{10\sqrt{3}\sqrt[3]{2\alpha^{11}}}{3}\left( \frac{\sqrt[3]{\alpha^{5}}(\alpha^{5}+4)+\sqrt[3]{16}(\alpha^{5}+1)}{(\alpha^ {5}-2)^{3}}\arctan\!\left(\frac{\sqrt{3}}{2\sqrt[3]{\alpha^{5}}/2}-1\right)\right.\] \[\left.+\,\alpha\frac{\sqrt[3]{16\alpha^{20}}+1-\sqrt[3]{16\alpha^ {5}}\left(\sqrt[3]{4\alpha^{10}}+1\right)}{(2\alpha^{5}+1)^{3}}\arctan\!\left( \frac{\sqrt{3}}{2\sqrt[3]{2\alpha^{5}}+1}\right)\!\right)\] \[\quad-\frac{5\sqrt[3]{2\alpha^{11}}}{3}\left(\frac{\left(\sqrt[3] {2}+\sqrt[3]{\alpha^{5}}\right)\left(\sqrt[3]{2}-\sqrt[3]{\alpha^{5}}\right)^ {3}}{(\alpha^{5}-2)^{3}}\log\!\left(\frac{5\alpha^{2}}{\left(\sqrt[3]{\alpha^ {5}}+\sqrt[3]{2}\right)^{3}}\right)\right.\] \[\left.+\frac{\alpha\left(\sqrt[3]{2\alpha^{5}}-1\right)\left( \sqrt[3]{2\alpha^{5}}+1\right)^{3}}{(2\alpha^{5}+1)^{3}}\log\!\left(\frac{5 \alpha^{3}}{\left(\sqrt[3]{2\alpha^{5}}-1\right)^{3}}\right)\!\right)\!.\] ## 4 Concluding comments In this paper we presented new closed forms for some types of infinite series involving binomial coefficients \(\binom{3n}{n}\). To prove our results, we applied some routine arguments, combining Batir's formula (1.1) with Binet's formulas for Fibonacci and Lucas numbers. Using similar techniques, we can establish series evaluations involving binomial coefficients \(\binom{3n}{n}\) with Fibonacci and Lucas polynomials and other known number and polynomial sequences. Let us give, for example, a generalization of Theorems 1 and 4 to the case of the Horadam sequence defined by the recurrence \[W_{n}=pW_{n-1}-qW_{n-2},\quad n\geq 2,\] with initial values \(W_{0}=a\) and \(W_{1}=b\). 
Let \[\Delta=\sqrt{p^{2}-4q},\ \alpha_{*}=(p+\Delta)/2,\ \beta_{*}=(p-\Delta)/2,\ \ A=b-a\beta_{*},\ \ B=b-a\alpha_{*}.\] The following identities hold for positive integer \(r\): \[\sum_{k=1}^{\infty}\frac{(-1)^{k(r-1)}}{k^{2}\binom{3k}{k}}\left(\frac{27ABq^{r}}{\Delta^{2}}\right)^{k}W_{r}^{-2k}\] \[=6\arctan^{2}\!\left(\frac{\sqrt{3}\sqrt[3]{Bq^{r}}}{2\sqrt[3]{A\alpha_{*}^{2r}}+\sqrt[3]{B(-q)^{r}}}\right)-\frac{1}{2}\log^{2}\!\left(\frac{\alpha_{*}^{r}\Delta W_{r}}{\left(\sqrt[3]{A\alpha_{*}^{2r}}-\sqrt[3]{B(-q)^{r}}\right)^{3}}\right)\] and \[\frac{A\alpha_{*}^{2r}+B(-q)^{r}}{\sqrt[3]{AB\alpha_{*}^{2r}}q^{r}}\sum_{k=1}^{\infty}\frac{(-1)^{k(r-1)}}{k\binom{3k}{k}}\left(\frac{27ABq^{r}}{\Delta^{2}}\right)^{k}W_{r}^{-2k}\] \[=2\sqrt{3}\!\left(\sqrt[3]{A\alpha_{*}^{2r}}-\sqrt[3]{B(-q)^{r}}\right)\arctan\!\left(\frac{\sqrt{3}\sqrt[3]{Bq^{r}}}{2\sqrt[3]{A\alpha_{*}^{2r}}+\sqrt[3]{B(-q)^{r}}}\right)\] \[\quad-(-1)^{r}\!\left(\sqrt[3]{A\alpha_{*}^{2r}}+\sqrt[3]{B(-q)^{r}}\right)\log\!\left(\frac{\alpha_{*}^{r}\Delta W_{r}}{\left(\sqrt[3]{A\alpha_{*}^{2r}}-\sqrt[3]{B(-q)^{r}}\right)^{3}}\right)\!.\] ## Acknowledgment We would like to thank Professor Wenchang Chu for drawing our attention to several papers related to the subject of our research.
2304.00892
Direct 3D visual servoing in the spectral domain
This paper presents a direct 3D visual servo scheme for the automatic alignment of point clouds (respectively, objects) using visual information in the spectral domain. Specifically, we propose an alignment method for 3D models/point clouds that works by estimating the global transformation between a reference point cloud and a target point cloud using harmonic domain data analysis. A 3D discrete Fourier transform (DFT) in $\mathbb{R}^3$ is used for translation estimation and real spherical harmonics in $SO(3)$ are used for rotation estimation. This approach allows us to derive a decoupled visual servo controller with 6 degrees of freedom. We then show how this approach can be used as a controller for a robotic arm to perform a positioning task. Unlike existing 3D visual servo methods, our method works well with partial point clouds and in cases of large initial transformations between the initial and desired position. Additionally, using spectral data (instead of spatial data) for the transformation estimation makes our method robust to sensor-induced noise and partial occlusions. Our method has been successfully validated experimentally on point clouds obtained with a depth camera mounted on a robotic arm.
Maxime Adjigble, Brahim Tamadazte, Cristiana de Farias, Rustam Stolkin, Naresh Marturi
2023-04-03T11:28:02Z
http://arxiv.org/abs/2304.00892v1
# Direct 3D visual servoing in the spectral domain ### Abstract _This paper presents a direct 3D visual servo scheme for the automatic alignment of point clouds (respectively, objects) using visual information in the spectral domain. Specifically, we propose an alignment method for 3D models/point clouds that works by estimating the global transformation between a reference point cloud and a target point cloud using harmonic domain data analysis. A 3D discrete Fourier transform (DFT) in \(\mathbb{R}^{3}\) is used for translation estimation and real spherical harmonics in \(\mathbf{SO(3)}\) are used for rotation estimation. This approach allows us to derive a decoupled visual servo controller with 6 degrees of freedom. We then show how this approach can be used as a controller for a robotic arm to perform a positioning task. Unlike existing 3D visual servo methods, our method works well with partial point clouds and in cases of large initial transformations between the initial and desired position. Additionally, using spectral data (instead of spatial data) for the transformation estimation makes our method robust to sensor-induced noise and partial occlusions. Our method has been successfully validated experimentally on point clouds obtained with a depth camera mounted on a robotic arm._ ### Keywords Point clouds, visual servoing, pose estimation, 3D registration, Fourier Transform ## 1 Introduction Over the last three decades, increasing attention has been paid to visual servoing methods for carrying out robotic tasks in various sectors such as industry, defence, autonomous vehicles, medicine, and many others. Visual servoing refers to the dynamic control of systems using continuous visual feedback. Consequently, the main components of a classical visual servo controller are the extraction of visual features, their matching, and their tracking over time. However, the feasibility and effectiveness of classical visual servoing methods are tightly coupled to those of the detection, matching and tracking of visual information, whose performance can be limited in some cases (poorly textured images, absence of salient geometric shapes, occlusions, etc.). Visual servoing methods that dispense with visual tracking have recently emerged. They use the global image information directly in the control loop. These approaches are called _direct visual servoing_ methods [1]. Different types of global information have been studied in the literature, such as pixel intensities [2, 3], spatio-temporal gradients [4], histograms [5], mutual information [6], Gaussian mixtures [7], etc. More recently, some authors have proposed using time-frequency visual information through wavelets [8] and shearlets [9]. However, direct methods clearly exhibit narrower convergence domains than conventional methods. To remedy this problem, some works propose the use of visual features expressed in the spectral domain. These features have proven robust to noise and are used in numerous computer vision and robotics applications such as image correlation [10], alignment of models from depth sensors [11], robotic grasping [12], etc.
In [13], the coefficients of the discrete cosine transform (DCT) were used for direct visual servoing, and in [14, 15], the time-shift property of the Fourier transform was exploited in a decoupled visual servo control scheme. Most of the visual servoing methods presented above rely on 2D visual information. However, very few works address visual servoing that uses 3D information directly in the control loop [16, 17]. One advantage of using 3D visual information is, in particular, that the controller can be expressed directly, without resorting to pose estimation or calibration procedures. Although these methods have shown promising results, they require dense depth data and exhibit limited convergence. A virtual visual servoing method using a polygonal mesh generated offline from point clouds is presented in [18]. In this paper, we present a direct 3D visual servoing method using visual information expressed in the spectral domain. Although a few works have used spectral information in a 2D visual servo control law [13, 14, 15, 19], to our knowledge there is no work in the literature that uses 3D spectral information derived from point clouds. The main idea of our approach is the estimation of a spatial transformation (6 DoF) between a reference point cloud and a target one. Both point clouds are transformed into the spectral domain; the translation is then estimated by Fourier analysis, while the rotation is estimated by spherical correlation. An optimisation procedure is then used to iteratively minimise the translation and rotation cost functions. Note that the estimates of the object's translation and rotation are independent of each other, which yields a fully decoupled controller. The proposed method uses a 3D fast Fourier transform in the Cartesian space \(\mathbb{R}^{3}\), and real spherical harmonics on the unit sphere \(\mathbf{S}^{2}\) and the rotation group \(\mathbf{SO(3)}\), to compute the gradients of the translation and rotation costs, respectively. An example of aligning an object in a simple scene is shown in Fig. 1, where a reference model is aligned with a scene point cloud containing a single instance of the same object. The point cloud of the current or target scene is captured online by a depth sensor, either static or mounted on a robotic arm. The main contributions of this paper are the following: * We propose a new spectral-domain method for aligning 3D object models, i.e. for estimating the global translation and rotation between two point clouds. * We propose a new direct 3D visual servoing method with 6 DoF that directly uses point clouds (complete or partial) represented in the spectral domain. The advantages of the proposed method are manifold. Unlike existing approaches that require dense and complete depth data, our method can work effectively with partial object point clouds.
Compared to direct visual servoing methods using depth data, our method exhibits a wider convergence domain. The use of spectral data naturally makes our method robust to noise, especially when working with point clouds from 3D range sensors. Since no colour or intensity information is required, the proposed method performs well with untextured objects or under poor lighting. Finally, the proposed method can be used both to align an object model in a scene containing several different objects and to position a robotic manipulator in the task space. Figure 1: Illustration of the model alignment process with the proposed approach. The red point cloud is the reference model, the grey point clouds are intermediate candidates during convergence, the green point cloud is the target scene point cloud, and in blue, the final model aligned with the target. The grey trajectory curves indicate the convergence of the model. ## 2 Methodology In this section, we present our direct 3D visual servoing method in the spectral domain. As mentioned earlier, the main building block of our approach is the strategy for aligning object models using point clouds transformed into the spectral domain. With this in mind, we first introduce the representation used by our method, then the concepts of phase correlation in Cartesian space and on the unit sphere. Finally, the iterative model-alignment control law is presented. ### Point Cloud Representation The first step of our 3D visual servoing pipeline consists in representing the points and surface normals of the point cloud as a voxel grid and an Extended Gaussian Image (EGI), respectively. **Points as a voxel grid.** Discretising a point cloud is a simple process. Given a point cloud composed of \(N\) points, a 3D voxel grid of resolution \(r\in\mathbb{R}^{+}\) can be constructed. For each point \(p=(x,y,z)\) of the point cloud, the voxel indices of the point, \(p_{ijk}=(i,j,k)\), are computed as follows: \[i=[x/r]\qquad j=[y/r]\qquad k=[z/r] \tag{1}\] The operation \([./.]\) denotes integer division, i.e. only the integer part of the quotient is kept. Let \(v_{t}:\mathbb{R}^{3}\to\mathbb{N}^{3}\) be the mapping between Cartesian coordinates and voxel indices. The voxel grid function1 \(f_{t}:\mathbb{R}^{3}\to\mathbb{N}\) of a point cloud can then be defined as: Footnote 1: The subscript \(t\) indicates that the function is used for translation estimation; likewise, the subscript \(r\) will be used for the functions related to rotation estimation. \[f_{t}(p)=f_{t}(x,y,z)=v_{ijk} \tag{2}\] where \(v_{ijk}\in[0,1]\). Here, \(v_{ijk}=1\) if at least one point of the point cloud has indices equal to \(v_{t}(p)=(i,j,k)\), and \(v_{ijk}=0\) otherwise. Our method uses a real-valued voxel grid; however, a binary-valued voxel grid can also be used in the same way.
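As an illustration, a minimal NumPy sketch of this voxelisation (our own, where `np.floor_divide` is one concrete choice for the integer division of (1)) could look as follows:

```python
import numpy as np

def voxelize(points, r):
    """Build the voxel-grid function f_t of Eq. (2) from an (N, 3) point cloud.

    The integer division [./.] of Eq. (1) is realised with np.floor_divide,
    which also handles negative coordinates consistently.
    """
    idx = np.floor_divide(points, r).astype(int)   # per-point indices (i, j, k)
    idx -= idx.min(axis=0)                         # shift so all indices are >= 0
    grid = np.zeros(tuple(idx.max(axis=0) + 1))    # empty voxel grid
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0    # v_ijk = 1 for occupied voxels
    return grid
```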
The LoCoMo (Local Contact Moments) score presented in [21] can be a good candidate for enriching the information contained in the voxels. **Surface normals as an Extended Gaussian Image (EGI).** The EGI is a popular representation for functions expressed on the unit sphere. It has been widely used in the literature as a shape descriptor for object surface normals [22, 23, 24, 12]. Converting a surface normal \(n=(n_{x},n_{y},n_{z})\in\mathbb{R}^{3}\) from Cartesian coordinates to spherical coordinates \(n=(r,\theta,\phi)\) using (3) allows the surface normal to be expressed on the unit sphere: \[r=\sqrt{n_{x}^{2}+n_{y}^{2}+n_{z}^{2}}\qquad\theta=\arctan\frac{\sqrt{n_{x}^{2}+n_{y}^{2}}}{n_{z}}\qquad\phi=\arctan\Bigl{(}\frac{n_{y}}{n_{x}}\Bigr{)} \tag{3}\] The radial distance is \(r=1\) for all surface normals since they are unit vectors. Hence, the pair \((\theta,\phi)\) is sufficient to describe the distribution of surface normals on the unit sphere. A discrete representation of the sphere is required for numerical computations. The following discretisation in longitude and latitude is used: \(\theta_{j}=\frac{\pi(2j+1)}{4B}\) and \(\phi_{k}=\frac{\pi k}{B}\), \((j,k)\in\mathbb{N}\), with the constraint \(0\leq j,k<2B\) and \(B\in\mathbb{N}\), where \(B\) is the bandwidth. The bandwidth is usually chosen as a power of \(2\), i.e. \(B=2^{n},n\in\mathbb{N}^{+}\). The EGI of the surface normals of a point cloud can thus be expressed as the function \(f_{r}:\mathbf{S}^{2}\to\mathbb{N}\): \[f_{r}(\theta,\phi)=c_{r}(\theta_{j},\phi_{k}) \tag{4}\] where \(c_{r}\in\mathbb{N}\) is the number of surface normals in the point cloud whose discretised longitude and latitude equal \((\theta_{j},\phi_{k})\). In this case, integer values are used rather than binary values. The advantage is that a distribution of surface normals on the unit sphere provides more information about the object geometry than a simple binary distribution. Figure 2 shows examples of EGIs of an object and of a multi-object scene, represented as point clouds. Figure 2: EGI of a mug (top) and of a multi-object scene (bottom), represented as point clouds with surface normals (small green arrows).
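A compact sketch of this EGI construction (again ours; the rounding of \(\theta\) and \(\phi\) onto the grid is one simple binning choice among several) is:

```python
import numpy as np

def egi(normals, B=16):
    """Histogram unit surface normals on the discretised sphere (Eq. 4).

    Returns the (2B, 2B) integer array c_r over the grid
    theta_j = pi(2j+1)/(4B), phi_k = pi*k/B used in the text.
    """
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    theta = np.arctan2(np.sqrt(nx**2 + ny**2), nz)   # colatitude in [0, pi]
    phi = np.mod(np.arctan2(ny, nx), 2 * np.pi)      # longitude in [0, 2*pi)
    j = np.clip(np.rint((4 * B * theta / np.pi - 1) / 2), 0, 2 * B - 1).astype(int)
    k = np.clip(np.floor(phi * B / np.pi), 0, 2 * B - 1).astype(int)
    c = np.zeros((2 * B, 2 * B), dtype=int)
    np.add.at(c, (j, k), 1)                          # count normals per cell
    return c
```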
### Translation Estimation via Fourier Analysis in \(\mathbb{R}^{3}\) The translation between two point clouds (target and reference) is estimated using 3D phase correlation in the spectral domain through Fourier analysis. The main advantage of Fourier-based methods is that they are robust to different types of noise [11, 15]. The phase correlation method relies on the time-shift property of the Fourier transform and converts translations in Cartesian space into phase shifts in the spectral domain. Let \(f_{t}:\mathbb{R}^{3}\to\mathbb{N}\) be the voxel representation of the point cloud of an object or a scene. The Fourier coefficients of \(f_{t}\) are computed as: \[F_{t}(u,v,w)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\sum_{z=0}^{L-1}f_{t}(x,y,z)e^{-i2\pi(\frac{u}{M}x+\frac{v}{N}y+\frac{w}{L}z)} \tag{5}\] where \(M,N,L\in\mathbb{N}^{+}\) are the maximum decomposition degrees of the Fourier coefficients along \(X\), \(Y\), and \(Z\), respectively, and \((u,v,w)\) are the corresponding coordinates in the spectral domain. Suppose the object or scene is translated by \(T=(\tau_{x},\tau_{y},\tau_{z})\in\mathbb{R}^{3}\), and let \(g_{t}:\mathbb{R}^{3}\to\mathbb{N}\) be the new voxel representation of the translated point cloud. Using the time-shift property, the Fourier coefficients \(G_{t}\) of \(g_{t}\) can be computed as: \[G_{t}(u,v,w)=F_{t}(u,v,w)e^{-i2\pi(\frac{u}{M}\tau_{x}+\frac{v}{N}\tau_{y}+\frac{w}{L}\tau_{z})} \tag{6}\] The goal of translation estimation is to find \(T\) given \(f_{t}\) and \(g_{t}\). This can be achieved by first computing the normalised cross-power spectrum \(\mathcal{C}_{t}\) of \(F_{t}\) and \(G_{t}\), and then applying the inverse Fourier transform, as in (7): \[\mathcal{C}_{t}(u,v,w)=\frac{F_{t}(u,v,w)\overline{G_{t}(u,v,w)}}{|F_{t}(u,v,w)\overline{G_{t}(u,v,w)}|}\qquad\delta(\tau_{x},\tau_{y},\tau_{z})=\mathcal{F}^{-1}(\mathcal{C}_{t}(u,v,w)) \tag{7}\] where \(\overline{G_{t}}\) is the complex conjugate of \(G_{t}\) and \(\mathcal{F}^{-1}\) is the inverse Fourier transform. The result \(\delta\) is a _Dirac delta_-like function whose peak location corresponds to the translation \(T\). Consequently, the translation \(T\) can be found by maximising \(\delta\): \[T=\nabla_{glob}T=\operatorname{argmax}\{\delta(\tau_{x},\tau_{y},\tau_{z})\} \tag{8}\] Even though the global translation solution \(\nabla_{glob}T\) can be found directly, in the context of 3D visual servoing only a small fraction of the vector, \(\nabla T=\lambda_{t}\nabla_{glob}T\) with \(\lambda_{t}\in\mathbb{R}^{+}\) and \(\lambda_{t}<1\), is used at each iteration. This allows the translation and rotation to be estimated simultaneously, and also controls the convergence rate of the controller. The following cost function \(J_{t}(T)\) can be formulated to evaluate the performance of the translation estimation algorithm on \(\mathbb{R}^{3}\): \[J_{t}(T)=\frac{1}{2}||g_{t}(x)-f_{t}(x+T)||^{2} \tag{9}\]
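For concreteness, the whole translation estimator (5)-(8) fits in a few lines of NumPy. The sketch below is ours; the small epsilon and the wrap-around handling are implementation choices rather than part of the method's derivation:

```python
import numpy as np

def estimate_translation(f_t, g_t):
    """Phase correlation of two voxel grids, Eqs. (5)-(8).

    Returns the integer-voxel translation at the peak of the
    Dirac-like volume delta (up to the usual FFT sign convention).
    """
    F = np.fft.fftn(f_t)
    G = np.fft.fftn(g_t)
    cross = F * np.conj(G)              # cross-power spectrum
    cross /= np.abs(cross) + 1e-12      # normalisation (Eq. 7); eps avoids 0/0
    delta = np.fft.ifftn(cross).real    # Dirac-like correlation volume
    t = np.array(np.unravel_index(np.argmax(delta), delta.shape))
    shape = np.array(delta.shape)
    return np.where(t > shape // 2, t - shape, t)   # unwrap negative shifts
```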
### Rotation Estimation via Fourier Analysis on \(\mathbf{S}^{2}\) As for the translation, the rotation between two point clouds (target and reference) can also be estimated using spectral analysis. Here, the unitary representation of data expressed on the unit sphere is used to encode the information of the object's surface normals. In this case, we estimate the global rotation from the correlation of EGIs. It is possible to find the "exact" rotation directly by searching for the rotation value that maximises the correlation, but this involves computing a computationally expensive double integral. Instead, a gradient-type optimisation is used to iteratively compute the rotation that maximises the correlation between the two point clouds. **Fourier transform on \(\mathbf{S}^{2}\) and \(\mathbf{SO(3)}\).** Let \(f_{r}:\mathbf{S}^{2}\to\mathbb{N}\) be the EGI of the surface normals of an object. Since the values of \(f_{r}\) lie in \(\mathbb{N}\subset\mathbb{R}\), the real harmonic analysis on \(\mathbf{SO(3)}\) introduced in [25] can be used to compute the Fourier coefficients. Given a bandwidth \(B\), the Fourier transform of \(f_{r}\) on \(\mathbf{S}^{2}\) is defined by: \[f_{r}(\theta,\phi)=\sum_{l=0}^{B-1}(F_{r}^{l})^{T}S^{l}(\theta,\phi) \tag{10}\] where \(F_{r}^{l}\in\mathbb{R}^{(2l+1)\times 1}\) are the Fourier coefficients and \(S^{l}\in\mathbb{R}^{2l+1}\) are orthogonal bases of real-valued functions defined on \(\mathbf{S}^{2}\). The vector \(S^{l}\) is built from the real spherical harmonics \(Y^{l}(\theta,\phi)\) and a matrix \(T^{l}\in\mathbb{C}^{(2l+1)\times(2l+1)}\) with complex coefficients, as follows: \[S^{l}(\theta,\phi)=T^{l}Y^{l}(\theta,\phi) \tag{11}\] See [25, 26] for more details on spherical harmonics. Suppose the point cloud is transformed by a rotation \(R\in\mathbf{SO(3)}\) about its centre of gravity. \(R\) is parameterised using the \(ZYZ\) Euler angle convention by \(\alpha,\gamma\in[0,2\pi[\) and \(\beta\in[0,\pi]\), with \(g_{r}:\mathbf{S}^{2}\to\mathbb{N}\) being the EGI of the rotated point cloud. The rotation matrix \(R\) can thus be expressed as: \[R=R(\alpha,\beta,\gamma)=\exp(\alpha\hat{e}_{z})\exp(\beta\hat{e}_{y})\exp(\gamma\hat{e}_{z}) \tag{12}\] with \(e_{y}\) and \(e_{z}\) being the vectors \((0,1,0)\) and \((0,0,1)\), respectively. The operator \(\hat{\cdot}:\mathbb{R}^{3}\to\mathfrak{so}(3)\) maps a 3D vector to its skew-symmetric \(3\times 3\) matrix in the Lie algebra \(\mathfrak{so}(3)=\{S\in\mathbb{R}^{3\times 3}\,|\,S+S^{T}=0\}\). Although the representation in (12) has inherent singularities, it is very convenient for computing the Fourier transform on \(\mathbf{SO(3)}\). As in (10), the Fourier transform of \(g_{r}\) is obtained as: \[g_{r}(\theta,\phi)=\sum_{l=0}^{B-1}(G_{r}^{l})^{T}S^{l}(\theta,\phi) \tag{13}\] where \(G_{r}^{l}\in\mathbb{R}^{(2l+1)\times 1}\) are the Fourier coefficients. Since \(g_{r}\) is a version of \(f_{r}\) transformed by a rotation, which gives relation (14), the Fourier transform of \(g_{r}\) can be computed from the Fourier coefficients of \(f_{r}\) via (15): \[g_{r}(\theta,\phi)=f_{r}(R^{T}(\theta,\phi)) \tag{14}\] where \(R^{T}(\theta,\phi)\) is shorthand for the expression \(M_{s2c}^{-1}(R^{T}M_{s2c}(\theta,\phi))\), with \(M_{s2c}:\mathbf{S}^{2}\to\mathbb{R}^{3}\) the function converting spherical coordinates to Cartesian coordinates, and \(M_{s2c}^{-1}:\mathbb{R}^{3}\to\mathbf{S}^{2}\) its inverse, which can be obtained from (3). Substituting \(f_{r}\) by its value and rewriting (14), we obtain: \[\begin{split}g_{r}(\theta,\phi)&=\sum_{l=0}^{B-1}(F_{r}^{l})^{T}S^{l}(R^{T}(\theta,\phi))\\ &=\sum_{l=0}^{B-1}(U^{l}(R)F_{r}^{l})^{T}S^{l}(\theta,\phi)\end{split} \tag{15}\] where \(U^{l}(R)=\overline{T^{l}}D^{l}(R)(T^{l})^{T}\). Here, \(\overline{T^{l}}\) is the complex conjugate of \(T^{l}\), while \(D^{l}\) is the Wigner \(\mathbf{D}\)-matrix.
The expansion in (15) is possible because rotations are expressed as Wigner \(\mathbf{D}\)-matrices in the spectral domain, and applying a rotation to the basis functions \(S^{l}\) is equivalent to applying a linear transformation of the basis functions by the Wigner \(\mathbf{D}\)-matrix associated with that rotation. From (13) and (15), we see that \(G_{r}^{l}=U^{l}(R)F_{r}^{l}\). Thus, \(G_{r}^{l}\) is obtained by applying the transformation \(U^{l}(R)\) to the Fourier coefficients of \(f_{r}\). For more details on the commonly used properties of the Wigner \(\mathbf{D}\)-matrix, see [25, 26, 27]. The goal of rotation estimation is to find \(R\) given \(f_{r}\) and \(g_{r}\). **Correlation on \(\mathbf{SO(3)}\) and its derivatives.** The correlation between \(f_{r}\) and \(g_{r}\) is computed as: \[\mathcal{C}_{r}(R)=corr(f_{r},g_{r})=\frac{1}{4\pi}\sum_{l=0}^{B-1}(G_{r}^{l})^{T}U^{l}(R)F_{r}^{l} \tag{16}\] This result is obtained after simplification, by replacing \(f_{r}\) and \(g_{r}\) with their Fourier representations (10) and (15), and by using the convolution theorem of the Fourier transform2 together with the orthogonality of the bases \(S^{l}\). The relation \(\langle S^{l}(\theta,\phi),(S^{l}(R^{T}(\theta,\phi)))^{T}\rangle=\frac{1}{4\pi}U^{l}(R)\) follows directly from the orthogonality of the basis vectors \(S^{l}\), where \(\langle\cdot\rangle\) is the inner product in \(\mathcal{L}^{2}(\mathbf{SO(3)})\). Footnote 2: Convolution in the spatial domain is equivalent to multiplication of the Fourier coefficients in the spectral domain. In (16), only \(U^{l}\) depends on the rotation \(R\); hence, the derivative of \(\mathcal{C}_{r}\) can be obtained by computing the derivative of \(U^{l}\). The derivative of \(U^{l}\), evaluated at \(R\), with respect to an elementary rotation \(R_{\epsilon}=\exp(\epsilon\hat{\eta})\) (\(\epsilon\approx 0\) and \(\eta\in\mathbb{R}^{3}\)) is computed as: \[\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}U^{l}(R\exp(\epsilon\hat{\eta}))=U^{l}(R)\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}U^{l}(\exp(\epsilon\hat{\eta})) \tag{17}\] In the previous equation, the homomorphism property of \(U^{l}\) was used, i.e. \(U^{l}(R_{1}R_{2})=U^{l}(R_{1})U^{l}(R_{2})\) for \(R_{1},R_{2}\in\mathbf{SO(3)}\). The derivative of \(\mathcal{C}_{r}\) is then computed as: \[\begin{split}\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\mathcal{C}_{r}(R\exp(\epsilon\hat{\eta}))&=\frac{1}{4\pi}\sum_{l=0}^{B-1}(G_{r}^{l})^{T}U^{l}(R)u^{l}(\eta)F_{r}^{l}\\ &=\nabla\mathcal{C}_{r}(R,\eta)\cdot\eta\end{split} \tag{18}\] where \(\nabla\mathcal{C}_{r}(R,\eta)\in\mathbb{R}^{3}\) is the gradient of \(\mathcal{C}_{r}(R)\) about the axis \(\eta\) and \(u^{l}(\eta)=\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}U^{l}(\exp(\epsilon\hat{\eta}))\). Evaluating the gradient \(\nabla\mathcal{C}_{r}(R,\eta)\) at \(\eta=e_{x},e_{y},e_{z}\) yields the elementary rotation which, composed with \(R\), increases the value of the correlation \(\mathcal{C}_{r}\). More formally: \[\left.\nabla\mathcal{C}_{r}(R,e_{k})\right|_{k\in\{x,y,z\}}=\frac{1}{4\pi}\sum_{l=0}^{B-1}(G_{r}^{l})^{T}U^{l}(R)u^{l}(e_{k})F_{r}^{l} \tag{19}\] Computing \(u^{l}(e_{k})\) is straightforward since it amounts to directly differentiating the entries of the Wigner \(\mathbf{D}\)-matrix, for which an analytical derivative is available in [25].
A gradient-based scheme can now be used to iteratively search for the optimal rotation. A cost function \(J_{r}\) can be formulated to evaluate the performance of the rotation estimation algorithm on \(\mathbf{SO(3)}\): \[J_{r}(R)=\frac{1}{2}||g_{r}(\theta,\phi)-f_{r}(R^{T}(\theta,\phi))||^{2} \tag{20}\] ### Control Law To estimate the transformation \(H=(R,T)\in\mathbf{SO(3)}\times\mathbb{R}^{3}\) between the current and reference point clouds, the control law given by (21) is used: \[\begin{split}T&=T+\lambda_{t}\nabla_{glob}T\\ R&=R\exp\left(\lambda_{r}\widehat{\nabla\mathcal{C}_{r}}\right)\end{split} \tag{21}\] where \(\lambda_{t},\lambda_{r}\in\mathbb{R}^{+}\) and \(\lambda_{t},\lambda_{r}<1\). \(\nabla_{glob}T\) and \(\nabla\mathcal{C}_{r}\) are computed from (8) and (19), respectively. At the first iteration, the rotation matrix \(R\) and the translation vector \(T\) can be initialised randomly, or set to the identity matrix and the zero vector, respectively. The controller has converged when: \[||\nabla_{glob}T||+||\nabla\mathcal{C}_{r}||<\epsilon_{g} \tag{22}\] where \(\epsilon_{g}\in\mathbb{R}^{+}\) is the tolerance threshold. To control the robot, the following control law is used: \[\dot{\mathbf{q}}=\mathcal{J}_{c}^{+}\dot{X}_{c} \tag{23}\] where \(\mathcal{J}_{c}^{+}\) is the pseudo-inverse of the robot Jacobian expressed in the camera frame, \(\dot{\mathbf{q}}\) is the vector of robot joint velocities, and \(\dot{X}_{c}\) the Cartesian velocities of the camera, computed from (21). The algorithm corresponding to the control law is given in Alg. 1.
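A minimal sketch of one iteration of (21)-(22) is given below (our own illustration; SciPy's rotation-vector exponential stands in for \(\exp(\lambda_{r}\widehat{\nabla\mathcal{C}_{r}})\), and the gradients are assumed to come from the estimators described above):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def servo_step(T, R, grad_T, grad_Cr, lam_t=0.1, lam_r=0.1):
    """One decoupled update of the control law (21)."""
    T = T + lam_t * grad_T                                      # translation update
    R = R @ Rotation.from_rotvec(lam_r * grad_Cr).as_matrix()   # R * exp(lam_r * hat(grad))
    return T, R

def has_converged(grad_T, grad_Cr, eps_g=1e-4):
    """Stopping criterion (22)."""
    return np.linalg.norm(grad_T) + np.linalg.norm(grad_Cr) < eps_g
```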
## 3 Experimental Validation ### Description of the Experimental Platform The experimental validations are carried out using point clouds acquired with a depth camera. Two different sets of experiments are presented in this paper. We first validate the model registration process, where a complete reference model of an object is aligned with a scene point cloud. We then show direct visual servoing experiments performed with a 7-DoF robot (KUKA iiwa) equipped with a depth camera (Ensenso N35) mounted on the robot's end-effector. In this case, the entire point cloud is used to position the robot at an object-target position. The Point Cloud Library (PCL) [28] is used for point cloud processing, while the FFTSO3 [25] and FFTW [29] libraries are used for the spectral analysis. As mentioned in Sec. 1, the reference object point cloud used for the model registration experiments is built offline by combining several point clouds from different viewpoints, as presented in [20]. With the Ensenso N35 sensor, the surface normals of each point are obtained directly at acquisition time. The point clouds are voxelised with a grid resolution of \(8\,\mathrm{mm}\), while the surface normals are discretised on the unit sphere using a bandwidth \(B=16\). The maximum expansion degree of the harmonic coefficients on the sphere is \(l_{max}=32\). These values were determined empirically and give good results in terms of computation speed and registration accuracy. The three main factors affecting the convergence speed of our approach are the voxel grid resolution, the EGI bandwidth, and the parameters \(\lambda_{t}\) and \(\lambda_{r}\). Finer grids require computing a larger number of Fourier coefficients, which slows down the convergence of the algorithm. With the parameters mentioned above, the current processing speed of our approach is on average \(8.7\,\mathrm{ms}/\mathrm{iteration}\). ### Registration Analysis The following three experiments are carried out to validate the model registration capability of our approach: (C-1) the complete point cloud of an object is aligned with a version of itself transformed by an arbitrary rotation and translation (Fig. 1 and Fig. 3); (C-2) the complete point cloud of an object is registered to a partial target point cloud of the same object (Fig. 4); and (C-3) the complete point cloud of an object is registered to a scene containing several objects (Fig. 5). For these tests, the reference and target point clouds share the same global frame. Various everyday objects are used in the experiments, and the multi-object scenes are built by placing several objects randomly, as illustrated in Fig. 5. The images shown in Figs. 1, 3, 4, and 5 give examples of convergence during the alignment procedure and of the evolution of the (translation and rotation) errors throughout. From the obtained results, one can observe that the reference models are accurately aligned with the target point clouds in all configurations. The average final convergence costs for conditions C-1 and C-2, and the average final gradient value for C-3, are reported in Table 1. The results clearly demonstrate the accuracy of our approach. Moreover, the method has shown very interesting performance in challenging conditions, such as aligning a complete model to partially observed models as well as to completely unstructured scenes (with occlusions) containing several objects. Figure 3: Convergence curves for the objects: mug (left) and gas handle (right) shown in Fig. 1. ### Robotic Arm Positioning Task: Direct 3D Visual Servoing For this experiment, we consider a positioning task in which the robot's position is controlled by our direct servoing method. As previously mentioned, the entire point cloud is used in the control loop, i.e. without any segmentation method. This test was carried out with a scene consisting of several objects, in order to test the generality of our approach, particularly for complex scenes. The reference point cloud is initially acquired at the desired robot position. The robot is then moved to a random position in the task space, ensuring a large spatial transformation. The obtained results are illustrated in Fig. 6. From these results, it is clear that the method works well when aligning a full point cloud from a position where only part of it is visible. The convergence curves show the smoothness of the robot's motion in reaching the target position.
## 4 Conclusion In this paper, we presented a direct 3D visual servoing method based on point clouds expressed in the spectral domain. The presented approach uses the time-shift property of the Fourier transform to estimate translations, and real spherical harmonics on \(\mathbf{SO(3)}\) to estimate rotations, in order to progressively align a reference point cloud with a target point cloud. This approach was first used to align 3D models to different scenes, and then to control the position of a robotic arm so as to automatically perform a positioning task. The obtained experimental results demonstrate the effectiveness of our approach in terms of accuracy and controller behaviour, including under unfavourable conditions (e.g. occlusions). Future work will focus on using the proposed approach for the manipulation of static or moving objects. \begin{table} \begin{tabular}{c|c|c|c} \hline & C-1 (cost) & C-2 (cost) & C-3 (gradient) \\ \hline Trans. error & 1.775e-5 & 8.113e-5 & 6.4e-05 \\ Rot. error & 2.747e-2 & 3.2e-2 & 1.199e-12 \\ \hline \end{tabular} \end{table} Table 1: Average values at convergence. Figure 4: Example of an alignment procedure in the case where a complete reference model is aligned with a partially observed scene. The red, blue, and green point clouds represent the reference, the aligned model, and the target, respectively. Results are shown for two objects: a glove (left) and a mustard box (right). Figure 5: Example of model alignment in the case of multi-object scenes. Top: the scene used for the alignment of two different objects; bottom: the corresponding curves showing the evolution of the translation and rotation gradients during the procedure. Figure 6: Illustration of direct 3D visual servoing. The top row shows the robot at the start, intermediate, and final positions. The middle row shows the initial, intermediate, and final point clouds. The bottom row shows the convergence curve and the trajectory followed by the robot's end-effector. For this test, the complete point cloud is used without any local model matching.
2307.08746
Unravelling the structure of magnetised molecular clouds with SILCC-Zoom: sheets, filaments and fragmentation
To what extent magnetic fields affect how molecular clouds (MCs) fragment and create dense structures is an open question. We present a numerical study of cloud fragmentation using the SILCC-Zoom simulations. These simulations follow the self-consistent formation of MCs in a few hundred parsec sized region of a stratified galactic disc; and include magnetic fields, self-gravity, supernova-driven turbulence, as well as a non-equilibrium chemical network. To discern the role of magnetic fields in the evolution of MCs, we study seven simulated clouds, five with magnetic fields, and two without, with a maximum resolution of 0.1 parsec. Using a dendrogram we identify hierarchical structures which form within the clouds. Overall, the magnetised clouds have more mass in a diffuse envelope with a number density between 1-100 cm$^{-3}$. We find that six out of seven clouds are sheet-like on the largest scales, as also found in recent observations, and with filamentary structures embedded within, consistent with the bubble-driven MC formation mechanism. Hydrodynamic simulations tend to produce more sheet-like structures also on smaller scales, while the presence of magnetic fields promotes filament formation. Analysing cloud energetics, we find that magnetic fields are dynamically important for less dense, mostly but not exclusively atomic structures (typically up to $\sim 100 - 1000$~cm$^{-3}$), while the denser, potentially star-forming structures are energetically dominated by self-gravity and turbulence. In addition, we compute the magnetic surface term and demonstrate that it is generally confining, and some atomic structures are even magnetically held together. In general, magnetic fields delay the cloud evolution and fragmentation by $\sim$ 1 Myr.
S. Ganguly, S. Walch, D. Seifried, S. D. Clarke, M. Weis
2023-07-17T18:00:03Z
http://arxiv.org/abs/2307.08746v1
Unravelling the structure of magnetised molecular clouds with SILCC-Zoom: sheets, filaments and fragmentation ###### Abstract To what extent magnetic fields affect how molecular clouds (MCs) fragment and create dense structures is an open question. We present a numerical study of cloud fragmentation using the SILCC-Zoom simulations. These simulations follow the self-consistent formation of MCs in a few hundred parsec sized region of a stratified galactic disc; and include magnetic fields, self-gravity, supernova-driven turbulence, as well as a non-equilibrium chemical network. To discern the role of magnetic fields in the evolution of MCs, we study seven simulated clouds, five with magnetic fields, and two without, with a maximum resolution of 0.1 parsec. Using a dendrogram we identify hierarchical structures which form within the clouds. Overall, the magnetised clouds have more mass in a diffuse envelope with a number density between 1-100 cm\({}^{-3}\). We find that six out of seven clouds are sheet-like on the largest scales, as also found in recent observations, and with filamentary structures embedded within, consistent with the bubble-driven MC formation mechanism. Hydrodynamic simulations tend to produce more sheet-like structures also on smaller scales, while the presence of magnetic fields promotes filament formation. Analysing cloud energetics, we find that magnetic fields are dynamically important for less dense, mostly but not exclusively atomic structures (typically up to \(\sim 100-1000\) cm\({}^{-3}\)), while the denser, potentially star-forming structures are energetically dominated by self-gravity and turbulence. In addition, we compute the magnetic surface term and demonstrate that it is generally confining, and some atomic structures are even magnetically held together. In general, magnetic fields delay the cloud evolution and fragmentation by \(\sim 1\) Myr. keywords: MHD - methods: numerical - stars: formation - ISM: clouds - ISM: kinematics and dynamics ## 1 Introduction Magnetic fields are ubiquitous in the interstellar medium (ISM, Crutcher et al., 2003; Heiles & Troland, 2005; Fletcher et al., 2011; Beck, 2015). Since the discovery of interstellar magnetic fields by Hiltner (1951) and Hall (1951), they have been known to be integral to the dynamical evolution of the ISM. Magnetic fields, however, are also notoriously difficult to measure accurately and model theoretically. Decades of dedicated observations have resulted in a good understanding of the morphology and strength of the magnetic field in different ISM phases (Crutcher, 1999; Bourke et al., 2001; Heiles & Crutcher, 2005; Troland & Crutcher, 2008; Crutcher, 2012; Beck, 2015; Planck Collaboration et al., 2020; Lopez-Rodriguez et al., 2023). However, the exact nature of how magnetic fields affect molecular cloud (MC) formation and evolution is an open question and subject of intense scrutiny (see e.g. reviews by Crutcher, 2012; Hennebelle & Inutsuka, 2019; Girichidis et al., 2020; Pattle et al., 2022). Various numerical studies have performed detailed analysis on the interplay of magnetic fields with other physical processes (e.g. turbulence, thermal pressure) in order to determine how MCs are shaped, formed, and how they evolve (e.g. Heitsch et al., 2001; Federrath & Klessen, 2012; Walch et al., 2015; Kortgen & Banerjee, 2015; Girichidis et al., 2016; Kortgen et al., 2018; Seifried et al., 2019; Ibanez-Mejia et al., 2022). 
On galactic scales, ordered magnetic fields have been observed, with a correlation between the direction of the spiral arms and the magnetic field (Beck, 2009; Fletcher et al., 2011; Li & Henning, 2011). In the diffuse ISM, the magnetic field strength, \(B\), does not show any correlation with the density for number densities of up to roughly 300 cm\({}^{-3}\) (Crutcher et al., 2010). Above these densities, Crutcher et al. (2010) find \(B\propto\rho^{\kappa}\), with \(\kappa\approx 2/3\), consistent with sub-dominant magnetic field strengths, although there remains considerable scatter in the observations. The lack of correlation between the strength of the magnetic field and the density of the ambient medium implies that in the diffuse ISM, magnetic fields can channelise gas flows along the field lines and therefore influence the environment in which MCs form. Pardi et al. (2017) show that magnetic fields are more likely to cause a smoother gas distribution, while Molina et al. (2012) find that they are more likely to affect the dynamics of lower-density gas. Magnetic fields can add to the thermal pressure exerted by the gas and slow down the formation of dense gas (Hill et al., 2012), as well as molecular gas (Girichidis et al., 2018; Seifried et al., 2020). A sufficiently strong magnetic field can prevent the collapse of a MC altogether (Mouschovias, 1991; Spitzer, 1978) or slow down cloud evolution (Heitsch et al., 2001; Padoan & Nordlund, 2011; Federrath & Klessen, 2012; Ibanez-Mejia et al., 2022). In terms of morphology, they can facilitate the formation of elongated filamentary structures (Hacar et al., 2022; Pineda et al., 2022) and are essential in understanding the filamentary nature of the ISM (see e.g. Bally et al., 1987; Andre et al., 2014). The direction of such elongation relative to the direction of the magnetic field is the subject of active research (e.g. Soler & Hennebelle, 2017; Seifried et al., 2020). In the lower density range, for sub-Alfvenic gas, anisotropic turbulence can lead to structures elongated parallel to field lines. In contrast, at higher densities, magnetic fields can channelise flows along field lines and therefore facilitate structures perpendicular to the field direction. Magnetic fields are likely to also affect the fragmentation of clouds and cloud cores. Commercon et al. (2011) find that fragments in magnetized cloud cores are more massive compared to those formed without magnetic fields. Although the probability density function (PDF) of lower density gas is found to be different in the presence of magnetic fields (Molina et al., 2012), the high density, potentially star-forming part does not seem to be significantly affected (Klessen & Burkert, 2001; Slyz et al., 2005; Girichidis et al., 2014; Schneider et al., 2015). In this work, we perform a numerical investigation of the role that magnetic fields play in the formation and shaping of density structures within MCs. We carry out a detailed analysis of realistic MC simulations based on the SILCC-Zoom simulations (Seifried et al., 2017) by comparing the morphological, dynamical, and fragmentation properties in seven simulated clouds, five with magnetic fields (magnetohydrodynamic or MHD clouds) and two without (hydrodynamic or HD clouds). The paper is structured as follows: In Section 2, we outline the numerical setup of the simulation. Section 3 discusses the procedure for identifying and classifying structures (Ganguly et al., 2022).
We highlight the differences in the density PDFs between HD and MHD clouds in Section 4. The morphological properties of the obtained structures are presented in Section 5. We find all the MCs to be sheet-like on the largest scales (tens of parsecs). On smaller scales, we see that the presence of magnetic fields enhances the formation of filamentary over sheet-like sub-structures. In Section 6, we analyse the dynamics and energetic balance of magnetized structures and relate them to the fragmentation of cloud sub-structures. We find that the presence of magnetic fields slows down cloud evolution and, in particular, leads to more massive fragments at low to intermediate densities (\(<\)100 cm\({}^{-3}\)). We attempt to make an order-of-magnitude estimate of this slow-down effect in Section 6.5. Finally, we present the summary of our findings in Section 7. ## 2 Numerical methods and simulation We present here results based on the SILCC-Zoom simulations (Seifried et al., 2017, 2019). The SILCC-Zoom simulations follow MCs with realistic boundary conditions, generated by embedding the clouds within the SILCC simulations of the multi-phase interstellar gas, and thus have realistic initial conditions (Walch et al., 2015; Girichidis et al., 2016). In this section, we highlight some key features of the simulations. More details can be found in Seifried et al. (2017) and Seifried et al. (2019). All simulations were executed using the adaptive mesh refinement code FLASH, version 4 (Fryxell et al., 2000; Dubey et al., 2008), which solves the equations of ideal MHD. For a fluid parcel of density \(\rho\), velocity \(\mathbf{v}\), total energy \(e_{\rm tot}\), and magnetic field vector \(\mathbf{B}\) (zero in the pure hydrodynamic case), these read: \[\frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho\mathbf{v}\right)=0, \tag{1}\] \[\frac{\partial\rho\mathbf{v}}{\partial t}+\nabla\cdot\left[\rho\mathbf{v}\otimes\mathbf{v}+\left(P+\frac{B^{2}}{8\pi}\right)\mathbf{I}-\frac{\mathbf{B}\otimes\mathbf{B}}{4\pi}\right]=\rho\mathbf{g}, \tag{2}\] \[\frac{\partial e_{\rm tot}}{\partial t}+\nabla\cdot\left[\left(e_{\rm tot}+P\right)\mathbf{v}-\frac{\left(\mathbf{B}\cdot\mathbf{v}\right)\mathbf{B}}{4\pi}\right]=\rho\mathbf{v}\cdot\mathbf{g}+\dot{u}_{\rm heat}, \tag{3}\] \[\frac{\partial\mathbf{B}}{\partial t}-\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)=0. \tag{4}\] Here, Eqs. 1 to 4 represent the conservation of mass, momentum, energy, and magnetic flux, respectively. \(P\) represents the thermal pressure, \(\mathbf{g}\) is the local gravitational acceleration obtained from solving the Poisson equation, \(u\) is the internal energy, and \(\dot{u}_{\rm heat}\) is the internal energy input rate due to the combination of heating and cooling processes. Here, \(\otimes\) denotes the outer product (i.e. \((\mathbf{a}\otimes\mathbf{b})_{ij}=a_{i}b_{j}\)). The total energy and the pressure are computed as follows: \[e_{\rm tot}=u+\frac{1}{2}\rho v^{2}+\frac{1}{8\pi}B^{2}, \tag{5}\] \[P=(\gamma-1)u, \tag{6}\] with \(\gamma\) being the adiabatic index. Here, we present results from runs with and without magnetic fields. The MHD simulations shown are performed using an entropy-stable solver that guarantees the minimum possible dissipation (Derigs et al., 2016, 2018). The hydrodynamic simulations have been performed using the MHD Bouchut 5-wave solver (Bouchut et al., 2007; Waagan, 2009) that guarantees positive entropy and density; for these runs, the magnetic field strength has been set to zero.
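Since the analysis below repeatedly weighs magnetic against thermal and kinetic terms, the standard diagnostics derived from these variables are worth spelling out. The following sketch (ours, in the Gaussian-unit convention of Eqs. 1-6, with inputs in cgs) computes the Alfven speed, plasma beta and Alfvenic Mach number of a fluid parcel; it is a generic diagnostic, not code from the simulations themselves:

```python
import numpy as np

def mhd_diagnostics(rho, v, B, P):
    """Alfven speed, plasma beta and Alfvenic Mach number (Gaussian units).

    rho: density, v: velocity vector, B: magnetic field vector, P: thermal
    pressure; the magnetic pressure is B^2 / (8 pi), as in Eq. (2).
    """
    B_mag = np.linalg.norm(B)
    v_alfven = B_mag / np.sqrt(4 * np.pi * rho)   # Alfven speed
    beta = P / (B_mag**2 / (8 * np.pi))           # thermal-to-magnetic pressure ratio
    mach_alfven = np.linalg.norm(v) / v_alfven    # Alfvenic Mach number
    return v_alfven, beta, mach_alfven
```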
All simulations include self-gravity as well as an external galactic potential due to the presence of old stars. This external potential is calculated assuming a stellar population density of \(\Sigma_{\rm star}=30\) M\({}_{\odot}\) pc\({}^{-2}\), a sech\({}^{2}\) vertical profile, and a scale height of 100 pc, according to Spitzer (1942). The self-gravity of the gas is calculated using a tree-based algorithm (Wunsch et al., 2018). The entire simulation domain consists of a box of \(500\) pc \(\times\) \(500\) pc \(\times\) \(\pm 5\) kpc size, with the long axis representing the vertical \(z\)-direction of a galactic disc. The box has periodic boundary conditions in the \(x\)- and \(y\)-direction, and outflow boundary conditions in the \(z\)-direction. The initial gas surface density is set to \(\Sigma_{\rm gas}=10\) M\({}_{\odot}\) pc\({}^{-2}\), which corresponds to solar neighbourhood conditions. The vertical distribution of the gas is modelled as a Gaussian, i.e. \(\rho=\rho_{0}\exp(-z^{2}/2h_{z}^{2})\), where \(h_{z}=30\) pc is the scale height and \(\rho_{0}=9\times 10^{-24}\) g cm\({}^{-3}\). The initial gas temperature is set to 4500 K. For runs with magnetic fields, the magnetic field is initialized along the \(x\)-direction, i.e. \(\mathbf{B}=(B_{x},0,0)\) with \(B_{x}=B_{x,0}\sqrt{\rho(z)/\rho_{0}}\) and a magnetic field strength at the midplane of \(B_{x,0}=3\) \(\mu\)G. The field strength is chosen to be in accordance with recent observations (e.g. Beck & Wielebinski, 2013). The turbulence in the simulations is generated by supernova explosions. The explosion rate is set to 15 SNe Myr\({}^{-1}\), consistent with the Kennicutt-Schmidt relation, which observationally relates the star formation rate surface density to the gas surface density (Schmidt, 1959; Kennicutt, 1998). 50% of the supernovae are placed following a Gaussian random distribution along the \(z\)-direction up to a height of 50 pc, while the other 50% are placed at density peaks of the gas. This prescription of supernova driving creates a multi-phase turbulent ISM which can be used as initial conditions for the zoom-in simulations (Walch et al., 2015; Girichidis et al., 2016). Apart from the dynamics of the gas, we also model its chemical evolution using a simplified non-equilibrium chemical network based on hydrogen and carbon chemistry (Nelson & Langer, 1997; Glover & Mac Low, 2007; Glover et al., 2010). For this purpose, we follow the abundances of H\({}^{+}\), H, H\({}_{2}\), CO, C\({}^{+}\), e\({}^{-}\), and O. At the beginning of the simulation, all hydrogen in the disc midplane is neutral and carbon is in its ionized form (i.e. H and C\({}^{+}\), respectively). To correctly model the chemistry of the gas, we include an interstellar radiation field (ISRF) of strength \(G_{0}=1.7\) in Habing units (Habing, 1968; Draine, 1978). The attenuation of this radiation field is taken into account by computing the true optical depth at any given point in the simulation domain. This is computed as follows: \[\mathrm{A_{V,3D}}=-\frac{1}{2.5}\,\ln\left[\frac{1}{N_{\mathrm{pix}}}\sum_{i=1}^{N_{\mathrm{pix}}}\exp\left(-2.5\,\frac{N_{\mathrm{H,tot},i}}{1.87\times 10^{21}\;\mathrm{cm}^{-2}}\right)\right], \tag{7}\] where the sum is carried out over the HEALPix pixels, with \(N_{\mathrm{pix}}\) being the total number of such pixels (here 48), and \(N_{\mathrm{H,tot},i}\) the total hydrogen column density computed for the \(i\)-th pixel. In essence, for any given point, we compute the column density along various lines of sight and use that for an effective \(\mathrm{A_{V,3D}}\). The averaging is performed exponentially because the intensity of the radiation decreases exponentially with the extinction caused by the gas column density along the line of sight. The calculation is performed by the TreeRay/OpticalDepth module developed by Wunsch et al. (2018).
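For illustration, Eq. 7 is straightforward to evaluate once the per-pixel column densities are known. The following Python sketch is our own stand-alone version, not the TreeRay implementation:

```python
import numpy as np

def av_3d(n_h_tot):
    """Effective visual extinction A_V,3D of Eq. 7.

    n_h_tot: array of total hydrogen column densities N_H,tot,i
    [cm^-2], one entry per HEALPix pixel (48 in our setup).
    """
    n_h_tot = np.asarray(n_h_tot, dtype=float)
    # per-pixel attenuation factor exp(-2.5 N_H / 1.87e21)
    attenuation = np.exp(-2.5 * n_h_tot / 1.87e21)
    # exponential (intensity-weighted) average over all sightlines
    return -np.log(np.mean(attenuation)) / 2.5
```

If all sightlines see \(N_{\rm H,tot}=1.87\times 10^{21}\) cm\({}^{-2}\), the sketch returns \(\mathrm{A_{V,3D}}=1\), as expected.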
To study the formation of MCs, all supernova explosions are stopped at a certain time \(t_{0}\). Up to this point, the maximum grid resolution is 3.9 pc. At time \(t_{0}\), different regions are identified for the zoom-in process, primarily by determining which regions form molecular gas when the simulations are run further at the original SILCC resolution of 3.9 pc. The time \(t=t_{0}\) refers to the start of the evolution of the different clouds and is set as an evolutionary time \(t_{\mathrm{evol}}=0\). The total simulation time \(t\) is related to the evolution time as \[t=t_{0}+t_{\mathrm{evol}}. \tag{8}\] From \(t_{\mathrm{evol}}=0\) on, the AMR grid in the selected regions is allowed to be refined to a higher resolution, in order to capture the structures that form as MCs. These regions are called zoom-in regions and are of primary importance to us as the sites of MCs. Each SILCC simulation we run contains two such "zoom-in" boxes simultaneously. All runs presented here have a maximum resolution of 0.125 pc. For details of how the zoom-in process is achieved, see Seifried et al. (2017).

## 3 Classification of Structures

For the analysis presented in this work, we look at eight different cubic boxes of 62.5 pc in size, each from a different SILCC zoom-in region. These boxes are chosen by visual inspection, in order to capture the most interesting features contained in each zoom-in region. For the purpose of this work, we will refer to these cubic regions as MCs. They are named MC1-HD and MC2-HD for the two hydrodynamic clouds, and MCx-MHD for the MHD clouds, where \(x\) runs from one to six. We present some basic details of the different MCs in Table 1. A projected view of all the different MCs is shown in Appendix A. For more information on the presented clouds, we refer the reader to Seifried et al. (2017) for the HD clouds and Seifried et al. (2019) for the MHD clouds. We perform a detailed analysis of the different clouds, following their evolution from \(t_{\mathrm{evol}}=2\) Myr to \(t_{\mathrm{evol}}=3.5\) Myr, primarily focusing on the latter time. The beginning and the end time are chosen to look at relatively early stages of structure formation in the MCs. We do not look at times earlier than 2 Myr, primarily because the clouds undergo the refinement process and are not fully resolved until \(t_{\mathrm{evol}}\sim 1.5\) Myr.

### Structure identification

To identify structures in our MCs, we use a dendrogram algorithm (Rosolowsky et al., 2008). The dendrogram is a model-independent method to determine hierarchical structures in two and three dimensions.
Since we are interested in 3-dimensional structures, we perform the dendrogram analysis on 3-dimensional density cubes. We do not use the 3D AMR grid structure inherent in the data, but rather convert it into a uniform mesh at 0.125 and 0.25 pc resolution (see also Table 2). Given an initial density field, \(\rho\), the dendrogram essentially depends on three free parameters: the starting density threshold, \(\rho_{0}\), the density jump, \(\Delta\rho\), and the minimum number of cells that need to be included in any structure, \(N_{\mathrm{cells}}\). Due to the high density contrasts, we build the dendrogram tree on the logarithmic density profile of the gas, and therefore use density bins of \(\Delta\log_{10}\rho\), rather than \(\Delta\rho\). In addition to the three parameters mentioned, we can choose a pruning peak, \(\rho_{\mathrm{prune}}\), to allow the dendrogram to create new structures only when such a structure has a peak density \(\rho_{\mathrm{peak}}>\rho_{\mathrm{prune}}\), although this option has not been used in the present work. Using these parameters, the dendrogram algorithm allows us to define volumes of gas as structures in a hierarchical tree, primarily defined by their threshold density \(\rho_{\mathrm{thr}}\), which is the minimum density value inside a given structure. This can be thought of as equivalent to contour values for two-dimensional maps. The hierarchy is characterised by different dendrogram _branches_, where a branch is a given dendrogram structure and all its parent structures, up to the largest and most diffuse ancestor in the dendrogram tree. For probing both the higher and lower density ends of the data, we perform two dendrogram analyses on the same regions: a higher-density dendrogram analysis performed at a resolution of 0.125 pc for probing gas above densities of \(10^{-22}\) g cm\({}^{-3}\) (referred to as _high-den_), and a lower-density analysis performed at 0.25 pc for gas between densities of \(10^{-24}\) and \(10^{-22}\) g cm\({}^{-3}\) (referred to as _low-den_). The _low-den_ values are computed as volume-averaged values from the higher-resolution grid.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Run name & MHD & \(t_{0}\) & Total mass & H\({}_{2}\) mass & \(\langle B\rangle\) \\ & & [Myr] & [\(10^{4}\) M\({}_{\odot}\)] & [\(10^{4}\) M\({}_{\odot}\)] & [\(\mu\)G] \\ \hline MC1-HD & no & 12 & 7.3 & 2.1 & 0 \\ MC2-HD & no & 12 & 5.4 & 1.6 & 0 \\ \hline MC1-MHD & yes & 16 & 7.8 & 1.3 & 4.8 \\ MC2-MHD & yes & 16 & 6.2 & 0.86 & 3.9 \\ (MC3-MHD\({}^{\mathrm{a}}\) & yes & 16 & 2.0 & 0.19 & 2.0) \\ MC4-MHD & yes & 11.5 & 6.8 & 1.2 & 6.4 \\ MC5-MHD & yes & 11.5 & 10.1 & 1.6 & 6.8 \\ MC6-MHD & yes & 16 & 6.6 & 1.4 & 4.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Basic information on the eight analysed simulations. From left to right we list the run name, whether magnetic fields are present or not, the time when the AMR “zoom-in” starts, as well as the total mass, the molecular hydrogen mass, and the average magnetic field strength at \(t_{\mathrm{evol}}=2\) Myr.

We present the dendrogram parameters used for both analyses in Table 2. In addition to the difference in the basic parameters between the two dendrogram analyses, we remove all structures with \(\rho_{\rm thr}>10^{-22}\) g cm\({}^{-3}\) from the _low-den_ analysis. This is done in order to avoid double counting of structures.
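To make the identification step concrete, the _high-den_ analysis of Table 2 can be sketched as follows. This is a minimal illustration assuming the publicly available astrodendro package (an implementation of the Rosolowsky et al. 2008 algorithm), not necessarily the exact code used for this work:

```python
import numpy as np
from astrodendro import Dendrogram

# rho: 3D density cube [g cm^-3] on the uniform 0.125 pc mesh.
# The tree is built on log10(rho), so the density jump of Table 2
# enters as min_delta in dex.
log_rho = np.log10(rho)

d = Dendrogram.compute(
    log_rho,
    min_value=np.log10(1e-22),  # starting threshold rho_0
    min_delta=0.1,              # density jump (0.1 dex, high-den)
    min_npix=100,               # minimum number of cells N_cells
)

# Leaves contain no further sub-structures; each structure provides
# a boolean mask selecting its member cells.
for leaf in d.leaves:
    mask = leaf.get_mask()
    rho_thr = rho[mask].min()   # threshold density of the structure
```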
The parameter values mentioned in Table 2 have been chosen based on a mixture of practical considerations, such as available memory and computation time, and through trial and error. We note that in principle the same analysis could be performed by a single dendrogram analysis at \(\rho_{\rm thr}=10^{-24}\) g cm\({}^{-3}\) at the highest resolution of 0.125 pc. However, the computational cost of such an analysis was prohibitive in our case. Combining the _high-den_ and _low-den_ dendrogram analyses allows us to probe a much larger density range than would otherwise be possible. In terms of the parameters used, we have seen no unexpected changes in the results when varying the free parameters within a reasonable range. We refer the reader to our companion paper (Ganguly et al., 2022) for a more thorough discussion of the effect of altering the parameter values on the analysis. Overall, we find that changing the parameters, while resulting in a varying number of obtained structures, leaves the statistical properties of the structures virtually unaffected. An example of the leaf density structures (structures that contain no further sub-structures) from the dendrogram analysis can be seen in Fig. 1 for MC1-MHD at \(t_{\rm evol}=3.5\) Myr, as contours over column density maps. The three panels show, from left to right, the cloud projected along the \(x\)-, \(y\)-, and \(z\)-direction. The contours are drawn as projections of the 3D dendrogram structure outlines along the respective direction. We distinguish between structures depending on their molecular H\({}_{2}\) content, by plotting structures with over 50% of their total hydrogen mass in molecular form (referred to as molecular structures) with solid lines and otherwise with dashed lines (referred to as atomic structures). Due to the nature of the dendrogram algorithm, there are some structures which touch the edge of the box. This can lead to structures whose morphology is determined by their proximity to the edge. To avoid this, we do not classify the morphology of any structure which has more than 5% of its surface cells touching any edge. This is relevant especially for the large-scale structures from the _low-den_ dendrogram analysis. However, such structures can still be of interest when considering cloud dynamics, and in that case we add them as an additional category of "unclassified". Although in a different context, Alves et al. (2017) have shown the importance of having closed contours while studying 2D maps. We have attempted to follow the same principle here as much as possible.

### Structure classification

Once we obtain the tree of dendrogram density sub-structures, we aim to classify their morphology. For each structure, we compute an equivalent ellipsoid that has the same mass and the same moments of inertia (MOI) as the original structure. We then use the axis lengths of this equivalent ellipsoid to classify the shape of the different structures. Let us consider a uniform density ellipsoid of mass \(M\) and semi-axis lengths \(a,\ b,\ c\) with \(a\geq b\geq c\). The moments of inertia along the three principal axes are given as follows: \[\begin{split} I_{a}&=\frac{1}{5}M(b^{2}+c^{2}),\\ I_{b}&=\frac{1}{5}M(c^{2}+a^{2}),\\ I_{c}&=\frac{1}{5}M(a^{2}+b^{2}),\end{split} \tag{9}\] where \(I_{c}\geq I_{b}\geq I_{a}\). If we now compute the principal moments of inertia of our given dendrogram structure to be \(A\), \(B\), and \(C\), respectively, then the ellipsoid has an equivalent moment of inertia if \[A=I_{a},\ B=I_{b},\ C=I_{c}. \tag{10}\] This leads to the following equations for computing the axis lengths of the equivalent ellipsoid: \[\begin{split} a&=\sqrt{\frac{5}{2M}(B+C-A)},\\ b&=\sqrt{\frac{5}{2M}(C+A-B)},\\ c&=\sqrt{\frac{5}{2M}(A+B-C)}.\end{split} \tag{11}\] We then use the aspect ratios of the semi-axes of the corresponding ellipsoid and the position of the centre of mass (COM) of the structure relative to its boundary (i.e. whether the COM is contained within the structure itself) to categorise the different structures into four categories, namely sheets, curved sheets (referred to as sheet_c in this paper), filaments, and spheroids: \[\begin{split}\textbf{sheet}:&\ \frac{a}{b}\leq f_{\rm gap},\ \frac{a}{c}>f_{\rm gap}\\ \textbf{filament}:&\ \frac{a}{b}>f_{\rm gap}\\ \textbf{spheroid}:&\ \frac{a}{c}\leq f_{\rm gap},\ \text{contains its own COM}\\ \textbf{sheet\_c}:&\ \frac{a}{c}\leq f_{\rm gap},\ \text{does not contain its own COM}\end{split} \tag{12}\] where we set the aspect ratio factor \(f_{\rm gap}=3\).
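A compact sketch of the resulting classification logic (our own illustration in Python; the principal moments \(A\), \(B\), \(C\), the mass, and the COM test are assumed to be computed beforehand from the cell data) reads:

```python
import numpy as np

def classify(mass, moi, contains_com, f_gap=3.0):
    """Classify a structure as sheet, curved sheet (sheet_c),
    filament, or spheroid, following Eqs. 9-12.

    mass         : structure mass M
    moi          : principal moments of inertia (A, B, C)
    contains_com : True if the structure contains its own COM
    """
    A, B, C = sorted(moi)  # ensure A <= B <= C
    # semi-axes of the equivalent uniform ellipsoid (Eq. 11), a >= b >= c
    a = np.sqrt(5.0 / (2.0 * mass) * (B + C - A))
    b = np.sqrt(5.0 / (2.0 * mass) * (C + A - B))
    c = np.sqrt(5.0 / (2.0 * mass) * (A + B - C))
    if a / b > f_gap:
        return "filament"
    if a / c > f_gap:
        return "sheet"
    return "spheroid" if contains_com else "sheet_c"
```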
The inclusion of the COM criterion in addition to the ratios of the ellipsoid axes helps us deal especially with the larger-scale structures, which can be highly curved. A highly curved sheet could have comparable MOI eigenvalues along the different eigen-directions, but would not contain its own COM. We highlight some visual examples of such highly curved sheet-like structures when we discuss the large-scale morphology of our clouds in Section 5. In contrast to curved sheets, a spheroidal structure would contain its own COM. Apart from using the standard moment of inertia, we also perform the classification using a volume-weighted moment of inertia, for which we compute the quantities \(A\), \(B\), and \(C\) assuming a structure of the same mass but with uniform density. We find statistically little to no difference in the resulting morphologies. The discussion above highlights some possible caveats of our method. If we have a situation of multiple crossing filaments (a hub-like structure), or of parallel filaments joined by a more diffuse intermediate medium, the method will identify it as a sheet-like structure that splits into filaments further down the dendrogram tree hierarchy. We must therefore emphasise that our definition of a sheet in this context is more general and also contains situations where multiple filamentary structures are connected by a more diffuse medium. Further, for highly curved structures, the simple equivalent-ellipsoid method may not result in a good description of the ellipsoid axis lengths.

## 4 Density distribution and magnetic fields

We first consider the bulk properties of the different MCs to quantify the differences between the hydrodynamic and MHD clouds. From Table 1, we see that the volume-weighted root-mean-square magnetic field strength for all MHD clouds is comparable and varies between 3.9 and 6.8 \(\mu\)G. These values are slightly higher than the initial magnetic field strength \(B_{x,0}=3\,\mu\)G. The cloud masses and their H\({}_{2}\) masses are also within a factor of roughly two of each other (with the exception of MC3-MHD, see below).
For a view of the time evolution of the total and H\({}_{2}\) masses, as well as the H\({}_{2}\) mass fraction, we refer the reader to Appendix A. MC3-MHD stands out as it has a much lower H\({}_{2}\) mass and H\({}_{2}\) mass fraction compared to the other clouds (Table 1). Visual inspection of this cloud shows that its structures are still diffuse and not as prominent, suggesting that it perhaps needs much longer to collapse, or may not collapse at all (see Fig. 1, bottom row left). Its molecular content remains at a roughly constant level of 10% throughout. Since we are interested primarily in the problem of density structures that eventually form stars, we exclude MC3-MHD from further analysis, considering its unevolved state and low molecular content. It is of interest to examine whether the mass distribution in the different clouds is affected by the presence of magnetic fields. This can be seen in Fig. 2, which shows the volume-weighted density PDF of all the different clouds at \(t_{\rm evol}=2\) Myr (top) and \(t_{\rm evol}=3.5\) Myr (bottom) in the density range probed by the dendrogram analysis (\(>10^{-24}\) g cm\({}^{-3}\)). The respective density PDFs for the full density range can be found in Appendix B. The two hydrodynamic clouds are plotted using reddish lines (red and salmon), while the magnetised clouds are shown using darker colours. For all clouds, the shown density range contains more than 99% of their total mass. From Fig. 2, we see that between \(10^{-24}\) and \(10^{-22}\) g cm\({}^{-3}\), corresponding to number densities of roughly 1 to 100 cm\({}^{-3}\), the MHD clouds contain much more gas. This is more prominent at \(t_{\rm evol}=2\) Myr, but remains clearly visible also at \(t_{\rm evol}=3.5\) Myr. This effect can also be seen visually in the column density plots of Fig. 1, where the denser parts of the hydrodynamic clouds seem to be embedded in a more rarefied medium compared to their MHD counterparts. We list the mass percentages at 2 Myr in different density regimes in Table 3, which shows that, at this time, the MHD clouds contain almost 50% of their mass between \(10^{-24}\) and \(10^{-22}\) g cm\({}^{-3}\), in contrast to only around 26% for the hydrodynamic MCs. Magnetic fields in our simulations therefore play an important role in shaping the environment inside which denser, molecular, and potentially star-forming structures live.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline dendrogram & Resolution & \(\rho_{0}\) & \(\Delta\log_{10}\rho\) & \(N_{\rm cells}\) & \(\rho_{\rm prune}\) & additional \\ type & [pc] & [g cm\({}^{-3}\)] & & & [g cm\({}^{-3}\)] & criteria \\ \hline _high-den_ & 0.125 & \(10^{-22}\) & 0.1 & 100 & None & None \\ _low-den_ & 0.25 & \(10^{-24}\) & 0.2 & 100 & None & \(\rho_{\rm thr}<10^{-22}\) g cm\({}^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Information on the parameters used for the two different kinds of dendrogram analyses. From left to right are: the type of dendrogram, the grid resolution at which it is performed, the starting density, the logarithmic density jump, the minimum number of cells in a structure, the density of the pruning peak used, and any additional criteria used to select structures.

Figure 1: Left to right: Projections of MC1-MHD at \(t_{\rm evol}\simeq 3.5\) Myr along the \(x\)-, \(y\)-, and \(z\)-axis, respectively. The contours show the projections of the leaf dendrogram structures along the same axis. Molecular structures (\(>50\%\) H\({}_{2}\) mass fraction) are plotted with solid, and atomic structures (\(<50\%\) H\({}_{2}\) mass fraction) with dashed lines. The molecular structures nicely trace the dense spine of the two main filaments, while the atomic structures mostly represent the envelope.
\begin{table} \begin{tabular}{c c c c} \hline \hline Cloud & \multicolumn{3}{c}{Mass percentage at 2 Myr} \\ \cline{2-4} sample & \(\frac{\rho}{\rm g\,cm^{-3}}<10^{-24}\) & \(10^{-24}\leq\frac{\rho}{\rm g\,cm^{-3}}<10^{-22}\) & \(\frac{\rho}{\rm g\,cm^{-3}}\geq 10^{-22}\) \\ \hline HD & 0.9 & 25.9 & 73.2 \\ MHD & 0.5 & 50.0 & 49.5 \\ \hline \hline \end{tabular} \end{table} Table 3: The average mass percentage in different density regimes for the HD and MHD clouds at \(t_{\rm evol}=2\) Myr. The MHD clouds have twice the amount of mass in the intermediate density range between \(10^{-24}\) g cm\({}^{-3}\) and \(10^{-22}\) g cm\({}^{-3}\) compared to their HD counterparts.

This is consistent with the picture that magnetic fields have a noticeable effect on the dynamics of low-density gas (here, \(\lesssim 10^{-22}\) g cm\({}^{-3}\); Molina et al., 2012). Similar conclusions have been reached by Seifried et al. (2020) using the technique of the relative orientation of magnetic fields with respect to filaments, who find that this change in the relative impact of magnetic fields occurs around \(\sim\)100 cm\({}^{-3}\). We explore the effects of magnetic fields in more detail by looking at the clouds' fragmentation properties in Section 6.4. For \(\rho>10^{-22}\) g cm\({}^{-3}\), we see no clear trend in the slope of the density PDF between the hydrodynamic and MHD clouds. This is consistent with simulations and observations showing that column density PDFs are not sensitive to the presence of magnetic fields in the high column density regime (Klessen & Burkert, 2001; Slyz et al., 2005; Girichidis et al., 2014; Schneider et al., 2015). However, at \(t_{\rm evol}=2\) Myr, the two hydrodynamic clouds seem to have somewhat more mass in dense gas (see also Table 3), although the effect is visually far from clear. If there were a "delay" in the formation of denser gas when magnetic fields are present, this would be extremely relevant for the formation of well-shielded, molecular gas. In Table 4, we show the mass above an \(\mathrm{A_{V,3D}}\) (Eq. 7) of 1 and 10 for one magnetised and one non-magnetised cloud of comparable mass (MC1-MHD and MC1-HD, see Table 1). Additionally, the mass-weighted PDF of \(\mathrm{A_{V,3D}}\) for these two clouds is shown in Appendix C, Fig. C1. From Table 4, as well as from Fig. C1, we find that the amount of gas above \(\mathrm{A_{V,3D}}>1\) and \(\mathrm{A_{V,3D}}>10\) in MC1-MHD is consistently lower compared to MC1-HD. In Section 6.5, we attempt to quantify such a delay timescale due to magnetic fields. For a more detailed analysis of the connection between magnetic fields and \(\mathrm{A_{V,3D}}\), we refer the reader to Seifried et al. (2020).

## 5 Morphology

We perform a morphological classification of all simulated cloud structures using the method described in Section 3.2. As an intuitive visual aid, we first present 3D surfaces of three large-scale cloud dendrogram structures1 (from top to bottom: MC1-MHD, MC5-MHD, and MC6-MHD) seen from three different viewing angles (different columns) in Fig. 3.
The lighter blue colour shows the large-scale structure (identified at \(\rho_{\rm thr}\approx 10^{-22}\) g cm\({}^{-3}\)), and in red we show one of the primary embedded filamentary structures (identified using values of \(\rho_{\rm thr}\) between \(10^{-20}\) and \(10^{-21}\) g cm\({}^{-3}\)). Visual inspection suggests that the large-scale, lighter blue structures are rather thin and sheet-like, and indeed all three clouds shown in Fig. 3 are identified as sheets or curved sheets according to the classification algorithm of Section 3.2. This is even clearer in a video view, which can be found here2. The visual suggestion of the clouds being sheet-like on the largest scales is also confirmed for all clouds in a quantitative analysis, presented below. Footnote 1: We show the largest structures from the _high-den_ dendrogram analysis (\(\rho>10^{-22}\) g cm\({}^{-3}\)) as they are at the maximum resolution and therefore capture the finer complexities of the cloud better. The large-scale structures from the _low-den_ dendrogram analysis follow the same trend. Footnote 2: [https://hera.ph1.uni-koeln.de/~ganguly/silcc_zoom/](https://hera.ph1.uni-koeln.de/~ganguly/silcc_zoom/)

\begin{table} \begin{tabular}{c c c} \hline cloud, time & mass above & mass above \\ & \(\mathrm{A_{V,3D}}>1\) [\%] & \(\mathrm{A_{V,3D}}>10\) [\%] \\ \hline MC1-HD, 2 Myr & 41.1 & 2.3 \\ MC1-HD, 3.5 Myr & 44.0 & 9.8 \\ \hline MC1-MHD, 2 Myr & 26.6 & 0 \\ MC1-MHD, 3.5 Myr & 31.7 & 0.8 \\ \hline \end{tabular} \end{table} Table 4: The percentage of mass above values of \(\mathrm{A_{V,3D}}=1,\ 10\) for two similar-mass clouds, MC1-HD and MC1-MHD.

Figure 2: Volume-weighted density PDF for the different HD and MHD clouds at \(t_{\rm evol}=2\) Myr (top) and 3.5 Myr (bottom). The density range shown is the one used for the dendrogram analysis, and contains more than 99% of the total mass of the clouds. The two hydrodynamic clouds are plotted in reddish lines. The vertical line demarcates the boundary between the _high-den_ (\(>10^{-22}\) g cm\({}^{-3}\)) and the _low-den_ (between \(10^{-24}\) and \(10^{-22}\) g cm\({}^{-3}\)) dendrogram analyses (see also Table 2). The MHD clouds have a larger fraction of gas in the density range between roughly \(10^{-24}\) and \(10^{-22}\) g cm\({}^{-3}\), or between approximately 1 and 100 cm\({}^{-3}\).

We estimate the size of a structure simply from its volume \(V\) as \[R=V^{1/3}. \tag{13}\] We define \(N_{\rm tot}\) as the total number of morphologically classified structures, i.e. \[N_{\rm tot}=N_{\rm sheet}+N_{\rm sheet\_c}+N_{\rm filament}+N_{\rm spheroid}, \tag{14}\] with \(N_{x}\) being the total number of structures (i.e. both parents and leaves) of morphological class \(x\) (where \(x\in\) [sheet, sheet_c, filament, spheroid]). We express the number of structures of type \(x\) at a given size \(R\) by \(N_{x}(R)\). In Fig. 4 we plot the cumulative fraction (i.e. \(N_{x}(R)/N_{\rm tot}\)) of sheets, curved sheets, filaments, and spheroidal structures against \(R\) for all structures (i.e. both parents and leaves) in the two hydrodynamic clouds (left panel) and the five MHD clouds (right panel) at \(t_{\rm evol}=3.5\) Myr. The numerical values of the overall fractions across all scales, \(N_{x}/N_{\rm tot}\), for both HD and MHD clouds at two different times can be found in Table 5. We find that spheroidal structures, shown in green, are generally less numerous compared to sheet-like or filamentary structures (\(\sim\)10% of \(N_{\rm tot}\) are spheroidal, Table 5).
Sheets (including curved sheets) appear to be the most abundant structures within all clouds (summing up to \(\sim\)70% for the HD case and \(\sim\)60% for the MHD case). However, filaments are considerably more abundant in the MHD clouds compared to their HD counterparts (\(>\)30% for MHD as opposed to \(\sim\)20% for the HD clouds).

Figure 3: 3D surface rendering of example large-scale dendrogram structures from the _high-den_ dendrogram analysis for MC1-MHD (top row), MC5-MHD (middle row), and MC6-MHD (bottom row), from different viewing angles (left to right). The blue structures represent the large-scale sheets or curved sheets at \(\rho_{\rm thr}\approx 10^{-22}\ {\rm g\ cm^{-3}}\), while the embedded red structures show one of the more prominent embedded filaments (\(\rho_{\rm thr}\) between \(10^{-20}\) and \(10^{-21}\ {\rm g\ cm^{-3}}\)). The units on the axes are parsec. A video view of the various structures can be found at [https://hera.ph1.uni-koeln.de/~ganguly/silcc_zoom/morphology_3d/](https://hera.ph1.uni-koeln.de/~ganguly/silcc_zoom/morphology_3d/).

In terms of size, we find that at the largest \(R\) values indeed almost all clouds (six out of seven) are either sheets or curved sheets, confirming the visual trend we found in Fig. 3. We highlight the morphological trends as a function of the molecular fraction in Fig. 5. Similar to Fig. 4, we plot here the cumulative fraction of (curved) sheets, filaments, and spheroids, but this time as a function of the molecular mass fraction \(f_{\rm H_{2}}\), which is the H\({}_{2}\) mass in a structure divided by the total hydrogen mass in the structure. Note that structures with a high \(f_{\rm H_{2}}\) are usually small (located mostly at small \(R\) in Fig. 4). We see that around \(f_{\rm H_{2}}>0.7\), there are more filaments than sheet-like structures in the MHD case (right panel). This trend is absent for the HD clouds (left panel). This implies that magnetic fields particularly enhance the formation of filaments on small scales, shaping the morphology of the denser, well-shielded, molecular gas. This is in line with the fact that magnetic fields can, in general, aid the formation of filamentary sub-structures (Hacar et al., 2022; Pineda et al., 2022). Gravitational collapse naturally proceeds anisotropically and tends to create elongated structures (e.g. Burkert & Hartmann, 2004). However, we show in Ganguly et al. (2022) that most of our cloud structures are unbound or only marginally bound. This being the case, gravity cannot be the principal contributor to forming elongated structures, and we must therefore identify other possible explanations for the lack of spheroidal structures. Shock compression and turbulence are two such mechanisms for producing elongated structures (see, e.g., Inoue & Inutsuka, 2016 for shock compression; Federrath, 2016 for turbulence; and Hacar et al., 2022 for a general overview). Sheets and filaments are both elongated structures. However, it is interesting that for the hydrodynamic clouds sheets are by far the most numerous, whereas for the MHD clouds filaments and sheets are more comparable in total number. This is consistent with the results of Hennebelle (2013), who investigates setups of both decaying supersonic turbulence and colliding flows, and finds that hydrodynamical simulations tend to produce more sheet-like structures, while MHD simulations produce more filamentary structures.
Overall, we see primarily sheet-like MCs with an abundance of elongated structures (filamentary or sheet-like), irrespective of whether the simulation contains magnetic fields or not. Sheets are generally more numerous, probably reflecting the fact that we trace a large number of structures belonging to the sheet-like atomic envelope of the MCs. This is supported by the fact that, in Fig. 5, both HD and MHD clouds show an abundance of sheet-like structures below \(f_{\rm H_{2}}\approx 0.5\). The presence of magnetic fields, however, tends to somewhat increase the fraction of filamentary over sheet-like structures. The sheet-like nature of our clouds is consistent with a number of recent observations. Kalberla et al. (2016) have argued that the cold, neutral hydrogen in the ISM is organised in sheet-like structures. Investigating the L1495 region of the Taurus molecular cloud, Arzoumanian et al. (2018) report evidence of extended sheet-like structures too. Using the recent GAIA data, Rezaei Kh. & Kainulainen (2022) have concluded that the California molecular cloud is sheet-like in nature. Tritsis et al. (2022) have reached a similar conclusion regarding the Musca molecular cloud using 3D dust extinction maps. Based on a Herschel study of the giant molecular filament G214.5, Clarke et al. (2023) have also posited that the filament is a result of the HI shell of an expanding superbubble interacting with the local medium. Our findings here are thus well in line with these observations.

Figure 4: Cumulative histogram of the different morphologies (sheets, curved sheets, filaments, or spheroids) for all HD (left) and MHD (right) clouds at \(t_{\rm evol}=3.5\) Myr. Six out of the seven analysed clouds are sheet-like on large scales, with filamentary networks embedded inside. Spheroidal structures are rarer in the presence of magnetic fields. Both HD and MHD clouds produce more sheets than filaments, but the MHD runs show a relative increase in the fraction of filaments.

\begin{table} \begin{tabular}{c c c c c c} \hline cloud, time & \(\frac{N_{\rm sheet}}{N_{\rm tot}}\) & \(\frac{N_{\rm sheet\_c}}{N_{\rm tot}}\) & \(\frac{N_{\rm filament}}{N_{\rm tot}}\) & \(\frac{N_{\rm spheroid}}{N_{\rm tot}}\) & \(N_{\rm tot}\) \\ \hline HD, 2 Myr & 0.58 & 0.12 & 0.22 & 0.08 & 910 \\ HD, 3.5 Myr & 0.63 & 0.07 & 0.19 & 0.11 & 1167 \\ \hline MHD, 2 Myr & 0.57 & 0.03 & 0.31 & 0.09 & 487 \\ MHD, 3.5 Myr & 0.56 & 0.04 & 0.33 & 0.07 & 2087 \\ \hline \end{tabular} \end{table} Table 5: Fraction of sheets, curved sheets, filaments, and spheroids among all morphologically classified structures, for both HD and MHD clouds at \(t_{\rm evol}=2,\ 3.5\) Myr. While all clouds are dominated by sheet-like structures, the MHD clouds have a higher fraction of filaments compared to their hydrodynamic counterparts.

The morphology of MCs on larger (tens of parsecs) scales is of paramount importance in relation to how the MCs themselves form. Our analysis shows that the clouds are preferentially sheet-like, with and without magnetic fields. The ISM in the SILCC simulations (and therefore also in the SILCC-Zoom simulations) has a multi-phase structure (Walch et al., 2015; Girichidis et al., 2016). The MCs in these simulations form primarily at the shells or intersections of expanding supernova bubbles. The large-scale sheets we see can therefore be interpreted as tracing these supernova-driven shells, with a complex network of different morphological sub-structures contained within.
This picture is consistent with the bubble-driven structure formation scenario (Koyama & Inutsuka, 2000; Inoue & Inutsuka, 2009; Inutsuka et al., 2015; Pineda et al., 2022).

## 6 Dynamics and fragmentation

### The magnetic field-density scaling

The impact of magnetic fields on the MCs is naturally correlated with the field strength. The initial 3 \(\mu\)G seed field in the original simulations is expected to be enhanced when we look at denser structures inside the MCs. The scaling behaviour of the magnetic field \(B\) with \(\rho\) is integral to understanding the importance of magnetic fields at different scales. If the contraction of gas occurs exclusively along the magnetic field lines, this should lead to no dependence of the magnetic field strength on the density, i.e. \(B\propto\rho^{0}\). If magnetic field lines do contract with the enhancement of the gas density, then one expects a scaling of the form \(B\propto\rho^{\kappa}\), with \(\kappa=0.5,\ 0.67\) in the strong and weak field limits, respectively (see e.g. the review by Hennebelle & Inutsuka, 2019). In the ISM, the \(\kappa=0\) relation is indeed observed up to number densities of \(\sim\)300 cm\({}^{-3}\) (Troland & Heiles, 1986; Crutcher et al., 2010). This corresponds to densities of roughly 1.1\(\times 10^{-21}\) g cm\({}^{-3}\), using a mean molecular weight of 2.35. Crutcher et al. (2010) find that above these densities, the data are consistent with \(\kappa=2/3\), with considerable scatter. The transition in the power law is usually associated with the magnetic fields becoming dynamically sub-dominant (Seifried et al., 2020; Pattle et al., 2022) and roughly matches our observation that below \(\sim\)100 cm\({}^{-3}\) the mass in the MHD clouds is enhanced. We can attempt to capture whether this transition in the importance of the magnetic field is seen in the Alfvenic Mach number, \(\mathcal{M}_{\rm A}\). For a given sub-structure, we compute \(\mathcal{M}_{\rm A}\) as \[\mathcal{M}_{\rm A}=\sigma_{\rm 1D}/v_{\rm A}. \tag{15}\] Here, \(\sigma_{\rm 1D}\) is the one-dimensional velocity dispersion and \(v_{\rm A}\) is an estimate of the average Alfven wave group velocity. For a structure of mass \(M\), we compute \(\sigma_{\rm 1D}\) from \[\sigma_{\rm 1D}^{2}=\frac{1}{3M}\int_{V}\rho({\bf v}-{\bf v}_{0})^{2}{\rm d}^{3}r, \tag{16}\] with \({\bf v}_{0}\) being the centre of mass velocity computed as \[{\bf v}_{0}=\frac{1}{M}\int_{V}\rho{\bf v}\,{\rm d}^{3}r. \tag{17}\] The integration is performed over the entire volume \(V\) of the given structure. The Alfven velocity can be computed as \[v_{\rm A}=\sqrt{\frac{\langle|{\bf B}|^{2}\rangle}{4\pi\rho_{\rm avg}}}. \tag{18}\] The density \(\rho_{\rm avg}\) here is the volume-averaged density, i.e. \[\rho_{\rm avg}=M/V, \tag{19}\] and \(\langle|{\bf B}|^{2}\rangle\) is the volume-averaged square of the magnetic field \({\bf B}\), \[\langle|{\bf B}|^{2}\rangle=\frac{1}{V}\int_{V}|{\bf B}|^{2}{\rm d}^{3}r. \tag{20}\] The behaviour of the magnetic field strength with density for the MHD clouds can be seen in Fig. 6, where we plot the root-mean-square magnetic field strength against the threshold (minimum) density \(\rho_{\rm thr}\) for all dendrogram structures at \(t_{\rm evol}=3.5\) Myr. The different dendrogram structures are marked with filled/empty symbols depending on whether their H\({}_{2}\) mass fraction (with respect to their total hydrogen mass) is greater/less than 50%. The colour bar shows \(\mathcal{M}_{\rm A}\), as computed from Eq. 15.
Figure 5: Cumulative histogram of the different morphologies (sheets, curved sheets, filaments, or spheroids) against the H\({}_{2}\) mass fraction for all HD (left) and MHD (right) clouds at \(t_{\rm evol}=3.5\) Myr. The most molecular structures are more filamentary in the presence of magnetic fields.

The reddish points represent super-Alfvenic (\(\mathcal{M}_{\rm A}>1\)) structures, while the blueish points are sub-Alfvenic (\(\mathcal{M}_{\rm A}<1\)). In the sub-Alfvenic case, the fluid speed is smaller than the magnetic wave speed, meaning that the magnetic field is dynamically important and guides the flow. The vertical dotted line at \(10^{-22}\) g cm\({}^{-3}\) represents the boundary between the points obtained from the _low-den_ (left half) and _high-den_ (right half) dendrograms, respectively. The dash-dotted black line and the dotted power law represent the Crutcher et al. (2010) relation discussed previously and \(B\propto\rho^{0.5}\), respectively. The cyan dashed line represents the linear least-squares best fit performed on the logarithm of the points at high densities (\(\rho_{\rm thr}>1.1\times 10^{-21}\) g cm\({}^{-3}\)). The best fit of \(\kappa=0.47\pm 0.03\) is consistent with the strong-field limit of \(B\propto\rho^{0.5}\). We have already shown in the previous section (Section 5) that our structures are on average highly elongated, and magnetic fields clearly help to deform the shape of the forming structures. It is therefore not unexpected that we find a shallower scaling compared to the weak field limit (\(\kappa=0.67\)). We see that, while there is no clear transition from the sub- to the super-Alfvenic regime, there is clearly a trend that higher Alfvenic Mach numbers are preferentially obtained at the higher-density end. This is confirmed by a Kolmogorov-Smirnov (KS) two-sample test, which tests whether two samples are drawn from the same underlying distribution. In this case, we compare the \(\rho_{\rm thr}\)-distributions of structures with \(\mathcal{M}_{A}>1\) and \(\mathcal{M}_{A}\leq 1\). We find the \(p\)-values3 to be very low: \(6\times 10^{-4}\) at 2 Myr and \(5.2\times 10^{-15}\) at 3.5 Myr (see Table 6). Footnote 3: If the \(p\)-value is larger than a certain value (typically 0.05), this means that we cannot reject the null hypothesis that the sub-Alfvenic and super-Alfvenic structures have the same underlying density distribution. Crutcher et al. (2010) found that the observed magnetic field distribution is rather flat at low density, in agreement with the idea that denser clouds are swept up along the magnetic field lines on large scales, while at higher density there is a power-law increase of the magnetic field strength. If spherical clouds start to collapse and the magnetic field is not strong enough to stop the collapse, one expects a power-law slope of \(\kappa=0.5-0.67\) (see above). In the case of our clouds, we find that the high-density end is well consistent with \(\kappa=0.5\), while the lower-density end clearly shows a much shallower slope. Nonetheless, there does not seem to be a single density at which there is a sharp change in slope. Simulations by Li et al. (2015), Mocz et al. (2017), Girichidis et al. (2018), and Zhang et al. (2019) similarly find the lack of a sharp transition density. Auddy et al. (2022) predict that the transition density depends on the fourth power of \(\mathcal{M}_{A}\). While of potential interest, this is unfortunately not demonstrable from the present analysis.
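For reference, the quantities of Eqs. 15 to 20 are straightforward to evaluate on the uniform analysis mesh. The following Python sketch (our own illustration in cgs units, not the actual analysis pipeline) computes \(\mathcal{M}_{\rm A}\) for a single structure:

```python
import numpy as np

def alfven_mach_number(rho, v, B, dV):
    """Alfvenic Mach number M_A = sigma_1D / v_A (Eqs. 15-20).

    rho : densities of the member cells [g cm^-3], 1D array
    v   : cell velocities, shape (N, 3) [cm s^-1]
    B   : cell magnetic fields, shape (N, 3) [G]
    dV  : cell volume [cm^3] (uniform mesh)
    """
    M = np.sum(rho) * dV                             # structure mass
    V = rho.size * dV                                # structure volume
    v0 = np.sum(rho[:, None] * v, axis=0) * dV / M   # COM velocity (Eq. 17)
    dv2 = np.sum((v - v0) ** 2, axis=1)
    sigma_1d = np.sqrt(np.sum(rho * dv2) * dV / (3.0 * M))   # Eq. 16
    rho_avg = M / V                                  # Eq. 19
    B2_avg = np.mean(np.sum(B * B, axis=1))          # Eq. 20
    v_alfven = np.sqrt(B2_avg / (4.0 * np.pi * rho_avg))     # Eq. 18
    return sigma_1d / v_alfven
```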
### Impact of magnetic fields on the energetics of sub-structures

We are also interested in assessing the energetic relevance of magnetic fields over different length scales in the MCs, especially with respect to potentially star-forming structures. For this purpose, we compute the volume term of the magnetic energy and compare it with the kinetic and potential energies. Similar work for the same simulations has been performed by Ganguly et al. (2022), who assess the virial balance of the cloud sub-structures. Here, we extend the range of our analysis to include the dynamics of lower-density gas (between \(10^{-24}\) and \(10^{-22}\) g cm\({}^{-3}\); _low-den_ dendrogram analysis, see Table 2). The magnetic energy of a given structure is computed as \[E_{\rm B}=\int_{V}\frac{1}{8\pi}|{\bf B}|^{2}{\rm d}^{3}r, \tag{21}\] where the integration is performed over the entire volume \(V\) of the structure. The kinetic energy is computed using the following relation: \[E_{\rm KE}=\frac{1}{2}\int_{V}\rho({\bf v}-{\bf v}_{0})^{2}{\rm d}^{3}r. \tag{22}\] Here, \({\bf v}_{0}\) is the centre of mass velocity computed from Eq. 17. The self-gravitating potential energy of a given structure is obtained using the following relation: \[E_{\rm PE}=-\frac{1}{2}G\int_{V}\int_{V}\frac{\rho({\bf r})\rho({\bf r^{\prime}})}{|{\bf r}-{\bf r^{\prime}}|}{\rm d}^{3}r\,{\rm d}^{3}r^{\prime}, \tag{23}\] where \(G\) is the gravitational constant. We compute the self-gravity of each dendrogram structure using a KD-tree algorithm (Bentley, 1975) instead of an \(\mathcal{O}(N^{2})\) direct computation. We show the relative importance of magnetic fields with respect to the potential and kinetic energy in the left and right panel of Fig. 7, respectively, for all MHD cloud structures at \(t_{\rm evol}=3.5\) Myr. For both plots, the \(x\)-axis represents the density threshold \(\rho_{\rm thr}\), and the \(y\)-axis represents \(E_{\rm B}/|E_{\rm PE}|\) (left) and \(E_{\rm B}/E_{\rm KE}\) (right), respectively. The colours of the points represent their morphologies. Here, for the purpose of understanding the dynamics of the low-density gas, we also include the "unclassified" structures (i.e. structures with \(>\)5% of their surface cells touching the edge of the analysis box, see Section 3). The side panels to the right and top of each plot show the marginal distributions of \(N_{x}/N_{\rm tot}\) for each morphology. Note that, since the definition of \(N_{\rm tot}\) (Eq. 14) does not contain unclassified structures, the fractions in the two side panels add up to more than unity. The filled symbols are molecular structures, while the open symbols are atomic. Typically, for low-density structures, which mostly consist of atomic gas, the magnetic energy is either comparable to or much larger than the potential energy (left panel of Fig. 7).

\begin{table} \begin{tabular}{c c c c} \hline variable 1 & variable 2 & time [Myr] & p-value \\ \hline \(\rho_{\rm thr}(\mathcal{M}_{A}>1)\) & \(\rho_{\rm thr}(\mathcal{M}_{A}\leq 1)\) & 2 & \(6\times 10^{-4}\) \\ & & 3.5 & \(5.2\times 10^{-15}\) \\ \hline \end{tabular} \end{table} Table 6: The \(p\)-values of the 2-sample KS test for the density distributions of sub-Alfvenic and super-Alfvenic structures. The \(p\)-value is low at both 2 and 3.5 Myr, suggesting that sub-Alfvenic and super-Alfvenic structures (corresponding to blueish and reddish points in Fig. 6, respectively) have statistically significant differences in their density distributions.
Figure 6: Relation between the root-mean-square magnetic field and \(\rho_{\rm thr}\) for all MHD clouds at \(t_{\rm evol}=3.5\) Myr. The colour bar shows the Alfvenic Mach number \(\mathcal{M}_{\rm A}\). The dash-dotted line represents the \(B-\rho\) relation from Crutcher et al. (2010), while the dotted line represents a \(B\propto\rho^{0.5}\) power law. The cyan dashed line represents the best-fit power law for all points with \(\rho_{\rm thr}>1.1\times 10^{-21}\) g cm\({}^{-3}\).

The magnetic energy is also comparable to or larger than the kinetic energy (right panel of Fig. 7), but the spread in this energy ratio is much smaller compared to the \(E_{\rm B}/|E_{\rm PE}|\) ratio. For some branches (a dendrogram branch is defined as a given structure and all its parent structures, see Section 3.1), the energy ratio seems to roughly follow a \(\rho^{-1/2}\) power law. These branches represent the evolution from diffuse, large-scale structures to denser, embedded structures. Camacho et al. (2022) also find a tight power-law scaling between the potential and magnetic energies. While not exactly the same, both scaling behaviours seem to imply that magnetic fields become less important as we go deeper into the MCs themselves. This is also in accordance with the findings of Seifried et al. (2020), Ibanez-Mejia et al. (2022), as well as Ganguly et al. (2022), as discussed previously. From the marginal distributions, we find a weak trend that the high-density end is dominated by filaments. Curved sheets and unclassified structures only appear at lower densities, because they are usually larger-scale structures. There is no obvious correlation between the morphology of the structures and the energy ratios. This suggests that the different morphological configurations are created by the same formation mechanism, most likely turbulent compression. There also seems to be a difference in the energy ratios between atomic and molecular structures. This can be clearly seen in the average behaviour of these ratios over time. Fig. 8 plots the time evolution of the average value of \(E_{\rm B}/|E_{\rm PE}|\) (left) and \(E_{\rm B}/E_{\rm KE}\) (right) for all atomic (red), molecular (blue), and dense molecular (yellow) structures from the MHD clouds, where we define dense molecular structures to be structures that are both molecular and have \(\rho_{\rm thr}>10^{-20}\) g cm\({}^{-3}\). The error bars here represent the standard error on the mean. From Fig. 8, we see that the magnetic energy dominates over the potential and kinetic energies for atomic structures, while it plays a subordinate role in molecular structures. There is no clear trend indicating that this behaviour changes as a function of time. The subordinate role of the magnetic energy for dense structures compared to the potential or kinetic energy suggests that, while magnetic fields help to shape the cloud structures across different scales, the dynamics of the denser, potentially star-forming structures is determined by the interaction between gravity and turbulence4. This explains why there is no discernible difference in the power-law tail of the density PDFs between hydrodynamic and MHD clouds (Fig. 2), confirming that the star-forming gas (see e.g. Klessen & Burkert, 2001; Girichidis et al., 2014; Schneider et al., 2015) is virtually unaffected by the presence of magnetic fields. However, magnetic fields change the gas properties of the environment from which denser structures form, accrete, and sit in (i.e. by making the surrounding envelope "fluffier"), thereby also influencing the shape of these structures. Footnote 4: We explore the interplay between turbulence and gravity in much greater detail in our companion paper by means of a virial analysis (Ganguly et al., 2022).
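The energy integrals of Eqs. 21 to 23 translate directly into discrete sums over the member cells of a structure. The following Python sketch is our own illustration; unlike the analysis above, it uses a direct \(\mathcal{O}(N^{2})\) pair sum for Eq. 23 rather than a KD-tree:

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def energies(rho, v, B, pos, dV):
    """Magnetic, kinetic, and potential energy of a structure (Eqs. 21-23).

    rho : cell densities [g cm^-3]; v, B : shape (N, 3) [cm s^-1, G];
    pos : cell centres, shape (N, 3) [cm]; dV : cell volume [cm^3].
    """
    M = np.sum(rho) * dV
    e_b = np.sum(np.sum(B * B, axis=1)) * dV / (8.0 * np.pi)        # Eq. 21
    v0 = np.sum(rho[:, None] * v, axis=0) * dV / M                  # Eq. 17
    e_kin = 0.5 * np.sum(rho * np.sum((v - v0) ** 2, axis=1)) * dV  # Eq. 22
    # Eq. 23 by direct pair summation (each pair counted once, so the
    # 1/2 factor is absorbed); the text uses a KD-tree approximation.
    m = rho * dV
    e_pot = 0.0
    for i in range(len(m)):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        e_pot -= G * np.sum(m[i] * m[i + 1:] / r)
    return e_b, e_kin, e_pot
```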
### Magnetic surface energy

In the previous section, we have discussed the magnetic pressure term in comparison to self-gravity and kinetic energy. The magnetic pressure relates to the stretching and compression of magnetic field lines, and does not take into account the effect of curvature in the field. The magnetic surface term can be computed as an integral over the surface \(S\) of a given structure as follows: \[E_{\rm B}^{\rm surface}=\oint_{S}({\bf r}-{\bf r}_{0})\cdot{\bf T}\cdot\hat{\bf n}\ {\rm d}S. \tag{24}\] Here, \({\bf r}_{0}\) is the centre of mass, \(\hat{\bf n}\) is the outward-pointing surface normal vector, and \({\bf T}\) is the Maxwell stress tensor, which for ideal MHD can be written as \[{\bf T}=\frac{1}{4\pi}\left({\bf B}\otimes{\bf B}-\frac{1}{2}|{\bf B}|^{2}{\bf 1}\right), \tag{25}\] where \({\bf 1}\) is the rank-two identity tensor. For convenience, we evaluate Eq. 24 as a volume integral using Gauss' divergence theorem. This gives the following relation: \[E_{\rm B}^{\rm surface}=-E_{\rm B}+\int_{V}({\bf r}-{\bf r}_{0})\cdot(\nabla\cdot{\bf T})\ {\rm d}V. \tag{26}\] From Eq. 26, we can see that \(E_{\rm B}^{\rm surface}\) can be both positive and negative. When it is positive, it adds to the magnetic pressure term and acts as a dispersive term. In contrast, when \(E_{\rm B}^{\rm surface}<0\), it acts as a confining term.

Figure 7: Ratio of magnetic energy to self-gravitating potential energy (left) and to kinetic energy (right), respectively, plotted against the density threshold for all dendrogram structures of all MHD clouds at time \(t_{\rm evol}=3.5\) Myr. The colours represent different morphologies. The dash-dotted lines indicate a \(\rho^{-1/2}\) relation. The top and the right panels show the marginalised distributions (separated by morphology) over the density and the corresponding energy ratio.

The importance of \(E_{\rm B}^{\rm surface}\) with respect to the volume term, \(E_{\rm B}\), can be seen in the left panel of Fig. 9, which plots the magnitude of the ratio \(E_{\rm B}^{\rm surface}/E_{\rm B}\) against the density threshold of the cloud sub-structures for all MHD clouds at \(t_{\rm evol}=3.5\) Myr. Structures where \(E_{\rm B}^{\rm surface}\) helps to disperse them (\(E_{\rm B}^{\rm surface}>0\)) are marked in red, while structures where \(E_{\rm B}^{\rm surface}\) acts as a confining term (\(E_{\rm B}^{\rm surface}<0\)) are marked in cyan. The vertical dotted line marks the division between the results of the _low-den_ and _high-den_ dendrogram runs at \(\rho=10^{-22}\) g cm\({}^{-3}\), as in the previous plots. The horizontal dotted line represents a value of one, where the volume and surface terms are equally important in magnitude. The top and side panels show the marginal distributions. From the marginal distributions, we see that \(E_{\rm B}^{\rm surface}\) acts as a confining term for somewhat more structures than those for which it is dispersive. The magnetic surface term seems to be comparable to, and in some cases even to exceed, the volume term \(E_{\rm B}\). This implies that for diffuse and mostly atomic structures, where the magnetic energy is comparable or dominant, the surface term is important.
This is especially relevant when \(E_{\rm B}^{\rm surface}\) acts as a confining term. However, for dense structures, where \(E_{\rm B}\) is one to two orders of magnitude smaller than the potential and kinetic energies, the surface term is unlikely to significantly affect the dynamics. In the right panel of Fig. 9 we plot the magnitude of \((E_{\rm B}^{\rm surface}+E_{\rm B})/E_{\rm PE}\) against \(E_{\rm B}/E_{\rm PE}\) for all MHD cloud sub-structures at 3.5 Myr. The colour bar here represents the size of the structures. The horizontal and vertical dotted lines represent a value of unity along the \(y\)- and \(x\)-axis, respectively. The dashed line represents a 1:1 relation, and the shaded region around it represents a factor of 2 in each direction. The magnetic surface energy is not significant compared to the volume energy for structures on or close to the 1:1 line. Structures with strong dispersive \(E_{\rm B}^{\rm surface}\) terms lie above the 1:1 line, while points that lie below the 1:1 line represent structures where \(E_{\rm B}^{\rm surface}\) is confining in nature. Most interesting here are the points that lie in the bottom right quadrant of the plot. They represent structures where the magnetic pressure term \(E_{\rm B}\) is larger than the self-gravity, and which would be completely unbound in a traditional virial analysis. However, the confining \(E_{\rm B}^{\rm surface}\) term is strong enough that the overall magnetic contribution becomes far smaller, thus allowing for a sort of "magnetic confinement". These structures are mostly atomic and typically \(\lesssim 1\) pc in size. Two examples of structures belonging to MC2-MHD that exhibit such magnetic confinement are plotted in Fig. 10 as black contour lines over a density slice in the \(y-z\) plane. The background colour here represents the density, while the planar magnetic field is shown using the line integral convolution (LIC) technique5. For both structures, we give the magnitudes of the \(E_{\rm B}/E_{\rm PE}\) and \((E_{\rm B}^{\rm surface}+E_{\rm B})/E_{\rm PE}\) ratios in the figure title. As can be clearly seen, the magnetic surface term reduces the \(|(E_{\rm B}^{\rm surface}+E_{\rm B})/E_{\rm PE}|\) ratio to less than one. However, this naturally does not take into account the other energy terms, i.e. the kinetic and thermal energies, and hence it is not fully clear whether these structures are confined overall. Interestingly, the structures for which the magnetic surface energy is important and of confining nature (see right panel of Fig. 9) are usually located at the "kinks" of magnetic field lines. Footnote 5: The package used can be found at [https://github.com/alexus37/licplot](https://github.com/alexus37/licplot).
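Evaluating Eq. 26 on the uniform analysis grid only requires finite differences of the Maxwell stress tensor. A rough Python sketch (our own illustration; boundary effects of the finite-difference gradients are ignored) could look as follows:

```python
import numpy as np

def magnetic_surface_energy(B, mask, rho, dx):
    """Magnetic surface term via the volume form of Eq. 26.

    B    : magnetic field on a uniform grid, shape (3, nx, ny, nz) [G]
    mask : boolean mask of the structure's member cells
    rho  : density grid [g cm^-3], used for the centre of mass
    dx   : cell size [cm]
    """
    dV = dx ** 3
    B2 = np.sum(B * B, axis=0)
    # Maxwell stress tensor (Eq. 25): T[i, j] on the full grid
    T = (B[:, None] * B[None, :]) / (4.0 * np.pi)
    for i in range(3):
        T[i, i] -= B2 / (8.0 * np.pi)
    # divergence of T: (div T)_j = sum_i dT_ij/dx_i
    divT = np.stack(
        [sum(np.gradient(T[i, j], dx, axis=i) for i in range(3))
         for j in range(3)]
    )
    # cell coordinates and centre of mass r0 of the structure
    grid = np.indices(B2.shape) * dx
    m = rho[mask] * dV
    r0 = np.sum(m * grid[:, mask], axis=1) / np.sum(m)
    dr = grid[:, mask] - r0[:, None]
    e_b = np.sum(B2[mask]) * dV / (8.0 * np.pi)         # Eq. 21
    return -e_b + np.sum(dr * divT[:, mask]) * dV       # Eq. 26
```

A negative return value corresponds to a confining surface term, a positive one to a dispersive term, matching the sign convention discussed above.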
### Fragmentation

In this section, we attempt to quantify to what extent magnetic fields affect the fragmentation properties of molecular clouds. For this purpose, we study the numbers and masses of the different fragments, represented by leaf structures (i.e. structures containing no further sub-structures) found in our dendrogram analysis, and in addition perform a magnetic Jeans analysis on these fragments.

#### 6.4.1 Number and mass distribution of fragments

Representing fragments by the leaves in the dendrogram analysis suffers from the caveat of depending on the dendrogram parameters. Increasing the minimum number of cells required in a dendrogram structure, for example, would naturally reduce the number of fragments and increase their masses. The absolute values of the masses and numbers we find are therefore sensitive to the parameter values we have used. However, since we used the exact same parameters for each HD and MHD run, and because all molecular clouds have similar masses and identical environmental parameters (solar neighbourhood conditions), the relative difference between the average behaviour of the HD and MHD clouds is meaningful.

Figure 8: Time evolution of the average ratio of magnetic to potential energy (left) and kinetic energy (right). The different colours represent atomic, molecular, and dense molecular (molecular and \(\rho_{\rm thr}>10^{-20}\) g cm\({}^{-3}\)) structures in red, blue, and yellow, respectively. The error bars are the standard errors on the mean. For denser and molecular structures, the magnetic energy is less important compared to the potential or kinetic energies. The atomic structures, representing more the envelope of the molecular gas, have high magnetic energies, especially compared to self-gravity.

With this caveat in mind, let us look at the fragmentation properties of our dendrogram structures. We study the numbers and masses of leaf fragments in Fig. 11. The top row plots the cumulative distribution of the average number of leaf structures, \(\langle N_{\rm structure}^{\rm leaf}\rangle\), as a function of \(\rho_{\rm thr}\) for both HD (blue) and MHD (red) clouds. The average here simply means that we divide the total number of obtained structures by the number of clouds, i.e. 5 for MHD and 2 for HD. The three panels (left to right) show three different times, \(t_{\rm evol}=2,\ 2.5\), and \(3.5\) Myr, respectively. The vertical line at \(10^{-22}\) g cm\({}^{-3}\) marks the boundary between the _low-den_ and the _high-den_ dendrogram analysis. We see that at \(t_{\rm evol}=2\) and \(2.5\) Myr, up to densities between \(10^{-23}\) and \(10^{-22}\) g cm\({}^{-3}\), the HD and MHD clouds form roughly similar numbers of leaf fragments. However, at higher densities, \(\langle N_{\rm structure}^{\rm leaf}\rangle\) is much higher for the HD clouds. This difference largely disappears at \(3.5\) Myr. This suggests that the formation of structures is somewhat slowed down in the presence of magnetic fields in the beginning, but at later stages, as gravity becomes dynamically more and more important, this difference diminishes. In the bottom row of Fig. 11 we plot the average mass of the leaf structures, \(\langle M_{\rm structure}^{\rm leaf}\rangle\), as a function of \(\rho_{\rm thr}\) for the HD and MHD structures at the three different times. The shaded regions represent the standard deviation of the average mass at a given \(\rho_{\rm thr}\). We see that at \(t_{\rm evol}=2\) Myr, the MHD fragments are slightly more massive compared to their hydrodynamic counterparts, in particular for \(\rho_{\rm thr}\lesssim 10^{-21}\) g cm\({}^{-3}\). This difference disappears later. For the densest structures, we do not seem to see a systematic difference in \(\langle M_{\rm structure}^{\rm leaf}\rangle\).

Figure 10: Two examples of structures confined by \(E_{\rm B}^{\rm surface}\) from MC2-MHD, plotted as black contours over density slices in the \(y-z\) plane, at \(t_{\rm evol}=2\) Myr. The colour map shows the logarithmic density, and the direction of the planar magnetic field is plotted as a line integral convolution. The relevant energy ratios of the indicated structures are denoted in the title. Structures for which the magnetic surface energy is important and of confining nature are usually located at the "kinks" of magnetic field lines.
Figure 9: Left: Ratio of the absolute value of the magnetic surface to volume energy, plotted against the density threshold. The different colours represent whether the magnetic surface term is positive and resists collapse or negative and promotes collapse. The magnetic surface energy seems to be as relevant as the volume energy, and for more than half of the structures it acts as a confining term. Right: The ratio of the total magnetic energy (surface plus volume) to the self-gravitating potential energy, plotted against the magnetic volume energy over the self-gravitating potential energy. The dashed line represents a 1:1 ratio, and the shaded region represents a factor of 2. For many small-scale atomic structures, the magnetic surface term seems to be important as a confining force.

Overall, the results shown in Fig. 11 indicate that the MHD clouds fragment more slowly than the HD clouds and therefore have slightly more massive fragments at early times. This is consistent with the result that magnetic fields affect the dynamics of lower density gas more (Molina et al., 2012; Seifried et al., 2020; Ibanez-Mejia et al., 2022). We also see that the number and mass of the leaf structures are comparable at later times. This suggests that the magnetic fields "slow down" the evolution of the cloud but are less relevant once the cloud is more evolved and gravity becomes energetically more and more important, as shown in the previous energetic analysis. This effect could be related to the overall strength of the magnetic field. We investigate this further in the next Section.

#### 6.4.2 Magnetic Jeans analysis

The classic thermal Jeans analysis (Jeans, 1902) is a useful tool to investigate the stability of MCs and their substructures (clumps and cores) under thermal perturbations. Here, we perform its magnetic equivalent. The thermal Jeans length, \(\lambda_{\rm T}\), defines the largest length-scale stable to thermal perturbations. For a given structure, this is defined as

\[\lambda_{\rm T}=c_{s}\sqrt{\frac{\pi}{G\rho_{\rm avg}}}, \tag{27}\]

where \(c_{s}\) is the average sound speed given by

\[c_{s}=\frac{1}{V}\int_{V}\sqrt{\frac{p}{\rho}}\,{\rm d}^{3}r. \tag{28}\]

Here, \(p\) is the thermal pressure, and the sound speed is calculated assuming an isothermal equation of state due to the densities under consideration. We remind the reader that \(\rho_{\rm avg}\) is the volume-averaged density computed in Eq. 19. From the Jeans length, a maximum mass stable under thermal perturbations can be calculated. This mass is referred to as the thermal Jeans mass, \(M_{\rm T}\), and is given by

\[M_{\rm T}=\frac{4}{3}\pi\rho_{\rm avg}\left(\frac{\lambda_{\rm T}}{2}\right)^{3}. \tag{29}\]

Similar to the thermal analysis, we can perform a magnetic Jeans analysis and a Jeans analysis combining both magnetic and thermal support. For the magnetic Jeans analysis, the relevant length (\(\lambda_{\rm B}\)) and mass (\(M_{\rm B}\)) scales are given by

\[\lambda_{\rm B}=c_{\rm B}\sqrt{\frac{\pi}{G\rho_{\rm avg}}}, \tag{30}\]

\[M_{\rm B}=\frac{4}{3}\pi\rho_{\rm avg}\left(\frac{\lambda_{\rm B}}{2}\right)^{3}. \tag{31}\]
For a combination of thermal and magnetic effects, the relevant magneto-thermal Jeans length (\(\lambda_{\rm B,T}\)) and Jeans mass (\(M_{\rm B,T}\)) are

\[\lambda_{\rm B,T}=c_{\rm B,T}\sqrt{\frac{\pi}{G\rho_{\rm avg}}}, \tag{32}\]

\[M_{\rm B,T}=\frac{4}{3}\pi\rho_{\rm avg}\left(\frac{\lambda_{\rm B,T}}{2}\right)^{3}. \tag{33}\]

Figure 11: Top row: Cumulative distribution of the average number of leaf structures against \(\rho_{\rm thr}\) for HD and MHD clouds at \(t_{\rm evol}=2\), 2.5, and 3.5 Myr, respectively (from left to right). The hydrodynamic clouds have on average more new structures forming at earlier times, but this distinction slowly disappears later on. Bottom row: Distribution of the average mass of leaf structures for both HD and MHD clouds at \(t_{\rm evol}=2\), 2.5, and 3.5 Myr, respectively (from left to right). The leaf structures, representing fragments, are more massive for MHD clouds at earlier times, while this distinction mostly disappears later on as gravity takes over.

The characteristic speeds are given by

\[c_{\rm B}=v_{\rm A}, \tag{34}\]

\[c_{\rm B,T}=\sqrt{c_{\rm s}^{2}+v_{\rm A}^{2}}, \tag{35}\]

where \(v_{\rm A}\) is the Alfvén speed (Eq. 18).
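For concreteness, Eqs. 27-35 amount to evaluating the same length and mass expressions with three different characteristic speeds. A minimal sketch in cgs units; the example structure (n ≈ 100 cm\({}^{-3}\), \(c_{\rm s}\approx 0.3\) km s\({}^{-1}\), \(v_{\rm A}\approx 1\) km s\({}^{-1}\)) is illustrative only.

```python
import numpy as np

G = 6.674e-8      # gravitational constant, cgs
MSUN = 1.989e33   # g

def jeans_scales(rho_avg, c_char):
    """Jeans length and mass for a characteristic speed c_char
    (Eqs. 27/30/32 and 29/31/33), all in cgs units."""
    lam = c_char * np.sqrt(np.pi / (G * rho_avg))
    mass = (4.0 / 3.0) * np.pi * rho_avg * (lam / 2.0) ** 3
    return lam, mass

rho, c_s, v_A = 2.3e-22, 3e4, 1e5                  # ~100 cm^-3, 0.3 and 1 km/s
_, M_T = jeans_scales(rho, c_s)                    # thermal
_, M_B = jeans_scales(rho, v_A)                    # magnetic (c_B = v_A)
_, M_BT = jeans_scales(rho, np.hypot(c_s, v_A))    # magneto-thermal
print(M_T / MSUN, M_B / MSUN, M_BT / MSUN)         # in Msun
```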
In Fig. 12, we show the ratio of a structure's mass to its magneto-thermal Jeans mass, \(M/M_{\rm B,T}\), as a function of \(\rho_{\rm thr}\) for all MHD cloud branch structures (top) and leaves (bottom) at \(t_{\rm evol}=3.5\) Myr. We remind the reader that branch structures contain sub-structures and leaves do not. The Jeans mass can only be properly used when the corresponding length is resolved. This is shown in Appendix E, which depicts that some structures with \(\rho_{\rm thr}\gtrsim 10^{-20}\) g cm\({}^{-3}\) seem to be not properly Jeans resolved. These are marked with black outlines in Fig. 12. Note that the structures are (un-)resolved in the context of our dendrogram analysis, which requires a minimum number of 100 cells per structure, and therefore at least 200 cells to resolve fragmentation (as each fragment would need to contain at least 100 cells). The colour-bar denotes the ratio of \(c_{\rm s}\) to \(v_{\rm A}\). Most of the structures have \(v_{\rm A}>c_{\rm s}\) (blue points), suggesting support by magnetic fields rather than by thermal pressure. This is confirmed by our purely magnetic Jeans analysis, which shows an almost identical distribution to the magneto-thermal Jeans analysis of Fig. 12.

From the top panel of Fig. 12, we find that roughly below \(10^{-22}\) g cm\({}^{-3}\), all structures are Jeans stable (\(M/M_{\rm B,T}<1\)). At higher densities, we have both Jeans-stable and unstable structures. Some prominent branches clearly have \(M/M_{\rm B,T}>1\) above \(10^{-22}\) g cm\({}^{-3}\), indicating the growing importance of gravity for fragmentation at higher densities. For leaves, this transition seems to occur at higher densities. Interestingly, the leaves seem to have an overall sharper scaling behaviour compared to the branches. However, this separation cannot be seen in the Jeans length, where all structures show a consistent scaling of roughly \(\lambda_{\rm B,T}\propto\rho^{-2/3}\) (see Appendix F). This can be understood as follows: The mass of a structure depends on its density and size, i.e.

\[M\propto\rho R^{3}. \tag{36}\]

Combining Eq. 36 with Eq. 33, we obtain

\[\frac{M}{M_{\rm B,T}}\propto R^{3}\lambda_{\rm B,T}^{-3}. \tag{37}\]

As the size of the leaf structures is largely determined by the choice of \(N_{\rm cells}\), they typically show very weak or no scaling between density and size, and we can therefore approximate \(R\propto\rho^{0}\). For the leaves, this leads to \(M/M_{\rm B,T}\propto\lambda_{\rm B,T}^{-3}\). As \(\lambda_{\rm B,T}\propto\rho^{-2/3}\) approximately (see Appendix F), this leads to \(M/M_{\rm B,T}\propto\rho^{2}\). For the branches, we find a shallower slope. In Ganguly et al. (2022), we find many branches to follow \(M\propto R\); combined with Eq. 36, this implies \(R\propto\rho^{-1/2}\) and therefore \(M/M_{\rm B,T}\propto\rho^{1/2}\), roughly consistent with the trend seen for the branches here. The relation of the scaling between \(\lambda_{\rm B,T}\) and \(\rho\) is in itself interesting, and we discuss it in Appendix F.

Overall, the Jeans analysis seems to show the emergence of potentially Jeans-unstable structures at slightly lower densities (\(\sim 30\) cm\({}^{-3}\)) compared to those found in the previous energetic analysis. This could reflect the fact that the Jeans analysis performed here does not include the kinetic energy, which is often larger compared to \(E_{\rm B}\) and the thermal energy (see Section 4 in Ganguly et al. 2022). Turbulent motions can act as an effective kinetic pressure term. Although the kinetic energy is often treated as an effective pressure in the literature (see e.g. Chandrasekhar 1951; Bonazzola et al. 1987; Federrath and Klessen 2012), we show in Ganguly et al. (2022) that the volume and surface terms of the kinetic energy combine in a highly non-trivial manner, with structures often being confined or even compressed under ram pressure. This suggests that including a kinetic pressure in the Jeans analysis would be too simplistic and not meaningful.

Overall, most leaf fragments in the Jeans analysis have \(M/M_{\rm B,T}<1\), suggesting that their fragmentation is unlikely to be primarily Jeans-like. However, above \(10^{-20}\) g cm\({}^{-3}\), we begin to obtain Jeans-unstable fragments, which are mostly unresolved and will likely undergo further fragmentation, possibly ending up as the precursors of protostars.
### Delay introduced by magnetic fields

The fragmentation analysis performed in Section 6.4 seems to suggest that magnetic fields at least delay fragmentation in many cases. To estimate how much the evolution of the cloud is slowed down by the effect of magnetic fields, we define a delay timescale \(\Delta t_{\rm B}\).

Figure 12: The ratio of the mass of a given structure to its magneto-thermal Jeans mass (\(M_{\rm B,T}\), Eq. 33) as a function of \(\rho_{\rm thr}\) for all MHD branch (top) and leaf (bottom) sub-structures at \(t_{\rm evol}=3.5\) Myr. A branch sub-structure has further sub-structures, while a leaf does not. The horizontal dotted line represents a ratio of unity. The vertical line separates the points obtained from the _high-den_ and the _low-den_ dendrogram analysis. The colour-bar shows the ratio of the sound speed to the Alfvén wave speed. For blue points, \(v_{\rm A}>c_{\rm s}\). A power-law is plotted in each panel for rough guidance. Structures whose fragmentation is not well resolved (see Appendix E) are marked with an additional black outline and mostly populate the top right corner of the plot. The magneto-thermal forces seem unable to keep all the structures Jeans stable beyond \(\sim 10^{-22}\) g cm\({}^{-3}\), suggesting the growing importance of gravity.

Consider a structure of size \(S\) which is compressed by an external flow with velocity \(v\). In the absence of either gravity or magnetic fields, as well as neglecting internal thermal and kinetic pressure, the structure would be compressed on a crossing time,

\[t_{\rm v}=S/v. \tag{38}\]

For simplicity, we estimate the size of a structure using the shortest axis of the equivalent ellipsoid, i.e. \(S=2c\) (Section 3.2). We approximate the sweep-up velocity \(v\) to be equal to the bulk velocity of the structure, i.e. \(v=|\mathbf{v}_{0}|\), where \(\mathbf{v}_{0}\) is obtained from Eq. 17. So overall we have

\[t_{\rm v}=2c/|\mathbf{v}_{0}|. \tag{39}\]

Next, we consider an additional gravitational acceleration \(a_{\rm g}\) assisting the sweeping up, where

\[a_{\rm g}=-\frac{1}{V}\int_{V}\mathbf{g}\cdot\frac{\mathbf{r}-\mathbf{r}_{0}}{|\mathbf{r}-\mathbf{r}_{0}|}\,d^{3}r \tag{40}\]

is the average acceleration towards the centre of mass, \(\mathbf{r}_{0}\). We can then estimate (to first order) the gravitationally assisted sweep-up timescale, \(t_{\rm v,g}\), from

\[S=vt_{\rm v,g}+\frac{1}{2}a_{\rm g}t_{\rm v,g}^{2}. \tag{41}\]

For non-gravitating structures, this reduces to \(t_{\rm v}\). For a non-zero gravitational field, taking the positive root, we get

\[t_{\rm v,g}=\frac{-v+\sqrt{v^{2}+2Sa_{\rm g}}}{a_{\rm g}}. \tag{42}\]

In the presence of magnetic fields, we can represent the combined acceleration by gravity and magnetic fields as \(a_{\rm g,B}\), where

\[a_{\rm g,B}=-\frac{1}{V}\int_{V}\left(\mathbf{g}-\frac{\nabla|\mathbf{B}|^{2}}{8\pi\rho}\right)\cdot\frac{\mathbf{r}-\mathbf{r}_{0}}{|\mathbf{r}-\mathbf{r}_{0}|}\,d^{3}r. \tag{43}\]

We can then rewrite Eq. 42 to estimate a combined timescale,

\[t_{\rm v,g,B}=\frac{-v+\sqrt{v^{2}+2Sa_{\rm g,B}}}{a_{\rm g,B}}. \tag{44}\]

The time delay due to the presence of magnetic fields, \(\Delta t_{\rm B}\), can then be estimated as

\[\Delta t_{\rm B}=t_{\rm v,g,B}-t_{\rm v,g}. \tag{45}\]

In Fig. 13, we plot \(\Delta t_{\rm B}\) for various structures from the _high-den_ dendrogram analysis. We see that at the largest cloud scales at \(t_{\rm evol}=2\) Myr, \(\Delta t_{\rm B}\) is of the order of \(\sim 1\) Myr, and then steadily decreases towards smaller scales as a power-law roughly consistent with \(\Delta t_{\rm B}\propto R^{3}\). This timescale of \(\sim 1\) Myr seems to be consistent with the results of the fragmentation analysis in Section 6.4, where we found that the significant differences in the cloud fragmentation properties at \(t_{\rm evol}=2\) Myr seem to have completely disappeared at \(t_{\rm evol}=3.5\) Myr. We emphasise, however, that the calculation of \(\Delta t_{\rm B}\) should only be considered a first-order approximation. Note that \(\Delta t_{\rm B}\) does not directly depend on the magnetic field strength but rather on its gradient. Hence, it is difficult to predict how \(\Delta t_{\rm B}\) would scale with different strengths of the background field. This means that molecular clouds that form in a more magnetised medium do not necessarily form structures more slowly.
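A sketch of the timescale estimate of Eqs. 39-45 follows. The helper solves the quadratic sweep-up relation for the positive root; the size, velocity, and accelerations in the usage line are placeholders of a plausible order of magnitude, not values taken from the simulations.

```python
import numpy as np

def sweep_up_time(S, v, a):
    """Positive root of S = v t + a t^2 / 2 (Eqs. 42 and 44);
    reduces to the crossing time S / v when the acceleration vanishes."""
    if np.isclose(a, 0.0):
        return S / v
    return (-v + np.sqrt(v ** 2 + 2.0 * S * a)) / a

def magnetic_delay(S, v, a_g, a_gB):
    """Delay timescale Delta t_B (Eq. 45): magnetic pressure gradients
    reduce the net inward acceleration (a_gB < a_g), lengthening the
    gravitationally assisted sweep-up time."""
    return sweep_up_time(S, v, a_gB) - sweep_up_time(S, v, a_g)

pc, Myr = 3.086e18, 3.156e13          # cgs conversions
dt = magnetic_delay(S=2 * pc, v=2e5, a_g=2e-9, a_gB=1e-9)
print(dt / Myr, "Myr")                # ~0.06 Myr for these toy inputs
```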
### Densities at which magnetic fields become dynamically sub-dominant

From the results presented in the previous sections, we can attempt to answer the question of at what densities magnetic fields become dynamically sub-dominant. From the density PDFs of the different clouds (Fig. 2), we find that the density distribution is significantly different in the presence of magnetic fields only below \(\sim 100\) cm\({}^{-3}\). This is in accordance with previous simulations and observations (Klessen & Burkert, 2001; Slyz et al., 2005; Girichidis et al., 2014; Schneider et al., 2015), as well as the conclusions drawn by Seifried et al. (2020) using distributions of the three-dimensional true optical depth, \(A_{\rm V,3D}\). From the energetic analysis (Fig. 7), we find that, magnitude-wise, gravity and kinetic energy supersede magnetic fields above a few \(100\) cm\({}^{-3}\), consistent with the results of Ibanez-Mejia et al. (2022). Moreover, this density range is also in accordance with the results of Seifried et al. (2020), who find that the relative orientation of magnetic fields with respect to elongated filamentary structures changes at a few \(100\) cm\({}^{-3}\) due to the occurrence of gravity-driven converging flows (Soler & Hennebelle, 2017), suggesting energetic sub-dominance of magnetic fields at higher densities. Lastly, the fragmentation analysis presented in this work (Fig. 11) also shows differences in fragmentation patterns below a similar density regime of \(\sim 100\) cm\({}^{-3}\). A Jeans fragmentation analysis yields roughly consistent limits as well.

In summary, for clouds born from an ISM with typical magnetic field strengths as in our Milky Way (Beck & Wielebinski, 2013), the density PDFs, the energetic analysis, the histogram of relative orientation technique applied by Seifried et al. (2020), and the fragmentation analysis in this work all seem to point to the fact that the magnetic field becomes sub-dominant above densities of around \(100-1000\) cm\({}^{-3}\). This overall trend is also fully consistent with the \(B-\rho\) relation obtained by Crutcher et al. (2010), who conclude a transition density of \(\sim 300\) cm\({}^{-3}\).

## 7 Conclusions

We investigate the role magnetic fields play in determining the morphology, energetics, and fragmentation properties of young molecular clouds by analysing seven different simulated clouds (five with magnetic fields and two without) from the SILCC-Zoom simulations.

Figure 13: The estimated delay timescale, \(\Delta t_{\rm B}\) (Eq. 45), for various MHD cloud structures from the _high-den_ analysis at \(t_{\rm evol}=2\) Myr. A power-law proportional to \(R^{3}\) is plotted to show the rough scaling.

These simulations are geared to study the evolution of the multiphase interstellar medium in a supernova-driven, turbulent, stratified galactic disc environment. To identify forming structures, we use a dendrogram algorithm and trace the statistical properties of the identified structures. We include a simple chemical network which allows us to follow the formation of H\({}_{2}\) as the cloud assembles, and thereby distinguish between mostly atomic (H\({}_{2}\) mass fraction < 50%) and mostly molecular (H\({}_{2}\) mass fraction > 50%) structures.

* We observe that the MHD clouds are fluffier, meaning that they have more intermediate density gas at number densities of roughly \(1-100\) cm\({}^{-3}\), compared to their hydrodynamic counterparts. In the hydrodynamic clouds, the lack of magnetic fields results in the denser structures being surrounded by a comparatively more rarefied envelope.

* In terms of morphology, we find that almost all clouds are sheet-like, which is consistent with recent observations of sheet-like envelopes around denser filamentary cloud structures (Kalberla et al., 2016; Arzoumanian et al., 2018; Rezaei Kh. & Kainulainen, 2022; Tritsis et al., 2022; Pineda et al., 2022; Clarke et al., 2023).
In our case, the MCs form due to compressions caused by expanding supernova shells, consistent with the bubble-driven MC formation scenario (Koyama & Inutsuka, 2000; Inoue & Inutsuka, 2009; Inutsuka et al., 2015).

* We find that spheroidal structures within the clouds are rare on all spatial scales, with \(\sim 90\%\) of the structures being elongated. We further see that the runs with magnetic fields have a roughly comparable fraction of filaments and sheets, whereas the hydrodynamic runs overall produce more sheet-like structures compared to filaments.

* Energetically, magnetic fields in our simulations are important for less dense (up to \(\sim\)1000 cm\({}^{-3}\)) and mostly, but not exclusively, atomic structures. The dynamics of denser and potentially star-forming structures is dominated by the interplay of turbulence and gravity. This density threshold, above which the magnetic fields seem to become sub-dominant, is supported by the previous works of Seifried et al. (2020) and Ibanez-Mejia et al. (2022), and is consistent with the observed transition in the \(B-\rho\) relation (Crutcher et al., 2010).

* By investigating the magnetic surface energy term, we find that for most structures it acts in a confining manner, and, for some low-density structures, it even leads to overall magnetic confinement.

* By studying the numbers and masses of cloud fragments that form, we find that at densities below roughly \(\sim 100\) cm\({}^{-3}\), the presence of magnetic fields helps to create more massive fragments, but generally does not result in an increase in the number of such structures. A stability analysis suggests that in the resolved range, leaf fragments are mostly Jeans stable and the fragmentation is not primarily governed by magnetic Jeans instabilities. Instead of significantly altering the nature of fragmentation, magnetic fields seem to rather slow down the fragmentation process. Using a simple order-of-magnitude estimate, we find that this delay timescale is \(\sim 1\) Myr.

Overall, using density PDFs and an energetic as well as a fragmentation analysis, we find a scenario where magnetic fields significantly affect the flows and fragmentation in the lower density gas (below \(\sim 100\) cm\({}^{-3}\)), channelling flows and thereby affecting both the morphology of the forming structures and the formation timescale of the dense gas. Once the dense structures (typically above \(\sim 1000\) cm\({}^{-3}\)) form, however, the further evolution and fragmentation of the dense gas seems to be mostly unaffected by the magnetic field.

## Acknowledgements

We would like to thank the referee, Prof. Dr. Robi Banerjee, for their helpful comments, suggestions, and overall discussion, which have increased the quality of the paper. SG, SW, DS and MW would like to acknowledge the support of the Bonn-Cologne Graduate School (BCGS), which is funded through the German Excellence Initiative, as well as the DFG for funding through SFB 956 "Conditions and Impact of Star Formation" (subprojects C5 and C6). SDC is supported by the Ministry of Science and Technology (MoST) in Taiwan through grant MoST 108-2112-M-001-004-MY2. This research made use of astrodendro, a Python package to compute dendrograms of astronomical data ([http://www.dendrograms.org/](http://www.dendrograms.org/)); as well as yt, an open-source, permissively-licensed Python package for analyzing and visualizing volumetric data ([https://yt-project.org/](https://yt-project.org/)). The 3D renderings in Fig.
3 were computed using paraview. The FLASH code used in this work was partly developed by the Flash Center for Computational Science at the University of Rochester. ## Data availability The data underlying this article can be shared for selected scientific purposes after request to the corresponding author.
2305.19032
A 3D model for the stellar populations in the nuclei of NGC 1433, NGC 1566, and NGC 1808
Aims. We aim to characterize the properties of the stellar populations in the central few hundred parsecs of nearby galactic nuclei; specifically their age, mass, and 3D geometry. Methods. We use spatially resolved spectroscopic observations of NGC 1433, NGC 1566, and NGC 1808 obtained with SINFONI to constrain a 3D model composed of a spherically symmetric nuclear star cluster (NSC) and an extended thick stellar disk. We computed UV to mid-infrared single stellar population (UMISSP) spectra to determine the age of the stellar populations and construct synthetic observations for our model. To overcome degeneracies between key parameters, we simultaneously fit the spatially resolved line-of-sight velocity, line-of-sight-velocity-dispersion, low-spectral-resolution NIR continuum, and high-spectral-resolution CO absorption features for each pixel. Results. For the three objects, we derive the age and mass of the young and old stellar populations in the NSC and surrounding disk, as well as their 3D geometry: radius for the NSC; thickness, inclination, and position angle for the disk. These results are consistent with published independent measurements when available. Conclusions. The proposed method allows us to derive a consistent 3D model of the stellar populations in nearby galactic centers solely based on a near-infrared IFU observation.
P. Vermot, J. Palouš, B. Barna, S. Ehlerová, M. R. Morris, R. Wünsch
2023-05-30T13:49:58Z
http://arxiv.org/abs/2305.19032v1
# A 3D model for the stellar populations in the nuclei of NGC 1433, NGC 1566, and NGC 1808

###### Abstract

Aims: We aim to characterize the properties of the stellar populations in the central few hundred parsecs of nearby galactic nuclei; specifically their age, mass, and 3D geometry.

Methods: We use spatially resolved spectroscopic observations of NGC 1433, NGC 1566, and NGC 1808 obtained with SINFONI to constrain a 3D model composed of a spherically symmetric nuclear star cluster (NSC) and an extended thick stellar disk. We computed UV to mid-infrared single stellar population (UMISSP) spectra to determine the age of the stellar populations and construct synthetic observations for our model. To overcome degeneracies between key parameters, we simultaneously fit the spatially resolved line-of-sight velocity, line-of-sight velocity dispersion, low-spectral-resolution NIR continuum, and high-spectral-resolution CO absorption features for each pixel.

Results: For the three objects, we derive the age and mass of the young and old stellar populations in the NSC and surrounding disk, as well as their 3D geometry: radius for the NSC; thickness, inclination, and position angle for the disk. These results are consistent with published independent measurements when available.

Conclusions: The proposed method allows us to derive a consistent 3D model of the stellar populations in nearby galactic centers solely based on a near-infrared IFU observation.

## 1 Introduction

In a series of papers (Palous et al. 2020; Barna et al. 2022; Ehlerova et al. 2022), we are using hydrodynamics simulations to investigate the potential of supernova remnants to feed supermassive black holes (SMBH) in the vicinity of the center of the Milky Way. We have discovered that nearby supernovae can have a positive impact on the growth of Sgr A* by pushing material from the interstellar medium (ISM) deeper into the central potential well. We wish to expand this investigation to other galactic nuclei, whose properties are expected to vary (gravitational potential, ISM distribution, supernova rate), potentially leading to different results. For this purpose, we need a proper description of the stellar populations and ISM in these regions.

In this paper, we aim to determine the gravitational potential for three nearby galactic nuclei: NGC 1808, NGC 1433, and NGC 1566. These targets were chosen because they have been observed with both SINFONI and ALMA. The former is used in this paper to determine the gravitational potential, and the latter will be used in an upcoming publication to determine the ISM distribution. Recent publications on the same SINFONI data are associated with each of these objects, but none of them have provided a sufficiently detailed description of the gravitational potential: The analysis of NGC 1808 is presented in Busch et al. (2017), where the authors fit a 2D circular Plummer model with an inclination given by the Sersic profile of the galaxy to reproduce the stellar line-of-sight velocity (LOSV) map. However, the numerical values of the best parameters are not provided, as the fit is only used to subtract the circular motion from the map in order to detect noncircular motions of the gas and stellar content. A qualitative analysis of the LOSV dispersion (\(\sigma_{v}\)) is presented. NGC 1433 is described in Smajic et al. (2014).
The stellar LOSV is not fitted, but the authors present an analysis of the gaseous content, where a combination of disk-like rotation and a one-sided outflow is used to explain the measured LOSV. The observation of NGC 1566 is presented in Smajic et al. (2015), where the authors also fit a 2D circular Plummer model with a fixed inclination to reproduce the stellar LOSV map and qualitatively discuss the \(\sigma_{v}\) map. The numerical values of the best parameters for the Plummer model are not provided, but it is indicated that no deviation from circular motion is detected.

Analyzing the same data, we aim to improve the characterization of the stellar populations in these objects. Two main issues can explain the limited description of the stellar mass distribution in previous publications: Firstly, there is a degeneracy between mass, inclination, and age: when using photometry to determine the mass distribution, a strong uncertainty is associated with the unknown age of the stellar population, which can lead to differences in the mass-to-luminosity ratio (M/L) of orders of magnitude; when using the LOSV distribution, a perfect degeneracy appears between the mass and the inclination of the disk. The second issue is the difficulty in determining the 3D mass distribution due to projection effects: it is particularly challenging to determine the geometrical thickness of a stellar disk with photometry and LOSV if it is not viewed edge-on.

In order to overcome these difficulties, we use a high-resolution single stellar population (SSP) library to determine the age of the stellar populations and perform a simultaneous fit of four observables determined for each pixel: a low-resolution near-infrared (NIR) spectral energy distribution (SED), a high-resolution absorption spectrum of the CO bandheads, LOSV, and \(\sigma_{v}\). This allows us to break degeneracies, bring in 3D information, and obtain reliable estimates of the main parameters of the model. The targets and observations are presented in Section 2, the construction of the SSP spectra in Section 3, the mass distribution model in Section 4, the results of the fitting procedure in Section 5, and a discussion and conclusion in Sections 6 and 7. Additional figures supporting the analysis are presented in the appendices of the paper.

## 2 Observations

### Targets and observations

Three objects are analyzed in this paper: NGC 1808, NGC 1433, and NGC 1566. The method was developed on the first of these, for which it produces the best results, and was tentatively applied to the two other objects with good results. The modeling presented in this paper is done solely on archival SINFONI data obtained in the high spectral resolution and medium spatial resolution modes. For each observation, the official ESO pipeline was used to perform the data reduction, and the flux calibration was performed on the entire field of view with the 2MASS extended source survey as a reference.

**NGC 1808** was observed as part of the 075.B-0648(A) ESO program on 24 March 2005, 2 April 2005, and 12 December 2005; each time in the three NIR J (\(R\sim 2000\)), H (\(R\sim 3000\)), and K (\(R\sim 4000\)) spectral bands. A \(6"\times 6"\) field of view is extracted with a \(125\times 250\) mas pixel scale and an effective resolution of \(0.7"\). We assume a distance of 12.8 Mpc, corresponding to \(z\sim 0.003\) and \(62\) pc/arcsec. A first analysis of this observation and a more detailed description of the host galaxy are presented in Busch et al. (2017).
**NGC 1433** was observed as part of the 90.B-0657(A) ESO program on 21 October 2012 in the H and K bands. A \(4"\times 4"\) field of view is extracted, with a \(125\times 250\) mas pixel scale and an effective resolution of \(0.6"\). We assume a distance of 15.5 Mpc, corresponding to \(75\) pc/arcsec. A first analysis of this observation and a more detailed description of the host galaxy are presented in Smajic et al. (2014).

**NGC 1566** was observed as part of the 90.B-0657(A) ESO program on 21 October 2012 in the H and K bands. A \(4.8"\times 4.8"\) field of view is extracted with a \(125\times 250\) mas pixel scale and an effective resolution of \(0.6"\). We assume a distance of 21.56 Mpc, corresponding to \(105\) pc/arcsec. A first analysis of this observation and a more detailed description of the host galaxy are presented in Smajic et al. (2015).

### The observables

For each of the objects mentioned above, we extract the following information:

1. A low-spectral-resolution data cube, where the angular resolution, field of view, and spectral coverage of the observations are preserved, but the spectral resolution is lowered by a factor of 50 by rebinning the original data cube in the spectral dimension. As the continuum emission is dominated by young stellar sources, this provides a spatially resolved SED for each pixel that will constrain the age, mass, and spatial distribution of the young stellar populations around the nuclei.

2. A high-spectral-resolution data cube, where the angular resolution, field of view, and spectral resolution of the observations are preserved, but the spectral domain is restricted to \(2.25-2.40\,\mu\)m (after correction for the redshift) and the flux is normalized by the continuum. This provides the absorption profile of the CO bandheads for each pixel, which can be used to constrain the age and relative importance of the old and young stellar populations.

3. An LOSV map, obtained by fitting the above-mentioned absorption features with the high-spectral-resolution SSP spectra described below. For each pixel, we estimate the velocity shift as the median of 30 Doppler shifts measured with SSPs of randomly chosen ages. We use the LOSV map to constrain the total mass distribution of our model.

4. A \(\sigma_{v}\) map obtained with the same procedure as in point (3). We use this measurement to constrain the thickness of the stellar disk and solve the degeneracy between mass and inclination.

The simultaneous use of these four sources of information will bring much stronger constraints on our models than fitting each of them individually. In addition to these observables, the flux from the [Fe II] emission line at 1.64 \(\mu\)m is measured in each pixel as the integrated emission line after a linear continuum subtraction.
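As an illustration of observables (1) and (2), the sketch below rebins a SINFONI-like cube (axes y, x, \(\lambda\)) spectrally by a factor of 50 and extracts the continuum-normalized CO bandhead window; estimating the continuum with a per-pixel median is a simplification of the actual procedure.

```python
import numpy as np

def make_observables(cube, wave, z, rebin=50):
    """Low-resolution cube and normalized CO region from cube[y, x, lam];
    `wave` is the observed wavelength axis in microns."""
    ny, nx, nl = cube.shape
    # (1) low-spectral-resolution cube: block-average `rebin` channels
    nl_lo = nl // rebin
    lowres = cube[:, :, :nl_lo * rebin].reshape(ny, nx, nl_lo, rebin).mean(axis=3)
    # (2) continuum-normalized CO bandheads, 2.25-2.40 um in the rest frame
    rest = wave / (1.0 + z)
    sel = (rest >= 2.25) & (rest <= 2.40)
    co = cube[:, :, sel]
    continuum = np.nanmedian(co, axis=2, keepdims=True)  # crude continuum
    return lowres, co / continuum

# Toy cube with 2048 channels between 1.95 and 2.45 microns, z of NGC 1808
wave = np.linspace(1.95, 2.45, 2048)
lowres, co_norm = make_observables(np.ones((32, 32, 2048)), wave, z=0.003)
print(lowres.shape, co_norm.shape)
```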
## 3 Stellar templates

To determine the age of the stellar populations and the LOSV and \(\sigma_{v}\) maps, we need a high-spectral-resolution library of unresolved stellar populations at various ages. Several spectral libraries computed with evolutionary SSP models are publicly available. Such models are built with an initial mass function, stellar evolution tracks, and spectral templates for individual stars, offering a high degree of freedom. Despite the relatively large number of possible libraries to choose from, we found no public stellar library matching our requirements, which are:

1. Solar metallicity. We lack sufficient information to simultaneously fit the ages and metallicities of the stellar populations, and therefore we assume a solar metallicity for the three objects as our best estimate for an elliptical star-forming galactic center.

2. Coverage of the NIR J, H, and K bands to match the spectral range of SINFONI.

3. Spectral resolution of \(R\gtrsim 3000\) (\(\Delta v\lesssim 100\) km.s\({}^{-1}\)) in the NIR to fit the shape of the absorption features and measure LOSV and \(\sigma_{v}\).

4. Spectra for very young post-starburst populations down to 10 Myr to investigate regions actively producing supernovae.

Several public SSP libraries fulfill some of these criteria; MaStar (Maraston et al., 2020), A-LIST (Ashok et al., 2021), MILES (Vazdekis et al., 2016), XSL (Verro et al., 2022), and GALEV (Kotulla et al., 2009) being the most notable ones. MaStar, which is based on the MaNGA Stellar Library, is constructed with the empirical spectra of approximately 9000 SDSS stars with a very wide range of parameters. However, it only covers the shortest NIR wavelengths (up to 1.03 \(\mu\)m), does not provide sufficient spectral resolution to fit the \(\sigma_{v}\), and does not cover ages younger than 200 Myr. The APOGEE Library of Infrared SSP Templates (A-LIST) provides high-spectral-resolution templates, but only in a narrow band (1.51-1.70 \(\mu\)m) and for stellar populations older than 2 Gyr. The MILES extended SSP model is based on a composite library of empirical spectra covering a wide spectral range (NGSL for the UV, MILES for the optical, IRTF for the infrared), with moderate to high spectral resolution. Unfortunately, its spectral resolution in the NIR is R\(\sim 2000-2500\), corresponding to \(\Delta\lambda\sim 1\) nm, or \(\Delta v\sim 150\) km.s\({}^{-1}\), which is insufficient to fit the stellar \(\sigma_{v}\), although it provides spectra for stellar populations as young as 30 Myr, which is close to our goal. The X-shooter Spectral Library (XSL) from Verro et al. (2022), which is based on empirical spectra from the eponymous instrument, provides spectra at high resolution (R \(\sim 10\,000\)) over a wide spectral range (350-2480 nm) and for populations as young as 50 Myr. Lastly, the GALaxy Evolutionary (GALEV) synthesis models are fast, publicly available models for the computation of SSP spectra. These can be used to predict spectra for very young ages, down to 4 Myr, and use the theoretical Basel Stellar Library to cover a very large spectral domain from the extreme UV to the far-infrared (FIR). However, their spectral resolution is rather low (\(\Delta\lambda\sim 5-10\) nm in the NIR), and therefore they are not suitable for the analysis of absorption features.

Unable to find a spectral library matching our requirements, we constructed our own using the Stellar Population Interface for Stellar Evolution and Atmospheres (SPISEA, Hosek et al., 2020). Our library matches and surpasses the above criteria, notably thanks to the use of synthetic spectra obtained with the stellar atmosphere code PHOENIX (Allard et al., 2003, 2007, 2012). As the construction of our library required a significant amount of resources and can now be rapidly used for the analysis of other observations, we have chosen to make it publicly available.
### Initial mass function

We use a standard Kroupa IMF (Kroupa, 2001), which is a double-break power law, defined as

\[\xi(m)\propto m^{-\alpha}\begin{cases}\alpha=1.3\text{ for }0.1<m<0.5\\ \alpha=2.3\text{ for }0.5<m<300\end{cases}\quad, \tag{1}\]

to generate an initial population of stars. No stellar multiplicity is introduced. Due to the stochastic construction of the initial population, there is variability in the final spectra. For this reason, we compute ten 10\({}^{6}\) M\({}_{\odot}\) models for each age, providing good precision on the average spectrum as well as an estimate of the variability at a given age. The standard deviation of each spectrum is provided in the library.

### Stellar evolution

For each age, an isochrone is computed and used to attribute physical properties (\(T_{\rm eff}\), \(\log g\), and \(L\)) to each star of the initial population. We chose to use the MESA isochrones and stellar tracks (MIST, Dotter, 2016; Choi et al., 2016; Paxton et al., 2011, 2013, 2015), whose evolutionary tracks are computed with the MESA 1D stellar evolution code and cover a wide range of parameters: stellar mass (including high-mass stars, [0.1; 300] M\({}_{\odot}\)), age (including very young stars, [10\({}^{6}\); 10\({}^{10}\)] yr), and metallicity ([Fe/H], [-4; 0.5]). We used the version defined as v1.2 in SPISEA, which was downloaded in August 2018 (solar metallicity) and in April 2019 (other metallicities). Each star is then attributed a synthetic spectrum from the PHOENIX Models for Synphot1 (Allard et al., 2003, 2007, 2012), which cover the 50-50000 nm spectral range. For [Fe/H]=0, the library is at high spectral resolution (\(R\sim 14000\)) and was constructed with no missing template except for the Wolf-Rayet stars.

Footnote 1: Described and available for download at [https://www.stsci.edu/hst/instrumentation/reference-data-for-calibration-and-tools/astronomical-catalogs/phoenix-models-available-in-synphot](https://www.stsci.edu/hst/instrumentation/reference-data-for-calibration-and-tools/astronomical-catalogs/phoenix-models-available-in-synphot)

This model was used to compute a library that we call UMISSP (UV to mid-infrared single stellar population), which consists of synthetic spectra of 10\({}^{6}\) M\({}_{\odot}\) solar-metallicity stellar populations with ages ranging from 10\({}^{6}\) to 10\({}^{10}\) yr in steps of 10\({}^{0.05}\)2. In Appendix B, we perform several basic tests to verify that our spectra are in general agreement with other SSP models.

Footnote 2: The library is publicly available at [http://galaxy.asu.cas.cz/page/unissspr](http://galaxy.asu.cas.cz/page/unissspr).

Spectra at lower resolution (\(R\sim 280\)) are also computed for other metallicities ([Fe/H] = 0.5, -0.5, -1, -2, -3, -4), although these are not used in this paper. The mean spectra and standard deviation spectra for all ages and metallicities are provided. They are flux calibrated and expressed in W \(\mu\)m\({}^{-1}\), corresponding to the total cluster emission in a \(4\pi\) sr solid angle. In addition, for each stellar population, we provide: (a) the initial mass histogram, (b) the MIST isochrone, and (c) the parameters of the isochrone and the ones used for the spectrum attribution of each star.
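The stochastic construction of the initial population can be illustrated with a direct draw from the IMF of Eq. 1 (SPISEA performs this step internally; the rejection-sampling scheme and the reduced 10\({}^{4}\) M\({}_{\odot}\) target mass below are choices made here for brevity).

```python
import numpy as np

def sample_kroupa(total_mass, m_lo=0.1, m_break=0.5, m_hi=300.0, seed=0):
    """Draw stellar masses (Msun) from the Kroupa IMF of Eq. 1 by
    rejection sampling until the summed mass reaches `total_mass`."""
    rng = np.random.default_rng(seed)

    def xi(m):  # unnormalized Eq. 1, continuous at the break mass
        return m ** -1.3 if m < m_break else m_break * m ** -2.3

    w_max = xi(m_lo) * m_lo        # max of xi(m)*m for a log-uniform proposal
    total, masses = 0.0, []
    while total < total_mass:
        m = np.exp(rng.uniform(np.log(m_lo), np.log(m_hi)))
        if rng.uniform() < xi(m) * m / w_max:
            masses.append(m)
            total += m
    return np.array(masses)

stars = sample_kroupa(1e4)         # UMISSP uses ten 1e6 Msun draws per age
print(len(stars), "stars; most massive:", stars.max(), "Msun")
```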
## 4 Model

In order to model the observations, we choose a two-component model composed of a spherical nuclear cluster and an extended thick disk. For each set of parameters, we construct 3D maps of the various observables, which we stack along the line of sight. These are then used to compute synthetic observations, which are directly compared to the above-mentioned set of observables (Subsection 2.2). The construction of these synthetic observations is described in more detail in the following paragraphs.

### 3D maps of mass, velocity, and velocity dispersion

We first construct 3D maps of the mass distribution, circular velocity, and velocity dispersion for each component using two self-consistent potential-density function pairs: the Plummer spherical cluster (Plummer, 1911) and the Miyamoto-Nagai thick disk (Miyamoto and Nagai, 1975). The mass distribution is computed as

\[\rho_{NSC}(R,z)=\frac{3M_{NSC}}{4\pi b_{NSC}^{3}}\left(1+\frac{R^{2}+z^{2}}{b_{NSC}^{2}}\right)^{-\frac{5}{2}}, \tag{2}\]

for the nuclear star cluster (NSC) Plummer model, and

\[\rho_{disk}(R,z)=\frac{b_{disk}^{2}M_{disk}}{4\pi}\frac{a_{disk}R^{2}+[a_{disk}+3(z^{2}+b_{disk}^{2})^{1/2}][a_{disk}+(z^{2}+b_{disk}^{2})^{1/2}]^{2}}{(R^{2}+[a_{disk}+(z^{2}+b_{disk}^{2})^{1/2}]^{2})^{5/2}(z^{2}+b_{disk}^{2})^{3/2}} \tag{3}\]

for the extended Miyamoto-Nagai disk. The corresponding potentials are

\[\Phi_{NSC}(R,z)=-\frac{GM_{NSC}}{\sqrt{R^{2}+z^{2}+b_{NSC}^{2}}}, \tag{4}\]

\[\Phi_{disk}(R,z)=-\frac{GM_{disk}}{\sqrt{R^{2}+[a_{disk}+(z^{2}+b_{disk}^{2})^{1/2}]^{2}}}, \tag{5}\]

where (R, \(\theta\), z) are the cylindrical coordinates in the disk. The resulting total gravitational potential is

\[\Phi(R,z)=\Phi_{NSC}(R,z)+\Phi_{disk}(R,z). \tag{6}\]

We assume that the central cluster is spherical and at rest, that is, with zero LOSV in the frame of the galaxy. For the extended disk, we compute the circular velocity from the derivative of the total gravitational potential (NSC + disk):

\[\overrightarrow{v}_{disk}(R,z)=\sqrt{r\frac{d\Phi}{dr}}\left(\frac{R}{r}\right)\overrightarrow{e}_{\theta}, \tag{7}\]

where \(r=\sqrt{R^{2}+z^{2}}\) and \(\overrightarrow{e}_{\theta}\) is the unit vector in the direction of rotation. For the NSC Plummer cluster (Dejonghe, 1987), the local isotropic velocity dispersion is given by

\[\sigma_{NSC}^{2}(r)=\frac{GM_{NSC}}{6\sqrt{r^{2}+b_{NSC}^{2}}}, \tag{8}\]

while for the extended MN disk, we assume that the local velocity dispersion is isotropic and equal to the product of the orbital velocity and the aspect ratio \((h/r)^{*}\) of the disk:

\[\sigma_{disk}(R,z)=(h/r)^{*}\sqrt{r\frac{d\Phi_{NSC+disk}}{dr}}\left(\frac{R}{r}\right), \tag{9}\]

where \((h/r)^{*}\) is the aspect ratio of the disk, that is, the ratio between its vertical and radial spatial scales. We note that this ratio is not equal to \(\frac{b_{disk}}{a_{disk}}\), but was measured by fitting an elongated Gaussian to the \(\rho_{disk}\) distributions for a wide range of \((a_{disk},b_{disk})\) parameters, and is interpolated at the desired values (see Fig. 1). In the end, the LOSV of each component is given by

\[\begin{cases}v_{disk,los}(R,z)=\overrightarrow{v}_{disk}(R,z)\cdot\overrightarrow{e}_{los}=v_{disk}(R,z)\times\sin(i)\sin(\theta)\\ v_{NSC,los}(R,z)=0,\end{cases} \tag{10}\]

where \(\overrightarrow{e}_{los}\) is the unit vector along the direction of the LOS, \(i\) is the inclination of the disk (null for a face-on disk), and \(\theta\) is the azimuth angle in the disk (null in the direction of the observer).
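A minimal numerical sketch of Eqs. 4-7: the total Plummer plus Miyamoto-Nagai potential, and the circular speed from its radial derivative. Units are pc, M\({}_{\odot}\), and km s\({}^{-1}\); the parameter values are only of the order of those in Table 4, and the finite-difference derivative stands in for whatever differentiation scheme is actually used.

```python
import numpy as np

G = 4.301e-3  # pc Msun^-1 (km/s)^2

def phi_total(R, z, M_nsc, b_nsc, M_disk, a_disk, b_disk):
    """Plummer + Miyamoto-Nagai potential (Eqs. 4-6)."""
    phi_nsc = -G * M_nsc / np.sqrt(R**2 + z**2 + b_nsc**2)
    s = a_disk + np.sqrt(z**2 + b_disk**2)
    return phi_nsc - G * M_disk / np.sqrt(R**2 + s**2)

def v_circ(R, z, pars, eps=1e-4):
    """Circular speed of Eq. 7: v = sqrt(r dPhi/dr) * (R / r), with
    dPhi/dr estimated by displacing (R, z) along the spherical radius."""
    r = np.sqrt(R**2 + z**2)
    phi_p = phi_total(R * (1 + eps), z * (1 + eps), *pars)
    phi_m = phi_total(R * (1 - eps), z * (1 - eps), *pars)
    dphi_dr = (phi_p - phi_m) / (2.0 * eps * r)
    return np.sqrt(np.maximum(r * dphi_dr, 0.0)) * (R / r)

pars = (1e8, 30.0, 1e10, 200.0, 50.0)   # M_nsc, b_nsc, M_disk, a_disk, b_disk
print(v_circ(np.array([100.0]), np.array([10.0]), pars), "km/s")
```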
### Spectral templates

With the model described above, we compute 3D grids for each component, with a mass, LOSV, and LOSV dispersion attributed to each cell. We use this information to attribute a spectrum to each cell of the grid, as follows:

1. Based on the high-resolution SSP spectra presented above, we attribute two spectra to each component: a low-resolution spectrum covering the full spectral domain, and a high-resolution spectrum limited to the \(2.25-2.40\)\(\mu\)m wavelength interval. Both of them are scaled according to the mass enclosed in each cell.

2. We perform a Doppler shift of the high-resolution spectrum based on the projected velocity along the LOS.

3. We convolve the high-resolution spectrum with a Gaussian whose width is given by the \(\sigma_{v}\).

4. We convolve the high-resolution spectrum with the SINFONI line spread function (measured on strong OH lines from the SKY observation, and assumed to be constant within the field of view, FoV).

### Synthetic observations

Finally, we compute the synthetic observations corresponding to our four observables:

1. A low-resolution datacube obtained by summing the low-resolution spectra along the LOS.

2. A high-resolution datacube obtained by summing the shifted and convolved high-resolution spectra along the LOS, and normalizing it to the measured continuum.

3. An LOSV map obtained by calculating the average of the velocity projected along the LOS, weighted by the luminosity of each cell at 2.3 \(\mu\)m.

4. An LOSV dispersion map obtained by computing the square root of the quadratic sum of (i) the average of the local velocity dispersion projected along the LOS, and (ii) the standard deviation of the velocity projected along the LOS, with both weighted by the luminosity in each cell at 2.3 \(\mu\)m.

All the synthetic observations are convolved with a Gaussian kernel matching the angular resolution of the observation.
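Items (3) and (4) reduce to luminosity-weighted projections of the model grid. A sketch, assuming the 2.3 \(\mu\)m luminosity, the LOSV, and the local dispersion of each cell are available as 3D arrays with the LOS as the last axis.

```python
import numpy as np

def project_kinematics(L, v_los, sigma_local, axis=2):
    """LOSV map: luminosity-weighted mean of v_los along the LOS.
    sigma_v map: quadratic sum of (i) the weighted mean local dispersion
    and (ii) the weighted LOS spread of v_los (items 3 and 4)."""
    w = L / L.sum(axis=axis, keepdims=True)
    v_map = (w * v_los).sum(axis=axis)
    mean_sig = (w * sigma_local).sum(axis=axis)
    spread2 = (w * (v_los - np.expand_dims(v_map, axis)) ** 2).sum(axis=axis)
    return v_map, np.sqrt(mean_sig ** 2 + spread2)

# Toy grid: 16x16 sky pixels, 32 cells along the LOS (velocities in km/s)
rng = np.random.default_rng(1)
shape = (16, 16, 32)
L = rng.random(shape)
v = rng.normal(0.0, 50.0, shape)
sig = np.full(shape, 30.0)
v_map, sig_map = project_kinematics(L, v, sig)
print(v_map.shape, float(sig_map.mean()))
```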
## 5 Results

### Fitting procedure

The model consists of an NSC with two stellar populations -- one young and one old -- characterized by their masses \(M_{Y,NSC}\) and \(M_{O,NSC}\) and ages \(age_{Y,NSC}\) and \(age_{O,NSC}\), respectively; and similarly for the disk, with \(M_{Y,disk}\) and \(M_{O,disk}\) and ages \(age_{Y,disk}\) and \(age_{O,disk}\). The two stellar populations of each component are assumed to be extinguished by the same amount of foreground material, defined by \(A_{K,NSC}\) and \(A_{K,disk}\). Additionally, the geometry of the cluster is given by only one parameter, its radius \(b_{NSC}\), while the geometry of the disk is described by four parameters, namely its radius \(a_{disk}\), its aspect ratio \((h/r)^{*}\), its inclination \(i\), and its position angle \(PA\). Overall, the model has 15 free parameters, and the computation time for each set of parameters is between 100 ms and 1000 ms. This prevents us from fully exploring the parameter space and requires the use of a specific two-step fitting procedure.

We first determine the eight parameters corresponding to the ages of the stellar populations, namely \(age_{Y,NSC}\), \(age_{O,NSC}\), \(M_{Y,NSC}/M_{O,NSC}\), \(A_{K,NSC}\), \(age_{Y,disk}\), \(age_{O,disk}\), \(M_{Y,disk}/M_{O,disk}\), and \(A_{K,disk}\), as follows:

1. We extract integrated spectra of the observed NSC and extended disk with aperture spectrometry. The average disk spectrum is subtracted from the NSC one (masks are determined based on the continuum images, where the NSC is clearly distinguishable).

2. For each component (NSC and disk), we measure one low-resolution spectrum covering the JHK bands, and a high-resolution spectrum covering the CO bandheads at 2.4 \(\mu\)m.

3. For each component, we fit an extinction \(A_{K}\) and the relative mass contribution (\(M_{Y}/M_{O}\)) between two stellar populations with different ages. We try every combination of young (\(10^{6}\) yr \(\leq age_{Y}\leq 10^{9}\) yr in steps of \(10^{0.025}\)) and old (\(10^{9}\) yr \(\leq age_{O}\leq 10^{10}\) yr in steps of \(10^{0.1}\)) stellar populations, measure the sum of the squared residuals both on the observed low-resolution continuum and on the normalized high-resolution absorption spectrum (each weighted by the inverse of its average value), and keep the best combination for the second step of the fit (secondary solutions have also been tried).

Once we have fixed the ages of the two stellar populations, we fit the masses and geometrical parameters, \(M_{NSC}\), \(b_{NSC}\), \(M_{disk}\), \(a_{disk}\), \(b_{disk}\), \(i\), and \(PA\):

1. At first, we find an initial solution by manually adjusting the parameters and visually inspecting the synthetic observables, as follows:
* \(M_{NSC,0}\) and \(b_{NSC,0}\) are set to reproduce the continuum images and the \(\sigma_{v}\) map.
* \(M_{disk,0}\) and \(a_{disk,0}\) are chosen to approximately match the continuum images.
* \(PA_{0}\) is easily obtained from the line of nodes of the LOSV map.
* Keeping the previous estimates for \(M_{disk,0}\) and \(a_{disk,0}\), we fit \(i_{0}\) and \(b_{disk,0}\) to best reproduce the observed LOSV and \(\sigma_{v}\).

2. We use the previous solution (\(M_{NSC,0}\), \(b_{NSC,0}\), \(M_{disk,0}\), \(a_{disk,0}\), \(b_{disk,0}\), \(i_{0}\), \(PA_{0}\)) as an initial guess for the automated Trust Region Reflective least-squares algorithm implemented in scipy.optimize (see the sketch below), which provides the optimal solution around the initial guess. For each object, the weights associated with the four observables (continuum, CO absorption features, LOSV, and \(\sigma_{v}\)) are determined iteratively to give them similar importance in the fitting procedure.

3. Finally, we estimate the uncertainties on the parameters by exploring each of them individually, except for \(M_{disk}\) and \(i\), which can be strongly correlated and for which a 2D map of the summed squared residuals is computed to determine the uncertainties (see Figs. 10, 11, and 12).

The best parameters for the three objects are presented in the following tables, and the maps associated with the best models are presented in the appendices.
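Step (2) can be sketched with scipy.optimize.least_squares and its Trust Region Reflective method. The forward model below is a deliberately simple stand-in for the model of Section 4; only the structure of the procedure (stacked, weighted residuals over several observables, initialized from a manual guess) is illustrated.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(p, x):
    """Toy two-observable forward model sharing the parameters p."""
    amp, scale = p
    return amp * np.exp(-x / scale), amp * x / (x + scale)

def residuals(p, x, obs, weights):
    """Stacked, weighted residuals; the weights balance the observables."""
    return np.concatenate([w * (model - o) for model, o, w
                           in zip(forward(p, x), obs, weights)])

x = np.linspace(0.1, 10.0, 50)
rng = np.random.default_rng(2)
obs = [f + rng.normal(0.0, 0.01, x.size) for f in forward((2.0, 3.0), x)]
p0 = (1.5, 2.0)                          # the manually adjusted initial guess
fit = least_squares(residuals, p0, method="trf",  # Trust Region Reflective
                    args=(x, obs, (1.0, 1.0)))
print(fit.x)                             # ~ (2.0, 3.0)
```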
### NGC 1808

The maps of the residuals in the continuum and CO absorption features (see Figs. A.1 and A.2) show that the extinction is well constrained to a low value and that several ages provide a good fit, with the deepest and widest minimum around the selected ages. We find similar ages for the populations in the disk and in the NSC, which supports the idea of a common star formation history. Two secondary solutions are observed at \(10^{8.3}\) and \(10^{8.5}\) yr, corresponding to the late apparition of red giants from \(2-3.5\) M\({}_{\odot}\) stars, which briefly make the NIR spectrum similar to that of a young stellar population.

A large NSC surrounded by a compact thick disk inclined by \(i\sim 45^{\circ}\) and oriented along \(PA\sim 230^{\circ}\) is the best model for NGC 1808. Although a larger disk radius and mass can improve the LOSV map fit, this leads to a poor fit for the \(\sigma_{v}\) map and a slightly poorer fit for the continuum images. It is possible that two disk-like structures coexist, but for modeling the inner region, we focus on the compact disk solution. The model offers a relatively good fit to all observations, with some noticeable differences. The colors and morphology of the central bright continuum source match the observations, but its size is hard to estimate due to the limited angular resolution. The CO absorption features are well represented. The LOSV is slightly underestimated, and our model predicts a maximum closer to the nucleus than actually observed, indicating that it is missing mass at large radii. However, the model correctly predicts the position of the maximum of \(\sigma_{v}\), as well as the central drop and its average value.

### NGC 1433

NGC 1433 shows no evidence of recent star formation, with the youngest populations estimated to be over 100 million years old in both the disk and the NSC. The spectra of both components are similar, and so are the residual maps in Figures A.4 and A.5, despite the NSC being more extincted. The best model features a massive NSC of \(\gtrsim 10^{9}\) M\({}_{\odot}\) with a radius of \(\sim 25\) pc surrounded by an extended, almost face-on disk of \(\sim 300\) pc in diameter with a height-to-radius ratio of \(\sim 0.25\) and an inclination of \(\lesssim 10^{\circ}\). The model offers a fair representation of all observables; in particular, the CO absorption lines, continuum images, and LOSV map match the observations well. The \(\sigma_{v}\) map geometry is less convincing, but is still reasonable: the observed \(\sigma_{v}\) map is mostly featureless, and the model accurately reproduces its average value, which is relatively high. The model would predict a slight drop around the NSC, but this is not observed.

### NGC 1566

The best model for NGC 1566 reveals recent star formation in both the NSC and the circumnuclear disk. The residual maps in Figures A.7 and A.8 indicate that star formation in the disk occurred very recently or is ongoing, with minimum residuals for ages of less than 3 Myr. The NSC also shows evidence of recent star formation, with ages younger than 10 Myr and a minimum between 3 and 10 Myr. The extinction values are well constrained and higher compared to the other two objects, particularly in the NSC, which supports the proposed recent star formation in these structures.

\begin{table} \begin{tabular}{c|c c} \hline \hline \(M_{NSC}\) & \(9.3\,\pm\,0.4\) & \(\log(\)M\({}_{\odot}\)) \\ \hline \(M_{disk}\) & \(10.4\,\pm\,0.25\) & \(\log(\)M\({}_{\odot}\)) \\ \hline \(i\) & \(2.3\,\pm\,2.2\) & \({}^{\circ}\) \\ \hline \(PA\) & \(335.1\,\pm\,54.5\) & \({}^{\circ}\) \\ \hline \(b_{NSC}\) & \(35.8\,\pm\,8.9\) & pc \\ \hline \(a_{disk}\) & \(200.0\,\pm\,17.5\) & pc \\ \hline \((h/r)_{disk}\) & \(0.23\,\pm\,0.06\) & \\ \hline \end{tabular} \end{table} Table 4: NGC 1433: Mass and geometry parameters

\begin{table} \begin{tabular}{c|c c} \hline \hline \(\log(age_{Y,NSC})\) & \(8.6\) & \(\log(\)yr) \\ \hline \(\log(age_{O,NSC})\) & \(9.8\) & \(\log(\)yr) \\ \hline \(M_{Y,NSC}/M_{O,NSC}\) & \(0.11\) & \\ \hline \(A_{K,NSC}\) & \(0.60\) & \\ \hline \(\log(age_{Y,disk})\) & \(8.2\) & \(\log(\)yr) \\ \hline \(\log(age_{O,disk})\) & \(9.8\) & \(\log(\)yr) \\ \hline \(M_{Y,disk}/M_{O,disk}\) & \(0.04\) & \\ \hline \(A_{K,disk}\) & \(0.25\) & \\ \hline \end{tabular} \end{table} Table 3: NGC 1433: Results of the age-retrieval procedure

Figure 2: NGC 1808: Observation (right) and best model (left). From top to bottom: averaged CO absorption lines over the entire FoV, H-band image, LOSV, and LOSV dispersion.
The NSC in NGC 1566 is compact and relatively massive, and the disk has a similar geometry to that of NGC 1433, with a size of \(\sim 150\) pc, an aspect ratio of \((h/r)_{disk}\sim 0.25\), and a nearly face-on orientation (\(i\lesssim 10^{\circ}\)). The central bright source of continuum emission is well represented by the model, but the depth of the CO absorption features is underestimated. This could be due to a large LOSV dispersion (\(\sigma_{v}\)), imprecision in determining the ages of the stars, or imperfections in the stellar templates. The observed LOSV map is well reproduced by the model, but the \(\sigma_{v}\) map is more difficult to compare, as the observed map is mostly uniform. However, the model accurately represents the average value of the \(\sigma_{v}\).

In order to obtain a better fit of the CO absorption depth, we tried to constrain the ages of the stellar population in the disk and/or in the NSC to be older than 10 Myr. This slightly increased the CO depth. However, the improvement was minor, and this constraint had a negative impact on the rest of the fit: the continuum was not well reproduced by the template, making the fitting procedure converge toward a solution with a very large NSC, replacing part of the disk.

\begin{table} \begin{tabular}{c|c c} \hline \hline \(M_{NSC}\) & \(8.5\,\pm\,0.4\) & \(\log(\)M\({}_{\odot}\)) \\ \hline \(M_{disk}\) & \(10.1\,\pm\,0.3\) & \(\log(\)M\({}_{\odot}\)) \\ \hline \(i\) & \(5.4\,\pm\,5.0\) & \({}^{\circ}\) \\ \hline \(PA\) & \(322.1\,\pm\,40.3\) & \({}^{\circ}\) \\ \hline \(b_{NSC}\) & \(7.9\,\pm\,2.0\) & pc \\ \hline \(a_{disk}\) & \(110.0\,\pm\,28.9\) & pc \\ \hline \((h/r)_{disk}\) & \(0.22\,\pm\,0.05\) & \\ \hline \end{tabular} \end{table} Table 6: NGC 1566: Mass and geometry parameters

Figure 4: NGC 1566: Observation (right) and best model (left). From top to bottom: averaged CO absorption lines over the entire FoV, H-band image, LOSV, and LOSV dispersion.

Figure 3: NGC 1433: Observation (right) and best model (left). From top to bottom: averaged CO absorption lines over the entire FoV, H-band image, LOSV, and LOSV dispersion.

\begin{table} \begin{tabular}{c|c c} \hline \hline \(\log(age_{Y,NSC})\) & \(6.6\) & \(\log(\)yr) \\ \hline \(\log(age_{O,NSC})\) & \(9.8\) & \(\log(\)yr) \\ \hline \(M_{Y,NSC}/M_{O,NSC}\) & \(0.08\) & \\ \hline \(A_{K,NSC}\) & \(0.90\) & \\ \hline \(\log(age_{Y,disk})\) & \(6.1\) & \(\log(\)yr) \\ \hline \(\log(age_{O,disk})\) & \(9.2\) & \(\log(\)yr) \\ \hline \(M_{Y,disk}/M_{O,disk}\) & \(0.10\) & \\ \hline \(A_{K,disk}\) & \(0.40\) & \\ \hline \end{tabular} \end{table} Table 5: NGC 1566: Results of the age-retrieval procedure

## 6 Discussion

### Age and extinction

By fitting the low-resolution continuum and CO absorption features, we determined ages and extinctions for the stellar populations of the two components solely based on the SINFONI observations. For NGC 1808, we find evidence of past star formation with young stellar populations of ages between 10 and 50 Myr. These estimates are in good agreement with previous results, which all point toward the presence of recent stellar populations in the central region: Kotilainen et al. (1996) also estimate the age of the stellar populations in the central region to be between \(\sim 10\) and \(\sim 40\) Myr from NIR line and continuum analysis, Galliano et al. (2005) and Galliano & Alloin (2008) observed the presence of a very embedded young stellar cluster in the MIR, and Busch et al.
(2017) measured the age of the 1 kpc circumnuclear ring to range between 5 and 8 Myr (the age of the central cluster is not given, but it is described as having experienced less recent star formation). For NGC 1433, we find no evidence of star formation in the last 100 Myr. This result is supported by previous publications on the object (Cid Fernandes et al., 1998; Sanchez-Blazquez et al., 2011; Smajic et al., 2014), which do not detect any significant star formation or extinction outside the 1 kpc circumnuclear ring. For NGC 1566, we find evidence of very recent or ongoing star formation, both in the NSC and in the 150 pc ring. This result is not in strong agreement with previous publications, which tend to attribute a significant portion of the nuclear featureless continuum to AGN activity (accretion disk + hot dust) instead of hot stars (Smajic et al., 2015; da Silva et al., 2017). This contribution from the AGN is supported by the observation of associated emission lines, and is not taken into account in our model, which therefore probably overestimates the contribution from young stars. However, these latter two papers still point toward the presence of recent star formation: Smajic et al. (2015) measure significant star formation and supernova rates through \(Br_{\gamma}\) and [Fe II] emission lines, and the stellar spectral synthesis done with the STARLIGHT software in da Silva et al. (2017) attributes a major part of the featureless continuum to a young stellar population. The AGN interpretation alone cannot explain the observations, in particular the spatial extent of the continuum source, the nondiluted nuclear CO absorption features, and the continuum in the circumnuclear disk. The stellar population interpretation can explain most of the SINFONI observations, but does not take into account observations at other wavelengths, which reliably identify an AGN component. The nucleus of NGC 1566 most probably hosts both phenomena, and the contribution of a young stellar population predicted by our model should be interpreted as an upper limit. Overall, our method to determine the age of the young stellar populations in galactic centers provides good results. For NGC 1808 and NGC 1433, the ages obtained are fully consistent with observations and previous publications. For NGC 1566, the depth of the CO absorption features is not well reproduced by our model, and the contribution of young stellar populations might be overestimated because of the presence of the featureless AGN continuum. ### Supernova rates Figures 5 and 6 show the supernova rate maps for NGC 1808 and NGC 1566, respectively, obtained from (1) our modeling and (2, 3) the [Fe II] scaling relations from Rosenberg et al. (2012). For NGC 1433, our model does not predict any supernova rate, and no [Fe II] emission line is detected. For NGC 1808, the total supernova rate in the considered field of view amounts to 0.038, 0.028, and 0.007 supernovae per year with the three estimators. The relative distribution between the NSC and the disk is also well reproduced (see Table 2). The consistency between the estimate from our modeling and the [Fe II] emission lines confirms the age and mass deduced from our model for the young stellar population. According to Fig. 1, the stellar population is reaching the end of its supernova episode.
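For illustration, the [Fe II]-based estimator amounts to scaling a line-luminosity map by a calibration factor. The sketch below shows this pipeline; the coefficient `SN_PER_LFEII` is a placeholder, not the actual Rosenberg et al. (2012) calibration, and the distance conversion is a standard luminosity computation.

```python
import numpy as np

# Placeholder coefficient: supernovae per year per unit [Fe II] luminosity.
# The actual Rosenberg et al. (2012) calibration should be substituted here.
SN_PER_LFEII = 1.0e-41  # [yr^-1 / (erg s^-1)], hypothetical value

def sn_rate_map(feii_flux, distance_mpc):
    """Convert an [Fe II] flux map [erg s^-1 cm^-2 per pixel] into a
    supernova-rate map [yr^-1 per pixel] via L = 4 pi d^2 F."""
    d_cm = distance_mpc * 3.0857e24           # Mpc -> cm
    luminosity = 4.0 * np.pi * d_cm**2 * feii_flux
    return SN_PER_LFEII * luminosity
```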
For NGC 1566, the results are very similar, but with a slightly larger discrepancy between our modeling and the [Fe II] estimators, in particular for the NSC, where the difference between methods (1) and (3) is of one order of magnitude. However, even with the lowest estimator, the supernova rate is very significant in the central region, which supports the diagnostic of a young stellar population discussed above. According to Fig. 1 and our very young age estimates, this corresponds to the beginning of the supernova episode for this stellar population, and the high supernova rate should be sustained for several tens of millions of years. The absence of [Fe II] emission in NGC 1433 is consistent with the prediction of our model, which does not detect any stellar population younger than 100 Myr and so does not predict any core-collapse supernovae. Overall, the estimates of the supernova rate are of the same magnitude with the three methods. The consistency (same order of magnitude) between the estimate from our modeling and the direct [Fe II] measurements confirms the age determination we performed based on the CO and continuum. Figure 5: NGC 1808: 2D maps of the supernova rate (number of supernovae per year and per pixel): (top) as estimated from our modeling and (bottom) as estimated from the [Fe II] emission line with the Rosenberg et al. (2012) conversion factor. ### Mass and geometry The simultaneous fit of the continuum emission, LOSV, and \(\sigma_{v}\) allows us to obtain an estimate of the 3D geometry of the object, in particular the inclination and scale height of the disk. For NGC 1808, the model converges toward a rather compact and very thick disk inclined by \(\sim 50^{\circ}\). The inclination found for the disk is fully consistent with values derived from the geometry of larger scale structures, such as the 1 kpc starburst ring or the de Vaucouleurs profile of the galaxy. With this inclination, the high aspect ratio of the disk (\(h/r\sim 0.65\)) is imposed by the ratio of the LOSV to the \(\sigma_{v}\) maps. The mass required to explain both the LOSV and \(\sigma_{v}\) with this inclination is consistent with the observed continuum emission flux given the assumed stellar populations. This consistent description of the four observables gives us confidence that such a compact thick disk is present around the NSC. Nevertheless, as already mentioned, a more standard, larger and flatter disk is also probably present. The geometry of our model for the disks of NGC 1433 and NGC 1566 consists of a larger and flatter disk seen almost face-on. The inclinations differ significantly from the estimates found from a large-scale fit of the galaxies, which indicate inclinations of \(\sim 30^{\circ}\) for both objects from HI geometry and kinematics (Elagali et al. 2019; Ryder et al. 1996, respectively). However, as noted in these latter papers, inclination is difficult to measure accurately so close to a face-on orientation, especially considering that strongly barred late-type galaxies could have an intrinsic elongation on average. Both results (from large-scale HI and small-scale stellar content) could be simultaneously correct in a tilted disk configuration. ## 7 Conclusions We used SINFONI spatially resolved spectroscopic observations of three nearby galactic nuclei (NGC 1433, NGC 1566, and NGC 1808) to derive the properties of the stellar populations in their central few hundred parsecs.
Our main goal is to determine the age, mass, and 3D geometry of the nuclear star cluster (NSC) and the surrounding extended thick stellar disk. To achieve this, we propose a method that uses single stellar population (SSP) spectra to determine the age of the stellar populations and to construct synthetic observations for our model. We simultaneously fitted the spatially resolved line-of-sight velocity (LOSV) and its dispersion (\(\sigma_{v}\)), as well as the low-spectral-resolution NIR continuum and high-spectral-resolution CO absorption features, for each pixel of the datacube in order to overcome common degeneracies between key parameters. We determined the ages of the various stellar populations by fitting the low-resolution continuum emission and CO absorption features with UMISSP, an SSP library developed specifically for this purpose. We then determined the 3D mass distribution of the stellar populations with a two-component model composed of a Plummer NSC surrounded by a Miyamoto-Nagai disk, which produces synthetic continuum and CO datacubes, as well as LOSV and \(\sigma_{v}\) maps, that we directly compare to observations. For each object, we obtain a model that is consistent with all the SINFONI observables and is in general agreement with previously published studies: * NGC 1808 experienced relatively recent star formation -- that is, around 10 Myr ago in the NSC and 30 Myr ago in a surrounding compact thick disk. A strong supernova rate is suggested by both the [Fe II] flux and our age modeling. * NGC 1433 has not experienced recent star formation. It hosts the most massive NSC of the sample, with \(M_{NSC}\sim 1.3\times 10^{9}\) M\({}_{\odot}\), and is surrounded by a large and flat disk seen almost face-on. No significant supernova rate is measured. * NGC 1566 has recently experienced or is currently experiencing a starburst, both in the disk and in the central NSC. Its geometry is similar to that of NGC 1433, with a massive NSC surrounded by a large and flat disk seen almost face-on. The nucleus is at the beginning of a long supernova episode. For the three objects, we find good agreement between the supernova rate estimated with our model and that estimated from [Fe II] scaling relations, confirming the age and mass deduced from our model for the young stellar population. We conclude that, based on an IFU observation alone, our method can be used to characterize the properties of the stellar populations in the central few hundred parsecs of nearby galactic nuclei. Figure 6: NGC 1566: 2D maps of the supernova rate (number of supernovae per year and per pixel): (top) as estimated from our modeling and (bottom) as estimated from the [Fe II] emission line with the Rosenberg et al. (2012) conversion factor. ###### Acknowledgements. We thank the anonymous referee for their useful comments. This work was made possible by the support of the international collaboration in astronomy (ASU mobility) under number CZ.02.2.69/0.0/0.0/18/53/0016972 and the institutional project RVO:67985815. ASU mobility is co-financed by the European Union.
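To make the two-component mass model named in the conclusions concrete, the following minimal sketch evaluates the circular-velocity curve of a Plummer sphere plus a Miyamoto-Nagai disk using their standard potentials; the NGC 1433 parameter values are taken from Tables 3 and 4, and the mid-plane evaluation and the choice of disk scale height are our simplifying assumptions.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def v_circ_plummer(r_kpc, m_sun, b_kpc):
    """Plummer sphere: v_c^2 = G M r^2 / (r^2 + b^2)^(3/2)."""
    return np.sqrt(G * m_sun * r_kpc**2 / (r_kpc**2 + b_kpc**2) ** 1.5)

def v_circ_miyamoto_nagai(r_kpc, m_sun, a_kpc, b_kpc, z_kpc=0.0):
    """Miyamoto-Nagai disk, evaluated in the mid-plane by default:
    Phi = -G M / sqrt(R^2 + (a + sqrt(z^2 + b^2))^2)."""
    s = a_kpc + np.sqrt(z_kpc**2 + b_kpc**2)
    return np.sqrt(G * m_sun * r_kpc**2 / (r_kpc**2 + s**2) ** 1.5)

# NGC 1433 best-fit values (Tables 3 and 4); the MN scale height is set to
# (h/r)_disk * a_disk as an illustrative assumption.
r = np.linspace(0.005, 0.3, 100)                        # radius [kpc]
v_nsc = v_circ_plummer(r, 10**9.3, 0.0358)
v_disk = v_circ_miyamoto_nagai(r, 10**10.4, 0.200, 0.23 * 0.200)
v_tot = np.sqrt(v_nsc**2 + v_disk**2)                   # quadrature sum
```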
2303.14840
On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks
Learning-based methods to solve dense 3D vision problems typically train on 3D sensor data. The respectively used principle of measuring distances provides advantages and drawbacks. These are typically not compared nor discussed in the literature due to a lack of multi-modal datasets. Texture-less regions are problematic for structure from motion and stereo, reflective material poses issues for active sensing, and distances for translucent objects are intricate to measure with existing hardware. Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities. These effects remain unnoticed if the sensor measurement is considered as ground truth during the evaluation. This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction. We rigorously show the significant impact of sensor characteristics on the learned predictions and notice generalisation issues arising from various technologies in everyday household environments. For evaluation, we introduce a carefully designed dataset\footnote{dataset available at https://github.com/Junggy/HAMMER-dataset} comprising measurements from commodity sensors, namely D-ToF, I-ToF, passive/active stereo, and monocular RGB+P. Our study quantifies the considerable sensor noise impact and paves the way to improved dense vision estimates and targeted data fusion.
HyunJun Jung, Patrick Ruhkamp, Guangyao Zhai, Nikolas Brasch, Yitong Li, Yannick Verdie, Jifei Song, Yiren Zhou, Anil Armagan, Slobodan Ilic, Ales Leonardis, Nassir Navab, Benjamin Busam
2023-03-26T22:32:44Z
http://arxiv.org/abs/2303.14840v1
# On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks ###### Abstract Learning-based methods to solve dense 3D vision problems typically train on 3D sensor data. The respectively used principle of measuring distances provides advantages and drawbacks. These are typically not compared nor discussed in the literature due to a lack of multi-modal datasets. Texture-less regions are problematic for structure from motion and stereo, reflective material poses issues for active sensing, and distances for translucent objects are intricate to measure with existing hardware. Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities. These effects remain unnoticed if the sensor measurement is considered as ground truth during the evaluation. This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction. We rigorously show the significant impact of sensor characteristics on the learned predictions and notice generalisation issues arising from various technologies in everyday household environments. For evaluation, we introduce a carefully designed dataset1 comprising measurements from commodity sensors, namely D-ToF, I-ToF, passive/active stereo, and monocular RGB+P. Our study quantifies the considerable sensor noise impact and paves the way to improved dense vision estimates and targeted data fusion. Footnote 1: dataset available at [https://github.com/Junggy/HAMMER-dataset](https://github.com/Junggy/HAMMER-dataset) \({}^{1}\) Technical University of Munich, \({}^{2}\) 3Dwe.ai, \({}^{3}\) Huawei Noah's Ark Lab, \({}^{4}\) Siemens AG, \({}^{*}\) Equal Contribution [email protected], [email protected], [email protected], [email protected] ## 1 Introduction Our world is 3D. Distance measurements are essential for machines to understand and interact with our environment spatially. Autonomous vehicles [23, 30, 50, 58] need this information to drive safely, robot vision requires distance information to manipulate objects [15, 62, 72, 73], and AR realism benefits from spatial understanding [6, 31]. A variety of sensor modalities and depth prediction pipelines exist. The computer vision community thereby benefits from a wide diversity of publicly available datasets [23, 51, 52, 57, 60, 61, 65], which allow for the evaluation of depth estimation pipelines. Depending on the setup, different sensors are chosen to provide ground truth (GT) depth maps, all of which have their respective advantages and drawbacks determined by their individual principle of distance reasoning. Pipelines are usually trained on the data without questioning the nature of the depth sensor used for supervision and do not reflect areas of high or low confidence of the GT. Popular **passive sensor** setups include multi-view stereo cameras where the known or calibrated spatial relationship between them is used for depth reasoning [51]. Figure 1: Other datasets for dense 3D vision tasks reconstruct the scene as a whole in one pass [8, 12, 56], resulting in low quality and accuracy (cf. red boxes). On the contrary, our dataset scans the background and every object in the scene separately a priori and annotates them as dense and high-quality 3D meshes. Together with precise camera extrinsics from robotic forward kinematics, this enables a fully dense rendered depth as accurate pixel-wise ground truth with multi-modal sensor data, such as RGB with polarization, D-ToF, I-ToF and Active Stereo. Hence, it allows quantifying different downstream 3D vision tasks such as monocular depth estimation, novel view synthesis, or 6D object pose estimation.
Corresponding image parts or patches are photometrically or structurally associated, and geometry allows triangulating points within an overlapping field of view. Such photometric cues are not reliable in low-textured areas and with little ambient light, where **active sensing** can be beneficial [52, 57]. Active stereo can be used to artificially create texture cues in low-textured areas, and photon pulses with a given sampling rate are used in Time-of-Flight (ToF) setups, either directly (D-ToF) or indirectly (I-ToF) [26]. Knowing the speed of light, one can measure the distance of objects from the return time of the light pulse, but unwanted multi-reflection artifacts also arise. Reflective and translucent materials are measured at incorrect far distances, and multiple light bounces distort measurements in corners and edges. While ToF signals can still be aggregated for dense depth maps, a similar setup is used with LiDAR sensors, which sparsely measure the distance using coordinated rays that bounce off objects in the surroundings. The latter provides ground truth, for instance, for the popular outdoor driving benchmark KITTI [23]. While LiDAR sensing can be costly, radar [21] provides an even sparser but more affordable alternative. **Multiple modalities** can also be fused to enhance distance estimates. A common issue, however, is the inherent problem of warping onto a common reference frame, which requires depth information itself [27, 37]. While multi-modal setups have been used to further enhance monocular depth estimation using self-supervision from stereo and temporal cues [60, 25], their performance analysis is mainly limited to average errors and restricted by the individual sensor used. An unconstrained analysis of depth in terms of RMSE compared against a GT sensor only shows part of the picture, as different sensing modalities may suffer from different drawbacks. Where are the drawbacks of current depth-sensing modalities, and how does this impact pipelines trained with this (potentially partly erroneous) data? Can self- or semi-supervision overcome some of these limitations? To objectively investigate these questions, we provide multi-modal sensor data as well as highly accurate annotated depth so that one can analyse the deterioration of popular monocular depth estimation and 3D reconstruction methods (see Fig. 1) on areas of different photometric complexity and with varying structural and material properties while changing the sensor modality used for training. To quantify the impact of sensor characteristics, we build a unique camera rig comprising a set of the most popular indoor depth sensors and acquire synchronised captures with highly accurate ground truth data using 3D scanners and aligned renderings. To this end, our main contributions can be summarized as follows: 1. We question the measurement quality from commodity **depth sensor** modalities and analyse their **impact** as supervision signals for the dense 3D vision tasks of depth estimation and reconstruction. 2. We investigate performance on texture-varying material as well as **photometrically challenging** reflective, translucent and transparent **areas** where **learning methods** systematically **reproduce sensor errors**. 3.
To objectively assess and quantify different data sources, we contribute an **indoor dataset** comprising an unprecedented combination of **multi-modal sensors**, namely I-ToF, D-ToF, monocular RGB+P, monochrome stereo, and active light stereo, together with highly accurate ground truth. ## 2 Related Work ### Geometry from X A variety of sensor modalities have been used to obtain depth maps. Typical datasets comprise one ground truth sensor used for all acquisitions, which is assumed to give accurate enough data to validate the models: _Stereo Vision._ In the stereo literature, early approaches [51] use a pair of passive cameras and restrict scenes to piecewise planar objects for triangulation. Complex setups with an industrial robot and structured light can yield ground truth depth for stereo images [1]. Robots have also been used to annotate keypoints on transparent household objects [36]. As these methods are incapable of retrieving reliable depth in textureless areas where stereo matching fails, active sensors are used to project patterns onto the scenes to artificially create structure. The availability of active stereo sensors also makes it possible to acquire real indoor environments [52] where depth data at missing pixels is inpainted. Structure from motion (SfM) is used to generate the depth maps of Sun3D [65], where a moving camera acquires the scenes and data is fused ex post. A temporally tracked handheld active sensor is further used for depth mapping for SLAM evaluation in the pioneering dataset of Sturm et al. [57]. While advancing the field, its depth maps are limited to the active IR pattern used by its RGB-D sensor. _Time-of-Flight Sensors._ Further advances in active depth sensing place more emphasis on ToF. Initial investigations focus on simulated data [26] and controlled environments with little ambient noise [54]. The broader availability of ToF sensors in commercial products (e.g. the Microsoft Kinect series) and modern smartphones (e.g. I-ToF in the Huawei P30 Pro, D-ToF in the Apple iPhone 12) creates a line of research around curing the most common sensor errors. These are multi-path interference (MPI), motion artefacts, and a high level of sparsity and shot noise [27]. Aside from classical active and passive stereo, we therefore also include D-ToF and I-ToF modalities in all our experiments. _Polarimetric Cues._ Other properties of light are used to indirectly retrieve scene surface properties in the form of normals, for which the amount of linearly polarized light and its polarization direction provide information, especially for highly reflective and transparent objects [17, 29]. Initial investigations for shape from polarization mainly analyse controlled setups [3, 18, 53, 70]. More recent approaches also investigate sensor fusion methods [28], even in challenging scenes with strong ambient light [60]. We consequently also acquire RGB+P data for all scenes. _Synthetic Renderings._ In order to produce pixel-perfect ground truth, some scholars render synthetic scenes [40]. While this produces the best possible depth maps, the scenes are artificially created and lack realism, causing pipelines trained on Sintel [7] or SceneFlow [40] to suffer from a synthetic-to-real domain gap. In contrast, we follow a hybrid approach and leverage pixel-perfect synthetic data from modern 3D engines to adjust highly accurate 3D models to real captures.
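In the idealised noise-free case, the sensing principles compared above reduce to a handful of closed-form depth equations. The following minimal sketch (our illustration, not code from the paper) collects them:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified (active or passive) stereo: Z = f * B / d."""
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

def depth_from_dtof(round_trip_s):
    """Direct ToF: the light pulse travels to the surface and back."""
    return 0.5 * C * round_trip_s

def depth_from_itof(phase_rad, mod_freq_hz):
    """Indirect ToF: distance is encoded in the phase of a modulated signal
    and is unambiguous only within half a modulation wavelength."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)
```

Each formula also hints at the failure modes discussed above: disparity requires texture for matching, while both ToF variants assume a single direct light path, which breaks down under multi-reflection.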
### Monocular Depth Estimation Depth estimation from a single image is inherently ill-posed. Deep learning has enabled this task for real scenes. _Supervised Training._ Networks can learn to predict depth with supervised training. Eigen et al. [14] designed the first monocular depth estimation network by learning to predict coarse depth maps, which are then refined by a second network. Laina et al. [32] improved the latter model by using only convolutional layers in a single CNN. The required ground truth often limits these methods to outdoor scenarios [22]. A way of bypassing this is to use synthetic data [39]; the resulting domain gap can then be narrowed [26]. MiDaS [47] generalizes better to unknown scenes by mixing data from 3D movies. To predict high-resolution depth, most methods use multi-scale features or post-processing [41, 69], which complicates learning. If not trained on a massive set of data, these methods show limited generalization capabilities. _Self-Supervision._ Self-supervised monocular methods try to circumvent this issue. The first such methods [19, 66] propose to use stereo images to train a network for depth prediction. With this, the left image is warped into the right, where photometric consistency serves as the training signal. Monodepth [24] added a left-right consistency loss to mutually leverage warping from one image into the other. Even though depth quality improves, it requires synchronized image pairs. Monocular training methods have been developed that use only one camera, where frames in a video are leveraged for the warping with simultaneously estimated poses between them. This task is more intricate; however, Monodepth2 [25] reduces the accuracy gap between stereo and monocular training with automasking and a minimum reprojection loss. A large body of work further improves the task [10, 33, 46, 47, 55, 68] and investigates temporal consistency [38, 50, 64]. To compare the effect of various supervision signals for monocular depth estimation, we utilize the ResNet backbone of the popular Monodepth2 [25] together with its various training strategies. ### Reconstruction and Novel View Synthesis The 3D geometry of a scene can be reconstructed from 2D images and optionally their depth maps [43]. Scenes are stored explicitly or implicitly. Typical explicit representations include point clouds or meshes [11], while popular implicit representations are distance fields [71], which provide the scene as a level set of a given function, or neural fields, where the scene is stored in the weights of a network [67]. Figure 2: **Scanning Process Overview.** To extract highly accurate geometry, we design a multi-stage acquisition process. At first, 3D models are extracted with structured light 3D scanners (a). Scene objects (b) and the mounted sensor rig (c) are calibrated towards a robot for accurate camera pose retrieval [61]. A motion trajectory is recorded in gravity compensation mode (d) and repeated to record synchronized images of all involved sensors (e). A partial digital twin of the 3D scene (f) is aligned to small (g) and larger (h) objects to retrieve an entire in silico replica of the scene, which can be rendered from the camera views of each sensor used (i), resulting in highly accurate dense depth maps that enable investigations of individual sensor components. _NeRFs._ Due to their photorealism in novel view synthesis, recent advances around neural radiance fields (_NeRF_) [42] have attracted considerable attention. In this setup, one network is trained on a posed set of images to represent a scene. The method optimizes for the prediction of volume density and view-dependent emitted radiance within a volume.
Integration along query rays allows synthesizing novel views of static and deformable [44] scenes. The most notable recent advances extend the initial idea to unbounded scenes of higher quality with Mip-NeRF 360 [5] or factor the representation into low-rank components with TensoRF [9] for faster and more efficient usage. Robustness to pose estimates and calibration has also been addressed [34, 63]. While the initial training was computationally expensive, methods have been developed to improve inference and training. With spherical harmonics spaced in a voxel grid structure, Plenoxels [16] speeds up the process even without a neural network, and interpolation techniques [59] accelerate training. Geometric priors such as sparse and dense depth maps can regularize convergence and improve quality and training time [13, 49]. Besides recent works on the methods themselves, [48] propose to leverage real-world objects from crowdsourced videos on a category level to construct a dataset for evaluating novel view synthesis and category-centric 3D reconstruction methods. We make use of the most recent NeRF advances and analyse the impact of sensor-specific depth priors in [49] for the task of implicit scene reconstruction. To neglect the influence of pose estimates and produce highly accurate data, we leverage the robotic pose GT of our dataset. ## 3 Data Acquisition & Sensor Modalities We set up scenes composed of multiple objects with different shapes and materials to analyse sensor characteristics. 3D models of photometrically challenging objects with reflective or transparent surfaces are recorded with high quality a priori and aligned to the scenes. Images are captured with a synchronised multi-modal custom sensor rig mounted at a robot end-effector to allow for precise camera pose measurements [61]. High-quality rendered depth can be extracted a posteriori from the fully annotated scenes for the viewpoint of each sensor. The acquisition pipeline is depicted in Fig. 2. Previous 3D and depth acquisition setups [8, 12, 56] scan the scene as a whole, which limits the quality to that of the sensor used. We instead separately scan every single object, including chairs and background, as well as small household objects, a priori with two high-quality structured light object scanners. This process significantly pushes the annotation quality for the scenes, as the robotic 3D labelling process only has a point RMSE error of \(0.80\) mm [61]. For comparison, a Kinect Azure camera induces a standard deviation of \(17\) mm in its working range [35]. The accuracy allows us to investigate depth errors arising from sensor noise objectively, as shown in Fig. 3, while resolving common issues of imperfect meshes in available datasets (cf. Fig. 1, left). ### Sensor Setup & Hardware Description The table-top scanner (EinScan-SP, SHINING 3D Tech. Co., Ltd., Hangzhou, China) uses a rotating table and is designed for small objects. The other is a hand-held scanner (Artec Eva, Artec 3D, Luxembourg), which we use for larger objects and the background. For objects and areas with challenging material, self-vanishing 3D scanning spray (AESUB Blue) is used. For larger texture-less areas such as tables and walls, we temporarily attach small markers [20] to the surface to allow for relocalization of the 3D scanner. The robotic manipulator is a KUKA LBR iiwa 7 R800 (KUKA Roboter GmbH, Germany) with a position accuracy of \(\pm 0.1\) mm. We validated this during our pivot calibration stage (Fig.
2 b) by calculating the 3D location of the tool tip (using forward kinematics and hand-tip calibration) while varying the robot poses. The position varied within \([-0.158,0.125]\) mm, in line with this specification. Our dataset features a unique multi-modal setup with four different cameras, which provide four types of input images (RGB, polarization, stereo, indirect ToF (I-ToF) correlation) and three different depth image modalities (direct ToF (D-ToF), I-ToF, active stereo). RGB and polarization images are acquired with a Phoenix 5.0 MP Polarization camera (PHX050S1-QC, LUCID Vision Labs, Canada) equipped with a Sony Polarsens sensor (IMX264MYR CMOS, Sony, Japan). To acquire stereo images, we use an Intel RealSense D435 (Intel, USA) with the infrared projector switched off. Depth is acquired from an Intel RealSense L515 D-ToF sensor, an Intel RealSense D435 active stereo sensor with infrared pattern projection, and a Lucid Helios (HLS003S-001, LUCID Vision Labs, Canada) I-ToF sensor. A Raspberry Pi triggers each camera separately to remove interference effects between the infrared signals of the depth sensors. The hardware is rigidly mounted at the robot end-effector (see Fig. 4), which allows stopping frame-by-frame for the synchronized acquisition of a pre-recorded trajectory. Figure 3: **Data Quality.** A full 3D reconstruction of the RGB scene (left) allows rendering highly accurate depth maps from arbitrary views. These serve as GT to study sensor errors of various depth sensors for different scene structures (right). E.g., due to the measurement principle, the translucent glass becomes invisible for the ToF sensors. ### Scene Statistics & Data Comparison We scanned 7 indoor areas, 6 tables, and 4 chairs with the handheld scanner as backgrounds and large objects. 64 household objects from 9 categories (bottle, can, cup, cutlery, glass, remote, teapot, tube, shoe) are scanned with the tabletop structured light scanner. The data comprises 13 scenes, split into 10 scenes for training and 3 scenes for testing. Each scene is recorded with 2 trajectories of 200-300 frames, with and without the objects. This sums up to 800-1200 frames per scene, with a total of 10k frames for training and 3k frames for our test set. The 3 test scenes have different background setups: 1) seen background, 2) seen background with different lighting conditions, and 3) unseen background and table, with three different object setups: 1) seen objects, 2) unseen objects from seen categories, 3) unseen objects from unseen categories (shoe and tube). Table 1 compares our dataset with various existing setups. To the best of our knowledge, our dataset is the only multi-modal dataset comprising RGB, ToF, stereo, active stereo, and polarisation modalities simultaneously with reliable ground truth depth maps. ## 4 Methodology The dataset described above allows, for the first time, a rigorous, in-depth analysis of different depth sensor modalities and a detailed quantitative evaluation of learning-based dense scene regression methods when trained with varying supervision signals. We focus on the popular tasks of monocular depth estimation and implicit 3D reconstruction with the application of novel view synthesis. ### Depth Estimation To train depth estimation from a single image, we leverage the widely adopted architecture from [25]. We train an encoder-decoder network with a ResNet18 encoder and skip connections to regress dense depth.
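As an illustration of this kind of architecture, the following sketch builds a single-scale variant with a torchvision ResNet18 encoder; the decoder layout and the `max_depth` output bound are our own simplifications rather than the exact Monodepth2 configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class DepthNet(nn.Module):
    """ResNet18 encoder with a skip-connected decoder regressing dense depth."""
    def __init__(self, max_depth=10.0):
        super().__init__()
        r = torchvision.models.resnet18(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)              # 1/2, 64 ch
        self.enc = nn.ModuleList([nn.Sequential(r.maxpool, r.layer1),  # 1/4, 64
                                  r.layer2,                            # 1/8, 128
                                  r.layer3,                            # 1/16, 256
                                  r.layer4])                           # 1/32, 512
        ch = [64, 64, 128, 256, 512]
        self.dec = nn.ModuleList(
            [nn.Conv2d(ch[i + 1] + ch[i], ch[i], 3, padding=1) for i in range(4)])
        self.head = nn.Conv2d(64, 1, 3, padding=1)
        self.max_depth = max_depth

    def forward(self, x):                       # H, W divisible by 32
        feats = [self.stem(x)]
        for stage in self.enc:
            feats.append(stage(feats[-1]))
        y = feats[-1]
        for i in range(3, -1, -1):              # coarse-to-fine with skips
            y = F.interpolate(y, scale_factor=2, mode="nearest")
            y = F.relu(self.dec[i](torch.cat([y, feats[i]], dim=1)))
        y = F.interpolate(y, scale_factor=2, mode="nearest")           # input res
        return self.max_depth * torch.sigmoid(self.head(y))           # (0, max)
```

The sigmoid-bounded output is one common design choice; Monodepth2 itself predicts scaled disparity at four pyramid scales, which the losses below consume after upsampling.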
Using different supervision signals from varying depth modalities allows us to study the influence and the characteristics of the 3D sensors. Additionally, we analyze whether complementary semi-supervision, via the relative pose between monocular acquisitions and consecutive image information from the moving camera, can overcome sensor issues. We further investigate the influence of the network design on the prediction quality for the supervised case. For this, we train two high-capacity networks with transformer backbones on our data, namely DPT [46] and MiDaS [47]. _Dense Supervision._ In the fully supervised setup, depth modalities from the dataset are used to supervise the prediction of the four pyramid level outputs after upsampling to the original input resolution with: \(\mathcal{L}_{\text{supervised}}=\sum_{i=1}^{4}\left\|\widetilde{D}_{i}-D\right\|_{1}\), where \(D\) is the supervision signal for valid pixels of the depth map and \(\widetilde{D}_{i}\) the predicted depth at pyramid scale \(i\). _Self-Supervision._ Depth and relative pose prediction between consecutive frames of a moving camera can be formulated as a coupled optimization problem. We follow established methods to formulate a dense image reconstruction loss through projective geometric warping [25]. In this process, a temporal image \(I_{t^{\prime}}\) at time \(t^{\prime}\) is projectively transformed to the frame at time \(t\) via: \(I_{t^{\prime}\to t}=I_{t^{\prime}}\Big{\langle}\text{proj}(D_{t},T_{t\to t^{\prime}},K)\Big{\rangle}\), where \(D_{t}\) is the predicted depth for frame \(t\), \(T_{t\to t^{\prime}}\) the relative camera pose, and \(K\) the camera intrinsics. The photometric reconstruction error [25, 50, 64] between images \(I_{x}\) and \(I_{y}\), given by \(E_{\text{pe}}(I_{x},I_{y})=\alpha\frac{1-\text{SSIM}(I_{x},I_{y})}{2}+(1-\alpha)\left\|I_{x}-I_{y}\right\|_{1}\), is computed between the target frame \(I_{t}\) and each source frame \(I_{s}\) with \(s\in S\). The pixel-wise minimum error is retrieved to finally define \(\mathcal{L}_{\text{photo}}\) over \(S=[t-F,t+F]\) as \(\mathcal{L}_{\text{photo}}=\min_{s\in S}E_{\text{pe}}(I_{t},I_{s\to t})\). The edge-aware smoothness \(\mathcal{L}_{\text{s}}\) is applied [25] to encourage locally smooth depth estimates with the mean-normalized inverse depth \(\overline{d_{t}}\) as \(\mathcal{L}_{\text{s}}=\left|\partial_{x}\overline{d_{t}}\right|e^{-\left|\partial_{x}I_{t}\right|}+\left|\partial_{y}\overline{d_{t}}\right|e^{-\left|\partial_{y}I_{t}\right|}\). The final training loss for the self-supervised setup is: \(\mathcal{L}_{\text{self-supervised}}=\mathcal{L}_{\text{photo}}+\lambda_{\text{s}}\cdot\mathcal{L}_{\text{s}}\). Figure 4: **Camera Rig and 3D Sensor Data.** The custom multi-modal sensor rig comprises depth sensors for I-ToF (top left), Stereo (lower left), D-ToF (lower right), and RGB-P (Polarization, top right). It is fixed to a robot end-effector (top) and a Raspberry Pi (right) triggers acquisition. _Semi-Supervision._ For the semi-supervised training, the ground truth relative camera pose is leveraged. The predicted depth estimate is used to formulate the photometric image reconstruction. We also enforce the smoothness loss as detailed above. _Data Fusion._ Despite providing high-accuracy ground truth, our annotation pipeline is time-consuming. One may ask whether this could not be done with multi-view data aggregation. We therefore compare the quality against the dense structure-from-motion method Kinect Fusion [43] and an approach for TSDF Fusion [74]. The synchronized sensor availability also allows us to investigate and improve sensor fusion pipelines. To illustrate the impact of high-quality GT for this task, we also train the recent raw ToF+RGB fusion network Wild-ToFu [27] on our dataset.
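A condensed sketch of the self-supervised objective above (per-pixel minimum reprojection with an SSIM-weighted photometric error plus edge-aware smoothness) is given below; the projective warping is assumed to have already produced the `warped_sources` \(I_{s\to t}\), and \(\alpha=0.85\) follows common practice rather than a value stated here.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean-pooled SSIM over 3x3 windows (Monodepth-style approximation)."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sx = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sy = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sxy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sxy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sx + sy + c2)
    return (num / den).clamp(0, 1)

def photometric_error(pred, target, alpha=0.85):
    """E_pe = alpha * (1 - SSIM)/2 + (1 - alpha) * L1, per pixel."""
    l1 = (pred - target).abs().mean(1, keepdim=True)
    s = ssim(pred, target).mean(1, keepdim=True)
    return alpha * (1 - s) / 2 + (1 - alpha) * l1

def self_supervised_loss(target, warped_sources, disp, img, lambda_s=1e-3):
    """Per-pixel minimum reprojection over sources + edge-aware smoothness."""
    errs = torch.stack([photometric_error(w, target) for w in warped_sources])
    photo = errs.min(dim=0).values.mean()
    d = disp / (disp.mean(2, True).mean(3, True) + 1e-7)   # mean-normalised
    dx = (d[..., :, 1:] - d[..., :, :-1]).abs()
    dy = (d[..., 1:, :] - d[..., :-1, :]).abs()
    ix = (img[..., :, 1:] - img[..., :, :-1]).abs().mean(1, True)
    iy = (img[..., 1:, :] - img[..., :-1, :]).abs().mean(1, True)
    smooth = (dx * torch.exp(-ix)).mean() + (dy * torch.exp(-iy)).mean()
    return photo + lambda_s * smooth
```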
### Implicit 3D Reconstruction Recent work on implicit 3D scene reconstruction leverages neural radiance fields (NeRF) [42]. The technique works particularly well for novel view synthesis and allows rendering scene geometry or RGB views from unobserved viewpoints. Providing additional depth supervision regularizes the problem, such that fewer views are required and training efficiency is increased [13, 49]. We follow the motivation of [49] and leverage different depth modalities to serve as additional depth supervision for novel view synthesis. Following the NeRF literature [42, 49], we encode the radiance field for a scene in an MLP \(F_{\theta}\) to predict colour \(\mathbf{C}=[r,g,b]\) and volume density \(\sigma\) for some 3D position \(\mathbf{x}\in\mathbb{R}^{3}\) and viewing direction \(\mathbf{d}\in\mathbb{S}^{2}\). We use the positional encoding from [49]. For each pixel, a ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\) from the camera origin \(\mathbf{o}\) is sampled through the volume at locations \(t_{k}\in[t_{n},t_{f}]\) between the near and far planes by querying \(F_{\theta}\) to obtain colour and density: \(\hat{\mathbf{C}}(\mathbf{r})=\sum_{k=1}^{K}w_{k}\mathbf{c}_{k}\) with \(w_{k}=T_{k}\left(1-\exp(-\sigma_{k}\delta_{k})\right)\), \(T_{k}=\exp\left(-\sum_{k^{\prime}=1}^{k-1}\sigma_{k^{\prime}}\delta_{k^{\prime}}\right)\) and \(\delta_{k}=t_{k+1}-t_{k}\). The NeRF depth \(\hat{z}(\mathbf{r})\) is computed by: \(\hat{z}(\mathbf{r})=\sum_{k=1}^{K}w_{k}t_{k}\), and the depth regularization for an image with rays \(\mathcal{R}\) is: \(\mathcal{L}_{\text{D}}=\sum_{\mathbf{r}\in\mathcal{R}}\frac{|\hat{z}(\mathbf{r})-z(\mathbf{r})|}{\hat{z}(\mathbf{r})+z(\mathbf{r})}\), where \(z(\mathbf{r})\) is the depth of the sensor. Using the mean squared error (MSE) loss \(\mathcal{L}_{\text{colour}}=\text{MSE}(\hat{\mathbf{C}},\mathbf{C})\) for synthesized colours, the final training loss is: \(\mathcal{L}_{\text{NeRF}}=\mathcal{L}_{\text{colour}}+\lambda_{\text{D}}\cdot\mathcal{L}_{\text{D}}\).
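A compact sketch of this depth-regularised volume rendering follows, using the usual discrete cumulative-product form of \(T_{k}\); the tensor shapes and the validity mask for pixels without sensor depth are our assumptions, not details from the paper.

```python
import torch

def render_ray_batch(sigma, rgb, t_vals):
    """Render R rays with K samples each: sigma (R, K), rgb (R, K, 3), t_vals (R, K)."""
    # delta_k = t_{k+1} - t_k; pad the last interval with a large value
    delta = torch.cat([t_vals[:, 1:] - t_vals[:, :-1],
                       1e10 * torch.ones_like(t_vals[:, :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * delta)
    # transmittance excludes the current sample: T_k = exp(-sum_{k'<k} sigma*delta)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    w = trans * alpha                              # rendering weights w_k
    c_hat = (w.unsqueeze(-1) * rgb).sum(dim=1)     # synthesized colour
    z_hat = (w * t_vals).sum(dim=1)                # expected ray depth
    return c_hat, z_hat, w

def nerf_loss(c_hat, c_gt, z_hat, z_sensor, lam_d=0.1):
    """L = MSE(colour) + lam_d * |z_hat - z| / (z_hat + z), over valid rays only."""
    colour = ((c_hat - c_gt) ** 2).mean()
    valid = z_sensor > 0                           # sensor depth may be missing
    depth = ((z_hat - z_sensor).abs() / (z_hat + z_sensor + 1e-7))[valid].mean()
    return colour + lam_d * depth
```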
## 5 Sensor Impact for Dense 3D Vision Tasks We train a series of networks for the tasks of monocular depth estimation and implicit scene reconstruction. ### Depth Estimation Results for monocular depth estimation with varying training signals are summarized in Table 2 and Fig. 5. We report average results for the scenes and separate performances for background, objects, and materials of different photometric complexity. The error varies between background and objects, which can be explained by their varying photometric complexity. Not surprisingly, the ToF training is heavily influenced by reflective and transparent object material, whereas the active stereo camera can project some patterns onto diffusely reflective surfaces. Interestingly, the self- and semi-supervised setups help to recover information in these challenging setups to some extent, such that these cases even outperform the ToF supervision for photometrically challenging objects. In contrast, simpler structures (such as the background) benefit from the ToF supervision. This indicates that sensor-specific noise is learnt and reveals that systematic errors of learning approaches cannot be evaluated if such 3D devices are used for ground truth evaluation without critical analysis. This might ultimately lead to incorrect result interpretations, particularly if self-supervised approaches are evaluated against co-modality sensor data. \begin{table} \begin{tabular}{c|c c c c c c c c c c c} \hline \hline Dataset & Acc.GT & RGB & D-ToF & I-ToF & Stereo & Act.Stereo & Polar. & Indoor & Real & Video & Frames \\ \hline Agresti [2] & - & - & - & ✓ & - & - & - & ✓ & ✓ & - & \(113\) \\ CroMo [60] & - & - & - & ✓ & ✓ & ✓ & ✓ & (✓) & ✓ & ✓ & \(>\)10k \\ Zhu [75] & - & (✓) & - & - & - & - & ✓ & ✓ & ✓ & - & \(1\) \\ Sturm [57] & - & ✓ & - & - & - & - & - & ✓ & ✓ & ✓ & \(>\)10k \\ [28][45][4] & - & ✓ & - & - & - & - & ✓ & ✓ & ✓ & - & \(1/40/300\) \\ Guo [26] & ✓ & - & - & ✓ & - & - & - & ✓ & - & - & \(2000\) \\ **Ours** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(>\)10k \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison of Datasets**. Shown are differences between our dataset and previous multi-modal depth datasets for indoor environments. Our dataset is the only one that provides highly accurate GT (Depth, Surface Normals, 6D Object Poses, Instance Masks, Camera Poses, Dense Scene Mesh) together with varying sensor data for real scenes. \begin{table} \begin{tabular}{c l|l|c c|c c c} \hline \hline & Training Signal & Full & BG & Obj & Text. & Ref. & Transp. \\ \hline \multirow{3}{*}{Supervised} & I-ToF & 113.29 & 111.13 & 119.72 & 54.45 & 87.84 & 207.89 \\ & D-ToF & 77.97 & **69.87** & 112.83 & **37.88** & 71.59 & 207.85 \\ & Active Stereo & **72.20** & 71.94 & **61.13** & 50.90 & **52.43** & **87.24** \\ \hline \hline \multirow{3}{*}{Semi-/Self-} & Pose & **154.87** & **158.67** & **65.42** & **57.22** & **37.78** & 61.86 \\ & M & 180.34 & 183.65 & 85.51 & 84.26 & 48.80 & **49.62** \\ & M+S & 159.80 & 161.65 & 82.16 & 71.24 & 63.92 & 66.48 \\ \hline \hline \end{tabular} \end{table} Table 2: **Depth Prediction Results for Different Training Signals.** Top: Dense supervision from different depth modalities. Bottom: Evaluation of semi-supervised (pose GT) and self-supervised (mono and mono+stereo) training. The entire scene (Full), background (BG), and objects (Obj) are evaluated separately. Object material is further split into textured, reflective, and transparent. **Best** and 2nd best RMSE in mm are indicated. The table also discloses that the mutual prediction of inter-frame poses in self-supervised indoor setups is challenging, and accurate pose labels can have an immediate and significant impact on the depth results (Pose vs. M). Fig. 6 shows that multi-view data aggregation in the form of dense SfM fails to reproduce highly reliable 3D reconstructions. In particular, transparent and diffuse texture-less objects pose challenges to both active stereo and D-ToF. These can neither be recovered by the Kinect Fusion pipeline [43] nor by the TSDF Fusion implementation of Open3D [74], for which we use the GT camera poses. Inherent sensor artefacts are present even if depth maps from different viewpoints are combined. This quality advantage justifies our expensive annotation setup. We further analysed the results of training runs with DPT [46] and MiDaS [47], which we train from scratch.
While these more complex architectures with higher capacity show the same trend and also learn the sensor noise, the training time is significantly longer. More details are provided in the supplementary material. From the previous results, we have seen that ToF depth is problematic for translucent and reflective material. Fig. 7 illustrates that an additional co-modal input signal at test time can partly cure these effects. It can be observed that the use of additional RGB data in [27] reduces the influence of MPI and resolves some material-induced depth artefacts. Our unique dataset also inspires the development and objective analysis of cross-modal fusion pipelines. ### Implicit 3D Reconstruction & View Synthesis Our implicit 3D reconstruction generates novel views for depth, normals and RGB with varying quality. If trained with only colour information, the NeRF produces convincing RGB views with the highest PSNR (cf. Fig. 8 and Table 3). However, the 3D scene geometry is not well reconstructed. In line with the literature [13, 49], depth regularization improves this (e.g. on texture-less regions). Regularising with different depth modalities makes the sensor noise of I-ToF, AS, and D-ToF clearly visible. The RMSE behaves similarly to the monocular depth prediction results, with AS as the best, followed by D-ToF and I-ToF. The cosine similarity for surface normal estimates confirms this trend. The overall depth and normal reconstructions for AS are very noisy, but the depth error metrics are more sensitive to the large erroneous estimates on reflective and translucent objects. Prior artefacts of the respective sensor influence the NeRF and translate into incorrect reconstructions (e.g. errors from D-ToF and I-ToF for translucent material, or a noisy background and inaccurate depth discontinuities at edges for AS). Figure 5: **Fully Supervised Monocular Depth.** Monocular depth tends to overfit to the specific noise of the sensor the network is trained on. Predictions from active stereo GT are robust to the material but blurry, while both I-ToF and D-ToF show strong material-dependent artifacts but are sharp at the edges. Figure 6: **Dense SfM.** A scene with our GT (left), Kinect [43] (top) and TSDF [74] (bottom) fusion approaches. Inherent sensor noise due to MPI (white), transparent objects (red), and diffuse texture-less material (yellow) persists. Figure 7: **Sensor Fusion.** Scene (left) with I-ToF depth (centre) and ToF+RGB fusion [27] (right). Fusion can help to resolve some material-induced artefacts (yellow) as well as MPI (blue). Interestingly, the D-ToF prior can improve
For instance, interestingly, D-ToF supervision is significantly better suited (13.02 mm) for textured objects than AS, which in return surpasses I-ToF by 3.55 mm RMSE (cf. 2). Same trend holds true on mostly texture-less backgrounds where D-ToF is 37% more accurate than I-ToF. For targeted analysis and research of dense methods for reflective and transparent objects, a quantitative evaluation is of utmost interest - while our quantifiable error maps allow specifying the detailed deviations. Although our dataset tries to provide scenes with varying backgrounds, the possible location of the scene is restricted due to the limited working range of the robot manipulator. Aside from our investigations and the evaluation of sensor signals for standard 3D vision tasks, we firmly believe that our dataset can also pave the way for further investigation of cross-modal fusion pipelines. Figure 8: **Reconstruction Results.** The results of an implicit scene reconstruction with a Neural Radiance Field (NeRF) are shown. Images are synthesised for depth, surface normals and RGB for an unseen view, which is shown together with the prediction errors. The columns allow us to compare different methods where a NeRF [42] is trained solely on RGB (first column) and various depth maps for regularisation as proposed in [49]. The last column illustrates synthesised results from training with GT depth for comparison. Differences are visible, especially for the partly reflective table edges, the translucent bottle and around depth discontinuities. \begin{table} \begin{tabular}{c|c c c c c c|c} \hline \multicolumn{2}{c|}{RGB} & \multicolumn{4}{c|}{Depth} & \multicolumn{2}{c}{Normal} \\ \cline{2-7} Modality & PSNR & SSIM \(\uparrow\) & Ann-Rad.1 & Sq.1d & RMSE \(\downarrow\) & \(\sigma<1.25\) & Cas. Sim. \\ \hline RGB Only & **23.406** & 0.520 & 0.328 & 111.229 & 226.187 & 0.631 & 0.084 \\ \hline + AS & 17.570 & 0.656 & 0.113 & 16.009 & 94.520 & 0.853 & 0.071 \\ + I-ToF & 18.042 & 0.653 & 0.526 & 91.428 & 27.374 & 0.520 & 0.102 \\ + D-ToF & 31.812 & 0.883 & 0.112 & 24.988 & 119.455 & 0.882 & 0.031 \\ + Syn & 22.002 & **0.394** & **0.001** & **0.059** & **3.529** & **1.000** & **0.001** \\ \hline \end{tabular} \end{table} Table 3: **Novel View Synthesis from Implicit 3D Reconstruction.** Evaluation against GT for RGB, depth and surface normal estimates for different optimisation strategies (RGB-only for supervision and \(+\) respective sensor depth). We indicate **best**, 2nd best and 3rd best. Depth metrics are shown. **D-ToF**
2310.14978
LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding
The biological neurons use precise spike times, in addition to the spike firing rate, to communicate with each other. The time-to-first-spike (TTFS) coding is inspired by such biological observation. However, there is a lack of effective solutions for training TTFS-based spiking neural network (SNN). In this paper, we put forward a simple yet effective network conversion algorithm, which is referred to as LC-TTFS, by addressing two main problems that hinder an effective conversion from a high-performance artificial neural network (ANN) to a TTFS-based SNN. We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks, including image classification, image reconstruction, and speech enhancement. With TTFS coding, we can achieve up to orders of magnitude saving in computation over ANN and other rate-based SNNs. The study, therefore, paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, Haizhou Li
2023-10-23T14:26:16Z
http://arxiv.org/abs/2310.14978v1
# LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding ###### Abstract The biological neurons use precise spike times, in addition to the spike firing rate, to communicate with each other. The time-to-first-spike (TTFS) coding is inspired by such biological observation. However, there is a lack of effective solutions for training TTFS-based spiking neural network (SNN). In this paper, we put forward a simple yet effective network conversion algorithm, which is referred to as LC-TTFS, by addressing two main problems that hinder an effective conversion from a high-performance artificial neural network (ANN) to a TTFS-based SNN. We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks, including image classification, image reconstruction, and speech enhancement. With TTFS coding, we can achieve up to orders of magnitude saving in computation over ANN and other rate-based SNNs. The study, therefore, paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms. Deep Spiking Neural Network, Time-to-first-spike Coding, ANN-to-SNN Conversion, Image Classification, Image Reconstruction, Speech Enhancement ## I Introduction Over the last decade, we have witnessed tremendous progress in artificial intelligence technologies, including computer vision [1, 2, 3, 4], speech processing [5, 6], natural language processing [7, 8], and robotics [9, 10]. However, the core computational model behind this revolution, i.e., the artificial neural network (ANN), is computationally expensive to operate. This has prompted researchers to improve the computational efficiency of ANNs, for example, through model compression [11, 12], network accelerators [13], and the reduction of on-chip data movements [14]. Nevertheless, the high computational cost remains a major roadblock to the deployment of ANNs on power-constrained platforms, such as wearable and mobile devices [15]. The human brain evolved over many millennia under strong ecological pressure to be highly efficient and effective; it is therefore worthwhile to look into the computational principles adopted by biological neural networks. Motivated by this, spiking neural networks (SNNs), which were initially introduced to simulate neural computations [16], are now considered a power-efficient alternative to the mainstream ANNs, with great potential to become the solution for power-constrained platforms. SNNs, which emulate the information processing mechanism of biological neural networks [17, 18, 19, 20, 21], represent and transmit information through asynchronous action potentials or spikes. Due to the complex spatial-temporal dependency of spike trains and the discontinuity at the spike generation time, the canonical back-propagation (BP) algorithm is not directly applicable to the training of SNNs [22, 23]. The surrogate gradient learning method [24] has been introduced recently to address these problems. It models the spiking neuron as a self-recurrent neural network that explicitly captures the spatial-temporal dependency between input and output spike trains. Furthermore, continuous surrogate gradient functions are introduced during gradient back-propagation, which effectively addresses the discontinuity issue at the spike generation time.
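To illustrate the surrogate gradient idea just described, a minimal PyTorch sketch can use a hard threshold in the forward pass and a smooth sigmoid-derivative surrogate in the backward pass; the slope constant of 5.0 is an arbitrary choice, not a value from the cited works.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike generation with a smooth surrogate derivative."""

    @staticmethod
    def forward(ctx, v_minus_theta):
        ctx.save_for_backward(v_minus_theta)
        return (v_minus_theta > 0).float()      # non-differentiable step

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sig = torch.sigmoid(5.0 * x)            # steep sigmoid around threshold
        return grad_output * 5.0 * sig * (1.0 - sig)

spike = SurrogateSpike.apply
# Usage inside a neuron update: s_t = spike(membrane_potential - threshold)
```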
Despite much progress on a host of machine learning and neuromorphic benchmarks [25, 26, 27, 28, 29, 30, 31], it is computationally prohibitive to exactly model the temporal dynamics of SNNs due to the exorbitant memory requirements, even for networks of fewer than ten layers [27]. Besides, this method suffers from the vanishing and exploding gradient problems that are notorious for recurrent neural networks, making long-range temporal credit assignment ineffective. A more biologically plausible approach propagates gradients only when a neuron fires, which reduces the overall number of gradient propagations on neuromorphic hardware [32]. Zhu et al. [32] have rigorously proved that event-based propagation allocates gradients from one output spike to its corresponding input spikes, thus preserving the sum of gradients between layers. Armed with this insight, they successfully trained SNNs with temporal gradients on the CIFAR-100 dataset for the first time. In another vein of research, ANN-to-SNN conversion methods have been introduced as an alternative solution to address the difficulties in direct SNN training. A large body of these network conversion methods takes the firing rate of spiking neurons to approximate the activation value of the analog neurons used in the ANN, which we refer to as _rate conversion_ in this paper. By carefully determining the firing threshold of spiking neurons or the weight normalization factor, the pre-trained network parameters of the ANN can be directly transferred to an SNN with an equivalent network structure [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]. In this way, we can avoid the expensive spatio-temporal credit assignment of surrogate gradient learning. The rate conversion methods map ANNs to SNNs with high precision on major machine learning benchmarks, such as ImageNet-12 [43, 48, 49, 50, 51], but with high latency [42] due to the requirement of a large simulation time window. The benefits that rate-based SNNs bring are hence limited. It has long been identified in biological neural systems that the precise spike firing time, in addition to the spike firing rate, carries a significant amount of information [52]. Based on these insights, the time-to-first-spike (TTFS) coding scheme was formulated [53, 54, 55, 56], where only one single spike is allowed within each time window, as shown in Fig. 1 (a), and a stronger stimulus is transduced into an earlier firing spike. In this way, with fewer spikes, the TTFS coding scheme is expected to be computationally more efficient than rate-based coding. Embracing TTFS coding, Rueckauer et al. [42] proposed an ANN-to-SNN conversion algorithm, where the activation value of the ANN was treated as an instantaneous firing rate and subsequently converted to the equivalent spike time in the SNN. However, the scalability of this conversion algorithm to deeper network architectures was not demonstrated. The TDSNN [57] algorithm has been proposed to achieve better scalability; it introduces an auxiliary ticking neuron for each operating spiking neuron and a TTFS-like reverse coding scheme. However, the additional spikes generated by the auxiliary ticking neurons adversely affect the model efficiency. In contrast, our method uses the same network architecture as the ANN and does not require any additional neurons or spikes. Recently, the T2FSNN [58] algorithm has been introduced with improved conversion performance for TTFS-based SNNs, although its results still lag behind those of rate-based SNNs.
Moreover, their kernel-based dynamic threshold is considered more computationally expensive in hardware implementations than our layer-wise dynamic threshold. Similar to the TDSNN work, Han and Roy [59] proposed a Temporal-Switch-Coding (TSC) scheme and a corresponding TSC spiking neuron model. However, this coding scheme requires two spikes to encode the information of each analog neuron, whereby the information is represented as the time difference between these two spikes. Besides, the introduced TSC neuron model is computationally more expensive than the Integrate-and-Fire neuron model adopted in other works. In what follows, in contrast to the rate conversion introduced earlier, we refer to ANN-to-SNN conversion based on TTFS coding as _TTFS conversion_. In this work, we aim to bridge the accuracy gap between TTFS conversion and rate conversion, thereby fully realizing the computational advantages of SNNs on power-constrained platforms. Toward this goal, we make the following three contributions: 1. We perform a comprehensive analysis of the problems underlying TTFS conversion, including the temporal dynamics problem and the time-warping problem. 2. We propose a simple yet effective TTFS conversion algorithm that can effectively address both of the above problems. As shown in Fig. 1 (c), it establishes a near-perfect mapping between the activation values of ANNs and the spike times of SNNs, which leads to a near-lossless TTFS conversion. 3. We successfully implement the TTFS conversion algorithm for image classification, image reconstruction, and speech enhancement tasks. To the best of our knowledge, this is the first work that applies TTFS-based SNNs to challenging signal reconstruction tasks. The rest of this paper is organized as follows: In Section II, we introduce the research problems to set the stage for our study. In Section III, we present the proposed TTFS conversion algorithm to address the identified problems. In Section IV, we first evaluate the proposed conversion algorithm on the image classification task. Then, we thoroughly evaluate the effectiveness of the proposed algorithm through a series of ablation studies. In Section V, we further evaluate the proposed algorithm on two signal reconstruction tasks, i.e., image reconstruction and speech enhancement. Finally, Section VI concludes the study. ## II Preliminaries and Problem Analysis We begin by introducing the analog and spiking neuron models, as well as the TTFS coding scheme. We then analyze the problems underlying TTFS conversion. ### _Preliminaries_ #### Ii-A1 Analog neuron model In ANN-to-SNN conversion, an ANN is first trained, wherein analog neurons are employed and formulated as follows, \[a_{i}^{l}=f(\sum_{j=1}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}+b_{i}^{l}) \tag{1}\] where \(a_{j}^{l-1}\) is the output of neuron \(j\) at layer \(l-1\) (i.e., the input to layer \(l\)) and \(a_{i}^{l}\) is the output of neuron \(i\) at layer \(l\). \(w_{ij}^{l}\) is the synaptic weight between pre-synaptic neuron \(j\) and post-synaptic neuron \(i\), and \(b_{i}^{l}\) is the bias term of neuron \(i\). \(f(\cdot)\) denotes the activation function, for which we use a modified version of the Rectified Linear Unit (ReLU) function. Specifically, we clamp the activation value to be within \([0,1]\), similar to the ReLU6 function [60], and we refer to it as ReLU1 hereafter. As will be explained in the following section, this ensures that a one-to-one correspondence can be established between the ANN and the SNN for lossless TTFS conversion.
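As a minimal illustration of Eq. (1) with the ReLU1 activation, the following PyTorch sketch clamps activations to \([0,1]\); the function names are ours.

```python
import torch

def relu1(x: torch.Tensor) -> torch.Tensor:
    """ReLU1: clamp activations to [0, 1], analogous to ReLU6."""
    return torch.clamp(x, min=0.0, max=1.0)

def analog_layer(a_prev: torch.Tensor, W: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """One ANN layer a^l = f(W a^{l-1} + b) with f = ReLU1, cf. Eq. (1)."""
    return relu1(a_prev @ W.t() + b)
```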
#### Ii-A2 Spiking neuron model For the SNN, to be converted from the pre-trained ANN, we employ the Rectified Linear Postsynaptic Potential (ReL-PSP) spiking neuron model proposed in [61], whose membrane potential \(V_{i}^{l}(t)\) can be expressed as \[V_{i}^{l}(t)=\sum_{j=1}^{N^{l-1}}w_{ij}^{l}K(t-t_{j}^{l-1}) \tag{2}\] where \(K(\cdot)\) refers to the PSP kernel function, which is defined as \[K(t-t_{j}^{l-1})=\begin{cases}t-t_{j}^{l-1}&\text{if}\quad t>t_{j}^{l-1}\\ 0&\text{otherwise}\end{cases} \tag{3}\] For \(t>t_{j}^{l-1}\), Eq. (2) can be further simplified as \[V_{i}^{l}(t)=\sum_{j=1}^{N^{l-1}}w_{ij}^{l}(t-t_{j}^{l-1}) \tag{4}\] The neuron \(i\) fires a spike once its membrane potential exceeds the firing threshold \(\vartheta\), and its spike time \(t_{i}^{l}\) is defined as \[t_{i}^{l}=\min\left\{t\,|\,V_{i}^{l}(t)=\vartheta,t\geq 0\right\} \tag{5}\] The neuronal dynamics of the ReL-PSP spiking neuron model are illustrated in Fig. 1 (b). Without loss of generality, we set the firing threshold \(\vartheta\) to 1 in this work. #### Ii-A3 TTFS encoding scheme To encode the first spike time of a ReL-PSP spiking neuron with the activation value of a ReLU1 neuron, we follow the TTFS encoding scheme. For each layer, which operates within a separate time window as shown in Fig. 1 (a) and (c), we encode the activation value \(a_{i}^{l}\) into the spike time \(t_{i}^{l}\) as per \[\frac{t_{i}^{l}-t_{min}^{l}}{t_{max}^{l}-t_{min}^{l}}=1-a_{i}^{l} \tag{6}\] where \(t_{max}^{l}\) and \(t_{min}^{l}\) are the maximum and minimum permissible spike times of SNN layer \(l\). We define the time window \(T^{l}=t_{max}^{l}-t_{min}^{l}\). Without loss of generality, we fix this value to be 1 for all layers, i.e., \(T=1\). Hence, we can establish the following encoding function \[t_{i}^{l}=l+1-a_{i}^{l}, \tag{7}\] ### _TTFS Conversion Problems_ #### Ii-B1 Temporal dynamics problem Following the above TTFS encoding scheme, a larger activation value in the ANN layer shall be encoded into an earlier spike in the corresponding SNN layer, and vice versa. The additional temporal dynamics introduced during the conversion process may, however, give rise to missing spikes and premature spikes. The missing spike problem happens, during ANN-to-SNN conversion, when an analog neuron receives multiple tiny weighted inputs, while their corresponding spikes as a whole are insufficient to trigger an output spike within the given time window. In contrast, the premature spike problem, which has also been described in [42], happens when the positive weighted spikes arrive earlier than the negative ones, causing more output spikes than expected. To better understand the premature spike problem, let us consider a network formed by two input neurons \(A\) and \(B\) connected to an output neuron \(C\), with synaptic weights \(w_{CA}=5\) and \(w_{CB}=-10\). As shown in Fig. 2(a), assume the activation values of analog neurons \(A\) and \(B\) are \(a_{A}=0.8\) and \(a_{B}=0.4\). As these two weighted inputs cancel out each other, the net input received by neuron \(C\) will be \(0\). According to Eq. (6), the converted spike times of spiking neurons \(A\) and \(B\) are \(t_{A}=0.2\) and \(t_{B}=0.6\). Assuming the firing threshold is \(1\), the earlier input spike from neuron \(A\) will trigger neuron \(C\) to fire a spike at \(t_{C}=0.4\), before receiving the inhibition input from neuron \(B\).
In this example, both neurons \(A\) and \(B\) contribute to neuron \(C\) in the ANN, while the contribution of neuron \(B\) has been discarded in the SNN due to the additional temporal dynamics introduced after the TTFS conversion. We refer to both the missing spike and premature spike problems as the temporal dynamics problem, which will compound across network layers and may eventually lead to a fatal mismatch between the outputs of the ANN and the SNN. To eliminate this problem, we propose a dynamic firing threshold mechanism to ensure spiking neurons in the \(l\)-th layer only fire within the permissible time window \([lT,(l+1)T)\), after all their input spikes are received. This ensures the contributions from pre-synaptic spiking neurons are fully considered by the post-synaptic spiking neuron. The details of this mechanism will be explained in Section III-A. Fig. 1: (a) Comparison of rate and TTFS coding schemes. (b) Illustration of spiking neuronal dynamics with the ReL-PSP kernel function. (c) Illustration of the inference pipeline of the TTFS-converted SNN, wherein each layer operates in consecutive but non-overlapping time windows. The inset on top shows the data distribution of activation values in an ANN and spike times of the converted SNN. The activation values are mapped to spike times in a one-to-one correspondence following the TTFS encoding scheme. Fig. 2: Illustration of the premature spike problem. (a) For the ANN, the net inputs from neurons A and B sum to \(0\), causing neuron C to remain inactivated. (b) For the SNN, after TTFS conversion, the earlier spike from neuron A will cause neuron C to fire a ‘premature’ spike at \(t_{C}=0.4\), before the arrival of the inhibition input from neuron B. #### Ii-B2 Time-warping problem As already introduced in Section II-A2, the spiking neuron will fire a spike when the membrane potential \(V_{i}^{l}(t)\) reaches the firing threshold \(\vartheta\). According to Eq. (5), the spike time \(t_{i}^{l}\) can be calculated as \[t_{i}^{l}=\frac{\vartheta+\sum_{j=0}^{N^{l-1}}w_{ij}^{l}t_{j}^{l-1}}{\sum_{j=0}^{N^{l-1}}w_{ij}^{l}} \tag{8}\] Following Eq. (6), the activation value \(a_{j}^{l-1}\) of the analog neuron should be encoded into spike time \(t_{j}^{l-1}\) as per \[t_{j}^{l-1}=l-a_{j}^{l-1} \tag{9}\] Substituting Eq. (9) into Eq. (8), we can establish the following relationship between the activation value \(a_{j}^{l-1}\) and the spike time \(t_{i}^{l}\) \[t_{i}^{l}=\frac{\vartheta+l\times\sum_{j=0}^{N^{l-1}}w_{ij}^{l}-\sum_{j=0}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}}{\sum_{j=0}^{N^{l-1}}w_{ij}^{l}} \tag{10}\] According to Eq. (10), the spike time \(t_{i}^{l}\) depends on both the weighted inputs \(\sum_{j=0}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}\) and the weight sum \(\sum_{j=0}^{N^{l-1}}w_{ij}^{l}\). When transferring an activation value \(a_{j}^{l-1}\) from an analog neuron to a spiking neuron, this additional dependency on the weight sum will cause a significant mismatch between the ANN and SNN outputs if not properly addressed. Ideally, identical inputs should produce consistent outputs in \(t_{i}^{l}\). Yet, variations in the weight sum for each neuron \(i\) cause deviations in the output for the same input value. We refer to this problem as the _time-warping problem_. As will be introduced in Section III-B, we propose a weight regularization strategy to ensure the weight sum equals \(1\), thereby eliminating this time-warping problem.
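The premature spike example of Section II-B1 can be verified numerically. The sketch below evaluates the ReL-PSP membrane potential of Eq. (4) for the two-input network (\(w_{CA}=5\), \(w_{CB}=-10\), \(t_{A}=0.2\), \(t_{B}=0.6\), \(\vartheta=1\)) and reports the first threshold crossing, reproducing the premature spike at \(t_{C}=0.4\); the helper names are ours.

```python
import numpy as np

def membrane_potential(t, weights, spike_times):
    """ReL-PSP membrane potential, Eq. (4): V(t) = sum_j w_j * max(t - t_j, 0)."""
    return sum(w * max(t - tj, 0.0) for w, tj in zip(weights, spike_times))

weights = [5.0, -10.0]    # w_CA, w_CB from the example in Section II-B1
spike_times = [0.2, 0.6]  # t_A, t_B, encoded from a_A = 0.8, a_B = 0.4 via Eq. (6)
threshold = 1.0

# Scan time finely and report the first threshold crossing, cf. Eq. (5)
for t in np.arange(0.0, 1.0, 1e-4):
    if membrane_potential(t, weights, spike_times) >= threshold:
        print(f"premature spike at t = {t:.2f}")  # prints t = 0.40
        break
```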
## III LC-TTFS conversion algorithm ### _Solving the temporal dynamics problem with a dynamic threshold_ As discussed in Section II-B1, the temporal dynamics problem will result in a mismatch between the ANN and SNN outputs. To address this problem, we propose a dynamic firing threshold for spiking neurons. As shown in Fig. 3(a), the dynamic firing threshold \(\vartheta^{l}(t)\) for neurons in the \(l\)-th layer is determined according to the following piecewise linear function \[\vartheta^{l}(t)=\begin{cases}\infty&\text{if}\quad t<Tl\\ 1&\text{if}\quad t\in[Tl,T(l+1))\\ -\infty&\text{if}\quad t\geq T(l+1)\end{cases} \tag{11}\] The role of the proposed dynamic firing threshold is to clamp spike times outside the permissible time window to the two boundary values. This is equivalent to applying a transformation function \(F^{l}(t)\), defined as in Eq. (12) and illustrated in Fig. 3(b). Following this dynamic firing threshold, the earliest spike time for neurons in the \(l\)-th layer is \(Tl\), and the latest spike should fire before \(T(l+1)\). As such, the time windows of different layers are non-overlapping, and spiking neurons at layer \(l\) only start to fire after all input spikes from layer \(l-1\) have been integrated, thereby overcoming the temporal dynamics problem. \[F^{l}(t)=\begin{cases}Tl&\text{if}\quad t<Tl\\ t&\text{if}\quad t\in[Tl,T(l+1))\\ T(l+1)&\text{if}\quad t\geq T(l+1)\end{cases} \tag{12}\] It is worth noting that there are two main differences between our proposed dynamic threshold and the method proposed in [42]. Concretely, 1) the dynamic threshold introduced in [42] depends on the weights of each neuron, resulting in a firing threshold that varies from neuron to neuron and causes significant hardware overhead, whereas our method shares one firing threshold among all neurons in the same layer; 2) our method does not require calculating the missing spikes, which is computationally expensive. Fig. 3: (a) Illustration of the dynamic firing threshold. (b) Illustration of the effective spike time transformation achieved by the dynamic firing threshold. The x- and y-axes represent the spike time before and after applying the dynamic firing threshold, respectively. (c) Illustration of the ReLU1 activation function used in the ANN. Note that the activation values of analog neurons in (c) can be mapped to the spike times in (b) in a one-to-one correspondence, while the order of the mapping is reversed following the TTFS encoding scheme. The dynamic firing threshold ensures that the spiking neurons fire only within their permissible time window. To achieve a lossless conversion from the ANN, the activation value of each ANN layer should also be bounded within a particular interval. To this end, we employ the ReLU1 activation function, which is formulated as follows \[y(x)=\begin{cases}0&\text{if}\quad x\leq 0\\ x&\text{if}\quad x\in(0,1]\\ 1&\text{if}\quad x>1\end{cases} \tag{13}\] As shown in Figs. 3(b) and 3(c), with the proposed ReLU1 function, the ANN activation region is mapped one-to-one to the spike time region. The ablation study performed in Section IV-D5 highlights the necessity of this dynamic threshold mechanism for a lossless TTFS conversion. ### _Solving the time-warping problem with weight regularization_ To deal with the time-warping problem introduced in Section II-B2, we propose a two-step weight regularization strategy to ensure the weight sum in Eq. (10) takes a constant value of 1.
In the first step, we impose a soft constraint by penalizing weight sums that are not equal to 1 with an L1 loss function \(\mathcal{L}_{W}\), defined as follows \[\mathcal{L}_{W}=\sum_{l=0}^{L-1}\sum_{i=0}^{N^{l}}\left|\sum_{j=0}^{N^{l-1}}w_{ij}^{l}-1\right| \tag{14}\] As shown in Fig. 4(a), this step drives the overall weight sum distribution towards 1, which can largely alleviate the time-warping problem. However, the seemingly small remaining deviations from the ideal value of 1 will compound across layers and significantly deteriorate the performance of converted SNNs. To fully resolve the time-warping problem, we further impose a hard constraint by distributing the weight sum deviation evenly across all contributing synapses of each neuron. As shown in Fig. 4(b), by introducing soft and hard constraints during ANN pre-training, we ensure all weight sums are exactly equal to 1. As the soft constraint already drives the weight sum to approach the ideal value, the additional hard constraint has little interference with the learning dynamics and network convergence. This has been confirmed by the ablation study that will be introduced in Section IV-D. Fig. 4: Illustration of the weight sum distributions of a randomly selected network layer, trained on the CIFAR-10 dataset, after imposing (a) the soft constraint, and (b) both the soft and hard constraints. With the time-warping problem resolved, we can now establish the relationship between \(t_{i}^{l}\) and \(a_{i}^{l}\). Considering weight regularization, we obtain \[t_{i}^{l}=1+l-\sum_{j=0}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}, \tag{15}\] where \(\sum_{j=0}^{N^{l-1}}w_{ij}^{l}=1\) and \(\vartheta=1\). Using Eqs. (12) and (13), visualized in Fig. 3(b, c), we obtain \[F^{l}(x)=T(l+1)-y(T(l+1)-x). \tag{16}\] As mentioned in Section II-A3, we adopt \(T=1\) in this work, thereby obtaining \[F^{l}(x)=l+1-y(l+1-x). \tag{17}\] Substituting Eq. (15) into Eq. (17), we have \[F^{l}(t_{i}^{l}) =l+1-y(l+1-t_{i}^{l}) \tag{18}\] \[=l+1-y(l+1-1-l+\sum_{j=0}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1})\] \[=l+1-a_{i}^{l}\] Since \(F^{l}(t_{i}^{l})=t_{i}^{l}\) holds true within the permissible time window, we ultimately obtain \[t_{i}^{l}=l+1-a_{i}^{l}, \tag{19}\] This result is consistent with the encoding scheme for neurons in layer \(l-1\), as depicted by Eq. (9), except that it introduces a shift of \(T\) steps so that each layer operates in a non-overlapping time window. ### _Pre-activation normalization strategy_ Batch normalization (BN) is an important technique for accelerating deep neural network training; it normalizes the pre-activations of each layer to follow a normal distribution so as to mitigate the internal covariate shift problem. We concur that the bias term from an ANN can be transposed into an SNN by injecting a constant input current at the beginning of each applicable time window, and this strategy has proven effective in addressing the internal covariate shift issue in many cases [51, 62]. However, in this work, this approach is not directly applicable to our proposed TTFS conversion method. The reason lies in the fact that once the BN parameters are merged into the weights, the weight sum constraint inherent in the pre-trained ANN model is invalidated. Essentially, the preservation of this constraint is crucial for the functionality of our proposed framework. To compensate for the absence of BN, we propose to normalize the pre-activation distribution of each layer implicitly by introducing a new loss term \(\mathcal{L}_{A}\), as given in Eq. (20).
Since we expect the activation values to lie within \((0,1]\), as desired for ReLU1, we apply an L1 loss to the pre-activations of each layer, encouraging them to fit a normal distribution with zero mean and a standard deviation of 1/3, i.e., \(\mathcal{N}(0,1/9)\). This ensures that \(99.7\%\) of the pre-activation values will lie within the interval \([-1,1]\), such that the activation values will mostly lie within \((0,1]\). \[\mathcal{L}_{A}=\sum_{l=0}^{L-1}\sum_{i=0}^{N^{l}}\left|\sum_{j=0}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}-A_{i}^{l}\right| \tag{20}\] where \(A_{i}^{l}\) is entry \(i\) of a target vector \(A^{l}\), i.e., the normalized form, drawn from \(\mathcal{N}(0,1/9)\), of the pre-activation \(\sum_{j=0}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}\). In Fig. 5, we show an example of how the pre-activation normalization strategy effectively drives the distribution of pre-activation values towards a normal distribution \(\mathcal{N}(0,1/9)\). Without the normalization, the pre-activation values are skewed towards a mean of \(-0.3\). The effectiveness of this strategy in pre-training high-performance ANNs will be further demonstrated in the ablation study introduced in Section IV-D4. ### _Overall LC-TTFS algorithm_ The proposed LC-TTFS conversion algorithm consists of two stages. In the first stage, we pre-train an ANN with the constraints described above, and the overall loss function is defined as follows \[\mathcal{L}=\mathcal{L}_{ce}+\lambda_{W}\mathcal{L}_{W}+\lambda_{A}\mathcal{L}_{A} \tag{21}\] where \(\mathcal{L}_{ce}\) is the cross-entropy loss for classification tasks, \(\mathcal{L}_{W}\) is the weight regularization loss defined in Eq. (14), and \(\mathcal{L}_{A}\) is the pre-activation normalization loss defined in Eq. (20). \(\lambda_{W}\) and \(\lambda_{A}\) are hyperparameters that balance the contribution of each individual loss term. Additionally, the hard constraint is imposed after each weight update to ensure a unity weight sum for all neurons. In the second stage, the weights of the pre-trained ANN are directly copied to the SNN to perform inference. We set the threshold of neurons in the last SNN layer to infinity, such that the decision can be made based on the accumulated membrane potential. By directly mapping the pre-activations of the ANN onto the neuron membrane potentials of the SNN, this frees the ANN from applying the activation function in the output layer, which would otherwise deteriorate pre-training. ## IV Experiments on Image Classification In this section, we first evaluate the effectiveness of the proposed LC-TTFS conversion algorithm on the image classification task. Then, we perform a comprehensive analysis of the conversion efficacy and computational efficiency of the converted SNNs. Finally, we present ablation studies that are designed to validate the effectiveness of each individual component of the proposed algorithm. ### _Experimental Setup_ #### Iv-A1 Datasets We perform image classification experiments on the CIFAR-10 and CIFAR-100 [63] datasets, which are commonly used for benchmarking SNN learning algorithms. The CIFAR-10 dataset consists of 60,000 colored images with a standard dataset split of 45,000, 5,000, and 10,000 for training, validation, and testing. These images are categorized into 10 classes with an image size of 32\(\times\)32\(\times\)3. The CIFAR-100 dataset is an extended version of CIFAR-10, which includes 32\(\times\)32\(\times\)3 images from 100 classes. We follow the same image pre-processing procedures as outlined in [62].
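Before turning to the implementation details, a minimal PyTorch sketch of the pre-training objective of Section III-D is given below. It is our own illustrative reading of Eqs. (14), (20), and (21): the soft weight-sum loss, one plausible construction of the \(\mathcal{N}(0,1/9)\) pre-activation targets, and the post-update hard constraint. All names are ours, and handling of convolutional layers is simplified to the linear case.

```python
import torch

def weight_sum_loss(layers):
    """Soft constraint L_W, Eq. (14): L1 penalty on per-neuron weight sums away from 1.
    For a linear layer, weight has shape (out_features, in_features)."""
    return sum((layer.weight.sum(dim=1) - 1.0).abs().sum() for layer in layers)

def preactivation_loss(preacts):
    """L_A, Eq. (20): L1 loss pulling each layer's pre-activations toward their
    standardized form rescaled to standard deviation 1/3, i.e., N(0, 1/9)."""
    loss = 0.0
    for z in preacts:
        target = (z - z.mean()) / (3.0 * z.std() + 1e-8)  # zero mean, std -> 1/3
        loss = loss + (z - target.detach()).abs().sum()
    return loss

@torch.no_grad()
def apply_hard_constraint(layers):
    """Hard constraint: spread each neuron's weight-sum deviation evenly over its
    incoming synapses so the sum is exactly 1 (applied after every weight update)."""
    for layer in layers:
        w = layer.weight
        deviation = w.sum(dim=1, keepdim=True) - 1.0
        w -= deviation / w.shape[1]

# Overall objective, Eq. (21):
#   loss = ce_loss + lambda_W * weight_sum_loss(layers) + lambda_A * preactivation_loss(preacts)
# followed by optimizer.step() and apply_hard_constraint(layers).
```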
#### Iv-A2 Implementation details To facilitate the comparison with other existing works, we adopt VGGNet [64] and ResNet [2]. In particular, we follow the same VGG-11, VGG-16, and ResNet-20 network architectures as described in [49], except that we do not include BN layers. Dropout is applied after every layer except the pooling layers in VGG-11 and VGG-16, whereas it is only applied after the fully connected (FC) layers in ResNet-20. Given the absence of BN layers, it is important to have a proper weight initialization. To this end, we initialize the weights of the convolutional layers from the normal distribution \(\mathcal{N}(0,\frac{2}{k^{2}n})\), where \(k\) and \(n\) correspond to the kernel size and the number of output channels at each layer. For FC layers, the weights are initialized from the normal distribution \(\mathcal{N}(0,0.0001)\). We use the PyTorch library for all experiments, as it supports accelerated and memory-efficient training on multi-GPU machines. We train VGG-11 and VGG-16 for 300 epochs and ResNet-20 for 200 epochs, and we adopt the SGD optimizer with a momentum of \(0.9\) and a weight decay of \(0.0005\). We initialize the learning rate at \(0.01\) and decay it by a factor of 10 at 0.6, 0.8, and 0.9 of the total number of epochs. We follow a discrete-time simulation for SNNs, with a time step of 0.02 ms and 50 time steps for each layer. We report the classification accuracy on the test set. Fig. 5: Comparison of the pre-activation value distributions with (blue) and without (brown) applying the pre-activation normalization technique proposed in this work. Note that the data distribution approaches a normal distribution of \(\mathcal{N}(0,1/9)\) after applying the pre-activation normalization. The data is extracted from a randomly selected network layer trained on the CIFAR-10 dataset. This figure is best viewed in color. ### _Experimental Results and Analysis_ #### Iv-B1 Classification Accuracy Table I reports our experimental results on the CIFAR-10 and CIFAR-100 datasets. For the CIFAR-10 dataset, our VGG-11 and ResNet-20 models achieve SNN classification accuracies of 91.25% and 92.67%, respectively, outperforming all other existing conversion methods. Our VGG-16 model also achieves accuracy competitive with other prior works using the same network structure. The same conclusion can be drawn from the results on the CIFAR-100 dataset. Our ResNet-20 model achieves an accuracy of 72.36%, which is the best-reported result on this dataset as far as we know. Similarly, our VGG-16 model outperforms other works using temporal coding and is also competitive with methods based on rate coding. The efficacy of our algorithm in achieving a lossless TTFS conversion is pronounced when looking at the conversion error. Notably, as shown in the last column of Table I, the conversion errors from the pre-trained ANNs are negligible. To better understand the origin of the conversion errors, we have plotted the activation values of the pre-trained ANN against those mapped back from the spike times of the converted SNN (following the TTFS encoding scheme). As illustrated in Fig. 6, the data distribution of the mapped-back activation values closely follows that of the pre-trained ANN, except for the quantization errors arising from the discretized SNN simulation. The effect of such quantization errors is marginal in our experimental results and can easily be addressed by increasing the temporal resolution of the SNN.
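The discretized layer-wise simulation just mentioned can be sketched as follows. This is our own simplified illustration of inference with the ReL-PSP dynamics (Eq. (4)) and the dynamic threshold (Eqs. (11)-(12)), using a window \(T\) discretized into 50 steps as in Section IV-A2; a real implementation would operate on convolutional feature maps, and all names are ours.

```python
import numpy as np

def simulate_layer(l, W, in_spike_times, T=1.0, steps_per_window=50):
    """Discrete-time ReL-PSP layer with the dynamic threshold of Eq. (11):
    neurons may only fire inside the window [l*T, (l+1)*T)."""
    dt = T / steps_per_window
    n_out = W.shape[0]
    out_times = np.full(n_out, (l + 1) * T)  # default: clamp to latest time, cf. Eq. (12)
    fired = np.zeros(n_out, dtype=bool)
    for t in np.arange(l * T, (l + 1) * T, dt):
        # Eq. (4): V_i(t) = sum_j w_ij * max(t - t_j, 0)
        V = W @ np.maximum(t - in_spike_times, 0.0)
        newly = (~fired) & (V >= 1.0)        # threshold = 1 inside the window
        out_times[newly] = t
        fired |= newly
    return out_times
```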
### _Computational Efficiency_ TTFS-based SNNs are believed to be computationally more efficient than their rate-based counterparts. To shed light on this point, we follow the practices adopted by Intel [65] and compute the power proxy via the following equation: \[P_{proxy}=SynOPs+NeuronOPs, \tag{22}\] where \(SynOPs\) and \(NeuronOPs\) are the total number of synaptic operations and the total number of neuron updates, respectively. As extrapolated from the Loihi architecture [66], the energy cost of a single NeuronOP corresponds approximately to that of around 10 SynOPs, because a NeuronOP is a multi-bit operation. Importantly, the ReL-PSP neuron model used in this work avoids the exponential leak mechanism present in the LIF model. To update the neuronal state, ReL-PSP requires only an addition operation at each time step, which is almost equivalent to a SynOP. With reference to the configuration specifics reported by Lee et al. [67], we have computed and tabulated the total \(P_{proxy}\) across our work and that of Diehl et al. [39] and Sengupta et al. [48]. As illustrated in Table II, our TTFS conversion algorithm evidently surpasses the energy efficiency of traditional rate-based conversion methods. ### _Ablation Studies_ Here, we present the ablation studies that are designed to validate the necessity and contribution of each individual component of the proposed algorithm. All the experiments are performed on the CIFAR-10 dataset using VGG-11. The experimental results are summarized in Table III, with more detailed discussions presented in the following. #### Iv-D1 Model 1 We began by removing all the constraints applied to the pre-trained ANN (i.e., the soft and hard constraints on the weight sum, the ReLU1 activation function, and pre-activation normalization) and the converted SNN (i.e., the dynamic firing threshold). As a result, the accuracy of the ANN improved by only 0.03% over the baseline model. This suggests that applying the proposed set of constraints has negligible influence on ANN pre-training. The converted SNN, however, failed to perform the image classification task, with the test accuracy dropping to chance level, indicating that it is crucial to apply the proposed set of constraints for a lossless TTFS conversion. #### Iv-D2 Model 2a and 2b We further studied the necessity of the soft and hard constraints for weight sum regularization. Dropping the hard constraint while keeping all the remaining constraints during ANN pre-training had minimal impact on the ANN model, whereas the accuracy of the converted SNN dropped by 4.62%. This suggests that the hard constraint for weight sum regularization is essential to eliminate the time-warping problem discussed in Section III-B. We further explored dropping both the soft and hard constraints during ANN pre-training. Interestingly, this allows the ANN model to perform better than the baseline model. However, we noticed that the accuracy of the converted SNN dropped by 62.92%, implying that the soft constraint is critical for alleviating the time-warping problem. #### Iv-D3 Model 3 The ReLU1 activation function bounds the activation values within a particular interval, so as to establish a one-to-one correspondence between the ANN and the SNN. To understand the contribution of this constraint, we replaced the ReLU1 activation function with a standard ReLU function, while keeping all the remaining constraints during ANN pre-training. Interestingly, compared to the baseline model with
full constraints, the model using the ReLU function performs slightly worse than the one using the ReLU1 function. This may be explained by the fact that the ReLU1 activation function can indirectly regularize the weight sum and hence lead to easier fulfillment of the weight sum constraint. However, changing to the ReLU activation function results in a severe accuracy drop of 26.43% for the converted SNN. These results indicate that the ReLU1 function is imperative to ensure a one-to-one correspondence between the activation values of ANNs and the spike times of SNNs. Fig. 6: Comparison of the distribution of activation values of a pre-trained ANN and those mapped back from the spike times of the converted SNN following the TTFS encoding scheme. The data is extracted from a randomly selected network layer trained on the CIFAR-10 dataset. #### Iv-D4 Model 4 Similar to Models 2 and 3, we pre-trained the ANN with all constraints except for the pre-activation normalization. This no longer constrains the pre-activation values to the value range desired by ReLU1, and it therefore leads to poorer accuracy in the pre-trained ANN. We notice that the accuracy drop is more significant for networks with more layers (data not shown here), which is due to the internal covariate shift problem discussed in Section III-C. As expected, the absence of the pre-activation normalization technique did not affect the SNN conversion, and it can still achieve comparable accuracy to the pre-trained ANN. #### Iv-D5 Model 5 Finally, we followed the same ANN pre-training procedures as the baseline model, while during the network conversion, we replaced the dynamic firing threshold with a fixed threshold of \(1\). This modification results in a 13.48% accuracy drop during the network conversion. To further illustrate the effectiveness of applying the dynamic threshold to address the temporal dynamics problem discussed in Section II-B1, we plotted the spike time distributions using the fixed threshold and the proposed dynamic threshold. As shown in Fig. 7 (a), the spike times spread outside their allocated time intervals, and this problem becomes more pronounced for deeper layers. Consequently, it leads to a high level of mismatch with the pre-trained ANN. In contrast, with the proposed dynamic threshold, the neurons at different layers only spike in their allocated time interval, i.e., \([Tl,T(l+1))\). This ensures the pre-synaptic spiking neurons fully contribute to their post-synaptic spiking neurons, eliminating the temporal dynamics problem. Although this non-overlapping time window may sacrifice the asynchronous processing benefit of SNNs, we highlight that our goal is to achieve lossless conversion from ANN to SNN using temporal coding without any training in the SNN. To this end, the ANN activations and SNN spike times need to be perfectly matched with each other. Therefore, we adopt the proposed synchronized approach to realize this requirement. It is worth noting that synchronized layer-wise processing is also used in other existing temporal coding works, such as [57, 58, 68]. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Diehl et al. & Sengupta et al. & Ours \\ \hline VGG9 & 730.4M & 794M & 117M \\ ResNet9 & 956.4M & 1,226.6M & 107.8M \\ ResNet11 & 854.6M & 796.3M & 144.4M \\ \hline \hline \end{tabular} \end{table} TABLE II: Comparison of the total \(P_{proxy}\). \begin{table} \begin{tabular}{l l l c c c} \hline \hline **Dataset** & **Method** & **Network Architecture** & **Neural Coding** & **Acc.
(\%)** & \(\Delta\)**Acc. (\%)** \\ \hline \multirow{8}{*}{CIFAR10} & SPIKE-NORM [48] & VGG-16 & Rate & 91.55 & -0.15 \\ & PTL [62] & VGG-11 & Rate & 91.24 & 0.65 \\ & CQ trained SNN [51] & VGG-11 & Rate & 82.09 & -0.05 \\ & CQ trained SNN [51] & VGG-16 & Rate & 92.48 & -0.08 \\ & RMP [43] & VGG-16 & Rate & 93.63 & -0.01 \\ & RMP [43] & ResNet-20 & Rate & 91.36 & -0.11 \\ & Hybrid Training [49] & VGG-16 & Rate & 91.13 & -1.68 \\ & Hybrid Training [49] & ResNet-20 & Rate & 92.22 & -0.93 \\ & T2FSNN [58] & VGG-16 & Temporal & 91.43 & - \\ & TSC [59] & VGG-16 & Temporal & 93.63 & -0.01 \\ & TSC [59] & ResNet-20 & Temporal & 91.42 & -0.05 \\ & **Ours** & **VGG-11** & **Temporal** & **91.25** & **-0.05** \\ & **Ours** & **VGG-16** & **Temporal** & **92.72** & **-0.07** \\ \hline \multirow{8}{*}{CIFAR100} & CQ trained SNN [51] & VGG-like & Rate & 71.52 & -0.4 \\ & RMP [43] & VGG-16 & Rate & 70.93 & -0.29 \\ \cline{1-1} & RMP [43] & ResNet-20 & Rate & 67.82 & -0.9 \\ \cline{1-1} & Hybrid Training [49] & VGG-11 & Rate & 67.87 & -3.34 \\ \cline{1-1} & T2FSNN [58] & VGG-16 & Temporal & 68.79 & - \\ \cline{1-1} & TSC [59] & VGG-16 & Temporal & 68.18 & 0.25 \\ \cline{1-1} & TSC [59] & ResNet-20 & Temporal & 70.97 & 0.54 \\ \cline{1-1} & **Ours** & **VGG-16** & **Temporal** & **70.15** & **-0.13** \\ \cline{1-1} & **Ours** & **ResNet-20** & **Temporal** & **72.36** & **0.13** \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison of the classification accuracy of different SNN models on the CIFAR-10 and CIFAR-100 datasets. Note that Acc. refers to the accuracy of the converted SNN, and \(\Delta\) Acc. refers to the difference between the pre-trained ANN and the converted SNN. ## V Experiments on Signal Reconstruction In the previous section, we demonstrated the effectiveness and scalability of the proposed LC-TTFS conversion algorithm on image classification tasks. Existing TTFS-based learning algorithms often achieve TTFS-based learning by either dropping or masking subsequent spikes after the first one during the training phase. Moreover, while there are gradient-based direct training algorithms [45, 48], they have manifested convergence challenges, especially for deep neural networks. Although these methods might prove efficient for classification tasks, their efficacy diminishes notably when deployed for signal reconstruction tasks. In contrast, our proposed method adopts a conversion-based approach. This approach fundamentally eradicates the convergence issue and paves the way for establishing a direct and lossless mapping from the ANN. This ensures that information integrity remains uncompromised throughout the process. Performing TTFS encoding at each layer allows us to preserve the information across the network and opens up the opportunity to perform signal reconstruction tasks with TTFS-based SNNs. In this section, we demonstrate the applicability and superior learning capability of the proposed LC-TTFS algorithm on two signal reconstruction tasks: image reconstruction and speech enhancement. ### _Image Reconstruction with Autoencoder_ Here, we first demonstrate the applicability of our algorithm to image reconstruction tasks with an autoencoder network. The autoencoder is a typical neural network that is used to learn compact latent representations of input signals via a bottleneck layer that has a reduced latent dimension.
From this compact latent representation, the autoencoder then reconstructs the original input signals as accurately as possible [69]. #### V-A1 Experimental Setup The experiments on image reconstruction are performed using the MNIST dataset [70]. Following the approach from [62], we use a fully-connected autoencoder with a 784-128-64-32-64-128-784 architecture and train it to minimize the MSE between the original input and the reconstructed output signal. The initial model was pre-trained as an ANN before being converted into a TTFS-based SNN using our proposed algorithm. We utilized the SGD optimizer for pre-training, with a cosine annealing learning rate schedule [71]. The output from the spiking neurons was processed through a sigmoid function to generate the reconstructed image. Evaluation of our model was carried out using two commonly used image quality metrics, PSNR and SSIM, both of which provide insight into the quality of the reconstructed images. We report the results of these evaluations on the test set. #### V-A2 Result and Analysis Table IV summarizes the results on the image reconstruction task. Our TTFS-based SNN model achieves a performance comparable to its ANN-based counterpart in terms of the PSNR and SSIM metrics. In addition, the qualitative results shown in Fig. 8 demonstrate that the TTFS-based SNN model can effectively reconstruct the images with high quality. Altogether, these results suggest the proposed conversion algorithm is highly effective for the image reconstruction task. By representing information using spike times, the TTFS-based SNN is expected to greatly improve the energy efficiency over its rate-based counterparts. Following the same evaluation metrics introduced in Section IV-C, we report the energy proxy in Table IV. As shown, the energy efficiency of our TTFS-based model remains competitive with a highly-optimized rate-based SNN reported in [62]. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Model** & **Soft** & **Hard** & **ReLU1** & **Norm** & **Dynamic** & **ANN (Acc.\%)** & **SNN (Acc.\%)** & \(\Delta\)**Acc.\%** \\ \hline Baseline & ✓ & ✓ & ✓ & ✓ & ✓ & 91.30 & 91.25 & 0.05 \\ 1 & ✗ & ✗ & ✗ & ✗ & ✗ & 91.33 & 10.00 & 81.33 \\ 2a & ✓ & ✗ & ✓ & ✓ & ✓ & 91.57 & 86.95 & 4.62 \\ 2b & ✗ & ✗ & ✓ & ✓ & ✓ & 92.10 & 29.18 & 62.92 \\ 3 & ✓ & ✓ & ✗ & ✓ & ✓ & 90.87 & 64.44 & 26.43 \\ 4 & ✓ & ✓ & ✓ & ✗ & ✓ & 89.60 & 89.67 & -0.07 \\ 5 & ✓ & ✓ & ✓ & ✓ & ✗ & 91.30 & 77.82 & 13.48 \\ \hline \hline \end{tabular} \end{table} TABLE III: Summary of the results of the ablation studies. Note that \(\Delta\) Acc. refers to the difference between the pre-trained ANNs and the converted SNNs given in columns 7 and 8, respectively. **Soft**: soft constraint for weight sum; **Hard**: hard constraint for weight sum; **ReLU1**: ReLU1 activation function; **Dynamic**: dynamic firing threshold; **Norm**: pre-activation normalization. Fig. 7: Illustration of the spike timing distributions of SNN models (a) with a fixed firing threshold and (b) with a dynamic firing threshold. ### _Time-domain Speech Enhancement_ Speech enhancement, which improves the intelligibility and quality of noisy speech signals [66], has proven its importance in a vast number of applications, including voice-based communication, hearing aids, speech recognition, and speaker identification [72, 73, 74, 75, 76].
Conventional speech enhancement methods are typically based on statistical models that estimate the data distributions of clean and noise signals, such as spectral subtraction [77] and the Wiener filter [78]. However, these methods have shown limited improvements in speech quality under real-world scenarios. Recently, ANN-based speech enhancement models have greatly improved speech quality and intelligibility under complex acoustic environments [79, 80, 81, 82, 83]. Given the huge demand for speech enhancement technologies on mobile and IoT devices that have a limited power budget, it is therefore beneficial to develop power-efficient SNN-based speech enhancement models. Speech enhancement can be considered as separating the human voice from the background noise. Inspired by the recent success of the time-domain speech separation model ConvTasNet [84], we propose a TTFS-based speech enhancement model. As illustrated in Fig. 9(a), the proposed speech enhancement model takes a noisy speech waveform as input and outputs an enhanced speech waveform. This model consists of three parts: an encoder, an enhancement module, and a decoder. The **encoder** transforms the noisy speech waveform \(x(t)\) into a high-dimensional feature representation, namely embedding coefficients, using a 1D convolutional layer. The 1D convolutional layer contains \(N(=128)\) filters, and each filter is configured to have a time window of \(L(=20)\) samples and a stride of \(L/2(=10)\) samples. For each time window, the **enhancement module** estimates a mask, using a stack of dilated convolutional layers, to separate the noise from the human voice. A \(1\times 1\) convolution layer is first applied to normalize the encoded feature representation, and dilated convolution layers with 128 filters, a kernel size of \(1\times 3\), and a stride of \(1\) are repeated \(10\) times with doubling dilation rates of \([1,2,4,...,512]\). The mask for human voices is then estimated by applying another \(1\times 1\) convolution layer. Subsequently, the feature representation of the enhanced speech is obtained by masking the background noise with the estimated mask. Finally, the **decoder** reconstructs a high-quality speech waveform \(y(t)\) from the masked feature representation using a 1D deconvolutional layer, which reverses the operation of the 1D convolutional layer in the encoder. Following ConvTasNet, we train the proposed SNN-based speech enhancement model to minimize the multi-scale scale-invariant signal-to-distortion ratio (SI-SDR) [85] loss, which is defined as: \[\mathcal{L}_{SI\text{-}SDR}=-10\log_{10}\left(\frac{\left\|\frac{\langle\hat{s},s\rangle}{\langle s,s\rangle}s\right\|^{2}}{\left\|\frac{\langle\hat{s},s\rangle}{\langle s,s\rangle}s-\hat{s}\right\|^{2}}\right) \tag{23}\] where \(\hat{s}\) and \(s\) are the enhanced and reference clean speech signals, respectively. We normalize these two signals to zero mean to ensure scale invariance. #### V-B1 Experimental Setup To test our speech enhancement model, we employed the widely recognized dataset of Valentini et al. [86]. This dataset includes clean utterances from the Voice Bank corpus [87] and a noisy version created by combining the clean utterances with environmental noise samples. The training set comprises 11,572 utterances mixed with ten types of noise at four different SNRs: 15 dB, 10 dB, 5 dB, and 0 dB. The test set contains 824 distinct utterances blended with five additional noise types at four SNRs: 17.5 dB, 12.5 dB, 7.5 dB, and 2.5 dB.
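A minimal PyTorch sketch of the SI-SDR objective in Eq. (23) is shown below; the function name and the epsilon stabilizer are ours.

```python
import torch

def si_sdr_loss(s_hat: torch.Tensor, s: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative SI-SDR, Eq. (23). Both signals are zero-meaned for scale invariance."""
    s_hat = s_hat - s_hat.mean(dim=-1, keepdim=True)
    s = s - s.mean(dim=-1, keepdim=True)
    # Project the estimate onto the clean reference: s_target = (<s_hat, s>/<s, s>) s
    scale = (s_hat * s).sum(dim=-1, keepdim=True) / ((s * s).sum(dim=-1, keepdim=True) + eps)
    s_target = scale * s
    e_noise = s_target - s_hat
    ratio = s_target.pow(2).sum(dim=-1) / (e_noise.pow(2).sum(dim=-1) + eps)
    return -10.0 * torch.log10(ratio + eps).mean()
```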
Following the precedent set by SEGAN [79] and CNN-GAN [80], we reduced the original sampling rate to 16 kHz, without additional pre-processing. Using the LC-TTFS algorithm, we pre-trained the ANN-based speech enhancement model for 100 epochs, utilizing an early stopping scheme and an Adam optimizer. The model was then converted into an SNN-based model, employing the membrane potential in the \(1\times 1\) spiking convolution layers. We evaluate the speech enhancement models using the following standard metrics, which are available on the publisher's website1. Footnote 1: [https://www.rcpress.com/downloads/K14513/K14513_CD_Files.zip](https://www.rcpress.com/downloads/K14513/K14513_CD_Files.zip) 1. PESQ: Perceptual evaluation of speech quality. The wide-band version recommended by the ITU-T P.862.2 standard [88] is used in this work. 2. CSIG: Mean opinion score (MOS) prediction of the signal distortion attending only to the speech signal [89]. 3. CBAK: MOS prediction of the intrusiveness of background noise [89]. 4. COVL: MOS prediction of the overall effect [89]. All metrics are calculated by comparing the enhanced speech to the clean reference speech; we report the average values over all 824 utterances in the test set. \begin{table} \begin{tabular}{l l c c c} \hline \hline **Model** & **Coding** & **PSNR** & **SSIM** & **Energy Proxy** \\ \hline SNN [62] & Rate & 20.74 & 0.84 & 715.8K \\ **ANN (ours)** & - & 20.90 & 0.917 & - \\ **SNN (ours)** & Temporal & 20.83 & 0.916 & 581.2K \\ \hline \hline \end{tabular} \end{table} TABLE IV: Comparison of the results of different methods on the image reconstruction task. Fig. 8: Comparison of the original and the reconstructed images from our TTFS-based autoencoder. For each image pair, the left one is the original image, and the right one is the reconstructed image. #### V-B2 Result and Analysis Table V compares the results of our ANN- and SNN-based speech enhancement models with other existing works using the four evaluation metrics introduced earlier. Our ANN-based model outperforms other existing methods across all the evaluation metrics, suggesting the effectiveness of the proposed model architecture. Moreover, the TTFS-based SNN model achieves comparable performance to the pre-trained ANN model, demonstrating the capability of the proposed TTFS conversion algorithm in solving the challenging speech enhancement task. We also performed a subjective evaluation by listening to the enhanced speech signals generated by both the ANN- and SNN-based speech enhancement models. We find the SNN-enhanced speech samples to be nearly indistinguishable from the high-quality ones generated by the ANN model. We publish some enhanced speech examples from the test set online to demonstrate our model performance2. In addition, we select a random speech sample from the test set and plot the power spectra of the corresponding noisy, clean, ANN- and SNN-enhanced speech waveforms, as shown in Fig. 9(b). It is clear that the SNN-enhanced speech spectrum exhibits a high level of similarity to the ANN-enhanced one, and both of them are very close to the ground-truth clean speech spectrum. These results again highlight the effectiveness of the proposed model architecture and the TTFS conversion algorithm.
Footnote 2: The listening examples are available online at [https://drive.google.com/file/d/1g5OCATyH1B3U5z_x6qDZam8bMcmElxR/view?usp=sharing](https://drive.google.com/file/d/1g5OCATyH1B3U5z_x6qDZam8bMcmElxR/view?usp=sharing) \begin{table} \begin{tabular}{l c c c c} \hline \hline Model & PESQ & CSIG & CBAK & COVL \\ \hline Noisy & 1.97 & 3.35 & 2.44 & 2.63 \\ Wiener [79] & 2.22 & 3.23 & 2.68 & 2.67 \\ SEGAN [79] & 2.16 & 3.48 & 2.94 & 2.80 \\ CNN-GAN [80] & 2.34 & 3.55 & 2.95 & 2.92 \\ **ANN (Ours)** & **2.36** & **3.63** & **3.03** & **2.98** \\ **SNN (Ours)** & **2.35** & **3.64** & **3.01** & **2.98** \\ \hline \hline \end{tabular} \end{table} TABLE V: Comparison of the experimental results of different speech enhancement models. Higher values indicate better performance. Fig. 9: (a) SNN-based speech enhancement network architecture. (b) From top to bottom: the power spectra of the noisy, clean, ANN-enhanced, and SNN-enhanced speech waveforms. ## VI Discussion and Conclusion In this work, we identify and thoroughly investigate two major problems underlying effective TTFS conversion, namely the temporal dynamics problem and the time-warping problem. Based on this study, we further propose a novel TTFS conversion algorithm, namely LC-TTFS, to address these problems. Firstly, to tackle the temporal dynamics problem, we introduce a dynamic firing threshold mechanism for spiking neurons that only allows neurons to fire within the allocated time window. In this way, the causal relationship between the input and output neurons is maintained throughout the network layers. Secondly, we apply a set of well-designed loss functions during ANN pre-training to eliminate the time-warping problem. Finally, we apply the pre-activation normalization technique during ANN pre-training to alleviate the internal covariate shift problem due to the absence of BN layers. With these problems well addressed, we establish a near-perfect mapping, apart from the marginal discretization error in the SNN simulation, between the ANN activation values and the SNN spike times, leading to a near-lossless TTFS conversion. This also enables us to go beyond the commonly considered classification tasks and opens up a new avenue for solving high-fidelity signal reconstruction tasks with TTFS-based SNNs. The SNNs thus converted have demonstrated superior classification and signal reconstruction capabilities on image classification, image reconstruction, and challenging speech enhancement tasks. By representing information using spike times, instead of firing rates, we show that TTFS-based SNNs can significantly improve computational efficiency over both ANNs and rate-based SNNs. By avoiding costly and ineffective direct SNN training, the proposed algorithm therefore opens up myriad opportunities for deploying efficient TTFS-based SNNs on power-constrained edge computing platforms. The carefully designed ablation studies on each individual component of the proposed algorithm highlight the necessity and synergy of these algorithmic components in achieving a near-lossless TTFS conversion. We would like to acknowledge that the proposed TTFS-based SNN requires a separate and non-overlapping time window for each layer, which adversely affects the inference speed. Therefore, we are interested in studying whether a perfect mapping can still be achieved under a shared time window to improve the inference speed, and we leave this as future work.
2308.01859
Effects of Cytoskeletal Network Mesh Size on Cargo Transport
Intracellular transport of cargoes in the cell is essential for the organization and functioning of cells, especially those that are large and elongated. The cytoskeletal networks inside large cells can be highly complex, and this cytoskeletal organization can have impacts on the distance and trajectories of travel. Here, we experimentally created microtubule networks with varying mesh sizes and examined the ability of kinesin-driven quantum dot cargoes to traverse the network. Using the experimental data, we deduced parameters for cargo detachment at intersections and away from intersections, allowing us to create an analytical theory for the run length as a function of mesh size. We also used these parameters to perform simulations of cargoes along paths extracted from the experimental networks. We find excellent agreement between the trends in run length, displacement, and trajectory persistence length comparing the experimental and simulated trajectories.
Nimisha Krishnan, Niranjan Sarpangala, Maria Gamez, Ajay Gopinathan, Jennifer L Ross
2023-08-03T16:30:25Z
http://arxiv.org/abs/2308.01859v1
# Effects of Cytoskeletal Network Mesh Size on Cargo Transport ###### Abstract Intracellular transport of cargoes in the cell is essential for the organization and functioning of cells, especially those that are large and elongated. The cytoskeletal networks inside large cells can be highly complex, and this cytoskeletal organization can have impacts on the distance and trajectories of travel. Here, we experimentally created microtubule networks with varying mesh sizes and examined the ability of kinesin-driven quantum dot cargoes to traverse the network. Using the experimental data, we deduced parameters for cargo detachment at intersections and away from intersections, allowing us to create an analytical theory for the run length as a function of mesh size. We also used these parameters to perform simulations of cargoes along paths extracted from the experimental networks. We find excellent agreement between the trends in run length, displacement, and trajectory persistence length comparing the experimental and simulated trajectories. **Keywords:** intracellular transport, microtubule, cytoskeleton, kinesin, cargo transport ## 1 Introduction The movement and positioning of large objects inside cells require energy-consuming active transport by motor proteins traversing along cytoskeletal filaments [1]. This process of intracellular transport is responsible for the organization and reorganization that cells need to survive. Intracellular transport is especially important in cells that are long and extended, such as cilia and axons, or particularly crowded and viscous. In mammalian cells, which are differentiated into a myriad of cell types, diffusion of large cellular components is impeded by the complex viscoelastic nature of the cell interior, so active intracellular transport is required. Cytoskeletal filaments, microtubules and actin, serve as the tracks for intracellular transport. Microtubules are particularly used for long-distance transport [1, 2]. Prior works have shown that the arrangement of the cytoskeletal filaments can affect the transport properties of single motors and teams of motors attached to cargoes [3, 4, 5, 6, 7, 8, 9, 10, 11]. For long-distance transport, the microtubules are arranged in logical parallel bundles to take advantage of kinesin motors that move distally toward the microtubule plus ends and cytoplasmic dynein motors that move inward to the microtubule minus ends [12, 13]. In other cell types or locations, the cytoskeletal networks are more complicated. For instance, in muscle cells, microtubules create a cross-hatched network, creating intersections where organelles and the plasma membrane can anchor during large-scale extensions and contractions [14, 15]. Prior experimental cellular work has demonstrated that the organization of the cytoskeleton can control the association, dissociation, and trajectories of vesicles, and that this control can change dynamically in time and space [11, 16]. There are still open questions about how dense, complex, and crowded conditions can regulate, control, and inhibit intracellular transport. In order to probe the parameters of control, we created microtubule networks of varying densities, characterized by the mesh size, the distance between intersections of microtubules. Using these networks, we experimentally probed the trajectories of kinesin-laden quantum dots as they traverse the network.
The same networks were used as the basis for simulated trajectories for cargoes, where the rates of dissociation at intersections and along the filaments were determined from experiments, allowing for closer comparisons. We also deduced an analytical function for the run length dependence on mesh size to compare to both experiments and simulations. By comparing the experimental, simulated, and theoretical results, we determined the effects of the network mesh size on the motion of kinesin-driven cargoes. ## 2 Methods ### Materials and Reagents Unless otherwise stated, all reagents were purchased from ThermoFisher. #### Microtubule preparation Lyophilized 488-tubulin and unlabeled tubulin were purchased from Cytoskeleton. Tubulin was resuspended in PEM-80 (80 mM PIPES pH 6.9, 1 mM MgCl\({}_{2}\), and 1 mM EGTA) to a final concentration of 5 mg/ml. Labeled tubulin was added to unlabeled tubulin at a 1:10 ratio. To polymerize the tubulin, we added GTP to a final concentration of 1 mM and incubated for 20 min at 34\({}^{o}\)C. Finally, we added 20 \(\mu\)M Taxol to stabilize the polymerized microtubules and incubated at 34\({}^{o}\)C for 20 min to equilibrate the Taxol. Microtubules were kept on the bench, and further dilutions of microtubules required 20 \(\mu\)M Taxol to keep the filaments stabilized. #### 2.1.2 Kinesin preparation Kinesin motors were expressed and purified from the pWC2 plasmid, available at AddGene, to create a protein with a kinesin-1 motor truncated at amino acid 401, a BCCP tag to allow biotinylation during expression in bacteria, and a 6x-his tag for purification using a nickel affinity column. Kinesin was purified using standard protocols previously described [17, 18, 19]. Briefly, the plasmid containing the kinesin construct was transfected into BL21 cells (New England Biolabs), and bacteria were selected using ampicillin in the media. Overnight cultures that included biotin in the media were pelleted, and the bacteria were lysed using sonication and chemical lysis. The supernatant was separated from the bacterial debris using centrifugation and then incubated with nickel beads to bind the 6x-his tagged protein. Kinesin was eluted using imidazole, and fractions with kinesin were desalted to remove excess imidazole. Kinesin was aliquoted, snap frozen using liquid nitrogen, and stored at -80\({}^{o}\)C. #### 2.1.3 Microtubule network preparation Microtubule networks of varying filament mesh density were made by flow-aligning microtubules in a crossed-path flow chamber, as previously described [5]. We made the chamber by adhering four square pieces of double-sided tape on a glass slide such that they made a crossed flow path (Fig. 1A). The slide was bound to a silanized cover glass treated with the hydrophobic silane PlusOne Repel silane (Cytiva), as previously described [20]. To create the sample, the following reagents were flowed into the chamber. First, we flowed 15 \(\mu\)l of 10% \(\alpha\)-tubulin antibody (YL1/2) into the chamber and incubated for 5 minutes. This surface layer provided a specific interaction with the microtubules and helped to elevate them above the polymer surface coating, which was added next. The polymer surface was made by adding 10 \(\mu\)l of 5% Pluronic F-127 block copolymer from both directions of the flow chamber and incubating for 5 min. The Pluronic blocks other proteins from binding non-specifically to the surface. Next, we washed the chamber with wash buffer (90 \(\mu\)l PEM-80, 10 \(\mu\)l of 0.5% Pluronic F-127).
Now that the surface was well coated and blocked, we flowed in 10 \(\mu\)l of polymerized microtubules diluted to a 0.5 mg/ml tubulin concentration. We flowed from the x-direction, incubated for 2 minutes, washed with wash buffer, and incubated for another 3 minutes. We repeated the same process in the y-direction. The chamber was imaged to ensure microtubule networks were bound to the surface and at the densities needed. #### Quantum-dot cargo preparation Quantum dot cargoes were made by mixing streptavidin-labeled quantum dots (ThermoFisher) with biotinylated kinesin at a ratio of 1:2 and incubating for one hour on ice (Fig. 2). These cargoes were then diluted 30 times in PEM-80 to be used in microtubule networks of varying densities. Figure 1: Microtubule network creation and analysis. (A) Cartoon schematic of the crossed flow path sample chamber and microtubule network. (B) Example image of the microtubule channel for a network created in a crossed channel chamber. (C) Network image from panel (B) binarized to make a black and white image to be used for mesh analysis. (D) Skeletonization of the network used to automatically detect intersections and branch lengths. (E) Extracted network used to perform simulations of motors on networks with the same organization as experiments. (F) Comparison between the mesh size measured from ImageJ and extracted using MatLab. Not all networks used in experiments were extracted and used for simulations. For all images, the scale bar is 5 \(\mu\)m. The final step of sample chamber preparation was to add the kinesin cargo sample to the chamber that had already been examined on the microscope. The final flow-through contained quantum dots diluted to 1:30 in PEM-80 with 2 mM ATP, 66 mM DTT, and an oxygen scavenging system, which was 0.66 mg/ml glucose oxidase, a 1.5% final dilution of aqueous catalase (Sigma catalogue number C30), and 20 mg/ml glucose in PEM-80. The microtubule network in the crossed part of the flow chamber had varying densities, allowing us to take data in several locations within the same chamber (Fig. 1). If needed, kinesin cargo sample was replenished into the same sample chamber to replace the ATP and oxygen scavenging species that degraded during the assay and to allow longer imaging. #### Microscopy imaging Image data were captured with a Nikon Ti-E microscope using epi-fluorescence and total internal reflection fluorescence (TIRF) microscopy, as previously described [4, 5, 19]. Microtubules were imaged in epi-fluorescence in the green fluorescence channel using a Hg-Xe illumination source with a 480 \(\pm\) 25 nm excitation filter, a 500 nm long pass filter, and a 525 \(\pm\) 55 nm emission filter (Chroma). The illumination for the TIRF system was a custom-built laser system using a 647 nm solid state laser brought into the back of the 60x, 1.49 NA objective, as previously described [4]. The filter set had no excitation filter, a 640 nm long pass for the dichroic, and a 680 \(\pm\) 50 nm emission filter (Chroma). All images were made using an IXON electron-multiplier CCD camera (Andor) with a pixel size of 160 nm. The laser and camera systems were controlled through Nikon Elements software, and images were recorded to RAM and saved as .nd2 files with uncompressed tif stacks and metadata. The time series data sets of quantum dot cargoes were taken for 2 min with 1 s between frames and an exposure time of 100 ms.
### Quantitative image analysis

#### Network mesh size characterization

The control parameter for these studies was the network mesh size, which we defined as the distance between neighboring intersections of the microtubule network. We noticed that the mesh size could change significantly over the imaging region of our camera, which was 82 \(\mu\)m on a side. To ensure that the entire analyzed region had a similar mesh size, we divided each image into quarters for analysis of both the mesh size and the trajectories (see below). This gave more consistent network mesh sizes over an area of 41 \(\times\) 41 \(\mu\)m\({}^{2}\). The same networks were extracted and used as the basis for simulated trajectories (see below).

We quantified the distance between intersections using the FIJI/ImageJ AnalyzeSkeleton (2D/3D) plugin [21]. First, we smoothed the images to remove fluctuations in the background caused by shot noise. Next, we performed background subtraction on the images to remove global intensity variations due to the imaging. We converted each image into a binary image using the auto threshold function to make the microtubules white on a black background (Fig. 1C). This helped distinguish signal (microtubules) from background. We then skeletonized the image using the binary/skeleton command in FIJI/ImageJ (Fig. 1D). Finally, we applied the AnalyzeSkeleton (2D/3D) plugin with prune ends and prune cycle shortest branch enabled. The data reported were the largest shortest path, detailed branch information, and labeled skeletons. The resulting branch information was saved as a text file. We used the branch length, given in microns, as the data for the mesh size. The statistics of the data were calculated, and the distribution was normal, so the mean and the median were the same. The error bars reported are the standard error of the mean. The number of branches analyzed is given in Appendix Table A1.

Figure 2: Quantum dot cargo methods and trajectory analysis. (A) Cartoon schematic of a quantum dot cargo attached to a kinesin motor that can walk along microtubules. (B) Example cargo trajectories (magenta) displayed on a dense microtubule network (white) with a small mesh size. Only a subset of total trajectories are shown. (C) Example cargo trajectories (magenta) along a sparse microtubule network (white) with a large mesh size. (D) Example simulated trajectories (magenta) along a dense extracted network (white) with a small mesh size. (E) Example simulated trajectories (magenta) along a sparse extracted network (white) with a large mesh size. For all images, the scale bar is 5 \(\mu\)m.

#### Tracking and transport analysis

We used the Fiji/ImageJ tracking plugin TrackMate [22] to track quantum dot trajectories. Within TrackMate, we used a setting of 6 pixels for the diameter of the objects, and we allowed a gap in time of 2 frames, so that more than 2 frames without detecting the object nearby resulted in terminating the measurement. We also used a minimum cut-off run length of 3 pixels (160 nm/pixel) and 3 frames (1 s/frame). The localization was allowed to be sub-pixel. Each of the tracks was manually checked against the movie to ensure that the tracked trajectory was reasonable. The x,y position data over time were used as the trajectories for further analysis. Example trajectories on different networks are shown in Figure 2. The number of tracks analyzed for each network is given in Appendix Table A1.
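For batch processing, the mesh-size pipeline described above (smoothing, background subtraction, thresholding, skeletonization, and branch-length extraction) can also be reproduced outside of FIJI/ImageJ. The following Python sketch is our illustration, not the code used in this work; it assumes the scikit-image and skan packages, and the filter settings are placeholders rather than the exact values used in the analysis.

```python
import numpy as np
from skimage import io, filters, morphology
from skan import Skeleton, summarize  # skan extracts branch lengths from skeletons

def mesh_size_from_image(path, pixel_size_um=0.16, min_branch_um=0.48):
    """Estimate the network mesh size (mean branch length) from one image."""
    img = io.imread(path).astype(float)
    img = filters.gaussian(img, sigma=1)          # smooth out shot noise
    img -= filters.gaussian(img, sigma=50)        # crude background subtraction
    binary = img > filters.threshold_otsu(img)    # microtubules white on black
    skel = morphology.skeletonize(binary)         # 1-pixel-wide network skeleton
    branches = summarize(Skeleton(skel, spacing=pixel_size_um))
    lengths = branches['branch-distance'].to_numpy()
    lengths = lengths[lengths >= min_branch_um]   # drop spurious short branches
    mean = lengths.mean()
    sem = lengths.std(ddof=1) / np.sqrt(len(lengths))
    return mean, sem
```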
The run length of a trajectory was defined as its contour length, computed by summing the absolute values of all frame-to-frame displacements. The displacement of a trajectory was defined as the end-to-end distance of the trajectory. The instantaneous speed was calculated as the positive displacement between two frames divided by the time between frames. The average speed was determined as the run length divided by the total time the quantum dot was associated with the network. For all data types, the data were averaged, and the standard deviation or standard error of the mean was used for the error bars. The number of tracked trajectories used for each network is given in Appendix Table A1.

To quantify the characteristic persistence length of the trajectories, we calculated the mean squared displacement (MSD) and plotted it versus the contour length of the trajectory. The MSD was calculated by measuring the displacement for all points along the trajectory at a specific lag time (time between frames). For each lag time, the displacement values were squared and averaged. This was performed for all lag times with more than 5 data points, and only the first 80 points were plotted. Once we calculated the MSD as a function of lag time, we also determined the run length (total contour length from the initial time) of the trajectory for all times. The run length from zero time was plotted as the x-coordinate, and the MSD as a function of lag time was plotted on the y-coordinate. The MSD was fit to a worm-like chain model with this equation:

\[MSD(L_{c})=2L_{p}L_{c}\left[1-\frac{L_{p}}{L_{c}}\left(1-\exp(-L_{c}/L_{p})\right)\right] \tag{1}\]

where \(MSD(L_{c})\) is the mean squared displacement as a function of the contour length, \(L_{c}\), and \(L_{p}\) is the persistence length, which is a fit parameter.

### Network extraction and characterization

We used the Matlab tool FIRE [23] on skeletonized images (Fig. 1D) to extract the corresponding networks (numerical matrices of filaments and vertices) for cargo transport simulations. We filtered out any filaments that were too short to be a real filament (typical cut-off length of 0.48 \(\mu\)m). We then characterized each network by quantifying the mesh size, defined as the mean of the distances between filament intersections, and the persistence length of the filaments. To obtain the persistence length, we restructured the network data so as to represent filaments as trajectories on a 2D Cartesian plane. We then measured the MSD of these paths as a function of path length and fit the worm-like chain model (Eq. 1) as described above.

### Simulations on extracted networks

The computational model for cargo transport on the network was similar to previous works [24, 25]. A cargo of radius \(r\) was initialized at a random point in the 2-dimensional box containing the extracted network. It was allowed to diffusively search for and bind to a filament. The simulation time started from the time when the cargo bound to the filament. After binding, the cargo moved ballistically toward one of the filament ends, _i.e._, in a time step \(\Delta t\), the cargo position moved by \(v\Delta t\) towards the next vertex, where \(v\) is the velocity of the cargo. The polarity of the filament was chosen randomly when the cargo bound and was fixed throughout the given cargo run. As a cargo walked ballistically on the filament, it could stochastically detach anywhere along the path with a rate \(k_{off}=v\tilde{k}_{d}\), where \(\tilde{k}_{d}\) was the detachment rate per unit length along a filament.
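To make the MSD analysis above concrete, the following Python sketch computes the MSD as a function of run length from zero time for a single tracked trajectory and fits Eq. (1). It is a minimal illustration with assumed array shapes and NumPy/SciPy as dependencies, not the analysis code used in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def wlc_msd(Lc, Lp):
    """Worm-like chain MSD as a function of contour length (Eq. 1)."""
    return 2.0 * Lp * Lc * (1.0 - (Lp / Lc) * (1.0 - np.exp(-Lc / Lp)))

def msd_vs_run_length(xy, min_counts=5, max_points=80):
    """Pair the MSD at each lag time with the run length from zero time.

    xy: (T, 2) array of tracked positions in microns, one row per frame.
    """
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    run_length = np.concatenate([[0.0], np.cumsum(steps)])  # contour length from t = 0
    lags, msd = [], []
    for lag in range(1, len(xy)):
        disp = xy[lag:] - xy[:-lag]
        if len(disp) <= min_counts:          # keep only lags with more than 5 data points
            break
        lags.append(lag)
        msd.append(np.mean(np.sum(disp**2, axis=1)))
    lags = np.array(lags[:max_points])
    msd = np.array(msd[:max_points])
    return run_length[lags], msd             # x: run length from zero time, y: MSD

# Example usage: (Lp,), _ = curve_fit(wlc_msd, *msd_vs_run_length(track_xy), p0=[1.0])
```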
In addition to stochastic detachment as the cargo walked along a filament, it could also detach with a fixed probability, \(P_{d}\), at filament intersections. A cargo run stopped when the cargo detached from the filament. The cargo was assumed to interact with an intersection when it was closer than one cargo radius (\(r\)) to one of the filament intersections. The values of the detachment rate per unit length, \(\tilde{k}_{d}\), and the detachment probability at intersections, \(P_{d}\), were determined from the analysis of run lengths from manual tracking of experimental videos (see Sec. 3.2). Finally, the cargo was assumed to detach from a filament when it reached one of the filament ends.

In our model, we simulated only cargo transport along the filaments and considered only ballistic motion for analysis to compare with experimental tracks. This differs from previous models, in which diffusive transport of the cargo in the cytoplasm was considered [24, 25] in addition to the ballistic phase on filaments. Since our goal was to analyze the impact of network features on transport, we worked with quantities that do not depend on time: run length, displacement, and MSD as a function of path length. Thus, \(v\) in our model was a free parameter chosen for computational convenience. We performed N=1000 cargo runs for each network.

## 3 Results and Discussion

### 3.1 Network mesh size

We created artificial cargoes from quantum dots decorated with kinesin-1 motors and added them to microtubule networks of various mesh sizes. The mesh size of the networks and a variety of transport parameters were quantified to determine how the microtubule network density affected the mobility of kinesin cargoes. The average mesh sizes measured from experimental images using ImageJ ranged from 1.5 \(\mu\)m to 5 \(\mu\)m between intersections (Fig. 1F). The experimental networks were extracted to be used for simulations. When the mesh sizes of the extracted networks were characterized using Matlab, they were 37% higher than the experimental characterization on 97% of the networks (Fig. 1F). It is possible that in the extraction we lost small intersections that were actually present in the experiments, resulting in a larger mesh size in the simulations. This could result in altered quantitative results from the simulations compared to the experiments. We expected that the trends should be the same if the correct underlying mechanisms were being simulated, which we test below. For plotting of experimental and simulated data, we chose to use the characteristic mesh sizes determined from experimental images using ImageJ in all figures.

### 3.2 Run length

The run length of a quantum dot cargo is the total distance, or contour length, traveled along the microtubules of the network before detaching. Experimental data were collected, and long trajectories were manually tracked, regardless of network mesh size. The run length, \(s\), was determined from the manually tracked trajectories, and the data were binned with either 8 \(\mu\)m or 1 \(\mu\)m bins (Fig. 3A, blue).
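For concreteness, the transport model described in the Methods can be summarized as a short Monte Carlo routine. The Python sketch below is our minimal one-dimensional reconstruction under simplifying assumptions (a single filament with one fixed polarity, rather than a full extracted network), not the simulation code itself; \(\tilde{k}_{d}\) and \(P_{d}\) take the values deduced from the fits described next.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_run(filament_length, intersections, v=1.0, k_d=0.1, P_d=0.75,
                 r=0.2, dt=0.01):
    """One ballistic cargo run on a filament; returns the run length (microns).

    intersections: sorted positions (microns) of intersections on the filament
    k_d: detachment rate per unit length (1/micron), so k_off = v * k_d
    P_d: detachment probability when passing within one cargo radius r
         of an intersection
    """
    start = rng.uniform(0.0, filament_length)   # landing spot after diffusive search
    x, visited = start, set()
    while x < filament_length:                  # the cargo detaches at the filament end
        x += v * dt                             # ballistic step toward the chosen end
        if rng.random() < v * k_d * dt:         # stochastic detachment along the path
            return x - start
        for i, xi in enumerate(intersections):  # fixed-probability detachment
            if abs(x - xi) < r and i not in visited:
                visited.add(i)
                if rng.random() < P_d:
                    return x - start
    return filament_length - start

# e.g., N=1000 runs on a 20-micron filament with intersections every 2 microns
runs = [simulate_run(20.0, np.arange(2.0, 20.0, 2.0)) for _ in range(1000)]
```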
We deduced the two parameters that controlled run length via dissociation, namely the off rate for cargoes between intersections, \(k_{off}\), and the probability of detaching at the intersections, \(P_{d}\), by fitting the run length histograms to exponential decay functions of the form \(y=ae^{-(\tilde{k}_{d}+P_{d}/\lambda)s}\), where \(\lambda\) is the mean mesh size for all extracted networks, \(a\) is an arbitrary normalization parameter, and \(\tilde{k}_{d}\) is the detachment rate per unit length. The parameter \(\tilde{k}_{d}\) is equivalent to the inverse of the "natural" run length for the cargoes in the absence of intersections and is equal to the off rate between intersections divided by the cargo velocity, \(k_{off}/v\) (Fig. 3A, blue). Because we have two unknown parameters, we also needed a second set of data to deduce the off rate between intersections. Using only the subset of tracks that never visited an intersection during their trajectories, the run lengths were again binned and fit to \(y=ae^{-(\tilde{k}_{d}+1/\lambda)s}\) (Fig. 3A, red). The two histograms had fit parameters of \(A=\tilde{k}_{d}+P_{d}/\lambda\) and \(B=\tilde{k}_{d}+1/\lambda\), which were used to deduce \(\tilde{k}_{d}\) and \(P_{d}\) (Table B4). The histogram bin size had an effect on the fitting parameters and hence on the values of \(k_{off}\) and \(P_{d}\) used for simulations. Histograms with 8 \(\mu\)m bins resulted in a \(P_{d}\) value of 0.75, and histograms with 1 \(\mu\)m bins resulted in a \(P_{d}\) value of 0.62.

Figure 3: Determination of simulation parameters and analytical theory. (A) Distribution of run lengths of all tracks (blue circles, N=246) fit to an exponential decay (blue line). Distribution of run lengths of tracks that did not visit an intersection (red circles, N=20) fit to an exponential decay (red line). Short run lengths were excluded from the fits because manual tracking had a systematic bias against short run lengths. (B) Analytical theory for the average run length as a function of mesh size (Eq. 2) for various values of \(P_{d}\). The legend indicates the value of \(P_{d}\); the two values from the histograms, 0.62 and 0.75, are plotted in gray.

Assuming the distribution of run lengths has the form \(ae^{-(\tilde{k}_{d}+P_{d}/\lambda)s}\), the average run length, \(\langle s\rangle\), for a network with mesh size \(\lambda\) is given by the mean of this distribution:

\[\langle s\rangle=1/(\tilde{k}_{d}+P_{d}/\lambda). \tag{2}\]

Thus, the dependence of the average run length (\(\langle s\rangle\)) on mesh size (\(\lambda\)) is non-linear and saturates to \(1/\tilde{k}_{d}\) for large mesh sizes (Fig. 3B). Given the uncertainty in the estimation of \(P_{d}\), we can examine the sensitivity of this analytical expression to the value of \(P_{d}\) (Fig. 3B). For small \(P_{d}\), the run length saturates to the natural run length for cargoes on a single microtubule. When \(P_{d}\) approaches 1, the run lengths are depressed and only reach the natural run length at higher mesh sizes (Fig. 3B). This theoretical equation is general in that it allows us to compute run lengths at any values of the motor off rate, mesh size, and detachment probability at intersections. It will also allow future experiments to compute the detachment probability at intersections without having to manually filter trajectories that encounter intersections. For the simulations, we used a value of \(P_{d}\) equal to 0.75 and a \(\tilde{k}_{d}\) equal to 0.1 \(\mu\)m\(^{-1}\).
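The two-step fit above reduces to simple algebra once the two decay rates are known. A minimal sketch (our illustration with hypothetical variable names, using SciPy for the histogram fits) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(s, a, rate):
    return a * np.exp(-rate * s)

def deduce_kd_Pd(s_all, n_all, s_solo, n_solo, mesh):
    """Deduce k_d (1/micron) and P_d from the two binned run-length histograms.

    s_all, n_all:   bin centers and counts for all tracks (rate A = k_d + P_d/mesh)
    s_solo, n_solo: same for intersection-free tracks     (rate B = k_d + 1/mesh)
    mesh:           mean mesh size (microns) of the extracted networks
    """
    (_, A), _ = curve_fit(decay, s_all, n_all, p0=[n_all.max(), 0.5])
    (_, B), _ = curve_fit(decay, s_solo, n_solo, p0=[n_solo.max(), 0.5])
    k_d = B - 1.0 / mesh          # from B = k_d + 1/mesh
    P_d = (A - k_d) * mesh        # from A = k_d + P_d/mesh
    return k_d, P_d

def mean_run_length(mesh, k_d, P_d):
    """Average run length as a function of mesh size (Eq. 2)."""
    return 1.0 / (k_d + P_d / mesh)
```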
We compared our analytical results to both our experimental quantification of run length and simulations of the run length on our microtubule networks. For the experimental data, we observed that as the mesh size increased, the average run length also increased (Fig. 4Ai,ii). We assigned an artificial cut-off between low mesh size (less than 2 \(\mu\)m) and high mesh size (larger than 2 \(\mu\)m). Using this cut-off, we found that the median run length for the low mesh size was 3.0 \(\pm\) 0.1 \(\mu\)m and for the high mesh size was 3.9 \(\pm\) 0.1 \(\mu\)m (Fig. 4Aii). There was a distinct difference in the statistics for networks with mesh sizes above and below this cut-off (Fig. 4Aii). Indeed, performing the Kolmogorov–Smirnov statistical test (KS test), we found the probability was p = 0.0003, or 0.03%, that the small and large mesh size results came from the same distribution (Fig. 4Aii). Thus, we concluded that a threshold at a mesh size of 2 \(\mu\)m is a reasonable cut-off between low and high mesh sizes for further comparisons.

The experimental run length results implied that quantum dots cover longer distances when the microtubule tracks are more open, with fewer intersections. This result makes sense, since smaller mesh sizes should have more intersections, and kinesin has been shown to have a high probability of terminating a run when contacting a microtubule intersection [5].

Using the deduced probabilities for dissociation between or at intersections (Fig. 3A), we were able to simulate trajectories on different networks and quantify the run lengths as a function of mesh size (Fig. 4B). The simulated run lengths increased with increasing mesh size, just like the experimental data and with a similar slope (Fig. 4Bi). Using the same mesh size cut-off, we compared the simulated run lengths and found that the low mesh size median run length was 4.2 \(\pm\) 0.8 \(\mu\)m and the high mesh size median run length was 4.8 \(\pm\) 0.1 \(\mu\)m. These results were significantly different, with a probability of 0.0015% that they come from the same distribution using the KS test.

We can compare the experimental and simulated trajectory run lengths to the analytical expression for the average run length as a function of mesh size (Eq. 2, Fig. 4C). Plotting them all together, it is clear that the simulations have systematically longer run lengths than the experimental results. Given the sensitivity to the value of \(P_{d}\), it is possible that adjusting this parameter or the natural run length, \(1/\tilde{k}_{d}\), could account for the difference. Examining the data with multiple values of \(P_{d}\) plotted, we estimate that the probability of detaching at an intersection is between 0.3 and 0.75 for the experimental data and between 0.2 and 0.5 for the simulation data (Fig. 4C). This is not surprising considering that the extracted networks have larger mesh sizes compared to the experiments (Fig. 1). Thus, for the same values of \(P_{d}\) and \(1/\tilde{k}_{d}\), the number of intersections encountered is smaller, and the run lengths will thus be larger.

Figure 4: Total run lengths for experimental and simulated trajectories. (A) Experimental results. (i) Plot showing the average run length (\(\mu\)m) for each network of a given average mesh size (\(\mu\)m). Best fit slope is 0.5 \(\pm\) 0.1. (ii) Comparison of the distribution of the average run lengths for networks with low and high mesh size. (B) Simulation results. (i) Average run lengths of simulated trajectories on the same networks. Each data point represents an average over N=1000 cargo runs; error bars represent the standard error of the mean. Best fit slope is 0.4 \(\pm\) 0.1. (ii) Comparison of the distribution of the average run lengths for low and high mesh size. (C) Comparison of experimental run lengths (magenta circles) and simulated run lengths (blue circles) as a function of network mesh size with the analytical theory for various values of \(P_{d}\), given in the legend. All fit parameters for the data are given in Appendix Table A2.

Another difference between simulations and experiments could result from track switching at intersections. In our simulations, we did not include track switching as an option at intersections (Table B4) because switching has been shown to be infrequent for kinesin cargoes with one or two motors [5]. Further, in our manual tracking of long trajectories, we only observed switching at intersections with about 5% probability, which matches prior reports for single GFP-kinesin [5]. In order to check whether a 5% switching probability at intersections could alter the results, we simulated trajectories in the networks including this probability. We found that this small switching probability has no effect on the run lengths we observe in simulations (Appendix Fig. B1), justifying our choice not to include it in the simulations. Further, this also implies that there is likely on average one active motor per quantum dot, although there is a small probability that some quantum dots have two motors.

### 3.3 Instantaneous and average speeds

We used metrics that were independent of time to allow us to compare between the experimental and simulation data. In our model, the speed \(v\) was a free parameter chosen for computational convenience. Given these assumptions, we would expect the average and instantaneous velocities to be independent of mesh size. As a consistency check, we calculated the instantaneous speed of the quantum dot cargoes between two time points. We found a shallow trend in the median of the instantaneous speed as a function of average mesh size (Fig. 5A). Using the same threshold at a mesh size of 2 \(\mu\)m, we found that the two distributions had different medians of 0.052 \(\pm\) 0.004 \(\mu\)m/s for the low mesh size and 0.060 \(\pm\) 0.002 \(\mu\)m/s for the high mesh size (Fig. 5B). The low mesh size distribution of instantaneous speeds was wider, with a standard deviation of 0.018 \(\pm\) 0.004 \(\mu\)m/s, compared to the high mesh size data, which had a standard deviation of 0.010 \(\pm\) 0.002 \(\mu\)m/s (Fig. 5B). This was likely because, at a smaller mesh size, there was a higher probability of a cargo encountering an intersection between two frames, affecting the instantaneous speed. Comparing the data with the KS test, we found that the distributions of instantaneous velocities for the low and high mesh sizes were not statistically different, with a 5.6% probability that they are the same.

We also quantified the average velocity of the cargo trajectories, given by the total run length (Fig. 4) divided by the total association time. Since the network intersections reduced both the association time and the run length by the same mechanism, specifically by causing the dissociation of the cargoes, we expected both parameters to decrease similarly. Indeed, we found that the average speed of the cargoes was unaffected by mesh size, as expected (Fig. 5C).
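All of the low-versus-high mesh size comparisons in this section rely on the two-sample KS test. A minimal sketch of such a comparison with SciPy (hypothetical variable names; the 2 \(\mu\)m cut-off is the one used in the text) is:

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_mesh_groups(values, mesh_sizes, cutoff=2.0):
    """Split per-network values at the mesh-size cut-off and run a KS test."""
    values, mesh_sizes = np.asarray(values), np.asarray(mesh_sizes)
    low = values[mesh_sizes < cutoff]
    high = values[mesh_sizes >= cutoff]
    stat, p = ks_2samp(low, high)
    # p is what the text reports as the probability that the two groups
    # come from the same distribution
    return stat, p
```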
The average speed distributions had medians of 0.82 \(\pm\) 0.06 \(\mu\)m/s for the low mesh size and 0.80 \(\pm\) 0.03 \(\mu\)m/s for the high mesh size (Fig. 5D). The probability that these two distributions are the same is 39% using the KS test (Fig. 5D). These checks indicated that our model assumptions about velocity were reasonable.

Figure 5: Cargo speeds were independent of mesh size. (A) Plot showing the median instantaneous speed (\(\mu\)m/s) for each network of a given average mesh size (\(\mu\)m). The best fit slope was 0.005 \(\pm\) 0.003. (B) Comparison of the distribution of the median instantaneous speed for low and high mesh sizes, which were statistically the same. (C) Plot showing the median average speed (\(\mu\)m/s) for each network of a given average mesh size (\(\mu\)m) with a best fit slope of -0.03 \(\pm\) 0.03. (D) Comparison of the distribution of the median average speed for low and high mesh sizes, which were statistically the same. All fit parameters for the data are given in Appendix Table A2.

### 3.4 Displacement and tortuosity

The displacement is the end-to-end length of the cargo's trajectory. We quantified the displacement for experimentally measured trajectories and found that it increased with the mesh size (Fig. 6Ai). The median displacement for the low mesh size was 1.77 \(\pm\) 0.07 \(\mu\)m and for the high mesh size was 2.26 \(\pm\) 0.08 \(\mu\)m (Fig. 6Aii). The probability that the distributions of displacements were the same was 0.01% using the KS test. The simulated trajectories also showed the same trend in displacement, increasing with mesh size (Fig. 6Bi). The median displacement for the low mesh size was 3.12 \(\pm\) 0.06 \(\mu\)m and for the high mesh size was 3.80 \(\pm\) 0.01 \(\mu\)m (Fig. 6Bii). The probability that the distributions of displacements were the same was 0.0015% for high and low mesh sizes using the KS test. Although the trends are similar, the absolute numbers for the simulated displacements were higher than the measured displacements for the same networks. Like the run length, the displacements of simulated trajectories were not affected by a 5% probability of switching microtubules at intersections (Appendix Fig. B1).

We can determine the tortuosity of the trajectories by dividing the contour length by the displacement. This parameter is used for examining the flow of material through porous media and can be used to characterize mobility. We find that the average tortuosity for low mesh size networks is 1.7 \(\pm\) 0.1 and for high mesh size is 1.6 \(\pm\) 0.1, which are the same within error. Thus, the run length and displacement are being rescaled within the network by the same process, most likely the presence of the intersections.

Figure 6: Total displacement for experimental and simulated trajectories. (A) Displacement of experimental trajectories. (i) Plot showing the average displacement (\(\mu\)m) against average mesh size (\(\mu\)m). The best fit slope is 0.35 \(\pm\) 0.06. (ii) Comparison of the distribution of average displacements for low mesh size and high mesh size. (B) Displacement of simulated trajectories. (i) Plot showing the average displacement (\(\mu\)m) with average mesh size (\(\mu\)m). The best fit slope is 0.5 \(\pm\) 0.1. (ii) Comparison of the distribution of average displacements for low mesh size and high mesh size. All fit parameters for the data are given in Appendix Table A2.

#### 3.4.1 Mean square displacement

The characteristic persistence length of the trajectories of the quantum dot cargoes was characterized using the mean squared displacement (MSD) as a function of trajectory contour length. Fitting the data this way removed the need to consider time dependence, as described above. Each MSD was fit to the worm-like chain model (Eq. 1) to determine the persistence length, \(L_{p}\), of the trajectory (Fig. 7Ai,Bi). For both the experimental and simulated data, the MSD was cut off at a contour length of 6 \(\mu\)m for the fitting in order to compare them. For the experimental trajectories, the persistence length depended linearly on the mesh size (Fig. 7Aii). The median for the low mesh size was 0.44 \(\pm\) 0.06 \(\mu\)m and the median for the high mesh size was 0.8 \(\pm\) 0.2 \(\mu\)m.
The standard deviation for the high mesh size data was large (SD = 0.7 \(\pm\) 0.2 \(\mu\)m) compared to the low mesh size data (SD = 0.25 \(\pm\) 0.06 \(\mu\)m), which resulted in a probability of 4.2% that these distributions are the same using the KS test (Fig. 7Aiii). The simulation trajectory data showed the same linear trend with mesh size (Fig. 7Bii), but the data were less spread out for both small and large mesh sizes. The median for the low mesh size was 2.8 \(\pm\) 0.3 \(\mu\)m and the median for the high mesh size was 4.1 \(\pm\) 0.3 \(\mu\)m. The KS test revealed that the probability that they are the same distribution is only 0.3% (Fig. 7Biii).

Figure 7: Mean squared displacement and persistence length of trajectories. (A) Mean squared displacement for experimental data. (i) All experimental data sets from low mesh size (cyan circles) and high mesh size (green circles) plotted together. These data were fit to the worm-like chain model (Eq. 1) to find the persistence length. (ii) Persistence length (\(\mu\)m) plotted against mesh size (\(\mu\)m). The best fit slope is 0.2 \(\pm\) 0.1. (iii) Comparison of the median persistence lengths for low mesh size and high mesh size. (B) Mean squared displacement for simulation data. (i) All data sets from low mesh size (cyan circles) and high mesh size (green circles) plotted together. These data were fit to the worm-like chain model (Eq. 1) to find the persistence length. (ii) Persistence length (\(\mu\)m) plotted against mesh size (\(\mu\)m). The best fit slope is 1.6 \(\pm\) 0.2. (iii) Comparison of the average persistence length for low mesh size and high mesh size simulations. All fit parameters can be found in Appendix Table A2.

## 4 Conclusion

The organization of cytoskeletal filaments is known to affect the motion of motors and cargoes. Here, we took the approach of characterizing the network using the mesh size and examining the effects of the mesh size on the run length, displacement, and mean square displacement of the motion through the network. We found that these parameters are sensitive to mesh size, even over the small range of mesh sizes that we were able to realize in experiments, 1 \(\mu\)m to 5 \(\mu\)m. Despite the small range of mesh sizes, there were significant changes in all the trajectory parameters. Using experimental trajectories, we deduced the parameters for cargo detachment at intersections and between intersections. We used these parameters to create an analytical theory for the average run length as a function of mesh size, which showed similar trends as our experimental data and was sensitive to the probability of detaching at intersections. Using the exact same networks extracted from the experimental data and the deduced off rates, we were able to simulate cargo trajectories through the networks.
The simulation data had the same trends and similar quantitative results as the experiments for run length, displacement, and mean squared displacement for real cargoes assembled with kinesin motors. We anticipate that future work with different mesh sizes, filament organizations, and motor types can be modeled with the same fundamental principles we uncover here. Specifically, we anticipate that other motors would respond differently to intersections and that composite motor systems would further increase the complexity.

Acknowledgments. We would like to acknowledge the support of the members of the Ross Lab in the Physics Department at Syracuse University. NK was partially supported by funds from Syracuse University, National Science Foundation grant NSF BIO-2134215 to JLR, and National Science Foundation grant DMREF-2118403 to JLR. AG, MG and NS acknowledge support from the National Science Foundation (NSF-DMS-1616926 to AG) and the NSF-CREST: Center for Cellular and Biomolecular Machines at UC Merced (NSF-HRD-1547848 and NSF-HRD-2112675 to AG). AG and NS also acknowledge support from the NSF Center for Engineering Mechanobiology grant CMMI-1548571. NS acknowledges a Graduate Dean's dissertation fellowship from the University of California, Merced.

## Declarations

* Funding: The research leading to these results received funding from Syracuse University and the National Science Foundation grants NSF BIO-2134215 and DMREF-2118403, as well as NSF-DMS-1616926, NSF-HRD-1547848, NSF-HRD-2112675, NSF-CMMI-1548571 and the UC Merced Graduate Division.
* Employment: JLR and NK are employed by Syracuse University. NS and AG are employed by the University of California, Merced.
* Data Availability: Microscopy and simulation data generated for this manuscript are available upon request from the corresponding authors: JLR for experimental data and AG for simulation data.
* Authors' contributions: NK conceived and designed the work, performed the acquisition, analysis, and interpretation of data, drafted and edited the manuscript, and is accountable for the work. NS performed simulations and theoretical calculations, analyzed and interpreted data, drafted and edited the manuscript, and is accountable for the work. MG performed simulations and analyzed and interpreted data. AG performed theoretical calculations, interpreted data, drafted and edited the manuscript, and is accountable for the work. JLR designed the experimental work, analyzed and interpreted data, drafted and edited the manuscript, and is accountable for the work.

## Appendix A Experimental Data Appendix

In this appendix, we list the parameters of the data for the trajectories for each experimental movie.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|}
\hline
Network name & Number of branches & Average mesh size (\(\mu\)m) & Mesh size standard error (\(\mu\)m) & Number of tracked trajectories \\
\hline
HDN001.1 & 566 & 1.675 & 0.025 & 21 \\
HDN001.2 & 620 & 1.73 & 0.029 & 41 \\
HDN001.3 & 423 & 1.839 & 0.039 & 61 \\
HDN001.4 & 601 & 1.676 & 0.024 & 50 \\
HDN002.1 & 581 & 1.678 & 0.026 & 20 \\
HDN002.2 & 536 & 1.784 & 0.035 & 19 \\
HDN002.3 & 418 & 1.882 & 0.039 & 40 \\
HDN002.4 & 469 & 1.765 & 0.033 & 36 \\
HDN005.1 & 1001 & 1.96 & 0.032 & 120 \\
HDN005.2 & 1213 & 1.928 & 0.028 & 115 \\
HDN005.3 & 994 & 1.687 & 0.02 & 185 \\
HDN008.1 & 148 & 2.537 & 0.142 & 24 \\
HDN008.2 & 89 & 3.816 & 0.35 & 40 \\
HDN008.3 & 131 & 3.021 & 0.178 & 56 \\
LDN001.2 & 181 & 2.029 & 0.084 & 34 \\
LDN001.3 & 138 & 2.554 & 0.19 & 32 \\
LDN001.4 & 209 & 1.884 & 0.065 & 41 \\
LDN002.1 & 106 & 2.67 & 0.202 & 41 \\
LDN002.2 & 103 & 2.783 & 0.21 & 53 \\
LDN002.3 & 182 & 2.675 & 0.182 & 44 \\
LDN002.4 & 136 & 2.372 & 0.19 & 49 \\
LDN004.1 & 107 & 3.049 & 0.255 & 69 \\
LDN004.2 & 203 & 2.188 & 0.098 & 71 \\
LDN004.3 & 114 & 3.764 & 0.293 & 117 \\
LDN004.4 & 183 & 2.218 & 0.153 & 108 \\
LDN005.1 & 76 & 3.643 & 0.296 & 36 \\
LDN005.2 & 122 & 3.309 & 0.235 & 41 \\
LDN005.3 & 136 & 3.694 & 0.277 & 56 \\
LDN005.4 & 99 & 3.076 & 0.269 & 42 \\
LDN007.1 & 90 & 3.496 & 0.288 & 48 \\
LDN007.2 & 48 & 4.002 & 0.602 & 28 \\
LDN007.3 & 91 & 3.146 & 0.284 & 52 \\
LDN007.4 & 71 & 4.065 & 0.559 & 63 \\
LDN008.1 & 75 & 2.202 & 0.177 & 12 \\
LDN008.3 & 44 & 4.517 & 0.71 & 37 \\
VHD001.1 & 924 & 1.663 & 0.019 & 82 \\
VHD001.2 & 1003 & 1.687 & 0.019 & 73 \\
VHD001.3 & 858 & 1.696 & 0.022 & 177 \\
VHD002.4 & 1067 & 1.673 & 0.019 & 45 \\
VHD003.3 & 718 & 1.663 & 0.022 & 64 \\
VHD006.3 & 700 & 1.602 & 0.019 & 37 \\
\hline
\end{tabular}
\end{table}
Table A1: Summary of mesh size data from experimental networks.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
**Y-axis parameter** & **Intercept** & **Slope** & **R-squared** & **Chi-squared** & **Reference figure** \\
\hline
Run length - Experiments & 2.19\(\pm\)0.27 & 0.52\(\pm\)0.1 & 0.4 & 11.81 & Fig. 4A(i) \\
Run length - Simulations & 3.42\(\pm\)0.26 & 0.42\(\pm\)0.1 & 0.38 & 6.31 & Fig. 4B(i) \\
Instantaneous speed & 0.05\(\pm\)0.01 & 0\(\pm\)0 & 0.08 & 0.01 & Fig. 5A \\
Average speed & 0.88\(\pm\)0.09 & -0.03\(\pm\)0.03 & 0.02 & 1.33 & Fig. 5C \\
Average displacement - Experiments & 1.21\(\pm\)0.17 & 0.35\(\pm\)0.06 & 0.44 & 4.64 & Fig. 6A(i) \\
Average displacement - Simulations & 2.17\(\pm\)0.21 & 0.54\(\pm\)0.08 & 0.60 & 4.1 & Fig. 6B(i) \\
Persistence length - Experiments & 0.23\(\pm\)0.3 & 0.22\(\pm\)0.12 & 0.09 & 13.12 & Fig. 7A(ii) \\
Persistence length - Simulations & -2.69\(\pm\)2.51 & 1.56\(\pm\)0.19 & 0.22 & 58.9 & Fig. 7B(ii) \\
\hline
\end{tabular}
\end{table}
Table A2: The fit parameters for the linear fits to the data as a function of mesh size for all figures, as denoted.

## Appendix B Simulation Appendix
2304.01554
MEnsA: Mix-up Ensemble Average for Unsupervised Multi Target Domain Adaptation on 3D Point Clouds
Unsupervised domain adaptation (UDA) addresses the problem of distribution shift between the unlabelled target domain and labelled source domain. While single target domain adaptation (STDA) is well studied in the literature for both 2D and 3D vision tasks, multi-target domain adaptation (MTDA) is barely explored for 3D data despite its wide real-world applications such as autonomous driving systems for various geographical and climatic conditions. We establish an MTDA baseline for 3D point cloud data by proposing to mix the feature representations from all domains together to achieve better domain adaptation performance by an ensemble average, which we call Mixup Ensemble Average or MEnsA. With the mixed representation, we use a domain classifier to improve at distinguishing the feature representations of the source domain from those of the target domains in a shared latent space. In empirical validations on the challenging PointDA-10 dataset, we showcase a clear benefit of our simple method over previous unsupervised STDA and MTDA methods by large margins (up to 17.10% and 4.76% averaged over all domain shifts).
Ashish Sinha, Jonghyun Choi
2023-04-04T06:13:33Z
http://arxiv.org/abs/2304.01554v2
# _MEnsA_: Mix-up Ensemble Average for Unsupervised Multi Target Domain Adaptation on 3D Point Clouds

###### Abstract

Unsupervised domain adaptation (UDA) addresses the problem of distribution shift between the unlabeled target domain and the labelled source domain. While single target domain adaptation (STDA) is well studied in both the 2D and 3D vision literature, multi-target domain adaptation (MTDA) is barely explored for 3D data despite its wide real-world applications such as autonomous driving systems for various geographical and climatic conditions. We establish an MTDA baseline for 3D point cloud data by proposing to mix the feature representations from all domains together to achieve better domain adaptation performance by an ensemble average, which we call **M**ixup **En**semble **A**verage or **MEnsA**. With the mixed representation, we use a domain classifier to improve at distinguishing the feature representations of the source domain from those of the target domains in a shared latent space. In extensive empirical validations on the challenging PointDA-10 dataset, we showcase a clear benefit of our simple method over previous unsupervised STDA and MTDA methods by large margins (up to \(17.10\%\) and \(4.76\%\) averaged over all domain shifts). We make the code publicly available here.

## 1 Introduction

For real-world applications ranging from surveillance systems to self-driving cars, deep learning (DL) for 3D data has made significant progress in a wide variety of tasks including classification, segmentation, and detection [10, 16, 33, 49, 55]. Despite the impressive success of DL on 2D vision tasks, its success in the 3D data regime involving point cloud data is still limited by several factors, as follows. First, as point clouds usually do not come with color or textural information, it is not trivial to encode the visual appearance of the structure. Second, the annotation cost for 3D is higher than that for 2D; the annotation of 3D point clouds may require several rotations, which sometimes is non-trivial due to partial occlusions. Third, the domain gap that arises from the difference in distribution between the original training data (source domain) and the deployment environment (target domain) is larger than that of 2D data owing to the characteristics of 3D geometry [19].

In this work, we address the challenge of reducing the domain gaps for 3D point cloud data, which alleviates the need for extensive annotation across all domains. Specifically, we focus on unsupervised domain adaptation (UDA), which involves transferring knowledge from a label-rich domain, _i.e._, the source domain, to a label-scarce domain, _i.e._, the target domain, to reduce the discrepancy between the source and target data distributions, typically by exploiting domain-invariant features [11, 14, 22, 47]. Unfortunately, most of the existing literature on UDA primarily focuses on 2D data. The mortality risk and associated costs of conducting real-world experiments for autonomous driving and robotics systems have led to the increasing prevalence of synthetic data, particularly 3D data, in the research community [43]. This necessitates effective domain adaptation methods for 3D data across different domains, including real-to-sim or sim-to-real adaptation, to ensure successful deployment in real-world scenarios. There are numerous works addressing single-target domain adaptation (STDA) for 3D point clouds [1, 19, 36].
However, when 3D point cloud data of objects is collected under different environmental conditions using various depth cameras or LIDAR sensors for autonomous driving cars, it results in differences in statistical properties such as point cloud density, noise, and orientation. As a result, there is a pressing need to develop multi-target domain adaptation (MTDA) methods, specifically for 3D point cloud data. Despite being well studied in the 2D data regime [6, 13, 30], MTDA in the 3D point cloud domain remains an unexplored area in the literature.

In the context of both STDA and MTDA, if the category configurations are identical across all domains, one straightforward solution could be to extend STDA to MTDA by using one model per target domain. However, at inference time, it becomes challenging to determine the appropriate model to use when information about the target domain is not available. Moreover, as the number of target domains increases, the computational complexity increases accordingly. Additionally, the model may experience catastrophic forgetting [21, 39, 44], which occurs when a neural network trained on a particular task forgets previously learned information when trained on a new task. As a result, the network's performance on the initial task deteriorates. This can be a significant challenge when adapting models to multiple target domains, as the model must be able to generalize well across all domains without forgetting previously learned information. Therefore, we argue that it is preferable to have _a single model_ that can adapt to multiple targets. Hence, we propose to learn a single MTDA model for 3D point clouds. We illustrate the differences between STDA and MTDA in Figure 1.

To learn _a single MTDA model_, we first model the \(N\) targets as a random variable. We then generate shared information between the source and \(N\) target domains as \(N\) realizations of the shared representations by mixing them. Then, we propose to take an ensemble average of the shared (_i.e._, mixed) representations for training a model that is invariant to multiple domains, calling it **M**ixup **En**semble **A**verage or **MEnsA**. The shared representations are learned in a latent space, chosen for its low domain gaps [53], in a min-max manner: maximizing the mutual information (MI) in the embedding space between the domains and domain-specific information while minimizing the MI between the domains and the domain-invariant information [13]. We show that our proposed method outperforms several STDA and MTDA approaches proposed for both the 2D and 3D regimes on multiple target domains, evaluated on the challenging PointDA-10 benchmark dataset [36], by large margins.

In summary, we present the following contributions:

* We show that a straightforward extension of domain adaptation methods designed for STDA, in particular for 2D data, is non-trivial and does not transfer well to MTDA, specifically in the case of 3D data.
* We propose a simple and novel ensemble-average based mixup approach, named MEnsA, to address the challenging yet unaddressed task of adapting a _single_ model across multiple target domains by learning on a single source domain of point cloud data.
* Extensive validations on the PointDA-10 dataset demonstrate a significant benefit of our simple approach over previous unsupervised STDA and MTDA methods by large margins (up to 17.10% and 4.76% averaged over all domain shifts).
* To the best of our knowledge, this is the first work that benchmarks and addresses the task of MTDA on 3D data, specifically 3D point clouds.

## 2 Related Work

### 3D Point Clouds

3D visual data are represented in various ways: 3D meshes, voxel grids, implicit surfaces and point clouds. Deep neural networks (DNNs) have been employed to encode the different modalities of 3D data [9, 27, 29, 40, 48]. Among them, point clouds, represented by a set of \(\{x,y,z\}\) coordinates, are the most straightforward modality for representing 3D spatial information. PointNet [33] was the pioneering model to encode point clouds, taking advantage of a symmetric function to obtain invariance to point permutation. However, it ignores local geometric information, which may be vital for describing objects in 3D space. PointNet++ [34] proposed to stack PointNets hierarchically to model neighborhood information and increase model capacity. PointCNN [24] proposed \(\mathcal{X}\)-Conv to aggregate features in local patches and apply a bottom-up network structure like typical CNNs. Recent works [16, 52] propose to attend to point-point interactions using self-attention layers and achieve state-of-the-art accuracy on "supervised" classification and segmentation tasks.

Figure 1: Illustrative comparison of the Single Target Domain Adaptation (STDA) and Multi Target Domain Adaptation (MTDA) setups. \(S\) is the labelled source dataset, while \(T_{i}\) are the unlabelled target datasets for \(i=1,2,...,n\). STDA is a set-up where a single model is adapted to perform accurately on the target domain given labelled source and unlabelled target data. MTDA is a set-up where a single model is adapted across all unlabelled target domains by learning on the labelled source domain.

Despite their wide usage, point cloud data suffer from labelling inefficiency. In a real-world scenario, some parts of an object may be occluded or lost (_e.g._, chairs lose legs) while scanning with acquisition devices, _e.g._, LIDAR, making annotation difficult. To alleviate the annotation cost, unsupervised domain adaptation (UDA) methods for point clouds could be a remedy.

### Single Target Domain Adaptation (STDA)

STDA is an unsupervised transfer learning approach which focuses on adapting a model to perform accurately on unlabeled target data while using labelled source data. Most of the prior works are proposed for 2D data [12, 14, 41, 42]. They are categorized as (1) adversarial, (2) discrepancy-based, and (3) reconstruction-based approaches. The adversarial approach refers to a model with a discriminator and a generator, where the generator aims to fool the discriminator until the discriminator is unable to distinguish the generated features between the two domains [8, 12, 35, 42]. These approaches have been proposed using either gradient reversal [12] or a combination of a feature extractor and a domain classifier to encourage domain confusion. The discrepancy-based approaches [26] rely on measures between the source and target distributions that can be minimized to generalize to the target domain. The reconstruction-based approaches focus on the mapping of the source domain to the target domain data or vice versa [3, 18]. They often rely on the use of GANs [15] in order to find a mapping between source and target. The STDA methods for 3D point clouds include a self-adaptive module for aligning local features [36], deformation reconstruction as a pretext task [1], or generating synthetic data from the source domain to closely match data from the target domain [19].
Recent works [1, 19, 38, 46, 54] have been proposed which either use an augmentation method as a self-supervised task or generate synthetic data from the source domain to mimic the target domain for UDA on point clouds in an STDA setting. Nevertheless, extending these approaches to the MTDA scenario is not straightforward.

### Multiple Target Domain Adaptation (MTDA)

MTDA requires adapting a model to perform accurately across multiple unlabeled target domains using labelled data from a single source domain. However, the existing MTDA literature has primarily focused on 2D data [6, 13, 30], either using target domain labels [13] or not [6, 25, 30, 32]. Gholami _et al._ [13] proposed an approach to adapt to multiple target domains by maximizing the mutual information between domain labels and domain-specific features while minimizing the mutual information between the shared features. Chen _et al._ [6] proposed to blend multiple target domains together and minimize the discrepancy between the source and the blended targets. Liu _et al._ [25] proposed a curriculum learning based domain adaptation strategy combined with an augmentation of the feature representation from a source domain to handle multiple target domains. Nguyen _et al._ [30] proposed to perform UDA by exploiting the feature representations learned from different target domains using multiple teacher models and then transferring the knowledge to a common student model that generalizes over all target domains using knowledge distillation.

Although effective on 2D vision tasks, these methods often fail to generalize well to 3D vision tasks: their designs focus on images and disregard local geometric information, and they can suffer from catastrophic forgetting during alternate optimization [30]. Consequently, MTDA for 3D vision tasks remains an underexplored research area despite its numerous real-world applications. Thus, we propose the first MTDA method for 3D point clouds.

## 3 Approach

### Overview

Ganin _et al._ [12] argue that representations that are indistinguishable between the source and target domains are crucial for domain-invariant inference. In the context of image classification [50, 51], a common data augmentation technique known as "mixing", or linear interpolation of two images, has been employed to make two samples indistinguishable from each other. However, when considering domain invariance for point clouds, directly mixing the input point clouds presents a challenge, as not all points are _equally_ important in describing the object, and it is not trivial to determine which points to mix and which points to exclude. Instead, we encode the point clouds using a DNN, which implicitly weighs the important points and their point-point interactions, and use the embeddings for mixing. As argued in [50, 51], mixing can act as an effective regularizer, guiding the model to discriminate the source domain from the target domains for point clouds while remaining indiscriminant of the domain shifts across multiple domains. This enables a model to generalize across multiple domains.

We illustrate the overview of our proposed MTDA approach, MEnsA, in Figure 2. We employ an adversarial training strategy [12] to reduce the distribution shifts across multiple domains, using gradient reversal for the _domain confusion loss_. Specifically, we first encode the point clouds with the feature extractor module \(F\), using a variant of the node attention module proposed in PointDAN [36].
This module \(F\) preserves both local geometric structures and the global relations between the local features, resulting in a tensor \(F_{T}\) that is fed to two branches. The first branch, a _domain classifier_ \(D\), is composed of a Gradient Reversal Layer (GRL) [12] and a fully connected layer. The GRL helps in building a feature representation of the raw input \(\mathcal{X}\) that is good enough to predict the correct object label \(\mathcal{Y}\), but such that the domain label of \(\mathcal{X}\) cannot be easily deduced from the feature representation. This promotes domain confusion, where the feature extractor \(F\) attempts to confuse the domain classifier \(D\) by bringing the two distributions closer together. The second branch is an object classifier \(C\) consisting of a fully connected layer and a SoftMax activation function. \(D\) uses \(F_{T}\) to classify the feature representations into source or target domain, while \(C\) classifies them into \(K\) classes. Thus, \(F\) is adversarially trained by minimizing the object classifier's classification loss and maximizing the domain classifier's classification loss. The core of our model is the _domain mix-up module_, which is explained in detail in the following section.

### Domain Mixup Module

Inspired by the mixup [51] approach for 2D data, we propose to mix the feature embeddings obtained by \(F\), _but_ from multiple domains, in the latent space. Unlike the methods for 2D data where the input images are blended by an alpha factor [5, 50], we propose mixing the feature embeddings, since the feature embeddings from the deeper layers of the network contain information about the global shape of the point cloud and the local point-point interactions, as demonstrated in [46] for the STDA set-up. Specifically, we linearly interpolate the source (\(F_{s}\)) and target (\(F_{T_{i}}\)) feature embeddings to obtain \(F_{i}^{m}\) and the corresponding mixed _soft_ domain labels \(L_{i}^{m}\) as:

\[F_{i}^{m}=\lambda F_{s}+(1-\lambda)F_{T_{i}}, \tag{1}\]

\[L_{i}^{m}=\lambda L_{s}+(1-\lambda)L_{T_{i}}, \tag{2}\]

where \(L_{s}\) and \(L_{T_{i}}\) denote the domain labels of the source and target domains, which are set to \(1\) and \(0\), respectively. The use of soft labels is essential in creating a continuous probability distribution that indicates the likelihood of a sample belonging to a particular domain. Unlike hard domain labels that limit the classification of samples to just one domain, soft labels promote the learning of domain-invariant features that are useful for both domains and not biased towards one or the other.

The linear interpolation of feature embeddings serves two purposes. Firstly, it helps create a continuous domain-invariant latent space, enabling the mixed features to be mapped to a location in between the latent spaces of the source and target domains [2]. This continuous latent space is crucial for domain-invariant inference across multiple domains. Secondly, it acts as an effective regularizer, helping the domain classifier \(D\) improve in predicting the soft scores for the domains (source or target) of the mixed feature embeddings \(F_{i}^{m}\), similar to [50, 51].

Figure 2: **Overview of our MTDA model.** The labeled source data \(S\) and the unlabeled target data \(T_{i}\) from multiple domains \(i=1..n\) are taken as input by the feature extractor. The source feature \(F_{s}\) is used by the object classifier \(C\) and the domain classifier \(D\) to predict the category label and the domain of the input, respectively. The target feature \(F_{T_{i}}\) is used by \(C\) to calculate the discrepancy loss between the source and target features. \(F_{T_{i}}\) is also used by \(D\) to differentiate between the source and target domains. \(F_{s}\) and \(F_{T_{i}}\) are fed to the domain mixup module to get mixed domain features \(M\), used to predict the soft scores for source/target. The model is optimized using a combination of the object classification loss, domain confusion loss and discrepancy loss.
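For reference, the gradient reversal layer used in the domain classifier branch can be written in a few lines. The following PyTorch sketch is our illustration of the standard construction [12], not the authors' released code: it is the identity in the forward pass and negates (and scales) the gradient in the backward pass.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; multiplies the incoming gradient by -lamb backward."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient makes the feature extractor maximize the
        # domain classification loss that the domain classifier minimizes.
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)
```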
Since our approach involves multiple target domains, we model the domain-invariant representation obtained by the mixup, \(F_{i}^{m}\), as a random variable. By using multiple realizations of the 'mixup' representation for different domains, we learn domain-invariant information that is robust to domain shifts.

Baseline mixup (Sep). The standard approach for utilizing the stochastic realizations of mixed embeddings involves mixing the feature embeddings of the source domain \(S\) and each of the target domains \(T_{i}\) from a set of target domains \(\mathcal{T}\) to train a model. Specifically, each mixup feature is fed into the domain classifier \(D\) separately for each of the target domains in \(\mathcal{T}\), which predicts a soft score, _i.e._, the mixup ratio between the source \(S\) and the target domain \(T_{i}\). Then, the cross-entropy loss is calculated and back-propagated through the Gradient Reversal Layer (GRL). We call this the 'Sep.' method, and it is illustrated in Figure 3 (a).

Mixup Ensemble Average (MEnsA). The sequential training approach employed in the Sep. method may not allow the model to effectively learn the interaction between the source and multiple target domains due to catastrophic forgetting [28, 39], as the method performs a pair-wise mixup between the source and target domains. This results in the model forgetting previously learned domain-invariant features when exposed to a new target domain. To alleviate this problem, we propose the simple method of taking an _ensemble average_ of the mixed feature embeddings from the multiple targets, \(F_{i}^{m}\), as:

\[F_{m}^{M}=\frac{1}{n}\sum_{i=1}^{n}F_{i}^{m}. \tag{3}\]

We call it **M**ixup **En**semble **A**verage or **MEnsA**, illustrated in Figure 3 (b). The soft scores for the source and target domains are obtained by feeding the mixed feature \(F_{m}^{M}\) to the domain classifier \(D\), and the mapping between the source and each target domain is optimized via the maximum mean discrepancy (MMD) in a reproducing kernel Hilbert space (RKHS). We posit that the ensemble average effectively captures shared information across _all_ domains while mitigating conflicting information among them. Consequently, the model trained on this averaged representation captures differences between the source domain and the multiple target domains in a consolidated manner, resulting in improved generalization over domain shifts across multiple target domains.

Our method differs from [46] in that they propose a pair-wise mixup at the input and intermediate stages followed by reconstruction of the image samples. In contrast, we explore mixing in a 3D MTDA setup by mixing the latent features from all domains into one, rather than pairwise mixing. Our approach is designed to capture shared domain-invariant features across multiple domains, whereas pairwise mixup only focuses on learning domain-invariant features between the source and one target domain, ignoring the shared features across multiple domains and thereby suffering from catastrophic forgetting.
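The mixup module of Eqs. (1)-(3) translates directly into code. The PyTorch sketch below is our minimal reconstruction with assumed tensor shapes, not the released implementation:

```python
import torch

def mensa_mixup(f_src, f_tgts, alpha=2.0):
    """MEnsA domain mixup (Eqs. 1-3).

    f_src:  (B, D) source feature embeddings F_s (domain label 1)
    f_tgts: list of (B, D) target feature embeddings F_{T_i} (domain label 0)
    Returns the ensemble-averaged mixed feature F_m^M and the mixed soft
    domain label, which with labels 1 and 0 reduces to the mixup ratio.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = [lam * f_src + (1.0 - lam) * f_t for f_t in f_tgts]  # Eq. (1)
    soft_label = lam * 1.0 + (1.0 - lam) * 0.0                    # Eq. (2)
    f_mix = torch.stack(mixed, dim=0).mean(dim=0)                 # Eq. (3)
    return f_mix, soft_label
```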
### Objective Function

The complete architecture is trained end-to-end by minimizing \(\mathcal{L}\), which is a weighted combination of the supervised classification loss on the source domain (\(\mathcal{L}_{cls}\)), the domain confusion loss (\(\mathcal{L}_{dc}\)), the mixup loss (\(\mathcal{L}_{mixup}\)) and the MMD loss (\(\mathcal{L}_{mmd}\)), defined as:

\[\mathcal{L}=\log\left(\sum e^{\gamma(\mathcal{L}_{cls}+\eta\mathcal{L}_{dc}+\zeta\mathcal{L}_{adv})}\right)/\gamma, \tag{4}\]

Here, \(\eta\), \(\zeta\) and \(\gamma\) are balancing hyperparameters. The classification and domain confusion losses are cross-entropy losses, and together with the adversarial loss they are defined as:

\[\begin{split}\mathcal{L}_{cls}&=\mathcal{L}_{CE}(C(F_{s}),y_{s}),\\ \mathcal{L}_{dc}&=\mathcal{L}_{CE}(D(F_{s}),L_{s})+\mathcal{L}_{CE}(D(F_{T_{i}}),L_{T_{i}}),\\ \mathcal{L}_{adv}&=\lambda_{1}\mathcal{L}_{mmd}+\lambda_{2}\mathcal{L}_{dc}+\lambda_{3}\mathcal{L}_{mixup},\end{split} \tag{5}\]

where \(C\) is the object classifier, \(D\) is the domain classifier, \(y_{s}\) is the ground-truth object label, and \(L_{s}\) and \(L_{T_{i}}\) are the source and target domain labels, set to 1 and 0, respectively. \(\lambda_{1},\lambda_{2}\) and \(\lambda_{3}\) are balancing hyperparameters with constant values of \(5.0\), \(5.0\) and \(1.2\), respectively, chosen empirically. The MMD loss and mixup loss are defined as:

\[\begin{split}\mathcal{L}_{mmd}&=\mathcal{L}_{rbf}(C(F_{s}),F_{T_{i}},\sigma),\\ \mathcal{L}_{mixup}&=\mathcal{L}_{CE}(D(F_{m}^{M}),L_{i}^{m}),\end{split} \tag{6}\]

where \(\mathcal{L}_{rbf}\) is a radial basis function.

## 4 Experiments

### Experimental Set-up

Dataset. We evaluate our method on PointDA-10, a benchmark dataset proposed by [36] for the task of point cloud classification. PointDA-10 consists of subsets of three widely used datasets: ShapeNet [4], ScanNet [7] and ModelNet [45], each containing the same \(10\) classes (chair, table, monitor, etc.). **ModelNet-10 (M)**, called ModelNet hereafter, contains samples of clean 3D CAD models. **ShapeNet-10 (S)**, called ShapeNet hereafter, contains samples of 3D CAD models collected from online repositories. **ScanNet-10 (S*)**, called ScanNet hereafter, contains samples of scanned and reconstructed real-world indoor scenes. Samples from this dataset are significantly harder to classify because (1) many objects have missing parts due to occlusion, and (2) some objects are sampled sparsely. For more details, we refer the reader to the supplementary material.

Implementation Details. The proposed approach is implemented in the PyTorch [31] framework with Adam [20] as the optimizer for training. The learning rate is set to \(10^{-3}\) under a weight decay of \(5\times 10^{-4}\), with \(\beta_{1}\) and \(\beta_{2}\) kept at \(0.9\) and \(0.999\). All models were trained for \(100\) epochs with a batch size of \(64\). We set \(\lambda_{1},\lambda_{2}\) and \(\lambda_{3}\) used in Equation 5 to \(5.0\), \(5.0\) and \(1.2\), respectively. In Equations 1 and 2, \(\lambda\in[0,1]\) is the mixup ratio and \(\lambda\sim\beta(\alpha,\alpha)\), where \(\beta\) is a beta distribution and \(\alpha\) is set to \(2.0\) for all experiments. We sample \(\lambda\) from a beta distribution \(\beta(\alpha_{1},\alpha_{2})\) such that \(\alpha_{1}=\alpha_{2}\), as this enables sampling values from a non-skewed distribution. Motivated by [30], we use scheduled tuning for \(\eta\) in Equation 4 as:

\[\eta=s\cdot\exp\left(\frac{\log(f/s)}{N_{e}}\cdot e\right),
Baselines. We compare the proposed approach with general-purpose UDA methods, including maximum mean discrepancy (MMD) [26], domain adversarial neural network (DANN) [12], adversarial discriminative domain adaptation (ADDA) [42] and maximum classifier discrepancy (MCD) [37]. It is also compared with the STDA method on point clouds [36]. We further compare our approach to MTDA approaches for 2D vision tasks involving blending targets [6], exploiting shared and private domain spaces [13] and knowledge distillation from multiple teachers to a common student model [30], with minor modifications to use 3D point cloud data. In adapting these methods to the MTDA scenario, we follow the authors' implementations and keep the hyperparameters the same as proposed in the respective papers. Since [30] was proposed for MTDA on 2D vision, the authors used ResNet50 [17] as the teacher model and AlexNet [23] as the student model for knowledge distillation. To adapt the approach to 3D MTDA, we used PointNet [33] as a compact student model and PCT [16] as a large teacher model. 'No adaptation' refers to the model trained only on source samples as a naive baseline, and 'Supervised' refers to training performed with labelled target samples.

Evaluation Metric. We compare the MTDA performance of the proposed method to the previous works and summarize the results in Table 1. We use the same pre-processing steps for all methods. In all experiments, we report the top-\(1\) classification accuracy on the test set, averaged over \(3\) folds, for each target domain.

### Results and Discussion

We summarize comparative results for classification on PointDA-10 in Table 1. The proposed approach outperforms UDA methods, the STDA method for point clouds and MTDA approaches designed for 2D vision modified for 3D point clouds. Despite the large domain gap arising from sim-to-real or real-to-sim adaptation on \(M\to S^{*}\) and \(S^{*}\to M\), respectively, the proposed approach significantly improves the overall performance. MCD and DANN outperform most of the other methods, partly because they disentangle the domain-shared features from the domain-specific features and thus achieve better domain generalization; nevertheless, they perform worse than our approach. Moreover, we observe that a simple extension of STDA methods to MTDA does not adapt well on multiple target domains. For instance, MMD and DANN achieve an average accuracy of around 42% in the STDA setup while they barely reach an accuracy of 40% in the MTDA setup. Interestingly, MCD still performs better than most other methods.

Figure 3: **Comparative illustration of the mixup methods of the proposed 'MEnsA' and a baseline method ('Sep.'). They mix feature embeddings of the source and \(N\) target domains (here, we use \(N=2\) for visualization clarity). The Sep. mixup method mixes source domain embeddings with each of the target domain embeddings to create \(N\) mixed embeddings \(F_{i}^{m}\), each of which is passed to the domain classifier \(D\) to predict soft scores (the mixup ratio between source and target domain) instead of hard labels for the domains.**
Furthermore, UDA methods designed for point clouds also do not perform well when applied to multiple targets, possibly due to catastrophic forgetting during sequential training on multiple target domains. We discuss the performance of methods in the STDA setup in more detail in Table 4 of the supplementary material, omitted here for space reasons. The MTDA methods designed for 2D vision tasks, such as AMEAN, MT-MTDA, and MTDA-ITA, do not perform well on 3D data due to their failure to capture the local and global geometry of the data while aligning the features across domains. While methods designed for 2D tasks focus on aligning global image features, local geometry plays a crucial role in achieving good performance on 3D data [36]. This suggests that modality difference can cause a performance drop due to the inherent property differences of each modality, such as brightness or texture in 2D data and geometry, point density, or orientation in 3D data. By incorporating local and global geometry information, our approach is able to align features across domains while preserving the intrinsic structures of 3D data, leading to better domain adaptation performance. Furthermore, the node attention module helps in focusing on important regions of the point cloud, which is critical for accurate classification. These design choices allow our model to effectively capture the modality-specific properties of 3D data, resulting in superior performance compared to existing MTDA methods. For MT-MTDA, which uses knowledge distillation, a larger teacher model and a compact student model are desired. However, if the teacher model fails to align local structures to the global structure, it becomes challenging to transfer accurate knowledge to the student model, leading to relatively disappointing results.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Source Domain & \multicolumn{2}{c}{ModelNet (M)} & \multicolumn{2}{c}{ScanNet (S*)} & \multicolumn{2}{c}{ShapeNet (S)} & \\ Src \(\rightarrow\) Tgt & M \(\rightarrow\) S* & M \(\rightarrow\) S & S* \(\rightarrow\) M & S* \(\rightarrow\) S & S \(\rightarrow\) M & S \(\rightarrow\) S* & Average \\ \hline No adaptation (Baseline) & 35.07 & 11.75 & 52.61 & 29.45 & 33.65 & 11.05 & 28.93 \\ \hline MMD [26] & 57.16 & 22.68 & 55.40 & 28.24 & 36.77 & 24.88 & 37.52 \\ DANN [12] & 55.03 & 21.64 & 54.79 & 37.37 & **42.54** & 33.78 & 40.86 \\ ADDA [42] & 29.39 & 38.46 & 46.89 & 20.79 & 35.33 & 24.94 & 32.63 \\ MCD [37] & **57.56** & 27.37 & 54.11 & 41.71 & 42.30 & 22.39 & 40.94 \\ PointDAN [36] & 30.19 & 44.26 & 43.17 & 14.30 & 26.44 & 28.92 & 31.21 \\ \hline AMEAN [6] & 55.73 & 33.53 & 51.50 & 30.89 & 34.73 & 22.21 & 38.10 \\ MTDA-ITA [13] & 55.23 & 20.96 & 56.12 & 33.71 & 32.33 & 25.62 & 37.33 \\ MT-MTDA [30] & 45.43 & 25.72 & 28.25 & 19.51 & 24.65 & **35.27** & 29.81 \\ \hline **MEnsA (Ours)** & 45.31 & **61.36** & **56.67** & **46.63** & 37.02 & 27.19 & **45.70** \\ \(\hookrightarrow\) w/o mixup & 28.48 & 40.05 & 33.89 & 12.14 & 27.83 & 24.48 & 27.81 \\ \hline Supervised in each domain & 77.99 & 67.18 & 79.83 & 66.27 & 63.41 & 53.02 & 67.95 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative classification results (%) on the PointDA-10 dataset in the MTDA setting. For every source domain, we report performance for each target domain. Best result in **bold** and second best underlined.
‘No adaptation’ refers to the model trained only on source samples and ‘Supervised’ denotes the model trained with labelled target data.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Source Domain & \multicolumn{2}{c}{ModelNet (M)} & \multicolumn{2}{c}{ScanNet (S*)} & \multicolumn{2}{c}{ShapeNet (S)} & \\ Src \(\rightarrow\) Tgt & M \(\rightarrow\) S* & M \(\rightarrow\) S & S* \(\rightarrow\) M & S* \(\rightarrow\) S & S \(\rightarrow\) M & S \(\rightarrow\) S* & Average \\ \hline **MEnsA (Ours)** & 45.31 & **61.36** & **56.67** & **46.63** & **37.02** & 27.19 & **45.70** \\ Mixup Sep & 41.32 & 47.98 & 56.18 & 42.19 & 28.85 & 36.69 & 42.20 \\ \hline Factor-Mixup & 41.31 & 41.49 & 50.77 & 38.82 & 30.77 & 36.81 & 40.00 \\ Concat-Mixup & 49.20 & 29.57 & 50.47 & 37.50 & 33.05 & 25.64 & 37.57 \\ Inter-Mixup & **50.95** & 28.65 & 51.71 & 34.38 & 32.21 & **40.80** & 39.78 \\ \hline Best of all methods & 50.95 & 61.36 & 56.67 & 46.63 & 37.02 & 40.80 & 48.91 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative classification results (%) on the PointDA-10 dataset in the MTDA setting under different mixup scenarios. For every source domain, we report performance for each target domain. Best result in **bold** and second best underlined.

AMEAN and MTDA-ITA perform better than the other MTDA baselines. MTDA-ITA finds a strong link between the shared latent space common to all domains, while simultaneously accounting for the remaining private, domain-specific factors, whereas AMEAN mixes the target domains and creates sub-targets implicitly blended with each other, resulting in better performance. Nonetheless, our approach outperforms AMEAN, as it focuses on learning domain-invariant features that are hard to distinguish from their originating domain. This forces the model to improve its classification performance independent of the domain, resulting in better overall performance. Additionally, in Table 5 of the supplementary material, we highlight the importance of each module in the pipeline by conducting an ablation study on each loss term of \(\mathcal{L}_{adv}\) in Equation 5. It can be clearly observed that the mixup module significantly improves performance. Moreover, we show how adversely the class imbalance in PointDA-10 affects class-wise classification accuracy in Table 6 of the supplementary material. Most classes show satisfactory improvements with our proposed approach except for _Bed_, _Bookshelf_ and _Sofa_, which highlights a weakness of our model: it neglects scale information, and when different classes share very similar local structures, the model possibly aligns similar structures across these classes (e.g., large columns contained in both _Lamps_ and round _Tables_, small legs in _Beds_ and _Sofas_, or large cuboidal spaces present in _Beds_ and _Bookshelves_).

### Variants of the Mix-up Methods

To further investigate the effect of the equal-weight averaging proposed in MEnsA, we vary the scaling schemes used when combining the mixup representations. Here, we evaluate three different formulations for mixing, named _Factor_-Mixup, _Concat_-Mixup and _Inter_-Mixup; a short code sketch of all three follows their definitions below.

Factor-Mixup. We mix the feature embeddings from multiple domains together and observe the effect of the scaling factor in the averaging of Equation 3 as:

\[F_{m}^{factor}=\lambda F_{s}+\sum_{i=1}^{n}\frac{1-\lambda}{n}F_{T_{i}}. \tag{8}\]

Concat-Mixup. Instead of summing the feature embeddings of the domains, we consider concatenation of the mix-ups, with the intuition of learning proper weights for each mixup embedding for downstream tasks. We use scaling factors \(\lambda\) and \(\frac{1-\lambda}{n}\) for balancing between source and targets, both in feature and label, as:

\[F_{m}^{concat}=[\lambda F_{s},\frac{1-\lambda}{n}F_{T_{1}},...,\frac{1-\lambda}{n}F_{T_{n}}], \tag{9}\]
\[L_{m}^{concat}=[\lambda,2\frac{1-\lambda}{n},\ldots,N\frac{1-\lambda}{n}], \tag{10}\]

where \([\cdot,\cdots,\cdot]\) denotes the concatenation operation.

Inter-Mixup. In addition to aggregating all the domains together in MEnsA, we also consider a linear interpolation of the target domains, excluding \(F_{s}\), for both feature and label as:

\[F_{m}^{T}=\lambda F_{T_{1}}+(1-\lambda)F_{T_{2}}, \tag{11}\]
\[L_{m}^{T}=\lambda L_{T_{1}}+(1-\lambda)L_{T_{2}}. \tag{12}\]

We devised Inter-Mixup with the intuition that regularizing the target domains alone should help the model learn a mapping that captures target domain-invariant features, promoting better MTDA and thus a good separation between the source and target domains in the latent space.
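A minimal sketch of the three variants (Equations 8-12) in the same PyTorch-style notation as before. Tensor names are illustrative assumptions; the label construction in `concat_mixup` follows the coefficient pattern of Equation 10 literally, whose indexing is ambiguous in the text, and `inter_mixup` assumes two target domains as in Equations 11 and 12.

```python
import torch

def factor_mixup(f_src, f_tgts, lam):
    """Factor-Mixup (Eq. 8): lam * F_s + sum_i (1 - lam)/n * F_{T_i}."""
    n = len(f_tgts)
    return lam * f_src + sum((1.0 - lam) / n * f_t for f_t in f_tgts)

def concat_mixup(f_src, f_tgts, lam):
    """Concat-Mixup (Eqs. 9-10): concatenate scaled embeddings and
    build the corresponding soft label vector."""
    lam = float(lam)
    n = len(f_tgts)
    feats = [lam * f_src] + [(1.0 - lam) / n * f_t for f_t in f_tgts]
    f_m = torch.cat(feats, dim=-1)
    # Label pattern [lam, 2(1-lam)/n, ..., N(1-lam)/n] taken literally
    # from Eq. 10; adjust the indexing if a different scheme is intended.
    label = torch.tensor([lam] + [k * (1.0 - lam) / n
                                  for k in range(2, n + 2)])
    return f_m, label

def inter_mixup(f_t1, f_t2, l_t1, l_t2, lam):
    """Inter-Mixup (Eqs. 11-12): interpolate two target domains only,
    excluding the source, for both features and labels."""
    return lam * f_t1 + (1.0 - lam) * f_t2, lam * l_t1 + (1.0 - lam) * l_t2
```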
We compare the performance of the variants with MEnsA and Sep. and summarize the results in Table 2. As Factor-Mixup is a linear interpolation of all the domains together, the mixed feature representation it produces has large values in each dimension, which may lead to gradients with large magnitude and may hurt accuracy. Unlike MEnsA and the other mixup variants, Concat-Mixup concatenates the feature embeddings from multiple domains. As the number of domains increases, the shared latent space between the mixed domains becomes smaller. It therefore becomes difficult for the model to learn domain-invariant features across all domains, leading to the poorest performance among the mixup variants. Interestingly, we observe that mixing the target domains together, excluding the source domain, in Inter-Mixup performs better on ScanNet, which contains real-world samples. We believe this is because the model is able to learn better domain-invariant features between the real and synthetic domains, as the samples from ScanNet are more sparse and occluded compared to the other domains. Moreover, we show visualizations of the feature embeddings using t-SNE plots in the supplementary material.

## 5 Conclusion

We model the multiple target domains as a random variable and propose to mix the latent space embeddings of all domains in an ensemble average to encode domain-invariant information for 3D point clouds, for the first time in the literature. The mixed representation helps the domain classifier learn better domain-invariant features and improves domain adaptation performance in the multi-target domain adaptation set-up. We demonstrated the efficacy of our approach on the point cloud DA benchmark dataset PointDA-10, showing that it significantly outperforms UDA, STDA and MTDA methods proposed for 2D data.
2301.12490
How does HCI Understand Human Autonomy and Agency?
Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense-of-control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues, and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.
Dan Bennett, Oussama Metatla, Anne Roudaut, Elisa Mekler
2023-01-29T16:54:03Z
http://arxiv.org/abs/2301.12490v2
# How does HCI Understand Human Agency and Autonomy?

###### Abstract.

Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense-of-control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues, and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.

Autonomy, agency, user experience, theory, delegation, Self Determination Theory, boundary objects, mixed initiative

In this paper we take stock of notions of autonomy and agency in HCI literature. Our contribution is three-fold: First, we show that autonomy and agency currently figure not as effective boundary objects in the HCI landscape, but as vague _umbrella constructs_ (Ross, 2018) -- broad concepts, subsuming a surprising diversity of understandings and theoretical underpinnings. We find that "autonomy" and "agency" are often used interchangeably, and given a wide range of different meanings.
These include the _implicit experience_ that our actions are responsible for outcomes (Hoffmann et al., 2017); the _material_ influence we have on a situation (Krishnan et al., 2017), and the _experience_ that outcomes are congruent with one's values (Krishnan et al., 2018). Despite this breadth of understandings, we find that few works give specific definitions of autonomy and agency or examine the relationships of these concepts to expected outcomes. Second, we outline ways in which HCI could move beyond this umbrella approach and develop agency and autonomy as well-functioning boundary objects. Specifically, we identify four aspects which characterise approaches to agency and autonomy in HCI. Works were distinguished by their focus on (1) issues of _self-causality_ and personal _identity_; (2) the _experience_ and the _material_ expression of autonomy and agency; (3) particular _time-scales_; and (4) emphasis on _independence_ and _interdependence_. These aspects can help future HCI research identify relevant prior work, draw meaningful distinctions within understandings of autonomy and agency, and clarify approaches to their support. Finally, we use these four aspects of autonomy and agency to articulate open questions and future directions for work in HCI. We outline avenues to clarify issues of autonomy and agency and their relationships to valued outcomes, and to capitalise on commonalities between research in different areas. We also articulate ethical challenges around technology and human agency and autonomy.

## 2. Background

In technology research, "autonomy" is often linked to robotics and AI [e.g., "autonomous vehicles", 118], and "agent" can refer to software agents which carry out tasks, and coordinate their actions with people (Krishnan et al., 2017). Our review focuses exclusively on _human_ autonomy and agency. Work on autonomous and agentive technologies, and agency in non-humans, will only be discussed insofar as it bears on human agency and autonomy. Autonomy and agency are closely related terms, though they are distinct in etymology, and often -- but not always -- in usage. "Autonomy" derives from the Greek _autos_, meaning "self", and _nomos_, meaning "custom or law" (Krishnan et al., 2017), together taken to mean "self-governance" (Krishnan et al., 2017). By contrast, "agency" has its roots in the Latin _agere_: "to set in motion, drive forward; to do, perform" (Krishnan et al., 2018), reflecting an emphasis on self-causality. While modern scientific and philosophical usage of the terms often follows these etymologies (e.g., 1; 25; 93; 109; 163), the distinction between performing and governing is not always easily drawn in everyday life. Recent philosophy has emphasised that while distinct, the terms are tightly entangled (Ross, 2018), and elsewhere the terms are often treated as (near) synonymous: In HCI, for example, Peters et al. recently defined autonomy as "feeling agency, acting in accordance with one's goals and values" (Krishnan et al., 2017, p. 2). This entanglement is also visible in the long history of the terms in philosophy and other disciplines. When Aristotle articulated his account of the 'eudaimonic' good life, he gave a central role to _autarkeia_, or self-sufficiency in action and thought (Krishnan et al., 2017). This emphasis on individualism developed over several centuries, most notably in Kant's culturally definitive account of individual autonomy as the basis of rational agency (Krishnan et al., 2017).
Recent philosophy has added complexity to such views: both by emphasising the presence of different orders of agency and autonomy at different timescales (Krishnan et al., 2017; 84; 11), and by treating individual expressions of agency and autonomy as inherently tied to their socio-material contexts (Krishnan et al., 2017; 93; 127). This tension between individual and context is also found in modern positive psychology frameworks such as Self-Determination Theory, which outlines an "organismic integrative" approach (Krishnan et al., 2017): a complex relationship of mutual constitution between the individual and their social context, in which the individual not only self-governs and acts, but also autonomously integrates aspects of their context into their identity and behaviour (Krishnan et al., 2017). In such accounts, understanding autonomy or agency is a complex and multi-dimensional matter that requires understanding both causal influence, and how contexts and outcomes inform an individual's identity (Krishnan et al., 2017), goals, and values (Krishnan et al., 2017; 93). Crucially, this emphasis on the complex entanglement of causation, value, and identity is not only relevant to "higher order" motivational and decisional issues. Recent experimental work on low-level sense-of-agency, focusing on the moment-to-moment experience of self-causation, also finds that objective correlates of agency can be impacted by personality factors (Krishnan et al., 2017), and by social factors (Krishnan et al., 2017).

### Agency and Autonomy in HCI

Agency and autonomy have long been focal to HCI scholarship. Shneiderman's 1985 _Eight Golden Rules of Interface Design_ urged designers to "support internal locus of control" (Krishnan et al., 2017, p. 75). CHI workshops in 1996 (Schneiderman, 1996) and 2014 (Schneiderman, 1996) addressed issues of autonomy in current technology use, covering the risks of undermining human autonomy (Schneiderman, 1996), and theory and design strategies related to user autonomy (Schneiderman, 1996). The first of these led to the influential Value-Sensitive Design approach (Schneiderman, 1996). Organisers of the latter developed the METUX (Motivation, Engagement, and Thriving in User Experience) framework (Krishnan et al., 2017), which provides guidelines for design for human well-being, and draws attention to the multi-level nature of autonomy. Autonomy and agency have figured in a range of work aiming to characterise good user experience (UX) (e.g., 68; 69; 159), and they remain central concepts in recent strands of HCI research: For example, in Human-Computer Integration, agency is suggested to be central to categorising and understanding situations where "user and technology together form a closely coupled system" (Krishnan et al., 2017, p. 1). Beyond theoretical work, much recent empirical and design work addresses the question of how and when autonomy and agency should be supported. This work, discussed in greater depth in our literature review below, varies considerably in the kinds of agency and autonomy it addresses -- covering, for example, the experience of causing discrete events during minimal interactions (e.g., 36); the experience of autonomy in episodes of gaming (e.g., 44); but also the lived experience and personhood of dementia sufferers (e.g., 31); and the material fact of control or influence in tasks (e.g., 134). At present it is unclear how such different approaches to autonomy and agency, at different scales of behaviour, relate to one another.
Moreover, the use of near-identical language across this diversity of cases can make it difficult for HCI researchers to identify work relevant to their particular concerns. At the same time, we argue that there is value in understanding the interrelation of these diverse approaches: The tight integration of technologies into our bodies, behaviours and lives (122) has implications for autonomy and agency across multiple levels. Recent work in philosophy of cognition, for example, has indicated risks for integrated technology whereby delegation to technology at sensorimotor timescales (milliseconds) might be expected to impact on autonomy and agency in decision-making and everyday life (115; 171). To understand such scenarios it is imperative to grasp how different aspects of agency and autonomy relate to one another. While recent years have seen repeated calls to deal with autonomy in a nuanced and multi-faceted way (e.g., 26; 64; 75), it remains unclear what a multilevel understanding of agency and autonomy, adequate to the range of work in HCI, might look like. Some calls for nuance have focused only on particular domains, such as multiple sclerosis support (64). The METUX framework (28; 132) offers a less domain-specific approach and outlines several "spheres" of life into which interactions and their implications can be categorised -- from interaction at the interface, to the user's wider life and society. Specifically, METUX calls attention to the presence of multiple levels of need satisfaction, and provides a heuristic for "organizing thinking and evaluation" (132, p. 7) in design work. However, METUX does not focus specifically on autonomy, and the authors do not relate their account to previous work on autonomy and agency in HCI, nor address the full range of issues presented here. Also, the basis for choosing its approach to distinguishing aspects of autonomy -- "spheres" of life -- is not made clear. As the results of our literature review will show, a number of other aspects seem operative in approaches to autonomy and agency, and offer clarity in distinguishing understandings and issues. To our knowledge, no work has attempted to survey the breadth of approaches to autonomy and agency in HCI, to understand how they relate, and how they might coordinate and build upon one another. This is the goal of our paper.

## 3. Review Method

This paper aims to give an overview of how HCI researchers have addressed and understood issues of agency and autonomy. To this end, we review a corpus of papers that spans 32 years of HCI research.

### Source Selection

We searched the ACM Digital Library on May 21st, 2022, for relevant publications from the CHI conference, ACM Transactions on Computer-Human Interaction (ToCHI), International Journal of Human-Computer Studies (IJHCS), Behaviour and Information Technology (BIT), and Human-Computer Interaction, as these have previously been considered high-impact venues in HCI (73). We searched these venues for the keywords "agency" and "autonomy" in keywords and abstracts. This resulted in 207 full papers (see Figure 1 for a breakdown per publication venue).

### Selection of papers for inclusion in the review

We first reviewed abstracts and titles for all papers for whether they concerned human autonomy or agency. Where abstracts focused on non-human agency and autonomy (e.g., of robots, software agents), we reviewed the full text, excluding it if there was no discussion of human agency and autonomy.
Example exclusions include one paper which dealt with the agency of parrots in animal-computer interaction (97), and another focusing on the calibration of autonomous braking systems in cars (57). In total, 46 papers were removed in this way, leaving 161 papers for analysis.

### Coding Procedure

#### 3.3.1. Developing the Coding Rubric

We analysed our corpus using a rubric, which we developed through thematic analysis (76) of a subset of 77 papers. This subset was obtained through an initial search on the ACM DL, restricted to 10 years of research, at CHI and ToCHI.

Figure 1. Summary of the literature review procedure.

The analysis followed the process outlined by Braun and Clarke (2018). Our goal for the rubric was to identify aspects and understandings of agency and autonomy in prior HCI work, whether they operated explicitly or implicitly. We therefore opted for an inductive coding process, which was additionally informed by our reading of the work on agency and autonomy discussed in our background section. The first author read and coded the full texts of all 77 papers in NVivo, focusing on passages dealing with autonomy and agency. Codes were generated at sentence and paragraph level, aiming at granular description of how autonomy and agency were addressed. Through this initial step, 466 distinct codes were identified. These were then thematically grouped into higher-level codes. Finally, the higher-level codes were collated into 7 categories for the analysis of the wider corpus (see Table 1 for descriptions and definitions of terms). At each stage of the thematic analysis, the first author checked that the grouped codes remained descriptive of the accounts in the papers. Also, following recommendations in (Zhu et al., 2017), peer validation was conducted throughout this process through regular meetings among the co-authors to review and clarify coding and grouping decisions.

#### 3.3.2. Coding of the Full Corpus

The first author read and analysed the full texts of all 161 papers, using the coding rubric. For each of the 7 categories in the rubric, the approaches to autonomy and agency in each paper were categorised, and informative quotes supporting this categorisation were recorded. To calibrate analysis, the first and fourth author independently coded the same, randomly selected subset of 13 papers. Differences were resolved through discussion, with only minor revisions to the coding rubric (i.e., clarifying the definitions of the different time-scales). The coding spreadsheets are included as supplementary material.

## 4. Results

In the following, we report our analysis of the 161 papers reviewed. We first summarise the range of subject matters addressed and the benefits and values ascribed to agency and autonomy. We then address how authors articulated their understandings of autonomy and agency, before finally presenting the aspects we found in authors' understandings of agency and autonomy. Table 1 provides a summary and guide to the analysis categories.

### Subject matter

The papers in our corpus covered a wide range of subject matters, from the design and evaluation of input and control methods (Kumar et al., 2017), to issues of everyday living (Beng et al., 2017), family and parenthood (Kumar et al., 2017).
The largest group dealt with issues of ageing and accessibility (n=33): Here autonomy and agency often concerned how individuals could be supported in retaining personhood (Kumar et al., 2017), maintaining social and family ties (Kumar et al., 2017; Kumar et al., 2017), and living independent lives (Beng et al., 2017), a focus also seen in work on parenting (n=6), children (n=6), and in work concerned with HCI4D (n=6). Across all these domains, issues of autonomy and agency commonly focused on material matters, addressing constraints imposed by health, and by social and material conditions. The next largest group focused on gaming, social media, and entertainment (n=32). Here, issues of autonomy and agency usually centred on people's experiences: how free, independent, self-governing, and in control they felt during the interaction. The remaining papers focused on a range of other subjects, from work (n=17) to input methods (n=6).

### Value and Outcomes

Across our corpus it was evident that autonomy and agency are widely considered desirable qualities in HCI. However, in many papers this value was left implicit or unclear (n=57). For example, one paper reported that "users desire agency and control in all aspects of the system" (Kumar et al., 2017, p. 1270) but did not elaborate on why this might be. Other papers sought to understand how technologies could support agency and autonomy, but did not address their value (e.g. (Kumar et al., 2017, 2017)). In the following, we examine in more detail the reasons _why_ agency and autonomy are valued, as well as what beneficial _outcomes_ HCI researchers associate them with.

#### 4.2.1. Agency and Autonomy as Intrinsic Goods

Several papers (n=21) indicated that autonomy and agency have _intrinsic_ value, in and of themselves. Often this was indicated by explicit reference to autonomy or agency as a "basic psychological need" (e.g., (Kumar et al., 2017)), a "fundamental human need" (e.g., (Kumar et al., 2017)), or a "universal need" (e.g., (Kumar et al., 2017, 2017)). Lukoff et al., for instance, argue that "sense of agency matters in its own right. Feeling in control of one's actions is integral to autonomy, one of the three basic human needs outlined in self-determination theory" (Kumar et al., 2017, p. 3). Elsewhere, autonomy was considered a "right" drawing on the "UN Convention on the rights of people with disabilities" (Beng et al., 2017, p. 3249), a "central ethical value" (Kumar et al., 2017, p. 3) drawing on Value-Sensitive Design, or fundamental to human dignity (Kumar et al., 2017). Other papers hinted at the intrinsic value of agency and autonomy by problematising their absence. Irani and Six Silberman (Irani and Silberman, 2017), for example, explored accounts on Turkopticon that describe crowdworkers as "exploited cogs in other people's plans, toiling in digital sweatshops with little creativity or agency" (Irani and Silberman, 2017, p. 4574).

#### 4.2.2. Beneficial Outcomes of Agency and Autonomy for the User

The majority (n=90) of reviewed papers located the value of agency and autonomy in their benefits for users. The nature of these benefits, however, varied considerably from work to work: Many papers valued agency and autonomy as key constituents of users' mental and physical well-being and sense-of-self. Specifically, agency and autonomy were associated with a range of positive outcomes for the user, including improved life satisfaction (e.g., (Kumar et al., 2017)); overcoming barriers to acting or choosing (via, e.g.,
ageing, disability, resources, or other material factors) (e.g., (Kumar et al., 2017, 2017); (Kumar et al., 2017)); and acting as a buffer against adverse effects, such as stress, impaired sleep and diminished social interactions (e.g., (Kumar et al., 2017)). Likewise, notions of agency and autonomy were considered crucial to physiological and physical health (e.g. (Kumar et al., 2017, 2017)), supporting better health outcomes (Beng et al., 2017), and contributing to positive developmental outcomes (Kumar et al., 2017, 2017; Kumar et al., 2017; Kumar et al., 2017). Next, several papers linked autonomy and agency to good user experience (UX) -- often in a rather general manner (e.g. (Kumar et al., 2017, 2017; Kumar et al., 2017)), with no concrete ties to specific aspects or qualities of experience. Some works were slightly more precise, tying agency and autonomy to technology satisfaction (e.g. (Kumar et al., 2017)), sense of control (Kumar et al., 2017, 2017; Kumar et al., 2017), or some measure of game enjoyment (Kumar et al., 2017, 2017; Kumar et al., 2017). Other works linked greater agency and autonomy to a greater sense of meaning for users, for instance, by allowing users to create their own meaning in exhibitions (Wang et al., 2017) and educational settings (Wang et al., 2017). Besides individual meaning, autonomy and agency in interpersonal interactions were associated with more meaningful communication (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) and facilitating group meaning (Wang et al., 2017). Agency and autonomy were also valued for a range of desirable pragmatic outcomes for the user, from enhanced learning (Wang et al., 2017), to improved privacy and security (e.g., 63; Wang et al., 2017; Wang et al., 2017). One paper noted that "autonomy is like the 'muscle' of privacy" (Wang et al., 2017, p. 358), and in line with this a number of works explored the role of autonomy and agency in supporting privacy (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Concurrently, some works indicated tensions between agency and autonomy versus safety, security and privacy, e.g., in the case of children or people with diminished capacities. These works (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) emphasised that granting full agency and autonomy might leave people free to pose risks to their own physical or digital safety, and suggested a need to balance these values. Finally, 20 papers suggested that user autonomy and agency contribute to desirable practical outcomes in tasks and activities (e.g., 33; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), such as increased feelings of responsibility (Wang et al., 2017), or trust within organisations (Wang et al., 2017).

#### 4.2.3. Beneficial Outcomes of Agency and Autonomy for Other Stakeholders

Beyond the individual user, benefits of autonomy or agency could also accrue to social groups, organisations and companies (n=23). In some, but not all cases, these beneficial outcomes appear likely to align with the users' own values and choices -- for example, where autonomy and agency were seen as supporting meaningful sharing in communities (Wang et al., 2017; Wang et al., 2017) and families (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). In other instances, organisations and companies benefited from users' autonomy and agency, with mixed outcomes for the user.
In some works, for instance, organisations benefited from efficiency and motivational outcomes, while individuals also benefited materially in the form of improved health outcomes (Wang et al., 2017), financial benefits (Wang et al., 2017), and lifestyle flexibility (Wang et al., 2017). In other cases, the benefits to the user were primarily experiential, while the organisation saw material benefits such as improved work outcomes, and stronger attachment to games, products and services (e.g., 8; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Finally, in some instances, it was not clear that the user gained benefits that they would have chosen for themselves. For example, where organisations granted a "voice" to users, while exploiting the appearance of inclusion to provide legitimacy for existing goals (Wang et al., 2017; Wang et al., 2017). In another case, workers were afforded autonomy in task scheduling and decision making, but any benefits of this to the worker were counterbalanced by the burden of additional stress, responsibility, unaccounted labour, and skipped toilet breaks (Wang et al., 2017).

\begin{table} \begin{tabular}{|p{42.7pt}|p{99.6pt}|p{113.8pt}|p{142.3pt}|} \hline & **Category** & **Description** & **Details / Definitions** \\ \hline & Subject Matter & What was the domain or subject focus of the paper? & Assigned based on keywords and paper topic. (See additional materials.) \\ \hline & Value and Benefits & What values and benefits did authors associate with autonomy and agency? & * _Not clear_ (n=57). * _Intrinsically good_ (n=21). * _Benefits for the User_ (n=90). * _Benefits for Other Stakeholders_ (n=23). \\ \hline & Articulating Understanding & How did authors articulate their understanding of agency and autonomy? & * _Explicit definition_ (n=58). * _Meaning indicated via associative sentences_ (n=47). * _Other_ (n=55). \\ \hline Aspects & Self-Causality and Identity & Were autonomy and agency described in terms of the user's causal involvement (acting and making decisions), and/or in terms of the user's self and values? & * _Executing_: personal and causal involvement in tasks, processes, outcomes (n=122). * _Decision_: exercising personal choice, and making decisions about tasks, processes, outcomes (n=99). * _Self-Congruence_: outcome in line with user's values and goals, regardless of causal involvement (n=41). * _Self-Construction_: impact on identity, values, goals (n=24). \\ \cline{2-4} & Material and Experiential & Did papers focus on the material expression of autonomy and agency, or on their first-person experience? & * _Material_: e.g., discussion focusing on the user's _material_ ability to effect some outcome, manage self-care, etc. (n=113). * _Experiential_: e.g., discussion focusing on the user's first-person experience, sense, or feelings of agency and autonomy (n=79). \\ \cline{2-4} & Time-scales & What were the time-scales of the relevant experiences and activities? & * _Micro-interaction_: a few seconds or less (n=13). * _Episode_: up to a few hours (n=113). * _Life_: days or above (n=77). We used direct statements of time where available (n=29), inferred from context where possible, and did not code where unclear. \\ \cline{2-4} & Independence or Interdependence & Did the discussion of agency and autonomy emphasise independence, or interdependence? & * _Independence_: emphasis on acting, thinking, being independent of others (n=50). * _Interdependence_: emphasis on reliance on others, acting as part of a group, or the social grounds of agency and autonomy (n=54). Both of these could coincide in the same paper. \\ \hline \end{tabular} \end{table} Table 1. Categories of analysis and main findings.
For all “aspects”, papers could be coded in multiple categories. See supplementary material for detailed coding for each paper in the corpus.

### How Did Authors Articulate Understandings of Agency and Autonomy?

Only one third of papers in our corpus (n=58) gave explicit definitions of autonomy or agency. Lukoff et al., for example, defined sense-of-agency as "an individual's experience of being the initiator of their actions in the world [...] broken down into feelings of agency, that is, the in-the-moment perception of control, and judgments of agency, that is, the post hoc, explicit attribution of an action to the self or other" [110, p. 3]. However, even some explicit definitions were rather unspecific ("Human autonomy is the ability to be one's own person" and impacts well-being [5, p. 10]) or tautological (autonomy as "a sense of freedom and autonomy to play the game as desired" [180, p. 6]). In 47 papers, definitions were implied via associative statements. For example, the work by da Rocha et al. [39] contained a number of statements which together suggested that they used "autonomy" to refer to a player's ability to act for themselves without support in gaming sessions (e.g., "The goal is to improve a player's autonomy and game enjoyment by removing barriers that can hinder their ability to learn a game or engage in gameplay." [39, p. 10]). In another 55 papers, the meaning of autonomy or agency was less clear, or rather inconsistent. For instance, Goncalves et al. described agency in terms of decision making, distinguished from mere execution: "decision making on the part of both players [...] Avoid the perception that one player is just an 'executor' of the game or another player's orders" [62, p. 5]. However, later statements in the same paper specifically associated task _execution_ with agency, noting that the players who gave orders "would like to have more _agency_ in the action (e.g. in shooting monsters, in rescuing people)" [62, p. 8, our emphasis]. A paper grounded in Actor Network Theory first stated that agency arises from network effects, and is not the property of individual artifacts [80, p. 828], but later discussed "the agency of the artifacts" [80, p. 829]. Finally, some papers [e.g., 2, 47, 52, 100, 173] explicitly indicated in the abstract or introduction that agency or autonomy was a significant theme of the paper but then never used the terms directly again. Very few papers explicitly engaged with existing theoretical accounts of agency and autonomy. However, in some papers that did, a certain understanding of agency or autonomy was implicit via the concepts and theories cited. Self-Determination Theory (including METUX), for example, was most frequently mentioned (n=17), particularly in works focusing on the player experience [e.g., 44, 96, 147] or user experience [e.g., 129, 130]. Research focusing on the immediate sense of control over actions often resorted to concepts from cognitive science, such as the notion of intentional binding [e.g., 36, 106, 143]. Other conceptual frameworks mentioned include Actor Network Theory [e.g., 4, 58], value-sensitive design [e.g., 7], social anarchism [92], and Kantian philosophy [64].

### Aspects of Autonomy and Agency

A few papers in our corpus explicitly noted that agency and autonomy had "many different facets" [64, p. 12]. In line with this, we identified a variety of facets of autonomy and agency in the reviewed works.
By analysing the corpus as a whole, we found that these facets, and authors' approaches to them, could be usefully organised around four broad aspects: (1) A focus on _self-causality_ or _identity_. (2) A focus on the _experience_ or _material_ aspects of agency and autonomy. (3) The _time-scale_ over which autonomy and agency play out. (4) The degree to which either _independence_ or _interdependence_ is emphasised. These aspects are summarised and defined in Table 1 and discussed in detail in the subsections below.

#### 4.4.1. Self-causality and Identity

Accounts of autonomy and agency in our corpus contained elements of both _causality_ (related to the level and directness of the user's causal involvement) and _identity_ (related to the user's self and values). Nearly all discussions of autonomy and agency (n=159) dealt with causal issues to some degree, while just under one third (n=54) also dealt with issues of identity. We discuss these separately below.

_Causality._ Causal aspects of agency and autonomy divided into cases of _execution_ (concerning active and direct material involvement in tasks and outcomes) and _decision_ (concerning choice and decision making). Discussions in terms of _execution_ varied considerably in complexity, and were found in the majority of papers (n=122). Some papers focused on very limited interactions, for example, the users' sense of having caused individual events by simple interaction gestures such as pushing a button, with no element of personal choice or decision [e.g. 89, 90, 134, 154]. Most papers focusing on execution approached agency and autonomy in a less atomistic manner: focusing, for example, on narrative choice [e.g., 30] and control of action in video games [e.g., 17, 88, 96, 147], people's involvement in everyday personal activities [162, 23], or data analysis [116]. Roughly two thirds of papers (n=99) discussed not only execution but also _decision_ and choice. Often the nature of the activity under study made it hard to separate decision and execution (for example, agency in movement, directing attention, and sense-making in a museum [74]), but there were notable exceptions to this. Lazar et al., for example, distinguished "the capacity to act" from "role in decisions" [102, p. 1]. Similarly, Inman and Ribes linked these aspects of agency with different approaches to design, suggesting that "seamless" designs "[grant] agency to users by lowering technical barriers of entry, [and] facilitating quick access to common operations" [78, p. 2], while "seamful" designs "allow users to make up their own minds" [78, p. 3]. Meanwhile, 36 papers focused significantly on decision or choice, giving little or no attention to execution. In some of these cases the tasks under study (such as giving consent [85]) were decisional in nature, but others discussed tasks with more substantial executional components, including gaming [42, 44], and use of assistive technologies [64, 77]. Finally, some works clearly distinguished between executional and decisional factors and saw each as having a different impact. Two papers, for example, dealt with multiplayer games where decisional and executional roles were split between players, with one directing and the other acting. In both cases there was some ambiguity in how these aspects differently supported or undermined agency and autonomy [88, 62]. Karaosmanoglu et al., for example, stated both that "agency/control [was] transferred to the player in charge of decision making" [88, p. 10]
and that in the executional role "agency increased as they had a way to noticeably affect the game world" [88, p. 12]. A number of other papers suggested that having a role in decision without involvement in execution resulted in diminished agency: Performing activities for oneself was seen to support agency and sense-of-self for people in supported living [102, 169]. Loss of executional agency was associated with loss of flexibility and adaptation both during supported communication [162], and in interaction with intelligent technologies [108, 155, 176].

_Identity._ Some papers addressed agency and autonomy primarily in terms of identity. Again, these divided into two further categories: _self-congruence_ (n=41), which concerns the alignment of outcomes with users' values and goals, regardless of their involvement in decision or execution; and _self-construction_ (n=24), concerning effects on users' identity, values, and goals. Deterding [44], for example, distinguished self-congruence from both decision and execution, stating that Self-Determination Theory "does not equate autonomy with [...] the presence of choice [...] Choice is 'merely' a facilitating condition that makes it more likely that individuals find a self-congruent activity" [44, p. 3932]. However, despite this theoretical distinction, all but three [164, 4, 12] of the papers which emphasised autonomy in these terms also retained some focus on executional and decisional aspects. One paper explicitly distinguished between aspects of causality and self-congruence of outcomes, and emphasised that the former supported the latter: "as individuals express _technical agency_ by participating they can then advance their objectives in conversation, [...] _colloquial agency_" [162, p. 2, our emphasis]. Often self-congruence of outcomes was emphasised in cases of active self-expression by the user [e.g., 5, 49, 94, 108], or action and decision in pursuit of values and goals [e.g. 30, 129, 102]. Some papers suggested potential difficulties in ascertaining when activities and outcomes were self-congruent, since multiple conflicting goals and values might weigh on the same activity for a single user. Three papers referred to cases like this as "paradoxes" of agency [110, 113, 14]. In a study of how contexts support autonomy in gaming [44], one participant reported playing multiplayer game sessions with a regular competitive gaming group (or "clan"), during which he could not "decide voluntarily to leave" [44, p. 3935]. The author related this to autonomy by emphasising that this outcome was not congruent with their "spontaneous desire" [44, p. 3935]. However, given the player's wider commitment to social and competitive gaming, it seemed likely that the alternative outcome would not be congruent with _longer term_ values and goals. Here both choices might be self-congruent, and different time-scales seemed important in distinguishing the impact of the different motives at play (see section 4.4.3 below for more discussion of these issues). In 24 papers, discussion of agency and autonomy concerned the genesis of users' values and goals -- what we termed issues of _self-construction_. As one paper put it: Autonomy is the ability "to be one's own person, to be directed by considerations, desires, conditions, and characteristics that are not simply imposed externally upon one" [66, p. 9]. Such papers focused on how technologies can "shape the identity" [117, p. 3] of users, whether or not this is intentional or desired. Mentis et al.
suggested that assistive technologies which neglect psychological, cultural, and emotional aspects risk "reifying [disabled peoples'] dependency and their loss of self" [117, p. 3]. Other papers discussed more positive examples of self-definition and self-change: discussing, for example, the autonomous integration of social values in the context of self-control apps for children's video viewing [70], or how self-tracking systems for chronic health conditions could support agency by supporting users' reflection, self-appraisal, and self-awareness [14, 71].

#### 4.4.2. Material and Experiential

Papers in our corpus also addressed _material_ and _experiential_ aspects of agency and autonomy. We use "material" here to refer both to the material expression of autonomy and agency (e.g., as Coyle et al. note, "the fact of controlling an action" as distinct from "the immediate sense or experience of having done so" [37, p. 2026]), and to wider material factors which may impinge on this (e.g., being subject to coercion, lacking economic means, power or influence). Papers across our corpus discussed both material and experiential aspects, though few explicitly distinguished between them. Many papers (n=80) focused exclusively on material aspects of agency and autonomy. Such papers discussed, for example, the material ability of people to act independently or under their own volition (e.g., support for personal mobility [23, 128], communication [102, 45, 10], or everyday living [114, 157, 169]), or the material ability to pursue one's own preferences and choices (e.g., at work [166, 8, 106, 79, 14], in social engagements [85], or with respect to health [50, 53, 123]). A smaller number of papers (n=46) focused exclusively on experiential aspects -- for example, the sense-of-agency when triggering an event [178], or the experience of autonomy while playing computer games [17]. Some papers discussed examples of material agency or autonomy with no clear experiential component: These papers focused on the autonomy and agency of organisations rather than individuals [e.g., 18, 175], and others drew on Actor Network Theory's account of agency -- a network effect of relations between human and non-human actors, sharply distinguished from intentionality [151, 58, 80, 107]. Finally, 33 papers discussed both material and experiential aspects. Here, it was sometimes emphasised that these aspects might not align with one another. A number of papers indicated that the sense-of-agency in quite minimal interactions could be manipulated by manipulating sense-of-ownership [89], haptic feedback [152], or the timing of events [89, 90]. Some noted that users can exhibit sense-of-agency with respect to situations where they have no causal influence [36, 89, 90]: Two papers showed that when the electrical stimulation of muscles was used to initiate the user's response to an event, the user's experience of agency was increased by delaying the stimulation slightly (while still pre-empting the user's own response) [89, 90]. Several other papers drew on Self-Determination Theory, which (as discussed above) emphasises that sense of autonomy is supported by outcomes congruent with values [135, 139], raising the _possibility_ that it may be manipulated without material change in the user's independence, range of choice, or influence on events [44]. However, as noted above, _in practice_ in all these cases, material and experiential aspects of agency or autonomy did not diverge.

#### 4.4.3. Time-scales
Aspects of autonomy and agency were often differentiated by the time-scales of activities and experiences. We found that papers addressed three broad time-scales: _micro-interactions_ (autonomy and agency experienced or exercised over a few seconds or less), _episodes_ of interaction (seconds to hours), or autonomy and agency at longer time-scales in _life_ (days to years). The character of autonomy and agency, and authors' approaches to them, differed significantly between these three time-scales. While some features -- such as a focus on self-causation -- were consistent across all scales, other features differed significantly. While these differences were apparent over the corpus as a whole, only a few papers _explicitly_ addressed time-scale distinctions, or addressed issues of autonomy and agency at more than one time-scale. Some, however, did address issues of time-scale, noting for example that "immediate autonomy... can have an effect on events in the distant future; and vice versa, long-term decisions can have an impact on the very present." (64, p. 3). Such works pointed to a range of tensions, trade-offs, and synergies across time-scales.

72 papers focused purely on _episodes_ of interaction -- shorter than a day in length. All of these addressed issues of self-causality (i.e., execution of actions (n=60) and decision or choice (n=42)). Relatively few (n=13) also discussed identity-related aspects of agency and autonomy. Substantial discussion of self-congruence and identity was mostly limited to a few studies in which users were interviewed about their daily lives, or which made use of observations and experiments in the field (e.g., 40; 70; 153). While these papers still focused on episode-level aspects of autonomy and agency, the studied episodes were embedded in participants' everyday lives, allowing results to be informed by this. For example, Deterding discussed how individual episodes of life-situated video-game play were impacted by contextual factors, such as the need to "make time" for play by clearing other tasks (44). Other examples deployed apps for emotional communication to families and couples (108) for a number of weeks, or combined lab-based experiments on the use of assistive communication devices with user interviews (162).

Meanwhile, 35 papers focused solely on time-scales in wider _life_. Again, all these papers focused to some degree on issues of self-causation, whether executional (n=25) (e.g., "re-interpreting [a generated conversational prompt] to take the lead in a conversation" (51, p. 8)) or decisional (n=20) (e.g., decisions about parenting in divorced families (18)). Half of these papers (17) also discussed identity-related aspects of autonomy, such as how assistive technologies "shape the identity of the very people they are to help" (117, p. 3). Some of these papers indicated that length of engagement might have implications for agency and autonomy, via the potential to imbue technologies and places with meaning over time (4), or via habit formation and reflective self-change (6).

Finally, a number of papers addressed "micro-interactions", under a few seconds in length (n=13), dealing with the _experience_ of self-causality while triggering individual events (89; 90; 134; 154). This work focused exclusively on this very short time-scale, isolating agency in _execution_ from issues of _decision_ and _self-congruence_, and more generally from any wider interaction context.
Six of these papers (15; 16; 35; 36; 37; 106) focused on the specific neuro-scientific construct _sense-of-agency_, which refers to the _implicit_ sense of authorship of an action. The operationalisation of this via millisecond-level variations in a reaction-time metric -- so-called _intentional binding_ -- was seen as promising for understanding "the awareness of owning the actions' outcomes" (35, p. 2426), and "how people experience interactions with technology" (37, p. 2025). However, two recent papers in this group noted the uncertain relationship of this measure to other constructs of agency, and their results indicated that the relationship of temporal binding to conscious user experience remained unclear (15; 16). _Relationships between time-scales_. Just under a quarter of papers (n=41) discussed autonomy and agency in both short _episodes_ of interaction, and also on longer time-scales. In most examples, one time-scale or the other received only cursory discussion. Wan et al. (169), for instance, focused on day-to-day agency of people with dementia, though in the course of this they described some shorter episodes in which agency was at issue. However, some works addressed tensions and trade-offs between agency and autonomy at different time-scales. Three papers described apparent "paradoxes" (110, 113) whereby restricting agency or autonomy also seemed to increase agency or autonomy. These cases spanned a range of contexts -- from digital self-control (110) to spousal surveillance in dementia care (113), and in all cases, the restricted and supported aspects of agency were characterised by different time-scales: participants accepted restrictions in _episodes_ of interaction in order to gain agency or autonomy in their wider life. For example, users blocked immediate access to digital distractions to achieve longer-term personal goals (110), and people with mild cognitive impairment accepted blocks or vetoes on access to risky websites to retain their longer-term freedom to browse the web independently (113). As well as trade-offs and tensions, some papers indicated synergies across different time-scales. Several papers described how episodes of executional agency in crafting, caring for others, or simply performing everyday activities independently, could support agency in self-definition in wider life. For example, in the context of digital social sharing for people with dementia, Lazar et al. referenced arguments that "low-level, proprioceptive control over one's environment is part of the creation of agency" (102, p. 2157) in wider life. #### 4.4.4. Independence or Interdependence? Lastly, several papers placed individual independence at the centre of their discussions of autonomy and agency (n=50). In line with a statement by Güldenpfennig et al., we found that "autonomy" was often "translated to 'independence' without any further differentiations" (64, p. 13). Meanwhile, we found that other papers emphasised _inter_dependence (n=54): focusing on how social contexts support agency and autonomy, and provide the necessary horizon against which these ideas must be understood. This tension between independence and interdependence was most notable in discussions of "autonomy", though also present in some discussions of "agency". In some work "autonomy" appeared as a near synonym for "independence". For example, Garg noted that "autonomy-related changes occur in the relationship between parents and children..., as teenagers try to assert their independence" (59, p.
1), and Partala defined autonomy as "to actively participate in determining own behavior without external influence" (130, p. 788). Along similar lines, a number of papers addressed _trade-offs_ between independence and interdependence: emphasising how the role of others could undermine independence and thereby agency or autonomy. In a cooperative gaming scenario, for example, it was suggested that "tight dependence on each other led some players... to report a lack of autonomy" (62, p. 11) (note though that no comparison was made to a version of the game in which these roles were more independent). Other works discussed independence with regard to barriers to agency and autonomy which followed from health conditions, or other material factors. Here, independence was emphasised insofar as there was a risk that it could be lost, as subjects were impeded in some way from acting independently for themselves, resulting in meaningful impacts on their lives. One paper, for example, described work "to make board games accessible, ensuring the autonomy of players with visual impairment" (Sundhi et al., 2017, p. 2), and other papers discussed autonomy of mobility (Sundhi et al., 2017; Sundhi et al., 2018). However, such focus on independence was often situated against a wider backdrop of interdependence (Sundhi et al., 2017; Sundhi et al., 2018; Sundhi et al., 2019; Sundhi et al., 2019). Characteristic of this, one paper noted that while people might wish to "accomplish as much independence as possible... social networks contribute significantly to every person's autonomy and welfare" (Sundhi et al., 2018, p. 10), and that "the liberal-individualist account of autonomy over-emphasizes physical independence and does not sufficiently recognize the interdependency of all people" (Sundhi et al., 2018, p. 128). Elsewhere, papers described ways in which interdependence might support agency and autonomy: one paper found that individuals' sense of agency in online posting was boosted when they saw evidence of others' online activity (Sundhi et al., 2018), and several papers noted the crucial role played by contexts-of-living in determining the autonomy outcomes produced by deployed technologies (Sundhi et al., 2017; Sundhi et al., 2018; Sundhi et al., 2018; Sundhi et al., 2018; Sundhi et al., 2019; Sundhi et al., 2019). ## 5. Discussion Our results demonstrate a consensus among HCI researchers that there is value in supporting human autonomy and agency, and that these concepts are key to fundamental questions about user experience and the impact of technologies on identity and personhood. However, behind this consensus we find considerable diversity in _how_ these concepts are valued, and even in understandings of _what_ agency and autonomy entail. As such, our findings indicate that autonomy and agency currently function as resonant, albeit vague, _umbrella concepts_: gathering together a wide range of perspectives on what may or may not be related phenomena (Sundhi et al., 2019). Indeed, our analysis revealed structure and meaningful distinctions running across the corpus: specifically, concerning the time-scales of behaviour; the focus on experiential or material issues; the focus on causal or decisional involvement, or on issues of identity; and in terms of how strongly independence was emphasised. However, we found these distinctions mostly operated implicitly.
It was rare for authors to explicitly articulate particular aspects of agency and autonomy, let alone discuss coordination, tensions, and trade-offs between them. Previous work has argued that such vaguely defined umbrella concepts "challenge our ability to accumulate and communicate knowledge" (Sundhi et al., 2018, p. 2), and we found evidence of this in our corpus. Within particular domains, we found some evidence of authors building upon one another's findings (e.g., dementia care (e.g., Sundhi et al., 2018), low-level agency in minimal interactions (e.g., Sundhi et al., 2018)). However, we found few cases where work built on findings outside its own immediate domain or context. This was the case despite evidence of considerable commonalities between research in different contexts, and points to missed opportunities. For example, we found papers dealing with digital fabrication (Sundhi et al., 2018), internet-of-things (Sundhi et al., 2018), and accessible technologies (Sundhi et al., 2018) all addressing how to balance executional and decisional aspects of autonomy and agency when delegating to technology and other people. It seems likely that there are opportunities for each of these communities to learn from the findings of the others. One response to this situation might be to abandon efforts to maintain and develop these wider understandings of agency and autonomy, and instead isolate particular aspects as a series of distinct, well-defined, and independent constructs (e.g., Sundhi et al., 2018). However, in line with previous discussions of the 'usability' construct (Sundhi et al., 2019), we do not see this as the best way forward. We suggest that the problem is not simply that autonomy and agency are umbrella concepts. Many important scientific constructs -- including affect, emotion and socio-economic status -- function in this way, gathering together a complex of different perspectives and subconcepts. While this limits the treatment of such concepts as single unitary constructs, it can still support communication and the identification of important research topics (Sundhi et al., 2019). Instead, we argue that, in addition to an occasional local lack of specificity and precision, a larger problem is that HCI currently _lacks coordination between different approaches to agency and autonomy_. We suggest there is value in developing existing approaches to autonomy and agency: partly by clarifying individual aspects and local understandings, but also by clarifying the relationships between these aspects, and coordinating understandings and issues across contexts and communities in HCI, to help autonomy and agency function as _boundary objects_. ### Clarifying Agency and Autonomy as Boundary Objects Boundary objects are flexible yet robust concepts that can help coordinate the perspectives and activities of different communities of practice (Sundhi et al., 2018), without requiring strict consensus on precise definitions (Sundhi et al., 2018). Previous work has emphasised that, to function well, boundary objects should be sufficiently flexible to meet the informational and communicative needs of the communities involved (Sundhi et al., 2018). Our review suggests that agency and autonomy already fulfil this criterion, as illustrated by their continuing significance in a range of different issues and contexts, over three decades of HCI research.
However, while such interpretive flexibility is a well-known property of boundary objects (Sundhi et al., 2018), Star and others have emphasised that interpretive flexibility is not sufficient on its own (Sundhi et al., 2018); boundary objects can fail to function well if they are not also robust\({}^{1}\) enough in usage to retain a broader identity across interpretations. This is required to support coherence across intersecting communities (Sundhi et al., 2018). For example, in sustainability studies, it was found that the concept of "resilience" did not succeed as a coordinating boundary object (Sundhi et al., 2018), since different communities' understandings did not overlap in important ways. Footnote 1: Brian Cantwell Smith suggests that this robustness might consist in some degree of consistency in the ontologies of the multiple communities involved, but points to the difficulty in specifying this further without falling into naive realism (Dip, 192-27-20). A similar problem is addressed in Hasok Chang's recent pluralist scientific realism by focusing on what he calls the 'operational closure' of theories and the manner in which they are _constrained_ by reality (Dip, 169-32). While we believe these resources can inform future discussion of coordination between communities in HCI, that discussion is beyond the scope of this paper. What do robustness and flexibility mean in practice? Star (Sundhi et al., 2018) emphasises that boundary objects rely on both 1) the refinement of the concepts to address local concerns, and 2) the coordination of these local understandings with the concept's wider identity. In terms of autonomy and agency in HCI, we suggest this might mean 1) _local_ attention by researchers on the aspects of autonomy and agency which are relevant to concerns in their area of work, and 2) _integrative_ work to relate these aspects to one another, and to situate local findings and understandings in the context of wider understandings. Our analysis points to concrete ways in which the HCI community might approach these two activities, to develop autonomy and agency into better-functioning boundary objects: #### 5.1.1. Articulate Locally Relevant Aspects of Agency and Autonomy First, to advance _focused_ local work within particular communities and contexts, there is a need to be more explicit in identifying, defining, and clarifying the aspects of agency and autonomy which are at issue in particular areas of HCI research. Our findings indicate that currently only a minority of papers provide explicit working definitions of agency and autonomy that are specific to the work carried out. Fewer still articulate distinctions within these definitions that are relevant to their work. In future, more work might follow the example of papers in our corpus (e.g., 64, 162) and clarify issues in their domain by articulating distinctions between aspects of agency and autonomy: for example, between decisional and executional autonomy (64), or between technical (closely related to our category of _execution_) and colloquial agency (mapping to our categories of decision/self-congruence) (162). Such distinctions were operative in much work in our corpus, but left implicit. Articulating them explicitly could provide a common vocabulary to help coordinate efforts, identify design strategies, sensitise researchers to relevant issues, and support researchers in identifying relevant work.
The aspects we articulate in our review (self-causality and identity; experiential and material; time-scales; independence and interdependence) offer one potential source of such distinctions, grounded in existing work in HCI, and sometimes (as in the case of decisional and executional aspects (e.g. 64, p. 3)) directly echoing categories which are already in use. Note that these aspects are considered neither definitive nor exhaustive. Nonetheless, we found that they captured important distinctions across 30 years of HCI research. We further suggest that they are sufficiently flexible and descriptive to avoid bias towards any particular theoretical perspective or approach. The METUX framework offers another category of distinction, describing seven _spheres_ of life which are expected to be relevant to issues of wellbeing (132), and though we suggest this lacks the granularity of our four aspects, there may be cases in which a single dimension is adequate and even preferable. More broadly, Self-Determination Theory, the theoretical basis of METUX, offers a wide range of theoretical resources for understanding autonomy on different scales. At present only limited aspects of SDT have found application in HCI (largely SDT's Basic Needs Theory component (136), and certain standardised questionnaires). Future research might focus on underused resources such as Organismic Integration Theory (138) and Cognitive Evaluation Theory (137), which might bring clarity to understandings of longer-term autonomy, users' integration of values, and wider contextual factors which might impact on autonomy (160; 9). #### 5.1.2. Investigate Relationships Between Different Aspects Future research might also give more attention to _integrative_ work: understanding how particular aspects of autonomy and agency relate to one another, and how local understandings fit into the wider account of agency and autonomy in HCI. While focused work on isolated aspects of agency and autonomy is currently common (if often lacking in explicit definition), very little work attempts to integrate and relate different understandings and aspects of agency and autonomy. First, we found that it is rare for papers to address issues of autonomy and agency on more than one broad time-scale. This leaves a range of open questions: Does support for agency and autonomy in short episodes of interaction impact upon wider life (e.g., the potential "contradictory parallel effects" highlighted by the METUX framework (132, p. 7))? Conversely, do episodes of autonomy and agency in the lab function the same way as similar episodes when they are situated in participants' everyday lives? Does that wider context of autonomy experience in life override, or otherwise modulate, episode-level experience? Some papers in our corpus provide a model for addressing such questions by studying episodes of agency situated in participants' wider lives (e.g., 40, 70, 78, 153), or by using field studies and interview studies to contextualise lab results. Another avenue is to leverage the wider resources of Self-Determination Theory (discussed above) to understand how other longer-term factors -- such as value integration and contextual factors -- impinge on episode-level experiences of autonomy.
The recently validated User Motivation Inventory might be useful here (24), as might the example of Tyack and Wyeth's work on autonomy in gaming (161), which augments single-session questionnaire-based studies with semi-structured interviews to address autonomy experience in everyday contexts of play. Beyond time-scale, our findings show that other aspects of autonomy and agency were also mostly dealt with in isolation. Several papers focused on low-level sense-of-agency in micro-interactions (e.g. 89, 90, 134, 154), operationalised via a metric called _temporal binding_ (i.e., a reaction-time based measure, which is expected to correlate with the user's _implicit_ sense of having caused an action (121)). To date, HCI research has focused on this construct in isolation from factors such as decision, self-congruence of outcomes, and social context. This is an understandable limitation in early work. However, our review shows the important role of these factors in HCI's understandings of agency and autonomy, and recent work in cognitive science indicates that these factors (in particular social (112; 167) and personality factors (67)) may affect temporal binding. One route to address this in future work is to investigate the relationship between temporal binding and predictions drawn from Self-Determination Theory's account of autonomy (135). This might, for example, mean introducing meaningful (self-congruent or incongruent) choices into existing experimental paradigms. Work in this direction might also help clarify open questions around how temporal binding measures relate to UX (15). Finally, we suggest there is value in continuing wider inter-community efforts to coordinate understandings -- extending the previous CHI workshops on autonomy (26; 56), and our own work in this paper. Such work can serve to take stock of the interrelations between different communities within HCI for whom autonomy and agency are important concepts. If such events and projects are framed in a sufficiently pluralistic manner, they can support the "tacking back-and-forth" (103, p. 605) between local and wider community perspectives necessary to support the maintenance of boundary objects: helping articulate potentially valuable consistencies and commonalities in approach. ### Clarifying how Aspects of Agency and Autonomy Relate to Outcomes While many of the reviewed works associated autonomy and agency with a wide range of positive outcomes, we often found a lack of specificity in _how_ these outcomes might be linked to particular expressions of autonomy and agency. This was least problematic in work which pointed to relatively modest conclusions, e.g., linking outcomes readily measurable at episode level (e.g., task performance, 149) to autonomy and agency during single episodes of interaction. Here, even if it was not directly stated which aspects of autonomy and agency were considered relevant, this was often evident via the nature of experimental manipulations (e.g., 3), or via questionnaires which addressed well-defined operationalisations of autonomy (e.g., 19). However, some such scenarios raised important questions. For example, in work which operationalised sense-of-agency via temporal binding (e.g., 15), the association between temporal binding and UX was generally asserted rather than directly tested. Only in more recent work was the relationship between temporal binding and user experience directly examined, and ultimately found to be somewhat unclear (e.g., 15).
Elsewhere, the relationship between agency, autonomy and UX was unclear in other ways. In some instances, autonomy was treated as an _antecedent_ of good UX (e.g., 3, 37); in others as a _component_ of good experience, and a positive experiential phenomenon in its own right (e.g., 96). This suggests that the UX community lacks agreement on clear models of how these constructs relate to one another. In line with this, the links between autonomy or agency and UX were often articulated in quite general terms. Very few papers investigated how autonomy and agency related to specific components of user experience, such as sense-of-presence (81). Future work might aim to move beyond the current understanding of agency and autonomy as components of generically positive experience, and instead seek to better understand how and why specific aspects of autonomy and agency foster specific aspects of positive experience. Outside UX and player experience, papers often connected support for agency and autonomy to "high value" outcomes, such as sense-of-self and well-being (e.g., 110). However, this connection was always drawn on the basis of prior work in other domains, and not explicitly examined in the case of technology use. Again, reviewed works often left it unclear which aspects of autonomy and agency were expected to lead to these positive outcomes. Exceptions to this were found in some qualitative field work, addressing, for example, how low-level "proprioceptive" agency in execution could be expected to lead to higher-level agency outcomes and well-being (102, p. 2157), or in work that reported on the complex trade-offs in well-being support between what we have characterised as execution, decision, and self-congruence (64). However, as we have noted, autonomy and agency were generally studied in single sessions of interaction, without addressing the user's wider life context, where high-value outcomes like well-being, motivation, and sense-of-self play out. In fact, our corpus contained no works that compared such outcomes pre- and post-session, or observed them over long-term studies. Of course, it is challenging to address long-term outcomes in HCI studies and interventions (95). However, without such work, it is hard to see how we can meaningfully investigate the impact of agency and autonomy on outcomes such as well-being and sense of self. In addition to the qualitative work discussed above, we suggest that the wider resources of Self-Determination Theory, beyond Basic Needs Satisfaction, might be drawn upon here, and that work by Tyack and Wyeth provides an exemplary and achievable model for such future work (161). Another complex question raised by our analysis concerns how we can disentangle experiential and material aspects of autonomy and agency and their impact on outcomes. As reported in section 4.4.2, we found suggestions that some experiential and material aspects could be decoupled. For example, that the experience of autonomy and agency can be altered without affecting the user's causal involvement or material control over outcomes (e.g., 44). Similarly, it was found that sense-of-agency over sensorimotor actions could predict training outcomes even where users had no causal role in the outcome (90). These examples indicate that there are clear opportunities for exploitation of such gaps -- sometimes to the detriment of the user: for instance, using the feeling of autonomy or agency as an alibi for materially reduced efficacy, choice, or scope of control.
This is illustrated in our corpus in work by Bakewell et al., where companies granted workers self-management in certain aspects of their working life in a way which created wider, and sometimes undignified, restrictions on behaviour and efficacy (8). Based on our findings, one straightforward recommendation for future work is to be (more) attentive to this distinction between experiential and material aspects of autonomy and agency. It is crucial that HCI researchers clearly report the relevant aspect(s) of autonomy and agency, and focus analysis appropriately, particularly where technology is likely to impact outcomes meaningful to the user. Future work might also seek to understand whether -- as predicted by SDT (135; 139) -- the congruence of outcomes will impact on autonomy independently of changes in, e.g., available choice, executional involvement, or the user's independence of action. For example, the degree of control afforded in game and VR environments may make them a suitable vessel for such work, though this will require careful study design. ### Ethical Challenges Finally, our analysis raises ethical challenges. First, there is the question of who benefits from the user's agency or autonomy. Our corpus contained examples where these benefits primarily accrued to others -- employing organisations, or sellers of products and services. This might not always be a problem in itself. One paper, for example, reported that delegation of worker decision-making to AI reduced attentiveness and engagement, leading to poorer task outcomes (104). In this case, while the organisation is the primary beneficiary of the worker's agency, it is not immediately clear that this is problematic for the worker. However, other papers gave examples where benefits and burdens were more problematically distributed: users gained only limited and localised control or choice, or only the experience of agency and autonomy, while in return accepting increased burdens and material restrictions on aspects of behaviour (e.g., 8). One key problem here is that autonomy and agency are resonant words, associated with a wide range of meanings. Sometimes, autonomy and agency are "human rights" (e.g., 7, 162) or are meaningfully associated with well-being (e.g., 99, 133). In other cases, agency and autonomy have more limited ramifications, making it more acceptable to treat them as manipulable parameters which can improve work outcomes (e.g., 8, 104) or the appeal of our products (e.g., 3, 153). This ambiguity can allow vague and under-specified claims of autonomy support to serve as cover for exploitative or manipulative practices. Clarity in addressing agency and autonomy therefore has not only practical implications for HCI research, but also ethical importance. A simple step towards addressing this is to spell out (expected) outcomes; not every study addressing autonomy and agency can plausibly attach its outcomes to the most lofty ramifications and values of these concepts. For example, while agency has been hypothesised as the ground of identity and the self-other distinction (98), the relevance of this to agency during button pushing seems limited. Instead, authors should focus on plausible, direct outcomes which are relevant to the time-scale and scope of the work carried out, and relevant to the specific aspects of autonomy and agency involved.
Publication pressures can easily drive us to inflate the value of our work, but in the case of autonomy and agency there seems to be a concrete risk in this value-inflation: it can provide a vague, peer-reviewed alibi for those who might exploit humans under the banner of apparent autonomy support. Another ethical question raised by our analysis concerns autonomy in self-construction, and how we should handle the delicate issue of becoming involved in the user's processes of self-change and self-control. This seems particularly relevant for behaviour change technologies and technologies which support reflection. Across our corpus, three papers addressed different contexts of self-change: helping children internalise behavioural norms (70), supporting trafficked women to understand and address their exploitation by others (156), and helping abusive men take responsibility for their behaviour (14). These particular examples stand out as being carefully conducted, and in each case working towards outcomes likely congruent with users' long-term goals and values. However, these cases also illustrate the delicate balance involved in such work. The third example (14), in particular, involved a degree of short-term thwarting of autonomy, intentionally restricting meaningful choices to encourage the men to reappraise their identities and responsibility. Such work requires careful reasoning about what aspects of autonomy and agency will be restricted and supported, on what time-scales, and with what consequences. It requires reasoning about whether the long-term outcomes can be expected to be congruent with the user's values and goals, and if not, then what warrants the intervention. Not all cases of persuasive technology are as dramatic and significant as the above examples, but the issues raised here can be informative elsewhere. One special case, for example, concerns apps for self-control, where users' goals, values and intentions may vary over time, and care will be required in deciding how, and how much, to hold users accountable to their past intentions (111). Again, we suggest that one route to the required care in such cases is to understand the different scales and aspects of autonomy and agency involved, and clarity about what is supported and thwarted, when and where. Prior work that deals with such trade-offs, and with the warranting of agency delegation (e.g., 64, 166, in our corpus), can be useful in guiding reasoning in such situations. Thinyane and Bhat's work on the support of trafficked women (156) offers a sophisticated discussion of relevant issues, articulated with reference to Sen's Capabilities framework (142). Likewise, we see much promise in future work developing guidelines specific to user autonomy and agency in persuasive and reflective technologies, and more broadly in technologies for self-change. ### Limitations and Open Questions Addressing concepts of such complexity inevitably means that certain issues must be left out of scope. First, this review focuses on agency and autonomy, and does not consider any of the agency-adjacent terms which have proliferated over the past few decades, such as "empowerment" (141), "efficacy" (10), or "competence" (13). Agency and autonomy seemed to us to be the primary concepts here, whereas the other concepts can often be considered special cases, derivatives, or complexes which involve aspects of autonomy and agency alongside other factors. Moreover, "empowerment" is already the subject of a recent review at CHI (141).
That said, fitting these adjacent constructs into the overall landscape of agency and autonomy (e.g., by drawing on the four aspects outlined in this review) could provide useful insight. Second, the scope of the review -- spanning over 30 years of research -- means that we have addressed agency and autonomy at a relatively high level. This is in line with the goals of this paper: to provide a view of how these concepts and their accompanying issues are understood and can be better coordinated across HCI. However, future reviews which address understandings of these themes in specific domains and areas of research would help clarify understandings in particular communities and contexts, as well as furthering the development of agency and autonomy as boundary objects for HCI. ## 6. Conclusion This paper presents a review of 161 papers which address issues of agency and autonomy, over 32 years of HCI research. Our analysis identifies the high value of these concepts for HCI research, their flexibility, and the degree of ambiguity in how they are understood. We find that, at present, these terms are treated as umbrella concepts -- broad concepts, subsuming a surprising diversity of understandings and theoretical underpinnings. The terms are largely used interchangeably and given a wide range of different meanings. This makes it difficult to address these concepts as a whole, leaves it unclear how different understandings of autonomy and agency relate to each other and to particular outcomes, and may impede researchers in identifying and building upon relevant prior work. To address this situation, our analysis identified four aspects which help clarify understandings of agency and autonomy in HCI: 1. issues of _self-causality_ and personal _identity_; 2. the _experiential_ and _material_ expression of autonomy and agency; 3. particular _time-scales_; and 4. emphasis on _independence_ and _interdependence_. These aspects may guide researchers to relevant prior work, and help situate their own work in the landscape of research on these concepts. We also point to future work which can develop agency and autonomy as boundary objects for HCI: constructs which coordinate the perspectives of various communities of practice (103), driving design and understanding forward across multiple domains. To this end, we outlined avenues for HCI researchers to both clarify local understandings of autonomy and agency, relevant to particular issues, and coordinate these with wider community understandings. Specifically, we recommend that HCI researchers (1) explicitly state definitions and understandings of these concepts; (2) leverage the four aspects articulated in our review to specify aspects of agency and autonomy that are relevant to their concerns; and (3) pursue greater specificity in linking particular aspects of autonomy and agency to outcomes. Finally, we call for (4) more integrative work, both to understand how different aspects of autonomy and agency interrelate, and to identify commonalities across different HCI communities and domains. ## Acknowledgments Funded by the European Union (ERC, THEORYCRAFT, 101043198). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
2308.11492
A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles
Multi-modal sensor integration has become a crucial prerequisite for real-world navigation systems. Recent studies have reported successful deployment of such systems in many fields. However, it is still challenging for navigation tasks in mine scenes due to satellite signal dropouts, degraded perception, and observation degeneracy. To solve this problem, we propose a LiDAR-inertial odometry method in this paper, utilizing both a Kalman filter and graph optimization. The front-end consists of multiple parallel-running LiDAR-inertial odometries, where the laser points, IMU, and wheel odometer information are tightly fused in an error-state Kalman filter. Instead of the commonly used feature points, we employ surface elements for registration. The back-end constructs a pose graph and jointly optimizes the pose estimation results from inertial, LiDAR odometry, and global navigation satellite system (GNSS) measurements. Since the vehicle has a long operation time inside the tunnel, the largely accumulated drift may not be fully compensated by the GNSS measurements. We hereby leverage a loop-closure-based re-initialization process to achieve full alignment. In addition, the system robustness is improved through handling data loss, stream consistency, and estimation error. The experimental results show that our system has a good tolerance to long-period degeneracy with the cooperation of different LiDARs and surfel registration, achieving meter-level accuracy even for tens of minutes of running during GNSS dropouts.
Yusheng Wang, Yidong Lou, Weiwei Song, Bing Zhan, Feihuang Xia, Qigeng Duan
2023-08-22T15:14:40Z
http://arxiv.org/abs/2308.11492v1
A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles ###### Abstract Multi-modal sensor integration has become a crucial prerequisite for real-world navigation systems. Recent studies have reported successful deployment of such systems in many fields. However, it is still challenging for navigation tasks in mine scenes due to satellite signal dropouts, degraded perception, and observation degeneracy. To solve this problem, we propose a LiDAR-inertial odometry method in this paper, utilizing both a Kalman filter and graph optimization. The front-end consists of multiple parallel-running LiDAR-inertial odometries, where the laser points, IMU, and wheel odometer information are tightly fused in an error-state Kalman filter. Instead of the commonly used feature points, we employ surface elements for registration. The back-end constructs a pose graph and jointly optimizes the pose estimation results from inertial, LiDAR odometry, and global navigation satellite system (GNSS) measurements. Since the vehicle has a long operation time inside the tunnel, the largely accumulated drift may not be fully compensated by the GNSS measurements. We hereby leverage a loop-closure-based re-initialization process to achieve full alignment. In addition, the system robustness is improved through handling data loss, stream consistency, and estimation error. The experimental results show that our system has a good tolerance to long-period degeneracy with the cooperation of different LiDARs and surfel registration, achieving meter-level accuracy even for tens of minutes of running during GNSS dropouts. SLAM, multi-modal fusion, mine service vehicle. ## I Introduction The continuous spread of COVID-19 has promoted a growing demand for robotics in a great variety of scenes, from hospitals, construction sites, and assembly plants to mines. This surge of interest is motivated by a wide range of unmanned applications, such as autonomous service robots and robotaxis. The deployment of autonomous mine vehicles is of particular interest since the potential benefits include access to unreachable or dangerous locations and monitoring personnel in unsafe areas. These capabilities will have a positive impact on mine operation, production, and safety. Localization and environment perception are essential capabilities for autonomous mine vehicle operation. In typical GPS-denied underground mine environments, many studies have proposed using simultaneous localization and mapping (SLAM) to solve these problems [1, 2]. Unfortunately, most mature SLAM approaches have undesirable performance when deployed in real-life mine environments: the poor illumination renders visual-SLAM systems unreliable, the slippery terrain makes the wheel odometry inaccurate, and the explosion-proof requirements limit the large-scale deployment of wireless sensors such as UWB and RFID. As light detection and ranging (LiDAR) sensors are less sensitive to illumination variations and provide direct, high-fidelity, long-range 3D measurements, they have been widely accepted for odometry estimation in the past decade. Typically, LiDAR odometry algorithms estimate the ego-motion of the vehicle by registering consecutive LiDAR frames. When it comes to perceptually-challenging mine environments, the presence of self-repetitive and symmetric areas increases the difficulty of constraining the relative motion along the main shaft of the mine tunnel.
Techniques employed to mitigate this issue consist of observability analysis [3], degeneracy detection and mitigation [4, 5], and the integration of other sources of measurements, such as inertial data from an IMU. The challenges of autonomous mine vehicle SLAM extend to engineering implementation. SLAM algorithms must operate onboard with a limited computational budget, and regularly deliver vehicle pose estimates with low latency. Moreover, these SLAM systems are required to withstand intermittent sensor measurements and recover from transitory faulty states. In this paper, we present a LiDAR odometry system that enables robust and accurate state estimation and environment reconstruction for autonomous mine vehicles. Concretely, the contributions of this paper include: 1. We develop a LiDAR-inertial system that incorporates the information from two LiDARs, an IMU, and wheel odometers using an error-state Kalman filter (ESKF) and graph optimization. Instead of using the commonly used iterative closest point (ICP) or feature points for registration, we merge laser scans through surfel fusion. 2. To fully compensate the largely accumulated drifts inside the tunnel, we develop a loop-closure-aided re-initialization method for use after long periods of GPS dropouts. In that case, an estimation of the accumulated drift during the GPS outages is provided, and the errors both inside and outside the tunnel can then be well eliminated. 3. The proposed pipeline is thoroughly evaluated in various mine tunnel environments across a long time span. The results show our system drifts only 1.86 m after travelling up to 6.6 km (5 km of tunnel). The remainder of this paper is organized as follows. Section II reviews the relevant scholarly works. Section III gives an overview of the proposed system. Section IV presents the detailed graph optimization process applied in our system, followed by the experimental results described in Section V. Finally, Section VI concludes this paper and discusses future research directions. ## II Related Work Prior works on point cloud registration and LiDAR SLAM in tunnel-like environments are extensive. In this section, we briefly review scholarly works on these two aspects. ### _Point Cloud Registration_ Point cloud registration calculates the frame-to-frame displacement and matches consecutive scans based thereon. It can be broadly classified into three different categories: point-based, feature-based, and mathematical-property-based methods. The point-based methods can be treated as dense approaches since they make full use of points from raw LiDAR scans. On the other hand, the feature-based approaches are regarded as sparse methods, as they only employ a select number of points for tracking. Furthermore, the mathematical-property-based methods take advantage of statistical models, and transform the discrete representation of a single scan into a continuous distribution. Many point-based methods are variations of ICP [6], which is an iterative two-step process.
The algorithm first establishes correspondences between the source and target point clouds. Then a transformation is calculated to reduce the distance between all corresponding points. Both steps are repeated until preset criteria are reached. Considering its large computational burden with increasing numbers of points, many approaches have tried to reduce its cost for real-time operation [7, 8, 9]. The feature-based methods have received growing interest in recent years due to their simplicity and relatively lower computational complexity. Through extracting and matching feature points on planar surfaces and edges from the current and previous scan, the relative motion can be estimated accordingly. Similar to the Dilution of Precision (DOP) concept in the field of satellite navigation, the feature distribution also has a great influence on the state estimation results. The frame-to-frame registration is prone to fail once the environment is mostly planar. Therefore, some methods [10, 11, 12] use surfels as small planar features and register new points by minimizing the point-to-plane residuals. In this paper, we add the degeneracy analysis [5] to the surfel matching to further improve the matching accuracy inside the tunnels, where the state estimation problem is only performed in the well-conditioned directions. The normal distribution transform (NDT) [13] is a widely used mathematical-property-based method. NDT divides the 3D space into small cells, and calculates the local probability density function (PDF) in each cell. Then the point-to-distribution correspondences are computed within a scan pair to find the optimal transformation. NDT aims to maximize the likelihood of all points described by the normal distributions. This reduces the memory consumption as well as the computation time for nearest-neighbor searches. ### _LiDAR SLAM in Tunnel-like Environments_ Current LiDAR SLAM systems have proved to be accurate and robust enough for many scenarios, and we mainly focus on handling degeneracy here. LiDAR SLAM usually produces a large drift error in scenes with textureless surfaces or repeated structures, such as indoor environments or tunnels. Researchers have proposed adding auxiliary sensors, degeneracy analysis, and geometric structures to cope with this problem. Adding sensors means introducing additional constraints. Cameras have proved to be a good complementary sensor in some LiDAR-degenerate districts [14, 15, 16], but they are still inaccurate in perceptually-degraded mine tunnels. As an environment-insensitive sensor, the ultra-wideband (UWB) suffers less from cumulative errors and has attracted more and more interest in recent years [17]. However, UWB anchors need to be densely deployed along the mine tunnel, which does not satisfy the explosion-proof regulation of most mine tunnels. As discussed in our previous work [18], LiDARs with a limited field of view (FoV) but high point cloud density are less likely to fail in degenerate districts. On the other hand, a spinning LiDAR with a 360\({}^{\circ}\) FoV can provide consistent state estimation under irregular movement and better loop closing. In this paper, we leverage the advantages of both LiDARs in the system design. One of the early works on degeneracy analysis was proposed by Zhang et al. [5], which leverages the minimum eigenvalue of the information matrices to determine system degeneracy. However, this metric is difficult to interpret because of its unclear physical meaning. Zhen et al.
define the localizability vector by projecting the information matrix into the eigenspace, and model the degeneration with a frictionless force-closure [19]. Tagliabue et al. [20] also use the smallest eigenvalue of the point-to-plane cost to indicate the least observable direction. Instead of solving the degeneracy problem within the LiDAR-inertial SLAM directly, their system switches to other parallel-running odometry algorithms [21] when the metric falls below a self-defined threshold. The degeneracy analysis based methods have been widely deployed in tunnel exploration tasks [20, 22]. We also introduce this degeneracy analysis feature into our system to further improve system accuracy. Man-made environments often exhibit strong structural regularity in the form of lines, surfaces, and objects. These geometric features have been widely exploited in LiDAR SLAM to improve state estimation accuracy [23, 24, 25]. Zhou et al. adopt planar constraints to optimize plane parameters in the back-end [26], achieving promising results in a single-layer indoor environment. Zhou et al. [27] propose using principal component analysis (PCA) to extract sphere features along with planar, edge, and ground features. The results show that the spherical features can improve the system stability in highway scenarios. ## III Tightly Coupled LiDAR-Inertial SLAM The pipeline of the proposed system is visualized in Fig. 1. The scans of the two LiDARs are sent to separate state estimation modules, each of which estimates the full LiDAR state by registering the surfels in a scan to the map via a tightly coupled ESKF. Then the weights are calculated accordingly and integrated with the GNSS measurements. Once the GNSS signal is available again after long dropouts, the re-initialization module is awakened to further optimize the trajectory and the mapping result. Before diving into the details of the methods, we first define the frames and notations used throughout this article in TABLE I. In addition, we denote \((\cdot)^{\text{B}}_{\text{L}}\) as the transformation from the LiDAR frame to the IMU frame. Besides, we follow the "boxplus" and "boxminus" operations, \(\boxplus\) and \(\boxminus\), defined in [28] to parameterize the state error on the manifold. For a manifold \(\mathcal{M}\) with dimension \(n\), we have: \[\mathcal{M}=\text{SO}(3):\quad\mathbf{R}\boxplus\mathbf{r}=\mathbf{R}\,\text{Exp}(\mathbf{r});\quad\mathbf{R}_{1}\boxminus\mathbf{R}_{2}=\text{Log}(\mathbf{R}_{2}^{T}\mathbf{R}_{1})\] \[\mathcal{M}=\mathbb{R}^{n}:\quad\mathbf{a}\boxplus\mathbf{b}=\mathbf{a}+\mathbf{b};\quad\mathbf{a}\boxminus\mathbf{b}=\mathbf{a}-\mathbf{b}\] \[\text{Exp}(\mathbf{r})=\mathbf{I}+\frac{[\mathbf{r}]_{A}}{\|\mathbf{r}\|}\sin(\|\mathbf{r}\|)+\frac{[\mathbf{r}]_{A}^{2}}{\|\mathbf{r}\|^{2}}\big(1-\cos(\|\mathbf{r}\|)\big), \tag{1}\] where \(\text{Exp}(\mathbf{r})\) is the exponential map in [28] and \(\text{Log}(\cdot)\) is its inverse map. Therefore, we have the following expression for a compound manifold \(\mathcal{M}=\text{SO}(3)\times\mathbb{R}^{n}\), \[\begin{bmatrix}\mathbf{R}\\ \mathbf{a}\end{bmatrix}\boxplus\begin{bmatrix}\mathbf{r}\\ \mathbf{b}\end{bmatrix}=\begin{bmatrix}\mathbf{R}\boxplus\mathbf{r}\\ \mathbf{a}+\mathbf{b}\end{bmatrix};\quad\begin{bmatrix}\mathbf{R}_{1}\\ \mathbf{a}\end{bmatrix}\boxminus\begin{bmatrix}\mathbf{R}_{2}\\ \mathbf{b}\end{bmatrix}=\begin{bmatrix}\mathbf{R}_{1}\boxminus\mathbf{R}_{2}\\ \mathbf{a}-\mathbf{b}\end{bmatrix}.
\tag{2}\] Utilizing the definitions above, we can easily derive: \[\boxplus:\mathcal{M}\times\mathbb{R}^{n}\to\mathcal{M};\quad\boxminus:\mathcal{M}\times\mathcal{M}\to\mathbb{R}^{n},\] \[(\mathbf{x}\boxplus\mathbf{u})\boxminus\mathbf{x}=\mathbf{u};\quad\mathbf{x}\boxplus(\mathbf{y}\boxminus\mathbf{x})=\mathbf{y};\quad\mathbf{x},\mathbf{y}\in\mathcal{M},\ \mathbf{u}\in\mathbb{R}^{n}. \tag{3}\] Besides, for the vehicle full state \(\mathbf{x}\), we use the following notation in the iterated Kalman filter: \(\mathbf{x}\) is the ground-truth value of the state; \(\widehat{\mathbf{x}}\) is the propagated value of the state; \(\bar{\mathbf{x}}\) is the updated value of the state; \(\tilde{\mathbf{x}}\) is the error between the ground truth \(\mathbf{x}\) and its estimation \(\widehat{\mathbf{x}}\); and \(\widehat{\mathbf{x}}^{\kappa}\) is the estimate of \(\mathbf{x}\) in the \(\kappa\)-th iteration of the iterated Kalman filter. ### State Transition Model The raw accelerometer and gyroscope measurements, \(\hat{\mathbf{a}}\) and \(\hat{\boldsymbol{\omega}}\), are given by: \[\hat{\mathbf{a}}_{k}=\mathbf{a}_{k}+\mathbf{R}_{W}^{B_{k}}\mathbf{g}^{W}+\mathbf{b}_{a_{k}}+\mathbf{n}_{a},\quad\hat{\boldsymbol{\omega}}_{k}=\boldsymbol{\omega}_{k}+\mathbf{b}_{\omega_{k}}+\mathbf{n}_{\omega}, \tag{4}\] where \(\mathbf{n}_{a}\) and \(\mathbf{n}_{\omega}\) are zero-mean white Gaussian noises, with \(\mathbf{n}_{a}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\sigma}_{a}^{2})\) and \(\mathbf{n}_{\omega}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\sigma}_{\omega}^{2})\). The gravity vector in the world frame is denoted as \(\mathbf{g}^{W}=[0,0,\mathrm{g}]^{T}\). The odometer mounted on the wheel is utilized to measure the longitudinal velocity of the vehicle along the rails, and the model of the odometer sensor is given by: \[c^{O_{k}}\hat{\boldsymbol{\omega}}^{O}=\mathbf{v}^{O}+\mathbf{n}_{c^{O}},\] where \(c^{O_{k}}\) denotes the scale factor of the odometer, modeled as a random walk with \(\mathbf{n}_{c^{O}}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\sigma}_{c^{O}}^{2})\).
Then the pose estimation can be achieved through the synchronously collected gyroscope and odometer outputs, and the displacement within two consecutive frames \(k\) and \(k+1\) can be given as: \[\hat{\mathbf{p}}_{O_{k}}^{O_{k+1}}=\mathbf{p}_{O_{k}}^{O_{k+1}}+\mathbf{n}_{\mathbf{p}^{O}},\] where \(\mathbf{n}_{\mathbf{p}^{O}}\) is also zero-mean white Gaussian noise.
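To make the manifold notation concrete, the following is a minimal numpy sketch of the operators in (1)-(3) on the compound manifold \(\text{SO}(3)\times\mathbb{R}^{n}\); the function names are ours, not the paper's.

```python
# Minimal sketch of the boxplus/boxminus operators of Eqs. (1)-(3);
# function names are illustrative, not from the paper.
import numpy as np

def skew(r):
    """Skew-symmetric cross-product matrix [r]_A of a 3-vector r."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def so3_exp(r):
    """Exponential map Exp(r) of Eq. (1) (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-9:
        return np.eye(3) + skew(r)          # first-order fallback
    K = skew(r) / theta
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def so3_log(R):
    """Inverse map Log(R), valid away from rotations of angle pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def boxplus(R, a, r, b):
    """(R, a) boxplus (r, b) on SO(3) x R^n, Eq. (2)."""
    return R @ so3_exp(r), a + b

def boxminus(R1, a, R2, b):
    """(R1, a) boxminus (R2, b) on SO(3) x R^n, Eq. (2)."""
    return so3_log(R2.T @ R1), a - b

# Sanity check of Eq. (3): (x boxplus u) boxminus x == u.
R = so3_exp(np.array([0.1, -0.2, 0.3]))
u_rot, u_vec = np.array([0.05, 0.02, -0.01]), np.array([1.0, 2.0, 3.0])
R2, a2 = boxplus(R, np.zeros(3), u_rot, u_vec)
r_back, b_back = boxminus(R2, a2, R, np.zeros(3))
assert np.allclose(r_back, u_rot) and np.allclose(b_back, u_vec)
```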
Based thereupon, we can derive the kinematic model as: \[\dot{\mathbf{R}}_{B_{k}}^{W}=\mathbf{R}_{B_{k}}^{W}\big[\hat{\boldsymbol{\omega}}_{k}-\mathbf{b}_{\omega_{k}}-\mathbf{n}_{\omega}\big]_{A},\quad\dot{\mathbf{p}}_{B_{k}}^{W}=\mathbf{v}_{B_{k}}^{W},\] \[\dot{\mathbf{v}}_{B_{k}}^{W}=\mathbf{R}_{B_{k}}^{W}\big(\hat{\mathbf{a}}_{k}-\mathbf{b}_{a_{k}}-\mathbf{n}_{a}\big)+\mathbf{g}^{W},\] \[\dot{\mathbf{b}}_{\omega_{k}}=\mathbf{n}_{\mathbf{b}_{\omega}},\quad\dot{\mathbf{b}}_{a_{k}}=\mathbf{n}_{\mathbf{b}_{a}},\] \[\dot{\mathbf{g}}^{W}=\mathbf{0},\quad\dot{c}^{O_{k}}=\mathbf{0},\quad\dot{\mathbf{R}}_{L_{k}}^{B_{k}}=\mathbf{0},\quad\dot{\mathbf{p}}_{L_{k}}^{B_{k}}=\mathbf{0}, \tag{5}\] where \(\mathbf{R}_{B_{k}}^{W}\), \(\mathbf{p}_{B_{k}}^{W}\), and \(\mathbf{v}_{B_{k}}^{W}\) denote the IMU attitude, position, and velocity expressed in the global frame. \(\mathbf{b}_{a}\) and \(\mathbf{b}_{\omega}\) are the IMU biases, modeled as random walk processes driven by \(\mathbf{n}_{\mathbf{b}_{a}}\) and \(\mathbf{n}_{\mathbf{b}_{\omega}}\). \([\boldsymbol{\kappa}]_{A}\) is the skew-symmetric cross-product matrix of a vector \(\boldsymbol{\kappa}\in\mathbb{R}^{3}\). The extrinsic between LiDAR and IMU is defined as \(\mathbf{T}_{L_{k}}^{B_{k}}=\{\mathbf{R}_{L_{k}}^{B_{k}},\mathbf{p}_{L_{k}}^{B_{k}}\}\). The continuous model can thereby be discretized at the IMU sampling period \(\Delta t\) as: \[\mathbf{x}_{i+1}=\mathbf{x}_{i}\boxplus\big(\Delta t\,\mathbf{f}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{w}_{i})\big), \tag{6}\] where the state \(\mathbf{x}_{i}\), input \(\mathbf{u}_{i}\), process noise \(\mathbf{w}_{i}\), and the function \(\mathbf{f}\) are defined as: \[\mathbf{f}(\mathbf{x},\mathbf{u},\mathbf{w})=\begin{bmatrix}\hat{\boldsymbol{\omega}}-\mathbf{b}_{\omega}-\mathbf{n}_{\omega}\\ \mathbf{v}_{B}^{W}+\frac{1}{2}\big(\mathbf{R}_{B}^{W}(\hat{\mathbf{a}}-\mathbf{b}_{a}-\mathbf{n}_{a})+\mathbf{g}^{W}\big)\Delta t\\ \mathbf{R}_{B}^{W}(\hat{\mathbf{a}}-\mathbf{b}_{a}-\mathbf{n}_{a})+\mathbf{g}^{W}\\ \mathbf{n}_{\mathbf{b}_{\omega}}\\ \mathbf{n}_{\mathbf{b}_{a}}\\ \mathbf{0}_{3\times 1}\\ \mathbf{0}_{3\times 1}\\ \mathbf{0}_{3\times 1}\\ \mathbf{R}_{O}^{B}\big(c^{O_{k}}\hat{\boldsymbol{\omega}}^{O}-\mathbf{n}_{c^{O}}\big)\\ \mathbf{0}_{3\times 1}\end{bmatrix}. \tag{7}\] ### _Point Cloud Processing_ Two LiDARs, one Livox Avia and one Robosense RS-16 LiDAR, are included in our system. The former is a hybrid solid-state LiDAR with a non-repetitive scan pattern, while the latter is a common mechanical spinning LiDAR. For each LiDAR, we extract surfels by clustering points based on their positions and timestamps, and then fitting ellipsoids to them. We first divide the 3D space into voxels and cluster the LiDAR points within each voxel and with adjacent timestamps together. Secondly, we fit an ellipsoid to each sufficiently large point set based on an empirical threshold. These ellipsoids are the extracted surfels, with their centers and shapes determined by the sample mean and covariance of the points in the cluster. We employ a multi-resolution approach where the clustering and surfel extraction processes are repeated for multiple voxel sizes. More explicitly, we first cut the space into voxels, each with the size of the preset coarse map resolution. Thus, for each LiDAR scan, the contained points are distributed into voxels and are indexed into a hash table. If all the contained points within a voxel lie on a plane, we store the plane points and calculate the surfel parameters.
Given a LiDAR point \(\mathbf{p}_{i}^{W}\) accumulated in the world frame, we first search for its nearest surfel. Then the point-to-plane distance can be calculated as:
\[d_{i}=\mathbf{n}_{i}^{T}(\mathbf{p}_{i}^{W}-\mathbf{p}_{i}). \tag{8}\]
In practice, each point \(\mathbf{p}_{i}^{L}\) in the LiDAR scan contains both ranging and beam-directing noise, denoted as \(\mathbf{n}_{i}^{L}\). The true position of the point, \(\mathbf{p}_{gt_{i}}^{L}\), satisfies:
\[\mathbf{p}_{gt_{i}}^{L}=\mathbf{p}_{i}^{L}+\mathbf{n}_{i}^{L}. \tag{9}\]
Considering the corresponding IMU state expressed in the global frame, \(\mathbf{T}_{B_{i}}^{W}=\{\mathbf{R}_{B_{i}}^{W},\mathbf{p}_{B_{i}}^{W}\}\), and the LiDAR-IMU extrinsic \(\mathbf{T}_{L_{i}}^{B_{i}}\), we can obtain the true position of \(\mathbf{p}_{i}^{W}\) through:
\[\mathbf{p}_{i}^{W}=\mathbf{T}_{B_{i}}^{W}\mathbf{T}_{L_{i}}^{B_{i}}(\mathbf{p}_{i}^{L}+\mathbf{n}_{i}^{L}). \tag{10}\]
Substituting (10) into (8), we get the measurement model:
\[\mathbf{h}_{i}(\mathbf{x}_{i},\mathbf{n}_{i}^{L})=\mathbf{n}_{i}^{T}\left(\mathbf{T}_{B_{i}}^{W}\mathbf{T}_{L_{i}}^{B_{i}}(\mathbf{p}_{i}^{L}+\mathbf{n}_{i}^{L})-\mathbf{p}_{i}\right). \tag{11}\]
The estimated pose is used to register the new points to the global map. When a new point belongs to an unpopulated voxel, it constructs the voxel. Otherwise, it is added to an existing voxel and updates the parameters within the voxel. An example of the surfel global map is visualized in Fig. 2.

Fig. 2: An example of the accumulated surfel map, where each surfel is represented by markers of different colors.
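A minimal sketch of the noise-free measurement model (8)-(11), with the IMU pose and the LiDAR-IMU extrinsic passed in as rotation-translation pairs; the nearest-surfel search (e.g., a k-d tree over surfel centers) is omitted:

```python
import numpy as np

def point_to_plane_residual(p_L: np.ndarray,
                            R_WB: np.ndarray, p_WB: np.ndarray,  # IMU pose T^W_B
                            R_BL: np.ndarray, p_BL: np.ndarray,  # extrinsic T^B_L
                            surfel_n: np.ndarray, surfel_p: np.ndarray) -> float:
    """Evaluate h_i(x, 0): transform a LiDAR point to the world frame and
    project it onto the normal of its nearest surfel, as in (8) and (10)."""
    p_B = R_BL @ p_L + p_BL    # LiDAR frame -> IMU (body) frame
    p_W = R_WB @ p_B + p_WB    # body frame -> world frame
    return float(surfel_n @ (p_W - surfel_p))
```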
### _Iterated Kalman Filter_

For each LiDAR input, we employ an iterated Kalman filter to estimate the system state, utilizing the state transition model (6) and the measurement model (11). By setting the process noise to zero, we can perform the forward propagation upon received IMU data; the error state follows the error-state dynamic model [29]:
\[\hat{\mathbf{x}}_{i+1}=\hat{\mathbf{x}}_{i}\boxplus\big(\Delta t\,\mathbf{f}(\hat{\mathbf{x}}_{i},\mathbf{u}_{i},\mathbf{0})\big),\ \tilde{\mathbf{x}}_{i+1}=\mathbf{F}_{\tilde{\mathbf{x}}}\tilde{\mathbf{x}}_{i}+\mathbf{F}_{\mathbf{w}}\mathbf{w}_{i}. \tag{12}\]
Here \(\hat{\mathbf{x}}_{0}=\bar{\mathbf{x}}_{k-1}\), where \(\bar{\mathbf{x}}_{k}\) is the optimal state estimate for the LiDAR scan at \(t_{k}\). The matrices \(\mathbf{F}_{\tilde{\mathbf{x}}}\) and \(\mathbf{F}_{\mathbf{w}}\) are listed at the top of this page in (13). \(\mathbf{A}(\mathbf{u})^{-1}\) follows the definition in [30]:
\[\mathbf{A}(\mathbf{u})^{-1}=\mathbf{I}-\frac{1}{2}[\mathbf{u}]_{\wedge}+\big(1-\alpha(\|\mathbf{u}\|)\big)\frac{[\mathbf{u}]_{\wedge}^{2}}{\|\mathbf{u}\|^{2}},\ \alpha(m)=\frac{m}{2}\frac{\cos(m/2)}{\sin(m/2)}. \tag{14}\]
Besides, the propagated covariance \(\hat{\mathbf{P}}_{i}\) can be calculated by:
\[\hat{\mathbf{P}}_{i+1}=\mathbf{F}_{\tilde{\mathbf{x}}}\hat{\mathbf{P}}_{i}\mathbf{F}_{\tilde{\mathbf{x}}}^{T}+\mathbf{F}_{\mathbf{w}}\mathbf{Q}_{i}\mathbf{F}_{\mathbf{w}}^{T},\ \hat{\mathbf{P}}_{0}=\bar{\mathbf{P}}_{k-1}, \tag{15}\]
where \(\mathbf{Q}_{i}\) is the covariance of the noise \(\mathbf{w}_{i}\). We need to compensate for the relative point cloud motion before the points are integrated with the propagated state \(\hat{\mathbf{x}}_{i}\) and covariance \(\hat{\mathbf{P}}_{i}\) to produce an optimal state update. This process follows [31], where (6) is propagated backward as
\[\check{\mathbf{x}}_{i-1}=\check{\mathbf{x}}_{i}\boxplus\big(-\Delta t\,\mathbf{f}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{0})\big). \tag{16}\]
The backward propagation utilizes the last IMU and wheel odometer measurements as the input to compute a relative pose between the point sampling time and the scan end time. After the motion compensation process, we can view all the points within the scan as sampled at the same time. Then we can derive the residual at the \(\kappa\)-th iteration, \(\mathbf{z}_{i}^{\kappa}\), as:
\[\mathbf{z}_{i}^{\kappa}=\mathbf{h}_{i}(\hat{\mathbf{x}}_{i}^{\kappa},\mathbf{0})=\mathbf{n}_{i}^{T}\left(\mathbf{T}_{B_{i}}^{W}\mathbf{T}_{L_{i}}^{B_{i}}\mathbf{p}_{i}^{L}-\mathbf{p}_{i}\right). \tag{17}\]
The total measurement noise is \(\mathbf{n}_{i}^{T}\mathbf{T}_{B_{i}}^{W}\mathbf{T}_{L_{i}}^{B_{i}}\mathbf{n}_{i}^{L}\sim\mathcal{N}(\mathbf{0},\mathbf{R}_{i})\). Combining the prior distribution from the forward propagation \(\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i}\) and the measurement model, we can derive a posterior distribution of the state \(\mathbf{x}_{i}\), denoted as \(\hat{\mathbf{x}}_{i}^{\kappa}\), and its maximum a posteriori (MAP) form:
\[\min_{\mathbf{x}_{i}}\Bigg(\|\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i}\|_{\hat{\mathbf{P}}_{i}}^{2}+\sum_{j=1}^{m}\|d_{j}-\mathbf{H}_{j}(\mathbf{x}_{i}\boxminus\hat{\mathbf{x}}_{i})\|_{\mathbf{R}_{j}}^{2}\Bigg). \tag{18}\]
Let \(\mathbf{H}=\big[\mathbf{H}_{1}^{\kappa T},\mathbf{H}_{2}^{\kappa T},\ldots,\mathbf{H}_{m}^{\kappa T}\big]^{T}\), \(\mathbf{R}=\text{diag}(\mathbf{R}_{1},\mathbf{R}_{2},\ldots,\mathbf{R}_{m})\), \(\mathbf{P}=(\mathbf{J}^{\kappa})^{-1}\hat{\mathbf{P}}_{i}(\mathbf{J}^{\kappa})^{-T}\), and \(\mathbf{z}^{\kappa}=\big[\mathbf{z}_{1}^{\kappa T},\mathbf{z}_{2}^{\kappa T},\ldots,\mathbf{z}_{m}^{\kappa T}\big]^{T}\); this MAP problem can be solved by the iterated Kalman filter:
\[\mathbf{K}=(\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H}+\mathbf{P}^{-1})^{-1}\mathbf{H}^{T}\mathbf{R}^{-1},\]
\[\hat{\mathbf{x}}_{i}^{\kappa+1}=\hat{\mathbf{x}}_{i}^{\kappa}\boxplus\big(-\mathbf{K}\mathbf{z}^{\kappa}-(\mathbf{I}-\mathbf{K}\mathbf{H})(\mathbf{J}^{\kappa})^{-1}(\hat{\mathbf{x}}_{i}^{\kappa}\boxminus\hat{\mathbf{x}}_{i})\big), \tag{19}\]
where \(\mathbf{K}\) is the Kalman gain, \(\mathbf{H}\) is the Jacobian matrix of the measurement model \(\mathbf{h}_{i}(\mathbf{x}_{i},\mathbf{n}_{i}^{L})\), and \(\mathbf{J}^{\kappa}\) is the partial derivative of \((\hat{\mathbf{x}}_{i}^{\kappa}\boxplus\tilde{\mathbf{x}}_{i}^{\kappa})\boxminus\hat{\mathbf{x}}_{i}\) with respect to \(\tilde{\mathbf{x}}_{i}^{\kappa}\) evaluated at zero.
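Putting the propagation and update together, one iterated update has the following shape. This is a structural sketch only: `linearize`, `boxplus`, and `boxminus` are assumed user-supplied callbacks, and the prior Jacobian \(\mathbf{J}^{\kappa}\) is approximated by the identity for brevity (the filter above keeps it explicitly):

```python
import numpy as np

def iekf_update(x_prop, P_prop, linearize, boxplus, boxminus,
                max_iters: int = 5, eps: float = 1e-4):
    """Iterated Kalman update in the spirit of (18)-(19).

    linearize(x) -> (z, H, R): stacked residuals (17), their Jacobian w.r.t.
    the error state, and the measurement covariance. boxplus/boxminus are the
    state retraction and its inverse; for a purely Euclidean state they
    reduce to ordinary + and -.
    """
    x, n = x_prop, P_prop.shape[0]
    for _ in range(max_iters):
        z, H, R = linearize(x)
        P = P_prop                                     # J^kappa ~= I here
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # equals the gain in (19)
        prior = boxminus(x, x_prop)                    # x^kappa [-] x_prop
        dx = -K @ z - (np.eye(n) - K @ H) @ prior
        x = boxplus(x, dx)
        if np.linalg.norm(dx) < eps:                   # convergence test
            break
    P_post = (np.eye(n) - K @ H) @ P                   # posterior covariance
    return x, P_post
```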
This process repeats until convergence, \(\|\hat{\mathbf{x}}_{i}^{\kappa+1}\boxminus\hat{\mathbf{x}}_{i}^{\kappa}\|<\epsilon\); then the optimal state and covariance estimates are:
\[\bar{\mathbf{x}}_{i}=\hat{\mathbf{x}}_{i}^{\kappa+1},\ \bar{\mathbf{P}}_{i}=(\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{P}. \tag{20}\]
The state update is then used to transform each scan point to the global frame and insert it into the map.

### _Graph Optimization_

We construct a pose graph at the back-end to integrate pose information from the two LiDAR odometries, the inertial odometry, GNSS, and detected loop closures. This state estimation process can be formulated as a maximum-a-posteriori (MAP) problem. Given the measurements \(\mathbf{z}_{k}\) and the history of states \(\mathbf{x}_{k}\), the MAP problem can be formulated as:
\[\mathbf{x}_{k}^{*}=\operatorname*{argmax}_{\mathbf{x}_{k}}\text{p}(\mathbf{x}_{k}|\mathbf{z}_{k})\propto\text{p}(\mathbf{x}_{0})\,\text{p}(\mathbf{z}_{k}|\mathbf{x}_{k}). \tag{21}\]
If the measurements are conditionally independent, then (21) can be solved through least squares minimization:
\[\boldsymbol{\chi}^{*}=\operatorname*{argmin}_{\mathbf{x}_{k}}\sum_{l=1}^{k}\|\mathbf{r}_{l}\|^{2}, \tag{22}\]
where \(\mathbf{r}_{l}\) is the residual of the error between the predicted and measured value. To decrease the system memory usage and increase computational efficiency, we employ a sliding window to keep a relatively steady number of nodes in the local graph. Given a sliding window containing \(k\) keyframes, \(\boldsymbol{X}=[\bar{\mathbf{x}}_{1}^{T},\bar{\mathbf{x}}_{2}^{T},\ldots,\bar{\mathbf{x}}_{k}^{T}]^{T}\), we maximize the likelihood of the measurements, and the optimal states can be acquired by solving the MAP problem:
\[\min_{\boldsymbol{X}}\Big\{\|\mathbf{r}_{p}\|^{2}+\mathcal{W}_{inertial}\sum_{i=1}^{N_{I}}\|\mathbf{r}_{I_{i}}\|^{2}+\mathcal{W}_{avia}\sum_{i=1}^{N_{L}^{avia}}\|\mathbf{r}_{L_{i}}^{avia}\|^{2}+\mathcal{W}_{rs}\sum_{i=1}^{N_{L}^{rs}}\|\mathbf{r}_{L_{i}}^{rs}\|^{2}+\mathcal{W}_{GNSS}\sum_{i=1}^{N_{g}}\|\mathbf{r}_{g_{i}}\|^{2}\Big\}, \tag{23}\]
where \(\mathbf{r}_{p}\) is the prior factor marginalized by the Schur complement [32, 33], and \(\mathbf{r}_{I_{i}}\) is the residual of the IMU-odometer preintegration result [18]. \(\mathbf{r}_{L_{i}}^{avia}\) and \(\mathbf{r}_{L_{i}}^{rs}\) define the residuals of the Avia and RS-16 LiDAR odometry. Finally, the GNSS constraint is denoted by \(\mathbf{r}_{g_{i}}\). Note that we use manually established values for the LiDAR odometry covariance and directly use the GNSS covariance predicted from the raw measurements. Since the residuals are expressed in different frames, we unify their expression in the inertial frame using the calibrated sensor extrinsics such that:
\[\mathbf{r}_{g_{i}}=\mathbf{R}_{W}^{B}(\mathbf{p}^{W_{i}}-\mathbf{p}^{W_{i-1}}-\mathbf{p}_{W}^{B}), \tag{24}\]
where the extrinsic \(\mathbf{T}_{W}^{B}=\{\mathbf{R}_{W}^{B},\mathbf{p}_{W}^{B}\}\) transforms the GNSS pose into inertial coordinates. We denote \(N_{I}\), \(N_{L}^{avia}\), \(N_{L}^{rs}\), and \(N_{g}\) as the numbers of the four factors within the sliding window. \(\mathcal{W}\) with subscripts defines the respective weighting factors. We assume the short-term inertial preintegration result is accurate and set it as the reference to calculate the other weighting factors.
For a short period from \(k\) to \(k{+}1\), we denote \(\mathbf{p}_{I_{k}}^{I_{k+1}}\) and \(\mathbf{p}_{L_{k}}^{L_{k+1}}\) as the pose estimation results of the inertial and LiDAR odometry. Then the LiDAR odometry weighting factor can be expressed as:
\[\mathcal{W}_{L}=\left(1-\left(\frac{\|\mathbf{p}_{L_{k}}^{L_{k+1}}\|-\|\mathbf{p}_{I_{k}}^{I_{k+1}}\|}{\|\mathbf{p}_{I_{k}}^{I_{k+1}}\|}\right)^{2}\right)\mathcal{W}_{inertial}. \tag{25}\]
\(\mathcal{W}_{GNSS}\), on the other hand, is a combination of the DOP value, the satellite number, and the real-time kinematic (RTK) solution status.

### _Re-initialization_

The autonomous mine service vehicle starts in the open district, then enters the mine tunnel, travels for a long time, and finally leaves the tunnel. In that case, the vehicle state may have accumulated a significant amount of drift inside the tunnel. If we directly fuse the large-error LiDAR odometry and the GNSS measurements, the accumulated drift may be not fully or over compensated, leading to extra pose estimation errors. Therefore, we propose a two-step re-initialization process for drift elimination. 1. _Loop detection_: Once the vehicle leaves the tunnel, the GNSS signal is available again. This awakes an iris loop detection thread based on [33], with the global pose and mapping updated upon a detected loop. Note that we merely employ the 360° spinning LiDAR for loop detection. 2. _Full recovery_: After the loop is found and the RTK is at the fixed solution, we add the global measurements to the pose graph as presented in Section IV-D. After this full recovery, the estimated state is again aligned with the global frame.

### _Hardware and Software Level Verification_

The hardware-level verification is conducted at the data preprocessing stage, including data stream existence, frequency, and individual verification. The data stream existence test aims to find out whether the required data inputs exist or not. Since the LiDAR-inertial odometry has a filter-based structure, it will fail immediately when there is no input from either the IMU or the LiDAR. Therefore, each of our LiDAR-inertial odometries reinitializes and restarts when either the IMU or the LiDAR stream is lost for one second. Otherwise, the filter-based system may generate large and unrecoverable drift, as visualized in Fig. 3. The data frequency test also follows this idea. The system sets the stream with the lowest frequency as the primary input and continuously monitors the counts of the other data within two consecutive frames, e.g., the LiDAR is set as the primary input (10 Hz), and approximately twenty frames of IMU input (200 Hz) should be found within two successive LiDAR scans. Once this criterion does not hold for thirty seconds, the system sends a warning to the user interface for a manual check, e.g., a yellow warning sign on the central control screen. The individual verification is mainly for the LiDAR sensors. Since the mine service vehicle operates in narrow tunnels, once the vehicle faces directly toward the wall, the LiDAR-inertial odometry may fail against the textureless and flat terrain. Therefore, we monitor the Euclidean distance of the point clouds within each scan; if 70% of the points are within two meters of the LiDAR, the current frame is discarded for pose estimation. Once this state persists for more than ten seconds, the system switches to inertial odometry temporarily.
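The two quantitative checks above reduce to a few lines of logic; the tolerance below is an assumed parameter, and the thirty-second warning timer is omitted:

```python
import numpy as np

def imu_rate_ok(imu_count_between_scans: int, expected: int = 20,
                tol: float = 0.5) -> bool:
    """Frequency test: roughly `expected` IMU samples (200 Hz) should arrive
    between two consecutive LiDAR scans (10 Hz)."""
    return abs(imu_count_between_scans - expected) <= tol * expected

def scan_degenerate(points: np.ndarray, max_range: float = 2.0,
                    ratio: float = 0.7) -> bool:
    """Individual LiDAR test: discard the scan if 70% of the points lie
    within two meters of the sensor (vehicle facing a wall)."""
    dists = np.linalg.norm(points, axis=1)
    return float(np.mean(dists < max_range)) > ratio
```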
The software-level test is performed on the parallel pose estimation modules to remove clearly wrong results. We set the maximum speed of the vehicle to 30 km/h and verify whether the displacement of each odometry is beyond this limit, e.g., once the displacement between two successive LiDAR odometry outputs (10 Hz) is beyond 2.0 m, it is discarded for pose graph construction, since it is a clearly wrong pose estimation result. Similarly, we use the steering angle information to monitor the individual yaw estimation results.

Fig. 3: Illustration of the influence of temporal IMU data loss on the LiDAR-inertial system. The real-time mapping has a sudden vertical drift w.r.t. a 0.6 s IMU data loss.

## IV Experiments

To evaluate the performance of the proposed method, we conducted experiments in the Madiliang mine of Ordos, China. As visualized in Fig. 4(a), the mine service vehicle transports food, staff workers, and some necessities between the ground office and the underground mine face. The underground mine tunnel is around 2.5 km long, with many branches along the path to the mine face, as pictured in Fig. 4(b). We collected the dataset utilizing several autonomous mine service vehicles as shown in Fig. 4(c). The vehicle is equipped with one Robosense RS-16 spinning LiDAR with a 360°\(\times\)30° FoV and one Livox Avia non-repetitive scanning LiDAR with a 70.4°\(\times\)77.2° FoV. We use an ASENSING INS570D integrated navigation unit to provide GNSS and inertial measurements. We also collect the wheel encoder readings of the two rear wheels. The localization ground truth is kept by a MAPSTNAV POS620 high-precision integrated navigation system with fiber optic gyros. The POS620 supports a precise post-processing procedure, which can achieve centimeter-level localization accuracy in long tunnels. In addition, we manually set up several check points inside the tunnel (on the wall or on the floor) using a total station to further verify the localization and mapping accuracy. All our algorithms are implemented in C\({}^{++}\) and executed in Ubuntu Linux using ROS [34]. We use an onboard computer with two NVIDIA Jetson Xavier NX for real-time processing in the vehicle. Since all our sensors are hardware-synchronized, we record the GNSS timestamp for each SLAM pose output. Then we can directly evaluate the localization accuracy through timestamp matching.

### _Ground Tests_

The first experiment seeks to evaluate the positioning and mapping accuracy of our system in an open-sky environment. We first stay still waiting for the RTK initialization, and our SLAM algorithm automatically aligns the estimation coordinates with the global coordinates in this stage. Since there are currently no open-source algorithms which integrate two LiDARs of different mechanisms, we select three state-of-the-art (SOTA) open-source algorithms, LIO-SAM [35], LiLi-OM [36], and FAST-LIO2 [4], that apply to both spinning and non-repetitive LiDARs for comparison. For the latter two approaches, we add the GNSS constraints of our approach to the back-end utilizing GTSAM [37]. In addition, we record the same GNSS timestamps for the selected approaches for positioning evaluation. The first sequence is in the open sky, 2.7 km in length with a time duration of 417.5 s. We plot the mapping result of the Livox Avia LiDAR in Fig. 5, in which the consistent and clear building edges indicate that our method is of high precision globally. Besides, we also provide four close views in Fig.
5 for the readers to inspect the local registration accuracy. We denote INS570D as the direct output of the INS570D integrated navigation unit. The 3D root mean square error (RMS) and maximum error (MAX) are computed and reported in TABLE II. Since the RTK status is always at the fixed solution along the journey, all the methods can be well-constrained by the global position measurements and achieve an accuracy comparable with the post-processed results. The LiDAR SLAM approaches cannot achieve a better accuracy than the INS570D, as the range measuring error of the LiDAR sensor is above 2 cm. In addition, we plot the sequential position and attitude error curves of our approach in Figure 6. We can infer that the largest error is in the vehicle moving direction (x direction), which doubles the y- and z-directional errors. Therefore, we have reason to believe that the time synchronization between the different sensors is not accurate enough. Besides, we find that the asynchronous communication and delay between ROS nodes influence the performance of the SLAM algorithms. Different computers output non-identical results, and the odometry result for the same data input is not completely identical across different trials.

Fig. 4: Visualization of the mine service vehicle in (a) and an example inner view of the mine tunnel in (b).

The second sequence is half indoor and half outdoor, 320 m in length with a time duration of 278 s. As pictured in Fig. 7, the vehicle starts outdoors and slowly drives into the mining truck maintenance garage. The overall mapping result is visualized in Fig. 8, illustrating that the cooperative mapping benefits from both the 360° coverage of the Robosense RS-16 LiDAR and the high density of the Livox Avia LiDAR. To further verify our mapping accuracy, we transform the local coordinates into WGS-84 and project the map onto a satellite image as visualized in Fig. 9. The trees and building edges are well-matched to the background, demonstrating that our method is of high precision globally. To quantitatively present the localization accuracy of the various methods, we compute the RMS and MAX errors and report them in TABLE II. The advantage of adding the LiDAR sensors is now obvious: most of the approaches show an improvement of 25% over INS570D. This significant increment happens mainly indoors, where the GPS signal is blocked and the IMU mechanization cannot hold for one minute. LiDAR SLAM now acts as a strong pose constraint during the GNSS outages, which significantly improves the localization accuracy.

Figure 5: The Livox Avia mapping result of ground test, sequence one. The top view of the overall mapping is pictured in the middle; (a), (b), (c), and (d) visualize the mapping in detail. (a) shows the mine trucks, (b) is the dormitory for miners, (c) presents the mine conveyor belt, and (d) is a crossing. Figure 6: The 3D positioning and 3D attitude error curves of our method for ground test, sequence one. Figure 7: The trajectory of our approach plotted onto the satellite image.

### _Underground Tunnel Tests_

The second experiment seeks to evaluate the positioning and mapping accuracy of our system in the mine tunnel. As visualized in Fig. 10, the mine service vehicle stays still outside of the tunnel for RTK and coordinate initialization. Then it enters the tunnel, travels for more than 1700 s, and finally leaves the tunnel; the overall time consumption is 2137 s. The global map is plotted in Fig.
11, in which the top view of the map demonstrates that our map is of high consistency horizontally. Besides, the side view of our map illustrates that our result is of high consistency vertically. In addition, we present some of the mapping details both inside and outside of the mine tunnel; the clear and vivid structures on the wall and on the ground demonstrate that our mapping is of high precision locally. The underground tunnel is dominated by flat walls and ground, with low texture and repetitive patterns, making it one of the most challenging scenarios for SLAM algorithms. We observe that all the selected methods fail to provide complete state estimation results throughout the tunnel. They either 'stop' at certain areas or fail completely with great errors. On the contrary, our approach can provide seamless and accurate pose estimation results along the path. As pictured in Fig. 12, the GNSS-IMU-odometer tightly coupled output of INS570D can provide continuous pose output regardless of the environment variations. However, the GNSS measurements merely give a one-off correction when the satellite signal is available, and the accumulated errors are not corrected where the satellite signal is unavailable. On the other hand, our approach utilizes the detected loop and the GNSS positioning to perform the one-off correction. The following accumulated errors are then corrected by ICP-based map matching. In this way, we can correct the errors spread along the trajectory. To further present the superiority of our system, we plot the absolute error over distance in Fig. 13 for detailed reference. We can directly infer that our method has magnitudes of higher accuracy than the INS570D. The reason is two-fold: Firstly, the mine service vehicle has an explosion-proof design, and the tires are made of rubber, which easily leads to wheel slippage in the mine tunnel. Therefore, the wheel odometer is not accurate, especially when the vehicle enters or leaves tunnel branches. Secondly, the accumulated errors are only corrected once when the GNSS signal is available, leaving the remaining errors unsolved. As shown at distance 6600 m in Fig. 13, the errors are corrected with many outliers observable. On the other hand, our loop detection and re-initialization process smooths the whole trajectory bi-directionally. The RMS and MAX errors of our system are 2.465 m and 16.861 m.

Fig. 8: The cooperative mapping result of two LiDARs in the middle, with the color coded by height variations. (a) and (b) show examples of indoor and outdoor mapping. (c) and (d) present the advantage of the cooperative mapping, where both coverage and density are ensured. Fig. 9: The mapping result projected onto the satellite image, with the color coded by height variations. Fig. 10: The trajectory of our approach plotted onto the satellite image.

### _Consistency Tests_

The third experiment seeks to evaluate the consistency of our system across different journeys and datasets. We use GTSAM to perform pose graph optimization, which may generate non-identical pose estimation results even for the same dataset across different trials. The consistency measures the similarity of the different paths, which strongly describes the GNSS dropout performance. Besides, the vehicles need to transport staff or necessities to given branch tunnels, so global consistency is of vital importance for such tasks. We seek to check the consistency across different platforms and trials.
In addition to the onboard computer, we use a laptop with an Intel i7-10510U CPU and 16 GB RAM for comparison. We perform three runs on each computer for the same dataset; the parameters remain unchanged across trials. The trajectories of the different trials are visualized in Fig. 14, where we can see that the paths over the various trials do not change much. The maximum trajectory-to-trajectory error is below 20 cm, which is acceptable for cross-platform vehicle navigation.

### _Ablation Study_

In this experiment, we aim to understand the contribution of different sensors and factors. We hereby define the notations in TABLE II for illustration. The first test seeks to understand the contribution of the different LiDARs. We employ a tunnel-only sequence for illustration and plot the individual trajectories in Fig. 15. Ours w/o Livox fails shortly in the tunnel and generates large errors. This is due to the flat walls in the tunnel: even the extracted surfels are repetitive, and the point-to-plane pose estimation is not accurate. On the contrary, ours w/o RS survives in this scenario. The reason is twofold: Firstly, the surfels extracted from the surrounding tunnel walls are of high similarity and are harmful to pose estimation. The restricted FoV of the Livox Avia is now an advantage in the tunnels, where the observable features are mainly in front of the vehicle and the harmful features are largely omitted. Secondly, the Avia has a higher density than the RS-16. The increased density leads to more observable features in a single scan, where slight environment changes can be detected. Such changes include road signs, holes in the wall, or road curbs in the tunnel, which can provide strong constraints as discussed in our previous work [38]. These two reasons can be interpreted as the DOP in the field of GNSS, where each extracted surfel can be viewed as a satellite. The satellite distribution is now represented by the surfel distribution, where the Avia has a lower DOP due to its better distribution. Therefore, the pose estimation utilizing the Avia has an optimal solution. However, the RS-16 is still indispensable to the system. As visualized in the two insets of Fig. 15, when the RS-16 is excluded from pose estimation, the system generates slight errors under irregular motion (sharp turning or forward-backward moving). The omnidirectional view of the RS-16 effectively constrains the pose estimation result, especially in the yaw direction. To further reveal this effect, we plot the two maps around the same tunnel branch for illustration in Fig. 16. It is evident that the global map is consistent and the local map is clear when the RS-16 LiDAR is utilized for pose estimation. The second test seeks to reveal the effectiveness of surfel-based scan matching. We also implement a LOAM-like [39] feature-points-based scan matching version of our system, denoted as ours w/o Surfel. In addition, we also treat feature points with large intensity variations as edge points, as presented in our previous work [18].

Fig. 11: The top view of the global map in (a); the four dashed white circles indicate the tunnel branches where the vehicle enters and transports staff workers. (b), (c), and (d) are detailed views of the cooperative mapping result, where (b) shows the ground view, (c) presents the entrance of the tunnel, and (d) gives a tunnel branch. (e) is the side view of the global map. Fig. 12: The trajectory comparison of our method and INS570D w.r.t. the post-processed ground truth. Fig. 13: The absolute translational errors over distance.
Since the surfel-based scan matching also suffers from the degeneration for the RS-16, as mentioned in the last paragraph, we only select the Avia LiDAR for illustration. We select two scenarios, one on the ground and one in the tunnel, for comparison. The extracted surfels and feature points in the different scenarios are visualized in Fig. 17. It is seen that the edge and planar points have a good distribution on the ground, which can provide strong pose constraints in all directions. However, the planar points have a bad distribution in the tunnel, where only the lateral motion can be constrained, leading to partial unobservability of the longitudinal direction. On the other hand, the extracted surfels have a uniform distribution both on the ground and in the tunnel, which should give better pose estimation results. We hereby utilize a tunnel-only sequence to verify our assumption. As pictured in Fig. 18, the vehicle starts at the middle of the tunnel, travels to the last tunnel branch, returns to the start point, and finally leaves the tunnel. Although ours w/o Surfel maintains consistent and accurate pose estimation for more than 1 km in the tunnel, it still fails to carry this result to the ground due to degeneration problems. On the other hand, the surfel-based scan matching can finish the whole journey and achieves a 40% lower return-to-start-point error as visualized in Fig. 18, consistent with our hypothesis. The third test seeks to understand the contribution of the GNSS re-initialization stage. The sequence used is the same as in the underground tunnel test. We denote ours w/o RE as our system merely utilizing GNSS pose graph optimization without the re-initialization stage. When re-initialization is not included, our system directly integrates the pseudorange and carrier-phase measurements into the joint state estimation when the integer ambiguity is at the fixed solution. Although the maintained pose graph is updated upon the GNSS measurements, the accumulated errors are not eliminated completely, as pictured in Fig. 19(a). This is because ours w/o RE uses the same coordinate transformation matrix along the journey, which is acceptable when GNSS is always available or only temporarily unavailable. However, this impact is not negligible in the case of longer dropouts, where the SLAM algorithms suffer from drift during the absence of GNSS measurements. Therefore, there is a large deviation between the pose estimation of SLAM and GNSS, leading to not fully compensated states as visualized in Fig. 19(a). If we manually increase the weight of the GNSS measurements, the drift outdoors can then be fully corrected. However, the trajectory is not coherent, with several bulges caused by this "forced" optimization, as visualized in Fig. 19(b). This non-consistent state also leads to large map distortion, especially in the rolling direction, where the tunnel is rotated by more than 30 degrees.

Figure 14: The consistency test across different platforms and trials; the three insets denote the start, middle, and end of the journey. Figure 15: The trajectory comparison of the LiDAR contribution ablation study. The two insets denote the errors caused by irregular motions of ours w/o RS. Figure 16: Visualization of the mapping difference without the Robosense RS-16 LiDAR. (a) is the result of our approach whereas (b) is from ours w/o RS. The two insets in (a) and (b) present the mapping details of a tunnel branch. Figure 17: Visualization of the extracted feature points in (a) and surfels in (b); the top two are on the ground while the bottom two are in the tunnel. The red and green points in (a) are the extracted planar and edge points. The markers in (b) are the extracted surfels. Figure 18: The trajectory comparison of surfel-based and feature-points-based LiDAR SLAM. The three insets denote the starting area, a tunnel branch, and places where the feature-points-based approach has large outliers. Figure 19: The trajectory comparison of our approach with and without the GNSS re-initialization stage. Ours w/o RE has a lower GNSS weight in (a), where drifts outdoors are not fully compensated. On the other hand, the GNSS weight is higher in (b), where many bulges exist on the path. The upper inset in (b) denotes an example of such bulges.

### _Runtime Efficiency_

Since our system is designed to provide state estimation results to the autonomous mine service vehicles, runtime efficiency is of prime concern for real-world deployment. We thereby drive the vehicle into the tunnel, travel for 4067 s, and record the per-module time consumption along the journey. Note that our algorithm only utilizes two of the six cores of the ARM Carmel CPU (the internal CPU of the Xavier NX). We plot three key processes of the LiDAR odometry in Fig. 20(a). The preprocessing stage includes outlier and distortion removal, downsampling, and voxelizing. The surfel extraction and association curves denote the time consumption of the surfel feature extraction and surfel matching processes. It is seen that the LiDAR odometry can reach real-time performance even on this embedded ARM CPU; the average runtime is 27.20 ms. Since real-time map rendering is too time-consuming for this power-limited platform, we disable the visualization thread and plot the state optimization time usage in Fig. 20(b). We can infer that the time used for optimization increases continuously along the journey due to the enlarged pose graph. Besides, when the vehicle drives out of the tunnel, the loop detection and re-initialization thread is invoked, leading to a large increment after 3500 s. The average time consumption inside the tunnel is 8.12 ms, whereas that after leaving the tunnel is 29.57 ms. Note that the time consumption of the IMU/odometer preintegration is negligible, and we do not consider this impact in the experiment. We also record the CPU and memory usage along the journey. Since our LiDAR odometry and state optimization processes run in different threads, we continuously record the thread statistics as shown in Fig. 21. It is seen that the CPU and memory usage of the LiDAR odometry thread is almost steady for the entire journey. Our memory is 16 GB, and the average memory consumption of the LiDAR odometry is 1.31 GB. On the other hand, the GTSAM-based graph optimization constantly occupies more computational resources due to the increasing graph size.

## V Conclusion

In this paper, we proposed a localization and mapping framework for autonomous mine service vehicles, achieving accurate and robust pose estimation results in such scenes. Our system integrates measurements from two LiDARs, an IMU, wheel odometers, and optionally a GNSS receiver in a tightly-coupled manner. The front-end includes two parallel-running ESKF-based LiDAR-inertial odometries. Different from common algorithms utilizing feature points, we extract surfel elements for scan-to-scan registration. Pose results from the different estimation engines are jointly optimized at the back-end using pose graph optimization.
To fully alleviate the long-term drift accumulated in the tunnel during GNSS dropouts, we utilize a loop-detection-based re-initialization process for state alignment. The proposed method has been extensively validated in real-world mine environments, with an acceptable accuracy in most scenarios. In addition, our system has been successfully deployed on several autonomous mine service vehicles for state estimation.

Fig. 20: The time consumption of the LiDAR odometry part of our system in (a) and the state optimization part in (b). Fig. 21: The CPU and memory usage of the LiDAR odometry thread and the state optimization thread in (a) and (b).

There are several directions for future research. Checkpoint-based mapping evaluation in tunnels is desirable, which would help to understand the system's mapping performance. Another research direction concerns developing a safer and easier-to-use platform for large-scale deployment. Many engineering designs, such as an explosion-proof shell and long-duration continuous operation, need to be considered.

## Acknowledgment

We would like to thank Suzhou plusgo Co., Ltd and the Tsinghua University Suzhou Automotive Research Institute for program support and data collection.
2306.07984
Cross Chain Bribery Contracts: Majority vs Mighty Minority
Bribery is a perilous issue in the real world, especially in an economical aspect. This fraudulence is unavoidable, and more importantly, it is more difficult to trace in case smart contracts are utilized for bribing on a distributed public blockchain. In our paper, we propose a new threat to the security of a blockchain system, cross-chain bribery using smart contracts. An arbitrary wealthy briber can utilize cross-chain smart contracts to manipulate a consensus mechanism on a victim's blockchain or to disgrace a victim's blockchain. To better understand this threat, our paper proposes a framework to analyze bribery using cross-chain smart contracts. We analyze the amount of incentive to bribe rational miners in a victim's blockchain and also a full cost of conducting a cross-chain bribery attack. The result is that such attacks can be carried out with a reasonable amount of money or cryptocurrencies.
Quang Tran, Lin Chen, Lei Xu, Yang Lu, Rabimba Karanjai, Weidong Shi
2023-06-08T16:19:57Z
http://arxiv.org/abs/2306.07984v1
# Cross Chain Bribery Contracts: Majority vs Mighty Minority

###### Abstract Bribery is a perilous issue in the real world, especially in an economical aspect. This fraudulence is unavoidable, and more importantly, it is more difficult to trace in case smart contracts are utilized for bribing on a distributed public blockchain. In our paper, we propose a new threat to the security of a blockchain system, cross-chain bribery using smart contracts. An arbitrary wealthy briber can utilize cross-chain smart contracts to manipulate a consensus mechanism on a victim's blockchain or to disgrace a victim's blockchain. To better understand this threat, our paper proposes a framework to analyze bribery using cross-chain smart contracts. We analyze the amount of incentive to bribe rational miners in a victim's blockchain and also a full cost of conducting a cross-chain bribery attack. The result is that such attacks can be carried out with a reasonable amount of money or cryptocurrencies.

## 1 Introduction

Blockchain [8, 16, 19] provides a decentralized method for record keeping and information/transaction validation. Various applications are developed on top of the blockchain platform. One of these applications is the smart contract, which brings many advantages, e.g., saving time, reducing conflicts, and saving money. Several popular blockchain platforms support smart contracts, such as Ethereum, EOS, Hyperledger Fabric, and Stellar, which can be feasible for solving a range of business challenges [2]. Even though smart contracts provide a fundamental feature to extend the usage of the blockchain, people tend to consider their legitimate usage while underestimating the adverse effects they can cause. Smart contracts can be utilized in a destructive manner, i.e., for bribery to undermine existing consensus mechanisms and gain financial benefits [3, 5, 18]. As one of the most popular blockchain construction methods, the proof-of-work (PoW) based mining process requires vast computational power to solve mathematical puzzles in order to produce a valid block. It is tough, if not impossible, for individual miners to compete with professional mining farm corporations. Therefore, individual miners usually join a mining pool to maximize their profits. In other words, we can claim that the miners in a public blockchain are not united and consolidated. They form groups based on one fundamental factor: maximizing efficiency and incentives. Thus, they can quickly change their minds and can easily be manipulated and targeted in a bribery contract attack. Unlike prior efforts primarily focusing on analyzing selfish mining behaviors using incentives restricted within a specific blockchain system to maximize a briber's beneficial rewards, we aim at outlining the possibility of attacking the security and stability of public blockchains and cryptocurrency systems as an economy-driven game. We propose a cross-chain bribery attack scheme in which a briber targets a distributed public blockchain to undermine the consensus mechanism on the victim chain through short-term bribing and manipulation of bribed miners. The example scenario in our paper uses the Ethereum blockchain platform as a model to analyze how such an attack can be carried out. The cross-chain attack can be applied to any public blockchain, since there always exist selfish behaviors within a public blockchain.
Our contributions in this paper include: * Providing a holistic view of selfish behaviors on blockchains by showing the feasibility of external influences in the form of cross-chain bribery smart contracts. * Investigating the effect of cross-chain external incentives on the consensus mechanism of a victim blockchain. * Analyzing possible scenarios and consequences using selfish miners and validators in the Ethereum blockchain as an example. * Providing a possible strategy for rational players who maximize profit in a multi-chain and multi-currency setting. * Suggesting the necessity of additional research on selfish behaviors of rational miners in mining pools. The rest of the paper is organized as follows. In Section 2, we discuss selfish behaviors in blockchains and related works. Then we introduce cross-chain incentives in the form of bribery smart contracts and analyze their influence on selfish behaviors on the victim chain. After the analysis, we discuss future research directions and open problems. In the end, we conclude the paper.

## 2 Selfish Mining Strategy and Related Works

In a blockchain system built on proof-of-work (PoW) and the longest-chain principle, a regular user works on the latest block of the longest branch to produce a new block. After the user successfully produces a valid block, he/she broadcasts the new block to the whole network immediately to claim the reward. However, selfish miners who want to maximize their benefits can adopt a different strategy to get more rewards. One possible strategy is to withhold produced blocks and submit them together to the system. The idea of selfish mining has gained more attention since it was first proposed in 2014 [11]. The paper pointed out that a simple majority is not enough to protect the security of Bitcoin in the presence of selfish miners. Specifically, the authors propose an attack in which selfish miners can push their revenue up while attracting more rational miners to join the selfish mining pool until it becomes a majority. Following this inspiration, other models were proposed to stir up the attention of researchers and developers in understanding the potential threat of selfish mining in a public blockchain platform [3, 18]. Jian Niu and Chen Feng analyzed selfish mining in the Ethereum platform using a 2-dimensional Markov model to determine the threshold that makes such a fraudulent strategy profitable, which is lower than the one in Bitcoin. However, in reality, the chance of selfish mining occurring can be meager, or it might not happen at all. Two essential conditions need to be met: computational power and hashing power. Selfish miners must have strong computational power and control enough hashing power so that they can generate blocks quicker than the honest majority of miners. It becomes a game of cat and mouse. Besides controlling computation to perform selfish mining, a new methodology, known as a bribery contract, is also considered, which can achieve similar effects [14, 15]. A smart contract is an application written in a programming language by developers, running on top of a supported blockchain platform. A smart contract contains a set of rules proposed by a creator to those who interact with it and accept these rules. When these pre-defined conditions are met, a contract agreement is automatically enforced, and a transaction is also automatically generated and sent to the network for verification before it is inserted into the blockchain.
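Returning to the withholding strategy sketched at the beginning of this section, the following toy Monte Carlo simulation, loosely following the state machine of [11], illustrates why a pool with hash-power share `alpha` can earn more than its fair share; `gamma` (the fraction of honest miners that mine on the selfish branch during a tie) and all numbers are illustrative:

```python
import random

def selfish_share(alpha: float, gamma: float = 0.0,
                  rounds: int = 1_000_000, seed: int = 1) -> float:
    """Fraction of main-chain blocks won by a withholding pool of share alpha."""
    rng = random.Random(seed)
    lead, tie = 0, False        # private-branch advantage; equal-length race flag
    selfish = honest = 0
    for _ in range(rounds):
        if tie:                 # fork of equal length: the next block decides
            r = rng.random()
            if r < alpha:                          # pool extends its own branch
                selfish += 2
            elif r < alpha + gamma * (1 - alpha):  # honest mines on pool's head
                selfish += 1
                honest += 1
            else:                                  # honest branch wins
                honest += 2
            tie = False
        elif rng.random() < alpha:
            lead += 1           # pool finds a block and withholds it
        elif lead == 0:
            honest += 1         # honest block, nothing withheld
        elif lead == 1:
            lead, tie = 0, True     # pool publishes its block: a race begins
        elif lead == 2:
            selfish += 2            # publish both blocks, orphan the honest one
            lead = 0
        else:
            selfish += 1            # publish one block, keep a safe lead
            lead -= 1
    return selfish / (selfish + honest)
```

For example, `selfish_share(0.40)` comes out noticeably above 0.40 even with `gamma = 0`, reproducing the qualitative result that a simple majority is not enough.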
A smart contract feature can impact the current incentive and fairness mechanisms in a blockchain consensus protocol. Selfish miners make use of electronic contracts to mount a bribery attack on a targeted blockchain platform by giving bribees incentive rewards for committing fraud. One example is that a briber creates a bribery contract to give an amount of coin tokens as a reward to those miners who withhold announcing a newly mined block until the briber successfully mines and announces this block to the network. This attack can be achieved easily by any wealthy adversary, without the need to compete in hashing and computational power with the existing miners. Importantly, this fraudulence favors both the briber and the bribee, since they both get some profit for doing so, especially in the case of the Ethereum platform. The briber gets a block mining reward while the bribee also gets both an uncle block reward and an additional bribery contract reward.

Figure 1: Selfish miners withhold broadcasting new blocks.

## 3 Selfish Behavior with Cross-chain Incentives

### Bribery Contracts - the New Threat to Consensus

Blockchain has been used to build various applications across many business platforms [20, 21], and the smart contract is a fundamental feature to extend blockchain's impact [7, 13, 22]. Most existing works consider the legitimate usage of smart contracts, but tend not to pay attention to the other side: smart contracts can be used in a destructive manner. In particular, smart contracts can be utilized for bribery to affect existing consensus mechanisms. Bribery attacks, or bribery contracts, are severe problems in that they can manipulate or destroy the fundamental assumption of the standard smart contract execution model, which primarily relies on consensus or a majority-accepted outcome [5, 6]. That an arbitrary person participates in a game and accepts the game's rules does not necessarily mean that this person will never be manipulated into changing his/her mind when he/she is offered appealing compensation for violating the game's rules. This attack can be achieved easily by any wealthy adversary. Remarkably, tracing a briber in a blockchain system seems to be extremely hard, since the system is designed as a decentralized environment and to protect user anonymity [12]. Importantly, the fraudulent behavior favors both the briber and the bribees, since they both achieve some goals afterwards.

### Influences from Outside of a Blockchain

A distributed blockchain is built on many algorithms and security protocols, which aim at maintaining unity and fairness for all joining parties. While the algorithms, protocols, and other factors in the blockchain network are fixed and are not likely to change, the human behavior of users in the system is complicated and prone to change. Unlike a machine, which operates exactly as it is programmed, many people may change their behavior based on the surrounding conditions (and thus deviate from the protocol), and this is mainly the case for those who are chasing economic benefits in a system of financial incentives like a blockchain. Fig. 2 shows the hash rate power of Bitcoin and Ethereum mining pools. Note that these mining pools are not operated by individuals. Instead, these are groups of miners who join a mining pool to get a better profit. In other words, these can be rational miners who intentionally chase the profit in mining. If they are offered a better amount, there is no doubt that they can easily switch.
"Why buy when you can rent?" [4] since there always exist rational miners in a distributed-public blockchain. Note that our paper is aimed to discuss on possibility of cross-chain bribery attack. A briber or a group of briber targets and attacks a public blockchain on purpose. They are whales and have enough fund to perform an attack on another blockchain in favor of controlling block generation or disgracing another blockchain. Remarkably, the briber does not require to have a majority of hashing power nor to participate in mining. They can create a cross-chain contract to fascinate those rational miners in a targeted blockchain network. ## 4 Case Study and Analysis In this section, we provide an example scenario of cross-chain bribery contract using decentralized-base platform Ethereum as a model architecture and victim chain. Note that the scenario can be applied on any public blockchain platform. Figure 2: Bitcoin and Ethereum mining pool hash rate power ### Ethereum Platform Ethereum [9], a second largest decentralized cryptocurrency platform by market capitalization 5, allows for the execution of smart contracts on the blockchain. To create a protocol for smart contracts which offer beneficial and efficient interactions between participants in the network, Ethereum builds a Turing-complete machine, called as Ethereum Virtual Machine or EVM, as the heart of its blockchain platform. Developers can create their application, run on EVM using any friendly programming languages, to create their own arbitrary rules of ownership, transaction formats, and state transition functions. Thus, in term of a smart contract, Ethereum can sometimes be considered as a "world computer". Footnote 5: Available from: [https://coinmarketcap.com/all/views/all/](https://coinmarketcap.com/all/views/all/) In Ethereum, a user is called a client. A client that runs mining on Ethereum blockchain is called a miner/a node. A client can send ETH, which is the cryptocurrency of Ethereum, to a smart contract or to other clients by issuing a transaction. Every transaction that is deployed costs some gas fee to execute. The gas fee is an incentive reward to a miner who collects those transactions and attaches them into a new mining block. On Ethereum blockchain, a new mining block can be an empty block or a block that contains a number of transactions which are limited by "gas limit". Moreover, there is no centralized party to validate new mining blocks. By default, one node can connect up to 25 peers in the network to form up a subset of nodes. Many subsets of nodes in the Ethereum network take responsibility for broadcasting and validating new mined blocks when they are announced. ### Ethereum Blockchain Structure In Ethereum, there are two types of blocks - an uncle block and a main block. A main block is a valid block and is appended to a longest chain. Unlike Bitcoin platform which does not accept a late broadcasting block as an uncle block, Ethereum offers this feature to maintain the security of the chain, which allows for faster block generation time (\(\approx\)15 seconds in Ethereum, and \(\approx\)10 minutes in Bitcoin). A standard Ethereum's block structure consists of three components: a block header (contains parent's block hash, account's address, Merkle Tree root hash, a time stamp, block's difficulty, and a nonce,...), list of references to uncle blocks (max uncle reference is 2), and a set of transactions. 
While a main block contains the three components above, an uncle block only contains a block header. To be considered an uncle block, a block is required to be referenced by another later main block within 7 rounds (see Fig. 3). Otherwise, it will be considered "block lost".

### Mining Reward

In the Bitcoin blockchain, because the block generation time is high (\(\approx\)10 minutes), which overcomes the block propagation delay, an orphan block - a block that is broadcast after a main block - is discarded and is not given any reward. Unlike Bitcoin, Ethereum aims to increase transaction throughput by decreasing the block generation time (\(\approx\)15 seconds). Thus, to maintain the security of its chain, a late-broadcast block is also accepted as an uncle block and is given a partial reward. Hence, there are three types of block rewards: a main block reward, an uncle block reward, and a nephew block reward [10, 17]. The main block reward is used in both Bitcoin and Ethereum; it gives a reward to encourage miners to solve the computational puzzle as fast as possible. The uncle and nephew block rewards are exclusive to Ethereum. Because uncle blocks are submitted later than the main block, they are only given a partial reward, which depends on when another main block references them. And the main block which references the uncle block is also given an additional reward, the nephew reward, to encourage miners to attach uncle blocks to maintain the chain's security. Each miner who successfully mines a main block receives a reward of 2.0 ETH (not including nephew rewards and transaction fee rewards). The uncle block reward is not a fixed number. The amount of the reward varies, since an uncle block is required to be referenced by a later main block. The sooner it is referenced, the higher the reward. The equation to calculate an uncle block reward is:
\[U_{R}=(U_{n}+8-B_{n})\times R/8,\]
where \(U_{R}\) is the uncle reward, \(U_{n}\) is the uncle block height, \(B_{n}\) is the referencing main block's height, and \(R=2.0\) ETH. In our scheme, we propose two additional rewards to achieve a bribery attack: a bribe uncle reward and a bribe acceptance reward. The bribe uncle reward is given to those miners who accept to mine a block on a fork chain instead of the longest chain. The bribe acceptance reward is a reward given to those clients/nodes which accept and insert a block on a fork chain into their blockchain. The amount of each reward is discussed later in the following section.

\begin{table} \begin{tabular}{l l l l} \hline \hline & Ethereum & Bitcoin & Usage \\ \hline Main Reward & Yes & Yes & Incentive for mining a block \\ \hline Uncle Reward & Yes & No & Maintain chain’s security \\ \hline Nephew Reward & Yes & No & Encourage referencing uncle blocks \\ \hline Bribe Uncle Reward & No & No & Special reward in our scheme given to miners who mine on the fork chain \\ \hline Bribe Acceptance Reward & No & No & Special reward in our scheme given to nodes who accept blocks on the fork chain \\ \hline \hline \end{tabular} \end{table} Table 1: Mining Reward

Figure 3: Uncle block reward varies by time
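The uncle reward schedule above and the decay of Fig. 3 can be checked in a few lines; the guard reflects the seven-round "block lost" rule described earlier:

```python
R = 2.0  # main block reward in ETH

def uncle_reward(uncle_height: int, nephew_height: int) -> float:
    """U_R = (U_n + 8 - B_n) * R / 8, valid while B_n - U_n <= 7 (Fig. 3)."""
    depth = nephew_height - uncle_height
    if not 1 <= depth <= 7:
        return 0.0  # referenced too late: the block is lost
    return (uncle_height + 8 - nephew_height) * R / 8

# An uncle referenced one block later earns 7/8 * 2.0 = 1.75 ETH;
# referenced seven blocks later it earns only 1/8 * 2.0 = 0.25 ETH.
print(uncle_reward(100, 101), uncle_reward(100, 107))
```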
### Bribery Contract and Pseudocode

In our scheme of the cross-chain bribery attack, we use the Ethereum blockchain platform as the model architecture to provide an example scenario. Note that this scheme can be applied to any public blockchain which is targeted by the briber. To set up our scheme, we consider the case of one briber who targets Ethereum to perform the cross-chain bribery attack. The briber has an account with his/her crypto-coins on another blockchain platform and also has enough money to perform this attack. Remarkably, a briber can be any wealthy individual or a group of people. They do not necessarily have a majority of, or even own any of, Ethereum's computational power to carry out this cross-chain bribery attack. Their goal is to manipulate miners/nodes in the Ethereum network to control the generation of blocks or to disgrace the Ethereum blockchain. Miners run the mining process to find a block and get an incentive reward. They can be individual miners or can join a mining pool to pursue a more beneficial reward. Notably, they all share a common concept - maximizing the efficiency and profit of their mining process. More importantly, even if a miner is honest in the classical case, it does not necessarily mean they remain honest when they are offered better compensation [18, 11, 3, 5]. Thus, a cross-chain bribery contract is feasible when both the briber and the bribees can achieve their goals. In our scenario, we propose a \(BribeContract\) that rewards miners who intentionally mine blocks on a fork chain, and also rewards bribees who accept and insert these new mining blocks into their chain. These pre-defined rules are created by the briber. Furthermore, a reward transaction is automatically issued when bribees prove that they meet the requirements. Our proposed approach requires the briber to pay the full cost of performing this cross-chain attack. The amount of the bribe incentive reward should be appealing enough that bribees are willing to join. We discuss this further in the following section. 1. **Briber creates a BribeContract on another blockchain platform.** The BribeContract specifies the rules for the attack, such as the amount of the bribe incentive reward, the criteria for miners to be considered as bribees, and the process for issuing the reward transaction. 2. **Briber sends the bribe incentive reward to the BribeContract.** The bribe incentive reward is locked in the BribeContract and cannot be withdrawn by the briber. 3. **Miners mine blocks on a fork chain of the target blockchain.** The miners are incentivized to mine blocks on the fork chain because they will receive the bribe incentive reward if their blocks are accepted by the bribees. 4. **Bribees accept and insert the new mining blocks into their chain.** The bribees are incentivized to accept and insert the new mining blocks into their chain because they will receive the transaction reward. 5. **The target blockchain is forked.** The new mining blocks on the fork chain are accepted by the bribees and become part of their chain. This creates a fork in the target blockchain. 6. **The briber achieves their goal.** The briber's goal may be to manipulate the miners/nodes in the target network to control the generation of blocks or to disgrace the target blockchain. **Pseudocode:** ``` 1: procedure BribeContract 2: - Briber \(B\) creates a bribe contract \(\beta\) 3: - Contract \(\beta\) contains: 4: + \(\alpha_{a}\): contract creator's address 5: + \(\mu_{m}\): amount of incentive for mining a block on the fork chain 6: + \(\mu_{a}\): amount of incentive for accepting a mining block on the fork chain 7: + \(\gamma\): total side-chain tokens deposited into the contract 8: + \(f(b)\): a context to identify an arbitrary bribee (b) 9: + \(f(c)\): a context function of condition (c) being met 10: + \(f(t)\): a context to terminate the contract when the fork chain becomes the main chain 11: + \(S_{b}\): starting block on the fork chain
12: end procedure
``` **Algorithm 1** Propose Contract ```
1: procedure Prove
2:    Bribee \(b\) proves that contract condition \(c\) is met
3:    if \(c=\) _mining block on fork chain_ then
4:        \(b\) commits a fork-chain to contract \(\beta\)
5:    end if
6:    if \(c=\) _accepting block on fork chain_ then
7:        \(b\) commits a fork-chain to contract \(\beta\)
8:    end if
9: end procedure
``` **Algorithm 2** Prove and Commit ```
1: procedure Verify
2:    A submitted block should stay within a range of the required number of blocks
3:    For example: \(S_{b}\ldots\leftarrow U_{v}\leftarrow U_{v+1}\leftarrow U_{v+2}\ldots\leftarrow U_{v+7}\)
4:    if \(c=\) mining block on fork chain then
5:        if \(U_{v}\in[S_{b},U_{v+7}]\) then
6:            \(b\leftarrow\mu_{m}\)
7:        end if
8:    end if
9:    if \(c=\) accepting block on fork chain then
10:       if \(U_{v}\in[S_{b},U_{v+7}]\) then
11:           \(b\leftarrow\mu_{a}\)
12:       end if
13:   end if
14: end procedure
``` **Algorithm 3** Verify Algorithm 1 presents the contract proposal. The contract includes all required variables and the functions used to detect and verify qualifying conditions and to pay incentive rewards to bribees once a contract condition is satisfied. Algorithm 2 shows how a bribee proves that he/she qualifies for a reward. If a bribee mined a block on the fork chain, he/she should submit the chain starting from block \(S_{b}\) up to the latest block on the fork chain. A similar proof is required when bribees accept and insert a block mined on the fork chain. Because Ethereum's chain structure is designed to be tamper-resistant, a block submitted for verification must lie within a range of multiple blocks on the fork chain. Algorithm 3 verifies this requirement and pays the corresponding reward to bribees. ### Incentive Discussion The cross-chain bribery framework described above needs to encourage both bribers and bribees to participate in such an attack. The incentive mechanism consists of two parts: * The incentive reward that bribees receive when participating in a cross-chain bribery attack. * The cost that a briber must pay to achieve his/her goals. **Bribees**: Although miners can be honest in the traditional mining process (without bribery), it does not necessarily mean they remain honest when offered better compensation. The amount of Ethereum tokens that one miner receives can be described as \[\theta_{h}=\frac{\alpha}{\beta}\times R\times\frac{3600}{\phi},\] where (i) \(\theta_{h}\) denotes the number of tokens received in one hour, (ii) \(\alpha\) denotes the hashing power of the hardware used to solve the computational puzzle, (iii) \(\beta\) denotes the network hash rate, (iv) \(R\) denotes the main block reward (\(R=2\) ETH), and (v) \(\phi\) denotes the block generation time [1]. Unlike Bitcoin, it is less common to use ASICs to mine Ethereum; most miners use GPUs or CPUs. Thus, we set the value of \(\alpha\) in the range \([10,400]\) MHash/s in our model. Fig. 4 (left) shows the hourly reward that honest miners can receive through their hash-rate contribution when mining a new block. The maximum amount of ETH that a miner (\(\alpha=400\) MHash/s) can receive in one hour is around \(\theta_{h}=0.0014\) ETH/h. If a briber offers these rational miners a better incentive (e.g., \(\tau_{h}=0.002\) token-worth ($) on the cross-chain for accepting new blocks, and \(\gamma=3.0\) token-worth ($) for mining on the fork chain), rational miners are highly likely to join our cross-chain bribery attack. 
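As a quick sanity check of the reward arithmetic above, the following is a minimal Python sketch (our own illustration, not any client's implementation) of the uncle reward \(U_{R}\) and the hourly reward \(\theta_{h}\). The network hash rate used in the example is an assumed ballpark value, not a measured one.

```python
# A minimal sketch (our own illustration) of the reward formulas above.
R = 2.0        # main block reward (ETH)
PHI = 15.0     # block generation time phi (seconds)

def uncle_reward(u_n, b_n, r=R):
    """U_R = (U_n + 8 - B_n) * R / 8, valid while B_n - U_n <= 7."""
    return (u_n + 8 - b_n) * r / 8.0

def hourly_reward(alpha, beta, r=R, phi=PHI):
    """theta_h = (alpha / beta) * R * 3600 / phi."""
    return (alpha / beta) * r * (3600.0 / phi)

# Uncle referenced one block after its height: (8 - 1)/8 of the full reward.
print(uncle_reward(100, 101))          # 1.75 ETH
# A 400 MHash/s miner; beta is an assumed ~150 TH/s network hash rate.
print(hourly_reward(400e6, 1.5e14))    # ~0.0013 ETH/h, the order of Fig. 4 (left)
```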
The incentive bribe reward that bribees (rational miners in a mining pool) can get if they join the attack is \[\Gamma=\tau_{h}+\frac{\alpha}{\beta}\times 3\times\frac{3600}{\phi}.\] **Briber**: A briber can be any wealthy individual or a group of people. They have enough funds to perform a cross-chain bribery attack. Their goal is to control block generation or to disgrace the Ethereum blockchain by bribing rational miners/nodes in the network. The major issue is how much it costs to perform such an attack. Fig. 4 (right) and Table 2 show that the budget of a cross-chain bribery attack is acceptable to bribers. **Consequences:** In our model, the block generation time \(\phi\) is around 15 seconds. Within six hours, the number of blocks that can be generated by bribed miners is around 1440. If a briber targets one public blockchain and deploys cross-chain bribery attacks within a short time (e.g., 6 hours of attack), their cost (see Table 2 and Fig. 4) is lower than the potential damage caused to the victim blockchain. The attacker can potentially undermine the consensus mechanism of the victim chain, secretly short the price of the victim chain's currency and profit from the dropped price, or simply aim to disgrace the targeted chain. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Number of Bribees** & **1 Hour (\$)** & **3 Hours (\$)** & **6 Hours (\$)** \\ \hline 10,000 & 100,400 & 301,300 & 602,500 \\ \hline 20,000 & 102,200 & 306,600 & 613,300 \\ \hline 30,000 & 104,000 & 312,000 & 624,000 \\ \hline 40,000 & 105,800 & 317,300 & 634,700 \\ \hline 50,000 & 108,000 & 324,000 & 648,000 \\ \hline \end{tabular} \end{table} Table 2: Cost of cross-chain bribery attack Figure 4: Hourly reward and bribery contract cost ## 5 Future Research and Open Problems Our work points to a new direction for analyzing the security and stability of public blockchains and cryptocurrency systems. Distinguishing itself from prior efforts that primarily focus on analyzing selfish behaviors using incentives confined within a blockchain system (e.g., the block reward), our approach applies a holistic view in which a network of public blockchains and cryptocurrencies is considered as inter-connected systems where external influences can happen in the form of cross-chain transactions and contracts. This may significantly change the selfish behaviors of users and affect our understanding of the sustainability and stability of public blockchain systems. Our preliminary results suggest the necessity of additional research, in particular: detailed modeling of the decisions and strategies that may be adopted by rational players in the presence of cross-chain incentives for selfish behaviors; the risk posed by cross-chain bribery contracts to the stability and well-being of victim chains; holistic analysis of the strategies of rational players who maximize profit in a multi-chain, multi-currency setting; and possible mitigation strategies and design options. Holistic analysis from the agent perspective raises many open problems with respect to the nature of players in public blockchains. Although researchers are amenable to introducing rational players when analyzing public blockchains, there is a lack of consensus on the proper assumptions about the players and the implications of those assumptions for the long-term sustainability of public blockchain systems. 
In the presence of rational players whose in-chain behaviors are influenced by cross-chain rewards, how is the theoretical analysis of blockchain consensus and security properties affected? In particular, an adversary can potentially gain profit on one chain by deliberately inducing disruptions on other chains. This means that selfish players who engage in bad behaviors within a blockchain may not necessarily look for rewards within the same blockchain. ## 6 Conclusion Blockchain has found a variety of applications, and it is critical to guarantee the correctness of the system. We study a new threat to the security of a blockchain: cross-chain bribery using smart contracts. Bribery is a perilous problem in the real world, especially in its economic aspects. It is a hard-to-avoid form of fraud and, more importantly, difficult to detect when it is carried out through cross-chain smart contracts on distributed public blockchains. Recent studies have shown fraud schemes that utilize smart contracts to conduct bribery. In our paper, we build on this idea by proposing a cross-chain bribery attack that undermines a victim's consensus mechanism. In this paper, we outline the possibility of a cross-chain bribery attack that bribes selfish, rational miners in a targeted network, and we discuss a potential example scenario. A cross-chain bribery attack is feasible on public blockchains and cryptocurrency systems because there always exist rational miners who are incentivized by beneficial rewards. The possibility that all miners forgo a short-term benefit to protect a long-term one seems negligible. People might realize they did harmful things to others; however, when a better reward is on offer, many would not hesitate to try such a thing. Remarkably, the cost of carrying out one cross-chain bribery attack can be acceptable, while it can cause tremendous damage to a victim's chain, such as dropping its price or disgracing the victim's blockchain.
2304.07125
Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering
Having an intelligent dialogue agent that can engage in conversational question answering (ConvQA) is now no longer limited to Sci-Fi movies only and has, in fact, turned into a reality. These intelligent agents are required to understand and correctly interpret the sequential turns provided as the context of the given question. However, these sequential questions are sometimes left implicit and thus require the resolution of some natural language phenomena such as anaphora and ellipsis. The task of question rewriting has the potential to address the challenges of resolving dependencies amongst the contextual turns by transforming them into intent-explicit questions. Nonetheless, the solution of rewriting the implicit questions comes with some potential challenges such as resulting in verbose questions and taking conversational aspect out of the scenario by generating self-contained questions. In this paper, we propose a novel framework, CONVSR (CONVQA using Structured Representations) for capturing and generating intermediate representations as conversational cues to enhance the capability of the QA model to better interpret the incomplete questions. We also deliberate how the strengths of this task could be leveraged in a bid to design more engaging and eloquent conversational agents. We test our model on the QuAC and CANARD datasets and illustrate by experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model.
Munazza Zaib, Quan Z. Sheng, Wei Emma Zhang, Adnan Mahmood
2023-04-14T13:42:32Z
http://arxiv.org/abs/2304.07125v1
Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering ###### Abstract Having an intelligent dialogue agent that can engage in conversational question answering (ConvQA) is now no longer limited to Sci-Fi movies only and has, in fact, turned into a reality. These intelligent agents are required to understand and correctly interpret the sequential turns provided as the _context_ of the given question. However, these sequential questions are sometimes left implicit and thus require the resolution of some natural language phenomena such as _anaphora_ and _ellipsis_. The task of question rewriting has the potential to address the challenges of resolving dependencies amongst the contextual turns by transforming them into _intent-explicit questions_. Nonetheless, the solution of rewriting the implicit questions comes with some potential challenges such as resulting in verbose questions and taking _conversational_ aspect out of the scenario by generating the self-contained questions. In this paper, we propose a novel framework, CONVSR (CONVQA using Structured Representations) for capturing and generating intermediate representations as conversational cues to enhance the capability of the QA model to better interpret the incomplete questions. We also deliberate how the strengths of this task could be leveraged in a bid to design more engaging and more elquent conversational agents. We test our model on the QuAC and CANARD datasets and illustrate by experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model. Conversational question answering, information retrieval, question reformulation, deep learning. ## I Introduction Conversational question answering (ConvQA) is a relatively new paradigm and considerable task that possesses the potential to revolutionize the way humans interact with machines [1]. It, in fact, requires a system to answer a set of interrelated questions posed by any user [2, 3, 4]. In human conversations, these sequential questions could be _implicit_ and are very easy for them to understand [5]. However, ConvQA-based machines are expected to learn and resolve such implicit dependencies from the given context. For instance, let us consider a ConvQA session pertinent to a TV show in Table I. In order to interpret and answer Q2, the system is expected to have the information about Q1 and A1. Similarly, it would be difficult for the system to find the answer to Q3 since there could be many first seasons of different series. To retrieve the correct answer, the system needs to incorporate the show name in the question. Furthermore, the task of question rewriting (QR) has been extensively researched upon by researchers in the information extraction community. Nevertheless, it is fairly new in the field of ConvQA and is recently introduced as an independent task in some of ConvQA models [6, 7, 8]. Simply put, QR refers to the task of reformulating the given question by adding missing information or resolving co-references. This process generates a stand-alone question by extracting it out of the conversational context [9]. However, this results in losing valuable cues from the conversational flow. Also, the resulting rephrased questions might be long and verbose which, in turn, results in difficulty in retrieving evidence from the given context. 
Furthermore, the datasets available for question rewriting in ConvQA are quite small, thereby hindering the training process of the model. To address these particular shortcomings, we propose an ensemble model entitled, CONVSR (CONVQA using Structured Representations), which instead of rewriting an incomplete or ambiguous question generates the intermediate structured representations (SR) based on the given context and the question. These representations, comprising the context and the question entities, can ultimately be used to fill up the missing gaps and answer the question at hand. The key intuition behind such a model is that an incomplete question only needs to refer to the last few questions in order to fill in the missing gaps because the conversational flow keeps on changing [10, 11, 12]. Hence, to accommodate the changing conversational flow, we propose to select \(k\) history turns using dynamic history selection process. Some questions would be requiring both the context and the question entities in a bid to disambiguate the current question, whereas, for some questions, only the context entities would be enough. For instance, to answer Q2 in Table I, the model needs to refer back to the intermediate representations captured for Q1. In this case, the model needs to have both the context entity (FRIENDS) and the question entity (Monica Geller) to decipher '_she_' in Q2. However, to answer Q4, the model only needs the context entity i.e., FRIENDS. Hence, our proposed model consists of the following main stages: _i) Question understanding_ which encompasses assessing a question based on the given context; _ii) Dynamic history selection_ that conducts 'hard selection' for the relevant history turns. This method attends to the previous history turns based on a semantic similarity score. If a conversational turn equals or surpasses a threshold value, then it is considered important for predicting the answer; _iii) Entity generation_ which works towards identifying the context and question entities (if any) from the selected history turns; and _iv) Answer prediction_ that retrieves the most relevant answer span from the given context based on the selected history turns and their respective SRs. In a nutshell, the technical contributions of this research work can be summarized as the following: * We highlight the limitations of the previous approaches and propose a framework to address these limitations. Our framework presents an alternative of question rewriting task to complete the ambiguous questions by generating intermediate structured representations. * We propose a dynamic history selection policy based on 'hard history selection' to only select the relevant subset of conversational turns. * We study the effect of SRs on traditional ConvQA baselines by skipping the dynamic history selection process and appending the history turns in different settings. * We demonstrate by our experimental results that ConvQA models suffer from a decline in accuracy by incorporating QR task within the model, thus, proving the effectiveness of our approach. To the best of our knowledge, our proposed research is one of the few research works that implement the task of question resolution within the ConvQA setting. The rest of this paper is organized as follows. Section II overviews the techniques on conversational question answering and question completion. Section III illustrates the technical details pertinent to our proposed CONVSR model. 
Section IV describes the experimental set up and Section V reports the experimental results. Finally, Section VI offers some concluding remarks. ## II Related Work ### **Conversational Question Answering** Recent advancements in natural language processing have led to the development of (ConvQA) systems, which aim to provide accurate and contextually relevant answers to user queries in a conversational setting. These advancements are mainly owing to the rapid progress of the pre-trained language models [13, 14, 15, 16, 17] and the availability of the task-specific datasets such as QuAC [2] and CoQA [3]. These developments have taken the NLP and IR communities by storm and have resulted in several state-of-the-art models. Many research works have introduced novel and different strategies to tackle this challenge. The task of ConvQA presents several challenges to the researchers hence resulting in considerable interesting yet innovative research works over the past few years. One of the key challenges in the task of ConvQA is to incorporate the conversational history effectively so that the model can best interpret the current question accurately. Some popular strategies include prepending the conversational turns [2, 3, 18] and dynamic history selection either utilizing attention mechanism [11] or reward-based reinforcement learning [19]. Several other works also demonstrate the effectiveness of FLOW-based mechanisms [10, 12, 20] to capture the intermediate latent representations to help the answering process. We employ a dynamic history selection process to obtain question-relevant conversation history turns. Integrating non-relevant conversational turns tend to bring noise into the input provided to the model, which in turn, results in the model's performance degradation [11, 19, 21]. The process is based on hard history selection and will be discussed in Section III. ### **Question Completion** A popular research direction that aims to address the challenges pertinent to an incomplete or ambiguous question is question rewriting (QR). The task of QR is recently adopted in the field of ConvQA to rephrase and generate a self-contained question that can be answered from the given context [22, 23, 24, 25, 8]. However, the task of QR takes the conversational questions out of the context by transforming them into self-contained questions, which in turn, negates the whole idea of ConvQA setting [5]. Question resolution is another approach that adds relevant and significant terms from the previous conversation turns to fill in the missing information gaps [26]. These techniques are widely used to resolve co-dependencies and anaphora among the conversational history turns. We work on the second type of question completion technique and show in our experimental results in Section V that question reformulation with added valuable cues performs better than rewriting questions from the scratch. ## III Methodology ### **Task Formulation** We assume the traditional setting of ConvQA where a user starts the conversation with a particular question or information need and the system searches the given context to provide an answer after each of the user's questions [21]. The follow-up questions may be incomplete or ambiguous requiring more context to be interpreted by the model (example: 'What was she obsessed about?'). The task of CONVSR is to capture the context entities and question entities from the previous relevant conversation turns and utilize them as additional cues to answer incomplete questions. 
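As context for the formal definitions that follow, the "hard" dynamic history selection described earlier (a conversational turn is kept only if its semantic similarity to the current question reaches a threshold) can be sketched as below. The sentence encoder and the threshold value are our own illustrative assumptions; the paper only specifies the threshold rule itself.

```python
# A minimal sketch of threshold-based ("hard") history selection. The encoder
# checkpoint and threshold are illustrative assumptions, not the paper's choices.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def select_history(question, history_turns, threshold=0.5):
    """Keep a turn only if its cosine similarity to the current question
    equals or surpasses the threshold."""
    q_emb = encoder.encode(question, convert_to_tensor=True)
    h_emb = encoder.encode(history_turns, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, h_emb)[0]
    return [t for t, s in zip(history_turns, scores) if float(s) >= threshold]
```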
The term _context entity_ corresponds to an entity mentioned in the previous conversational context and the term _question entity_ corresponds to the entities given in the previous questions. Essentially, an SR for any given question can be represented as shown in Table II. More formally, given a conversational context \(C\), previous history turns \(H\) and a potentially ambiguous or incomplete question \(Q\) which may need the understanding of the previous conversation turns, the task of CONVSR is to first select the relevant history turns \(H^{{}^{\prime}}\) and then capture the structured representations _SR_ in the form of context entity _CE_ and question entity _QE_. These SRs are then infused into the ConvQA model to be utilized to generate the correct answer \(A\). An illustration of our proposed model CONVSR is depicted in Fig. 1. ### **Pipeline Approach** Over the past few years, a number of research works [6, 8, 9, 27, 28] have envisaged various models to tackle the complexity of ConvQA task by decomposing it into QR and QA subtasks. Question rewriting, being the initial sub task, generates self-contained question by rewriting the given incomplete question from the scratch. Different approaches are in use to generate these rewrites such as language models [9, 27, 28, 29] and neural networks [6]. The QR models are trained on a recently introduced CANARD [23] dataset, which is based on QuAC's [2] original questions and their respective rewrites. The dataset has around 40K question pairs generated by human annotators. Following [29], we adopt GPT-2 [16] to train the QR model. In the training process, we provide the conversational turns and the current question as the inputs and the model generates a context-independent rewrite that is to be answered without taking the conversational history into consideration. Once the rewrites are generated, the next sub-task is of the QA module to find a relevant answer from the given context. Since it is assumed that all the co-references and anaphoras have been resolved in the QR task, most research works employ a traditional QA model instead of the ConvQA framework to answer the current question. However, we utilize conversational history along with SRs in our proposed model, therefore, for a fair comparison we also utilize conversational history along with the rewritten question as input to the QA model. We put together the process of predicting an answer as: \[P=(a_{i},|q_{i},C,H)\approx P^{qr}(q^{{}^{\prime}}_{i}|q_{i},H)\cdot P^{qa}(a _{i}|q^{{}^{\prime}},C,H) \tag{1}\] where \(P^{qr}(\cdot)\) and \(P^{qa}(\cdot)\) are the likelihood functions of the two sub-task models, respectively. \(q^{{}^{\prime}}\) represents the rewritten question by the QR model and it serves as an input to the QA model along with the given context and history turns. The pipeline model is shown in Fig. 1(a). The dotted line represents that conversational history forms an input to the ConvQA model along with the rewritten question. The primary limitation of using this approach is that the QA model never gets to be trained on the user's actual questions, and tends to loose the understanding of the conversational context. Also, the input of a QA model is highly dependent on the output of the QR model, which increases the chances of QA model being suffered by error propagation from QR model. Fig. 1: An illustration of our proposed model Conversational Question Answering with Structured Representations (CONVSR). 
CE and QE in CONVSR denote the context entity and question entity. ### _The CONVSR Model_ ### **Dataset Description** #### IV-A1 **QuAC** Question Answering in Context (QuAC) [2] consists of 100k question-answer pairs in a teacher-student information-seeking setting. The student seeks information on a topic provided with some background information, and the teacher attempts to satisfy the student's information need by engaging in a conversation. Since the test set is not made publicly available, we randomly set aside 5% of the conversational dialogues in the training set, following the strategy described in [8]. We then utilize this held-out chunk as our validation set and report the test results. #### IV-A2 **CANARD** CANARD [23], a dataset based on QuAC, consists of 40k question-answer pairs. The main idea behind CANARD is to convert the context-dependent questions of QuAC into context-independent, or self-contained, questions. These rewritten questions have the same answers as the original questions. We utilize the training and development sets for training and validating the QR model, and the test set for evaluating the ConvQA models. ### **Training and Finetuning** To train the question understanding and entity generation modules, we follow the technique of distantly supervised labeling introduced in [5]. The technique is based on the intuition that if a piece of information (either an entity or context) is essential for interpreting the follow-up question and has been omitted implicitly by the user, then it should be added to the completed version of the question. Based on this idea, the training data is generated. We start with the complete questions and gather all the context and question entity mentions from them. For the incomplete or ambiguous follow-up questions, we keep adding these entities to fill in the missing information. The entities are considered relevant to the incomplete question if an answer span is retrieved after adding them. For training and evaluating the QR model, we use a publicly available dataset, CANARD [23], following the strategies discussed in [6, 29]. The ConvQA models are trained on the QuAC [2] dataset with the Adam optimizer and a learning rate of _3e-5_. ### **ConvQA Models** Since both the pipeline and CONVSR are model-agnostic, any ConvQA model can be utilized in the framework. The chosen models are widely used for comparison and have proven to perform well in the ConvQA setting. We test the same models in both approaches for a fair evaluation (a minimal sketch of the shared input format follows the list): * BERT [13]: BERT is a pre-trained contextualized word representation model known for empirically powerful results on different natural language tasks. BERT also works well on ConvQA datasets, although it was not designed for the task of ConvQA. It receives the context passage, current question, and conversational history as input. * BERT-HAE [30]: BERT-HAE is based on BERT and introduces the concept of history answer embeddings to model the conversational history. These contextualized history answer embeddings encode the answer tokens from the previous conversational turns into the model. * RoBERTa [17]: RoBERTa improves BERT using advanced pre-training strategies to obtain robustly optimized weights from huge corpora. It takes the same input as BERT unless stated otherwise. 
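For concreteness, a minimal sketch of the shared input interface used by these models: history turns prepended to the current question, then paired with the passage for extractive span prediction. The checkpoint name is a placeholder, and a QA-fine-tuned model would be used in practice.

```python
# A minimal sketch of the shared input format; not the authors' code.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "roberta-base"  # placeholder; a QA-fine-tuned checkpoint is assumed
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

def answer(question, history_turns, passage):
    # e.g., the "prepend all" setting: all history turns precede the question
    query = " ".join(history_turns + [question])
    inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    start = int(out.start_logits.argmax())
    end = int(out.end_logits.argmax())
    return tokenizer.decode(inputs["input_ids"][0][start:end + 1])
```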
Apart from evaluating the above models with dynamic history selection, we also experiment with the traditional ConvQA setting, where the history turns, with no selection criteria whatsoever, are appended to the current question. Prepending previous conversational turns to the current question and the given context is still considered a simple yet very efficacious baseline in almost all ConvQA tasks. Hence, we experiment with it here as well. Within prepending the conversational turns, we further investigate the effect of prepending only the initial turn (_prepend init_), prepending only the last turn (_prepend prev_), prepending the initial and last history turns (_prepend init + prev_), and prepending all the history turns (_prepend all_). For all these experiments, we leverage RoBERTa [17] as the base model and adapt it according to the needs of the task. The reason for choosing RoBERTa as the base model is that it is a top-performing model on the leaderboards of different conversational datasets and has shown its effectiveness in the ConvQA domain. ### **Evaluation Metrics** For evaluation purposes, we follow the metrics used in [2] to assess the performance of the models on the QuAC and CANARD datasets. The metrics include not only the F1 score but also the human equivalence score for questions (HEQ-Q) and dialogues (HEQ-D). HEQ-Q measures the model's performance in retrieving more accurate (or at least similar) answers for a given question. HEQ-D represents the same performance measure but evaluates the overall dialogue instead of a single question. ## V Results and Analysis We conduct experiments on CONVSR and the competing baselines using the QuAC and CANARD datasets and report the results in this section. ### **CONVSR is viable for addressing incomplete questions in ConvQA** The first and foremost takeaway from the experiments is that our model works well in the ConvQA setting. The experiments are particularly designed to tackle the problem of incomplete or ambiguous questions. Instead of re-writing the questions to fill in the missing gaps in the given question, our model generates intermediate representations based on context and question entities. These entities aid the answering process by providing cues to interpret the questions. From Table III, we can clearly see that CONVSR consistently improves the model on both datasets. ### **CONVSR outperforms all the traditional baselines** We observe from Table IV that generating SRs yields better results even in the traditional prepend baselines. Out of all the variations, _prepend prev_ provides the highest F1 score. This confirms the intuition that incomplete questions usually take the context and entities of the last question asked to fill in the missing information gap. _Prepend init_ results in a low F1 score mainly because the flow of the conversation keeps changing. The first question asked in the conversation does not necessarily provide context and question entities related to the current question. Table V shows the accuracy scores without utilizing SRs in the traditional prepend baselines. Comparing the two tables, we can clearly see that SRs give the model an edge in predicting correct answer spans. ### **Role of slots in SR** The two slots in SRs play a vital role in understanding an incomplete question. Table VI shows a comparison of the F1 scores when the slots are omitted on purpose, one by one. The role of question entities is crucial. 
Skipping question entities from a question results in a major decline in the F1 score. ### **Verbose questions lead to decline in F1 score** The results in Table VII show that question re-writing results in lengthy questions, which may cause losing valuable cues from the conversation flow, hence, the decline in results. Also, QR results in more proper nouns, which shows that generating QRs requires mapping more entities within the given context. This mapping adds more complexity in generating questions from the scratch. Consequently, this may also result in a decline in the F1 score. ## VI Conclusion and Future Work In this paper, we have argued that generating the paraphrases of incomplete and ambiguous questions can take out questions from the conversational context, thereby impeding the underlying essence of conversational question answering (ConvQA). Moreover, the rewritten questions are lengthy and verbose and, thus, add complexity to the answer retrieval part. In an attempt to overcome these issues, we have proposed CONVSR, a conversational question answering model which utilizes structured representations in the form of both context entity and question entity for predicting the answer span. Our experimental results demonstrate the significance of structured representation (SR) generation within a ConvQA setting. Our model significantly improves ConvQA performance on both QuAC and CANARD datasets, i.e., as compared to the existing state of the art. Our approach leverages strategies from different research fields and their strategic paradigms. The idea of generating intent-explicit SRs is taken from symbolic AI, whereas, the tasks of question rewriting and question answering have their roots embedded in the IR community. One of the promising directions for our future work involves generating more context-aware SRs that can be utilized on the heterogeneous sources of text-based ConvQA. Furthermore, we plan to scale up our proposed model to target the open-domain ConvQA setting.
2301.02683
Classifying topological neural network quantum states via diffusion maps
We discuss and demonstrate an unsupervised machine-learning procedure to detect topological order in quantum many-body systems. Using a restricted Boltzmann machine to define a variational ansatz for the low-energy spectrum, we sample wave functions with probability decaying exponentially with their variational energy; this defines our training dataset that we use as input to a diffusion map scheme. The diffusion map provides a low-dimensional embedding of the wave functions, revealing the presence or absence of superselection sectors and, thus, topological order. We show that for the diffusion map, the required similarity measure of quantum states can be defined in terms of the network parameters, allowing for an efficient evaluation within polynomial time. However, possible ''gauge redundancies'' have to be carefully taken into account. As an explicit example, we apply the method to the toric code.
Yanting Teng, Subir Sachdev, Mathias S. Scheurer
2023-01-06T19:00:21Z
http://arxiv.org/abs/2301.02683v1
# Classifying topological neural network quantum states via diffusion maps ###### Abstract We discuss and demonstrate an unsupervised machine-learning procedure to detect topological order in quantum many-body systems. Using a restricted Boltzmann machine to define a variational ansatz for the low-energy spectrum, we sample wave functions with probability decaying exponentially with their variational energy; this defines our training dataset that we use as input to a diffusion map scheme. The diffusion map provides a low-dimensional embedding of the wave functions, revealing the presence or absence of superselection sectors and, thus, topological order. We show that for the diffusion map, the required similarity measure of quantum states can be defined in terms of the network parameters, allowing for an efficient evaluation within polynomial time. However, possible "gauge redundancies" have to be carefully taken into account. As an explicit example, we apply the method to the toric code. ## I Introduction In the last few years, machine learning (ML) techniques have been very actively studied as novel tools in many-body physics [1; 2; 3; 4; 5; 6; 7]. A variety of valuable applications of ML has been established, such as ML-based variational ans\(\hat{\text{a}}\)ze for many-body wave functions, application of ML to experimental data to extract information about the underlying physics, ML methods for more efficient Monte-Carlo sampling, and employment of ML to detect phase transitions, to name a few. Regarding the latter type of applications, a particular focus has recently been on topological phase transitions [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. This is motivated by the challenges associated with capturing topological phase transitions: by definition, topological features are related to the global connectivity of the dataset rather than local similarity of samples. Therefore, unless the dataset is sufficiently simple such that topologically connected pairs of samples also happen to be locally similar or features are used as input data that are closely related to the underlying topological invariant, the topological structure is hard to capture reliably with many standard ML techniques [11; 12]. In this regard, the ML approach proposed in Ref. [12], which is based on diffusion maps (DM) [32; 33; 34; 35], is a particularly promising route to learn topological phase transitions; it allows to embed high-dimensional data in a low-dimensional subspace such that pairs of samples that are smoothly connected in the dataset will be mapped close to each other, while disconnected pairs will be mapped to distant points. As such, the method captures the central notion of topology. In combination with the fact that it is unsupervised and thus does not require _a priori_ knowledge of the underlying topological invariants, it is ideally suited for the task of topological phase classification. As a result, there have been many recent efforts applying this approach to a variety of problems, such as different symmetry-protected, including non-Hermitian, topological systems [36; 37; 38; 39; 40; 41], experimental data [42; 39], many-body localized states [43], and dynamics [44]; extensions based on combining DM with path finding [36] as well as with quantum computing schemes [45] for speed-up have also been studied. As alluded to above, another very actively pursued application of ML in physics are neural network quantum states: as proposed in Ref. 
[46], neural networks can be used to efficiently parameterize and, in many cases, optimize variational descriptions of wave functions of quantum many-body systems [47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. In particular, restricted Boltzmann machines (RBMs) [4] represent a very popular neural-network structure in this context. For instance, the ground states of the toric code model [57] can be exactly expressed with a _local_ RBM ansatz [58], i.e., where only neighboring spins are connected to the same hidden neurons. When additional non-local extensions to the RBM ansatz of Ref. [58] are added, this has been shown to also provide a very accurate variational description of the toric code in the presence of a magnetic field [59]. In this work, we combine the DM approach of Ref. [12] with neural network quantum states with the goal of capturing topological order in an unsupervised way in interacting quantum many-body systems. We use a local network ansatz, with parameters \(\Lambda\), as a variational description for the wave functions \(\ket{\Psi(\Lambda)}\) of the low-energy subspace of a system with Hamiltonian \(\hat{\mathcal{H}}\). While we also briefly mention other possible ways of generating ensembles of states, we primarily focus on an energetic principle: we sample wavefunctions such that the probability of \(\ket{\Psi(\lambda)}\) is proportional to \(\exp(-\bra{\hat{\mathcal{H}}}_{\Lambda}/T)\) where \(\bra{\hat{\mathcal{H}}}_{\Lambda}=\bra{\Psi(\Lambda)}\hat{\mathcal{H}}\ket{ \Psi(\Lambda)}\). As illustrated in Fig. 1(a), the presence of superselection sectors in the low-energy spectrum of \(\hat{\mathcal{H}}\) implies that the ensemble of states decays into disconnected subsets of states for sufficiently small \(T\) (at least at fixed finite system size); these can be extracted, without need of prior labels, with dimensional reduction via DM (and subsequent \(k\)-means clustering), and thus allow to identify topological order. For sufficiently large \(T\), more and more high-energy states are included and all sectors are connected, see Fig. 1(b), as can also be readily revealed via DM-based embedding of the states. Importantly, DM is a kernel technique in the sense that the input data \(x_{l}\) (in our case the states \(\ket{\Psi(\Lambda_{l})}\)) does not directly enter as a high-dimensional vector but only via a similarity measure \(S(x_{l},x_{l^{\prime}})\), comparing how "similar" two samples \(l\) and \(l^{\prime}\) are. In the context of applying DM to the problem of topological classification, it defines what a smooth deformation ("homotopy") of samples is. We discuss two possible such measures. The first one is just the quantum mechanical overlap, \(S_{\text{q}}(\Lambda_{l},\Lambda_{l^{\prime}})=|\langle\Psi(\Lambda_{l})|\Psi (\Lambda_{l^{\prime}})\rangle|^{2}\), of the wave functions. Although conceptually straightforward, its evaluation is computationally costly on a classical computer as it requires importance sampling. The local nature of our network ansatz allows us to also construct an alternative similarity measure that is expressed as a simple function of the network parameters \(\Lambda_{l}\) and \(\Lambda_{l^{\prime}}\) describing the two states to be compared. This can, however, lead to subtleties associated with the fact that two states with different \(\Lambda\) can correspond to the same wave functions (modulo global phase). We discuss how these "gauge redundancies" can be efficiently circumvented for generic states. 
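A minimal sketch of this energy-based sampling rule (formalized in Sec. II.1 and Algorithm 1 below): states are accepted with a Metropolis-style probability so that a state's relative weight is \(\exp(-\langle\hat{\mathcal{H}}\rangle_{\Lambda}/T)\). Here, `energy()` and `perturb()` are placeholders for the importance-sampled estimate of \(\langle\hat{\mathcal{H}}\rangle_{\Lambda}\) and a local update of the network parameters.

```python
# A minimal Metropolis-style sketch of sampling network parameters with
# relative weight exp(-<H>_Lambda / T); energy() and perturb() are placeholders.
import numpy as np

def sample_ensemble(seed, energy, perturb, T, n_steps, keep_last, rng=None):
    rng = rng or np.random.default_rng(0)
    params, E = seed, energy(seed)
    chain = []
    for _ in range(n_steps):
        prop = perturb(params, rng)              # local update of a parameter block
        E_new = energy(prop)
        if rng.random() < min(1.0, np.exp(-(E_new - E) / T)):
            params, E = prop, E_new              # accept the proposal
        chain.append(params)
    return chain[-keep_last:]                    # retain the last m states
```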
We illustrate these aspects and explicitly demonstrate the success of this approach using the toric code [57], a prototype model for topological order which has also been previously studied with other ML techniques with a different focus [58, 59, 60, 15, 16, 17, 18]. We show that the DM algorithm learns the underlying loop operators wrapping around the torus without prior knowledge; at low \(T\), this leads to four clusters corresponding to the four ground states. At larger \(T\), these clusters start to merge, as expected. Interestingly, the DM still uncovers the underlying structure of the dataset related to the expectation value of the loop operators. Finally, we also show that applying a magnetic field leads to the disappearance of clusters in the DM, capturing the transition from topological order to the confined phase. The remainder of the paper is organized as follows. In Sec. II, we describe our ML approach in general terms, including the local network quantum state description we use, the ensemble generation, a brief review of the DM scheme of Ref. [12], and the similarity measure in terms of neural network parameters. Using the toric code model as an example, all of these general aspects are then discussed in detail and illustrated in Sec. III. Finally, explicit numerical results can be found in Sec. IV and a conclusion is provided in Sec. V. ## II General algorithm Here, we first present and discuss our algorithm [see Fig. 2(a)] in general terms before illustrating it using the toric code as an example in the subsequent sections. Consider a system of \(N\) qubits or spins, with associated operators \(\{\hat{\mathbf{s}}\}=\{\hat{\mathbf{s}}_{i},i=1,\cdots,N\}\), \(\hat{\mathbf{s}}_{i}=(\hat{s}_{i}^{x},\hat{s}_{i}^{y},\hat{s}_{i}^{z})\), and interactions governed by a local, gapped Hamiltonian \(\hat{\mathcal{H}}=\mathcal{H}(\{\hat{\mathbf{s}}\})\). We represent the states \(|\Psi(\Lambda)\rangle\) of this system using neural network quantum states [46], \[|\Psi(\Lambda)\rangle=\sum_{\mathbf{\sigma}}\psi(\mathbf{\sigma};\,\Lambda)\ket{\mathbf{\sigma}}, \tag{1}\] where \(\mathbf{\sigma}\!=\!\{\sigma_{1},\sigma_{2},...,\sigma_{N}|\sigma_{i}=\pm 1\}\) enumerates configurations of the physical spin variables in a local computational basis (e.g. the \(s^{z}\)-basis) and \(\Lambda\) is the set of parameters that the network \(\psi\) depends on to output the wavefunction amplitude \(\psi(\mathbf{\sigma};\,\Lambda)=\langle\mathbf{\sigma}|\Psi(\Lambda)\rangle\) for configuration \(\ket{\mathbf{\sigma}}\). Because the physical Hilbert space scales exponentially with the system size, there is a trade-off between expressivity and efficiency when choosing a network architecture (or ansatz) \(\psi\), so that the weights \(\Lambda\) can approximate the state \(|\Psi(\Lambda)\rangle\) to a reasonable degree and can at the same time be an efficient representation (with a minimal number of parameters \(\Lambda\) that scales as a polynomial in \(N\)). Figure 1: (a) An illustration of a “low-energy” ensemble. Two (or more) initial states, \(|\Psi(\Lambda^{0})\rangle\) and \(|\Psi(\Lambda^{1})\rangle\), from two distinct topological sectors are chosen as “seeds” (green dots). The dots denote the dataset (later fed into the DM), which is a set of quantum states labeled by network parameters \(\Lambda\). This dataset is generated using the procedure outlined in Sec. II.1 and Algorithm 1, where the next state \(\Lambda^{\prime}\) (blue dots at each arrow) is proposed by a random local perturbation and accepted with probability based on the energy expectation \(\langle H\rangle_{\Lambda^{\prime}}\). In the small-\(T\) regime, the full dataset is not inter-connected by such local perturbations and clusters within each topological sector (the left and right valleys). (b) An illustration of a “high-energy” ensemble. The states are generated using the same algorithm as before, but with a large hyperparameter \(T\) (compared to the energy gap \(\Delta\)). In this regime, the dataset includes some of the low-energy states (blue dots), but also some high-energy states (red dots). Because the high-energy states are agnostic of the low-energy topological sectors, there exist paths (denoted by arrows among dots in the elliptical blob) such that the two initial seeds from distinct topological sectors effectively “diffuse” and form one connected cluster. To reach the ground state or, more generally, the relevant low-energy sector of the Hamiltonian \(\hat{\mathcal{H}}\) for the low-temperature physics, we minimize the energy in the variational subspace defined by Eq. (1) using gradient descent with a learning rate \(\lambda\), \[\Lambda\to\Lambda-\lambda\,\partial_{\Lambda}\left<\hat{\mathcal{H}}\right>_{\Lambda},\quad\left<\hat{\mathcal{H}}\right>_{\Lambda}=\left<\Psi(\Lambda)\right|\hat{\mathcal{H}}\left|\Psi(\Lambda)\right>. \tag{2}\] Here, the quantum mechanical expectation value \(\left<\hat{\mathcal{H}}\right>_{\Lambda}\) is evaluated using importance sampling (see Appendix B). While there are exponentially many states in the Hilbert space, the low-energy sector of a local Hamiltonian is expected to occupy a small subspace where states obey area-law entanglement [61, 62], whereas a typical state obeys a volume law [63, 64]. Motivated by these considerations, we consider a class of networks that naturally describe quantum states obeying area-law entanglement. Pictorially, in such networks, the connections from the hidden neurons (representing the weights \(\Lambda\)) to the physical spins are _quasi-local_ [51, 53, 54, 55]. In that case, it holds that \[\psi(\boldsymbol{\sigma},\Lambda)=\phi_{1}(\boldsymbol{\sigma}_{1},\Lambda_{1})\times\phi_{2}(\boldsymbol{\sigma}_{2},\Lambda_{2})\times\cdots, \tag{3}\] where \(\boldsymbol{\sigma}_{j}\!=\!\{\sigma_{k}\}_{k\in j}\) denote (overlapping) subsets of neighboring spins with \(\cup_{j}\boldsymbol{\sigma}_{j}=\boldsymbol{\sigma}\) and \(\Lambda_{j}\) are the subsets of the network parameters (weights and biases) that are connected to the physical spins in j. ```
procedure(\(\{\Lambda\}_{n=1}^{N}\))
    init: optimized parameters \(\Lambda\)
    for \(k\) independent times do:
        for \(n\) sampling steps do:
            Propose new parameter \(\Lambda_{p}=f(\Lambda_{t})\)
            Accept with probability determined by energy \(\left<\hat{\mathcal{H}}\right>_{\Lambda}\) and parameter \(T\):
                \(\Lambda_{t+1}=\mathbb{P}_{\mathrm{accept}}(\Lambda^{\prime}|\Lambda;T)\)
    return the last \(m\) states for each \(k\): \(\{\Lambda_{i}|i=n-m,...,n\}_{k}\)
``` **Algorithm 1** Ensemble generation ### Dataset: network parameter ensembles The dataset we use for unsupervised detection of topological order consists of an ensemble of wavefunctions \(\{|\Psi(\Lambda)\rangle\}_{l}\), parameterized by the set of network parameters \(\{\Lambda\}_{l}\). 
While, depending on the precise application, other choices are conceivable, we generate this ensemble such that the relative occurrence of a state \(|\Psi(\Lambda)\rangle\) is given by \(\rho_{T}(\Lambda)=\exp(-\left<\hat{\mathcal{H}}\right>_{\Lambda}/T)/Z\), with appropriate normalization factor \(Z\). As such, a small value of the "temperature-like" hyperparameter \(T\) corresponds to a "low-energy" ensemble while large \(T\) parametrize "high-energy" ensembles. In practice, to generate this ensemble, we here first optimize the parameters \(\Lambda\) via Eq. (2) to obtain wavefunctions with lowest energy expectation values. As Eq. (1) does not contain all possible states, this will, in general, only yield approximations to the exact low-energy eigenstates of \(\hat{\mathcal{H}}\). However, as long as it is able to capture all superselection sectors of the system as well as (a subset of) higher energy states connecting these sectors, Eq. (1) will be sufficient for our purpose of detecting topological order or the absence thereof. We perform this optimization several times, \(\Lambda\to\Lambda_{l}^{0}\), with different initial conditions, to obtain several "seeds", \(\Lambda_{l}^{0}\); this is done to make sure we have a low-energy representative of all superselection sectors. Ideally the dataset is sampled directly from the the target probability distribution \(\rho_{T}\), if for instance, one has access to an experimental system at finite temperature. Here, we adopt a Markov-chain-inspired procedure for generating the ensemble based on \(\rho_{T}\) for each of these seeds. Specifically, starting from a state \(\Lambda\), we propose updates on a randomly chosen local block of parameters connected to the spins at sites j, \[\Lambda\,\to\,\Lambda^{\prime}=\{\Lambda_{1},\Lambda_{2},\cdots,u(\Lambda_{ j}),\cdots,\Lambda_{N}\}, \tag{4}\] where the update \(u\) only depends on \(\Lambda_{j}\). The proposed parameter \(\Lambda^{\prime}\) given the current parameter \(\Lambda\) is accepted with probability \[\mathbb{P}_{\mathrm{accept}}(\Lambda^{\prime}|\Lambda;T)=\min\Bigl{(}1,\,e^{- \frac{\left<\hat{\mathcal{H}}\right>_{\Lambda^{\prime}}-\left<\hat{\mathcal{H }}\right>_{\Lambda}}{T}}\Bigr{)}. \tag{5}\] This means that if the proposed state \(\Psi(\Lambda^{\prime})\) has a lower energy expectation value than \(\Psi(\Lambda)\), then the proposal will be accepted; otherwise, it will be accepted with a probability determined by the Boltzmann factor. The entire ensemble generation procedure is summarized in Algorithm 1. ### Diffusion map As proposed in Ref. [12], DM is ideally suited as an unsupervised ML algorithm to identify the presence and number of superselection sectors in a collection of states, such as \(\{|\Psi(\Lambda)\rangle\}_{l}\) defined above. To briefly review the key idea of the DM algorithm [32, 33, 34, 35] and introduce notation, assume we are given a dataset \(X=\{x_{l}|l=1,2,...,M\}\), consisting of \(M\) samples \(x_{l}\). Below we will consider the cases \(x_{l}=\Lambda_{l}\) and \(x_{l}=|\Psi(\Lambda_{l})\rangle\); in the first case, the samples are the network parameters parametrizing the wavefunction and, in the second, the samples are the wavefunctions themselves. To understand DM intuitively, let us define a diffusion process among states \(x_{l}\in X\). The probability of state \(x_{l}\) transitioning to \(x_{l^{\prime}}\) is defined by the Markov transition matrix element \(p_{l,l^{\prime}}\). 
To construct \(p_{l,l^{\prime}}\), we introduce a symmetric and positive-definite kernel \(k_{\epsilon}(x_{l},x_{l^{\prime}})\) between states \(x_{l}\) and \(x_{l^{\prime}}\). Then the transition probability matrix \(p_{l,l^{\prime}}\) is defined as \[p_{l,l^{\prime}}=\frac{k_{\epsilon}(x_{l},x_{l^{\prime}})}{z_{l}},\quad z_{l}=\sum_{l^{\prime}}k_{\epsilon}(x_{l},x_{l^{\prime}}), \tag{6}\] where the factor \(z_{l}\) ensures probability conservation, \(\sum_{l^{\prime}}p_{l,l^{\prime}}=\!1\,\forall l\). Spectral analysis of the transition probability matrix then leads to information on the _global_ connectivity of the dataset \(X\), which, in our context of \(X\) containing low-energy states, allows us to identify superselection sectors and, thus, topological order [12]. To quantify how strongly two samples \(x_{l}\) and \(x_{l^{\prime}}\) are connected, one introduces the \(2t\)-step diffusion distance [32, 33, 34, 35], \[D_{2t}(l,l^{\prime})=\sum_{l^{\prime\prime}}\frac{1}{z_{l^{\prime\prime}}}[(p^{t})_{l,l^{\prime\prime}}-(p^{t})_{l^{\prime},l^{\prime\prime}}]^{2}, \tag{7}\] where \(p^{t}\) denotes the \(t\)-th matrix power of the transition probability matrix \(p\). It was shown that \(D_{2t}\) can be computed from the eigenvalues \(\lambda_{n}\) and right eigenvectors \(\psi_{n}\) of the transition matrix \(p\): with \(\sum_{l^{\prime}}p_{l,l^{\prime}}\left(\psi_{n}\right)_{l^{\prime}}\!=\!\lambda_{n}\left(\psi_{n}\right)_{l}\), and in descending ordering \(\lambda_{n}>\lambda_{n+1}\), it follows that \[D_{2t}(l,l^{\prime})=\sum_{n=1}^{M-1}\lambda_{n}^{2t}[(\psi_{n})_{l}-(\psi_{n})_{l^{\prime}}]^{2} \tag{8}\] after straightforward algebra [35]. Geometrically, this means that the diffusion distance is represented as a Euclidean distance (weighted with \(\lambda_{n}\)) if we perform the non-linear coordinate transformation \(x_{l}\rightarrow\{(\psi_{n})_{l},n=0,\ldots,M-1\}\). Furthermore, as the global connectivity is seen from the long-time limit, \(t\rightarrow\infty\), of the diffusion distance, the largest eigenvalues are most important for describing the connectivity. To be more precise, let us choose a kernel \(k_{\epsilon}\) of the form \[k_{\epsilon}(x_{l},x_{l^{\prime}})=\exp\left(-\frac{1-S(x_{l},x_{l^{\prime}})}{\epsilon}\right), \tag{9}\] where \(S\) is a _local similarity measure_ which obeys \(S\in[0,1]\), \(S(x_{l},x_{l^{\prime}})=S(x_{l^{\prime}},x_{l})\), and \(S(x,x)=1\). Here “local” means that \(S(x_{l},x_{l^{\prime}})=\sum_{i}\mathcal{S}_{i}(x_{l},x_{l^{\prime}})\), where \(\mathcal{S}_{i}(x_{l},x_{l^{\prime}})\) only depends on the configurations of \(x_{l}\) and \(x_{l^{\prime}}\) in the vicinity of site \(i\). While we will discuss possible explicit forms of \(S\) for our quantum mechanical \(N\)-spin/qubit system in Sec. II.3 below, a natural choice for a classical system of \(N\) spins, \(x_{l}=\{\mathbf{S}^{l}_{i},(\mathbf{S}^{l}_{i})^{2}=1,i=1,2,\ldots,N\}\), is \(S_{\text{cl}}(x_{l},x_{l^{\prime}})=\sum_{i}\mathbf{S}^{l}_{i}\cdot\mathbf{S}^{l^{\prime}}_{i}/N\). Figure 2: (a) Overview of the ML algorithm applied in this work: the “seeds” \(\{\Lambda^{0}\}\) are computed using variational Monte Carlo (see Appendix B), a Markov-chain algorithm is used to generate the network parameter ensemble dataset (Sec. II.1), then a similarity metric is used for the definition of kernels in the DM method (Sec. II.2 and Sec. II.3), and finally \(k\)-means is applied to the low-dimensional embedding in the subspace provided by the dominant DM eigenvector components. (b) The square lattice geometry for the toric code model, where the qubits \(\hat{s}_{i}\) are defined on the links of the lattice (grey dots). The Hamiltonian [given in Eq. (16)] is written in terms of the plaquette operators \(\hat{\mathcal{P}}_{P}\) (supported by spins on plaquette \(P\), denoted by the red square) and star operators \(\hat{\mathcal{S}}_{S}\) (supported by spins on star \(S\), denoted by the blue links). The two blue lines along the \(x\) (\(y\)) direction denote the Wilson loop operators \(\hat{W}_{1,x}\) (\(\hat{W}_{2,y}\)) along the straight paths \(\bar{x}\) (\(\bar{y}\)). (c) An illustration of the quasi-local ansatz in Eq. (17). The ansatz is a product of local functions \(\phi\) of the spins in a plaquette (or star), which depend on the parameters \(\{w_{Xj},b_{X}\}\) for \(X=P\) (\(S\)) being a plaquette (star). In Eq. (9), \(\epsilon\) plays the role of a “coarse graining” parameter that is necessary as we only deal with finite datasets \(X\): for given \(X\), we generically expect \(k_{\epsilon}(x_{l},x_{l^{\prime}})=p_{l,l^{\prime}}=\delta_{l,l^{\prime}}\) as \(\epsilon\to 0\), i.e., all samples are dissimilar if \(\epsilon\) is sufficiently small and all eigenvalues \(\lambda_{n}\) approach 1. In turn, for \(\epsilon\to\infty\) the coarse graining parameter is so large that all samples become connected, \(k_{\epsilon}(x_{l},x_{l^{\prime}})\to 1\); as \(p_{l,l^{\prime}}\to 1/M\), we will have \(\lambda_{n>0}\to 0\), while the largest eigenvalue \(\lambda_{0}\) is always 1 (as a consequence of probability conservation). For values of \(\epsilon\) in between these extreme limits, the DM spectrum contains information about \(X\), including its topological structure: as shown in Ref. [12], the presence of \(k\in\mathbb{N}\) distinct topological equivalence classes in \(X\) is manifested by a range of \(\epsilon\) where \(\lambda_{1},\ldots,\lambda_{k-1}\) are all exponentially close (in \(\epsilon\)) to 1, with a clear gap to \(\lambda_{n\geq k}\). Furthermore, the different samples \(l\) will cluster (with respect to the normal Euclidean measure, e.g., as can be captured with \(k\)-means) according to their topological equivalence class when plotted in the mapped \(k-1\)-dimensional space \(\{(\psi_{1})_{l},(\psi_{2})_{l},\ldots,(\psi_{k-1})_{l}\}\). In the following, we will use this procedure to identify the superselection sectors in the ensemble of wave functions defined in Sec. II.1. To this end, however, we first need to introduce a suitable similarity measure \(S\), to be discussed next. ### Local similarity measure A natural generalization of the abovementioned classical similarity measure \(S_{\text{cl}}=\sum_{i}\mathbf{S}^{l}_{i}\cdot\mathbf{S}^{l^{\prime}}_{i}/N\), which can be thought of as the (Euclidean) inner product in the classical configuration space, is to take the inner product in the Hilbert space of the quantum system, \[S_{\text{q}}(\Lambda_{l},\Lambda_{l^{\prime}})=|\langle\Psi(\Lambda_{l})|\Psi(\Lambda_{l^{\prime}})\rangle|^{2}. \tag{10}\] While this or other related fidelity measures for low-rank quantum states could be estimated efficiently with quantum simulation and computing setups [65, 66, 67, 68], estimating \(S_{\text{q}}\) is generally a computationally expensive task on a classical computer, as it requires sampling over spin configurations in our variational procedure. To make the evaluation of the similarity measure more efficient, we here propose an alternative route that takes advantage of the fact that we use a local ansatz for \(\psi(\mathbf{\sigma};\Lambda)\), see Eq. (3).
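Before constructing that network-based measure, note that given any similarity matrix \(S\) (built from \(S_{\text{q}}\) above or from the measure introduced below), the DM steps of Eqs. (6)-(9) amount to a few lines of linear algebra. A compact NumPy sketch of our own, assuming \(S\) is precomputed:

```python
# A minimal NumPy sketch of the diffusion-map embedding of Eqs. (6)-(9),
# assuming a precomputed M x M similarity matrix S with entries in [0, 1].
import numpy as np

def diffusion_embedding(S, eps, n_components=2):
    K = np.exp(-(1.0 - S) / eps)            # kernel, Eq. (9)
    z = K.sum(axis=1)                       # normalization factors, Eq. (6)
    # p = K / z[:, None] is similar to the symmetric matrix A below, so its
    # spectrum can be obtained with a symmetric eigensolver.
    A = K / np.sqrt(np.outer(z, z))
    lam, v = np.linalg.eigh(A)
    lam, v = lam[::-1], v[:, ::-1]          # descending eigenvalue order
    psi = v / np.sqrt(z)[:, None]           # right eigenvectors of p
    # drop the trivial lambda_0 = 1 mode; components 1..k-1 give the embedding
    return lam, psi[:, 1:1 + n_components]
```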
Our goal is to express the similarity measure directly as \[S_{\text{n}}(\Lambda_{l},\Lambda_{l^{\prime}})=\frac{1}{N_{\text{j}}}\sum_{\text{j}}f((\Lambda_{l})_{\text{j}},(\Lambda_{l^{\prime}})_{\text{j}}), \tag{11}\] where \(f\) only compares a local block of parameters denoted by \(\text{j}\) and is a function that can be quickly evaluated, without having to sample spin configurations. Furthermore, \(S(x_{l},x_{l^{\prime}})=S(x_{l^{\prime}},x_{l})\) can be ensured by choosing a function \(f\) that is symmetric in its arguments, and \(S\in[0,1]\) is also readily implemented by setting \(N_{\text{j}}=\sum_{\text{j}}1\), i.e., the number of local blocks, and appropriately rescaling \(f\) such that \(f\in[0,1]\). The most subtle condition is \[S_{\text{n}}(\Lambda_{l},\Lambda_{l^{\prime}})=1\quad\Longleftrightarrow\quad|\Psi(\Lambda_{l})\rangle\propto|\Psi(\Lambda_{l^{\prime}})\rangle\,, \tag{12}\] since, depending on the precise network architecture used for \(\psi(\mathbf{\sigma};\Lambda)\), there are "gauge transformations" \(g\in\mathcal{G}\) of the weights, \(\Lambda_{l}\to g[\Lambda_{l}]\), with \[|\Psi(\Lambda_{l})\rangle=e^{i\vartheta_{g}}\,|\Psi(g[\Lambda_{l}])\rangle \tag{13}\] for some global phase \(\vartheta_{g}\). We want to ensure that \[S_{\text{n}}(\Lambda_{l},\Lambda_{l^{\prime}})=S_{\text{n}}(\Lambda_{l},g[\Lambda_{l^{\prime}}])=S_{\text{n}}(g[\Lambda_{l}],\Lambda_{l^{\prime}}) \tag{14}\] for all such gauge transformations \(g\in\mathcal{G}\). A general way to guarantee Eq. (14) proceeds by replacing \[S_{\text{n}}(\Lambda_{l},\Lambda_{l^{\prime}})\quad\longrightarrow\quad\max_{g,g^{\prime}\in\mathcal{G}}S_{\text{n}}(g[\Lambda_{l}],g^{\prime}[\Lambda_{l^{\prime}}]). \tag{15}\] However, in practice, it might not be required to iterate over all possible gauge transformations in \(\mathcal{G}\) due to the locality of the similarity measure. In the following, we will use the toric code and a specific RBM variational ansatz as an example to illustrate these gauge transformations and how an appropriate function \(f\) in Eq. (11) and the gauge invariance of Eq. (14) can be implemented efficiently. Finally, note that, while we focus on applying DM in this work, a similarity measure in terms of neural network parameters can also be used for other kernel techniques such as kernel PCA. Depending on the structure of the underlying dataset, DM has a clear advantage over kernel PCA: the former really captures the global connectivity of the dataset rather than the subspace with the most variance that is extracted by the latter. This is why kernel PCA fails when identifying, e.g., winding numbers in general datasets where DM still works well [12]. Specifically for our case study of the toric code below, we find that kernel PCA can also identify topological sectors for small \(T\) and without magnetic field, \(h=0\), as a result of the simple data structure; however, only DM works well when \(h\) is turned on, as we discuss below.

## III Example: Toric Code

Now we illustrate our DM-based ML algorithm using the toric code model [57], defined on an \(L_{x}\times L_{y}\) square lattice with spin-\(1/2\) operators or qubits on every bond, see Fig. 2(b), leading to a total of \(N=2L_{x}L_{y}\) spins; throughout this work, we will assume periodic boundary conditions.
Referring to all four spins on the edges of an elementary square (vertex) of the lattice as plaquette \(P\) (star \(S\)), the plaquette and star operators are defined as \(\hat{\mathcal{P}}_{P}=\prod_{i\in P}\hat{s}_{i}^{z}\) and \(\hat{\mathcal{S}}_{S}=\prod_{i\in S}\hat{s}_{i}^{x}\), respectively. The toric code Hamiltonian then reads as \[\hat{H}_{\rm tc}=-J_{P}\sum_{P}\hat{\mathcal{P}}_{P}-J_{S}\sum_{S}\hat{\mathcal{S}}_{S}, \tag{16}\] where the sums are over all plaquettes and stars of the lattice. All "stabilizers" \(\hat{\mathcal{P}}_{P}\), \(\hat{\mathcal{S}}_{S}\) commute among each other and with the Hamiltonian. Focusing on \(J_{P},J_{S}>0\), the ground states are obtained as the eigenstates with eigenvalue \(+1\) under all stabilizers. A counting argument, taking into account the constraint \(\prod_{S}\hat{\mathcal{S}}_{S}=\prod_{P}\hat{\mathcal{P}}_{P}=\mathds{1}\), reveals that there are four exactly degenerate ground states for periodic boundary conditions. To describe the ground states and low-energy subspace of the toric code model (16) variationally, we parameterize \(\psi(\mathbf{\sigma};\,\Lambda)\) in Eq. (1) using the ansatz \[\psi_{\rm rbm}(\mathbf{\sigma};\,\Lambda)=\prod_{P}\cos(b_{P}+\sum_{j\in P}w_{Pj}\sigma_{j})\times\prod_{S}\cos(b_{S}+\sum_{j\in S}w_{Sj}\sigma_{j}), \tag{17}\] proposed in Ref. [58], where every plaquette \(P\) (star \(S\)) is associated with a "bias" \(b_{P}\) (\(b_{S}\)) and four weights \(w_{Pj}\) (\(w_{Sj}\)), all of which are chosen to be real here, i.e., \(\Lambda=\{b_{P},b_{S},w_{Pj},w_{Sj}\}\). This ansatz can be thought of as an RBM [46] (see Appendix A), as illustrated in Fig. 2(c), with the same geometric properties as the underlying toric code model. It is clear that Eq. (17) defines a quasi-local ansatz as it is of the form of Eq. (3), with j enumerating all plaquettes and stars (and thus \(N_{\text{j}}=2N\)). For this specific ansatz, the gauge transformations \(g\in\mathcal{G}\), as introduced in Sec. II.3 above, are generated by the following set of operations on the parameters \(b_{P}\), \(b_{S}\), \(w_{Pj}\), and \(w_{Sj}\):

1. For \(X\) being any plaquette or star, multiplying all biases and weights of that plaquette or star by \(-1\) [see Fig. 3(a)], \[g_{X,-}:\,b_{X}\to-b_{X},\ w_{Xj}\to-w_{Xj},\] (18a) leaves the wave function invariant [\(\vartheta_{g}=0\) in Eq. (13)].

2. Adding \(\pi\) to either the bias or any of the weights associated with the plaquette or star \(X\) [see Fig. 3(b)], \[g_{X,\pi,b}:\,b_{X}\to b_{X}+\pi,\] (18b) \[g_{X,\pi,j}:\,w_{Xj}\to w_{Xj}+\pi,\quad j\in X,\] (18c) leads to an overall minus sign [\(\vartheta_{g}=\pi\) in Eq. (13)].

3. For any closed loop \(\ell\) (or \(\bar{\ell}\)) on the direct (or dual) lattice, adding \(\frac{\pi}{2}\) to all weights of the stars (plaquettes) that are connected to the spins crossed by the string [see Fig. 3(c-d)], \[g_{\ell}:\,w_{Sj}\to w_{Sj}+\frac{\pi}{2},\quad Sj\in\ell,\] (18d) \[g_{\bar{\ell}}:\,w_{Pj}\to w_{Pj}+\frac{\pi}{2},\quad Pj\in\bar{\ell},\] (18e) leads to \(\vartheta_{g}=0\) or \(\pi\) in Eq. (13) depending on the length of the string.

Note that any loop configuration \(\mathcal{L}\), which can contain an arbitrary number of loops, can be generated by the set \(\{g_{S},g_{P},g_{x,y},g_{\bar{x},\bar{y}}\}\), where \(g_{S}\) (\(g_{P}\)) creates an elementary loop on the dual (direct) lattice encircling the star \(S\) (plaquette \(P\)), see Fig. 3(c,d), and \(g_{x,y}\) (\(g_{\bar{x},\bar{y}}\)) creates a non-contractible loop on the direct (dual) lattice along the \(x\) or \(y\) direction.
Since the length of any contractible loop is even, \(\vartheta_{g}=0\) for any string transformation generated by \(g_{S}\) and \(g_{P}\). Meanwhile, on a lattice with an odd number of sites along the \(x\) and \(y\) directions, the gauge transformations \(g_{x,y}\) (\(g_{\bar{x},\bar{y}}\)) involve an odd number of sites and thus lead to \(\vartheta_{g}=\pi\).

Figure 3: Gauge freedom of the RBM ansatz in Eq. (17). The following transformations only lead to a global phase: (a) Multiplying all the parameters of a plaquette (or star, not shown) by a minus sign, see Eq. (18a); (b) A \(\pi\) shift of a single parameter, see Eqs. (18b) and (18c); (c) A \(\pi/2\) shift to the weights crossed by a string \(\bar{\ell}\), defined by \(g_{\bar{\ell}}\) in Eq. (18e). The straight pink line represents the transformation on a non-contractible loop denoted by \(g_{\bar{y}}\); (d) Same as (c) but for loops on the direct lattice and \(g_{\ell}\) and \(g_{y}\), cf. Eq. (18d).

A highly inefficient way of dealing with this gauge redundancy would be to use a choice of \(S_{\text{n}}\) in Eq. (11) which is not invariant under any of the transformations in Eq. (18); this would, for instance, be the case by just taking the Euclidean distance of the weights, \[\begin{split}S_{\text{eu}}&(\Lambda_{l},\Lambda_{l^{\prime}})\propto||\Lambda_{l}-\Lambda_{l^{\prime}}||^{2}\\ &=\sum_{X}\Bigl{[}(b_{X}^{l}-b_{X}^{l^{\prime}})^{2}+\sum_{j\in X}(w_{Xj}^{l}-w_{Xj}^{l^{\prime}})^{2}\Bigr{]},\end{split}\] where the sum over \(X\) involves all plaquettes and stars. Naively going through all possible gauge transformations to find the maximum in Eq. (15) would in principle rectify the lack of gauge invariance. However, since the number of gauge transformations scales exponentially with the system size \(N\) (this holds for each of the three classes, 1.-3., of transformations defined above), such an approach would become very expensive for large \(N\). Luckily, the locality of the ansatz and of the similarity measure allows us to construct similarity measures that can be evaluated much faster: as an example, consider \[\begin{split}S_{n}&(\Lambda_{l},\Lambda_{l^{\prime}})=\frac{1}{2}+\frac{1}{10N}\sum_{X}\max_{\tau_{X}=\pm}\Bigl{[}\\ &\sum_{j\in X}\cos 2(\tau_{X}w_{Xj}^{l}-w_{Xj}^{l^{\prime}})+\cos 2(\tau_{X}b_{X}^{l}-b_{X}^{l^{\prime}})\Bigr{]},\end{split} \tag{19}\] which clearly obeys \(S_{n}(\Lambda_{l},\Lambda_{l^{\prime}})=S_{n}(\Lambda_{l^{\prime}},\Lambda_{l})\), \(S_{n}(\Lambda_{l},\Lambda_{l^{\prime}})\in[0,1]\), and locality [it is of the form of Eq. (11) with j enumerating all \(X\)]. Concerning gauge invariance, first note that the \(\pi\)-periodicity of \(\cos 2(\cdot)\) immediately leads to invariance under Eqs. (18b) and (18c). Second, for each \(X\) we only have to maximize over two values (\(\tau_{X}\)) to enforce invariance under Eq. (18a), i.e., the maximization only doubles the computational cost. The "string" redundancy, see Eqs. (18d) and (18e), however, is not yet taken into account in Eq. (19).
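A minimal sketch of Eq. (19), assuming the parameters of all plaquettes and stars are stacked into arrays (a storage convention of ours, not of the paper):

```python
import numpy as np

def similarity_Sn(w_l, b_l, w_lp, b_lp, N_spins):
    """Gauge-aware network similarity of Eq. (19).

    w_*     : (n_X, 4) array -- the four weights of every plaquette/star X
    b_*     : (n_X,)  array -- the bias of every X
    N_spins : the N appearing in the 1/(10N) prefactor of Eq. (19)
    """
    # per-X contribution for tau_X = +1 and tau_X = -1
    s_plus = np.cos(2 * (w_l - w_lp)).sum(axis=1) + np.cos(2 * (b_l - b_lp))
    s_minus = np.cos(2 * (-w_l - w_lp)).sum(axis=1) + np.cos(2 * (-b_l - b_lp))
    # elementwise maximum implements the max over tau_X in Eq. (19)
    return 0.5 + np.maximum(s_plus, s_minus).sum() / (10 * N_spins)
```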
It can be formally taken care of by maximizing over all possible loop configurations, denoted by \(\mathcal{L}\), \[\begin{split}S_{\text{str}}&(\Lambda_{l},\Lambda_{l^{\prime}})=\frac{1}{2}+\frac{1}{10N}\max_{\mathcal{L}}\Bigl{\{}\sum_{X}\max_{\tau_{X}=\pm}\Bigl{[}\\ &\sum_{j\in X}\mu_{Xj}^{\mathcal{L}}\cos 2(\tau_{X}w_{Xj}^{l}-w_{Xj}^{l^{\prime}})+\cos 2(\tau_{X}b_{X}^{l}-b_{X}^{l^{\prime}})\Bigr{]}\Bigr{\}}, \end{split} \tag{20}\] where \(\mu_{Xj}^{\mathcal{L}}=-1\) if \(Xj\) lives on a loop contained in \(\mathcal{L}\) and \(\mu_{Xj}^{\mathcal{L}}=1\) otherwise. While there is an exponential number of such strings, Ref. [12] has proposed an algorithm to efficiently find an approximate maximum value. In our case, this algorithm amounts to randomly choosing a star \(S\), a plaquette \(P\), or a direction \(d=x,y\) and then applying \(g_{S}\), \(g_{P}\), or \(g_{d=x,y}\) to \(\Lambda_{l}\) in Eq. (19). If this does not decrease the similarity, keep that transformation; if it decreases the similarity, discard the gauge transformation. This procedure is repeated \(N_{g}\) times. In Ref. [12], \(N_{g}\) between \(10^{3}\) and \(10^{4}\) was found to be enough for a large system consisting of \(18\times 18\) square-lattice sites (a total of \(N=2\times 18^{2}\) qubits). On top of this, \(g_{S}\) and \(g_{P}\) are local and, hence, evaluating the change of the similarity under the gauge transformation only requires \(\mathcal{O}(N^{0})\) work. In the numerical simulations below, using Eq. (19) without sampling over loop configurations \(\mathcal{L}\) turned out to be sufficient. The reason is that, for our Markov-chain-inspired sampling procedure of \(\Lambda_{l}\) (see Appendix C), updates that correspond to these loop transformations happen very infrequently. Furthermore, even if a few pairs of samples are incorrectly classified as distinct due to the string redundancy, the DM will still correctly capture the global connectivity and, hence, the absence or presence of topological sectors.

Figure 4: (a) DM spectrum for the topological phase at \(h=0\) and \(T=0.1\) using the neural-network similarity measure in Eq. (19). Inset left: associated leading DM components; color represents the loop observable expectation values defined in (c-d). Inset right: DM spectrum in descending order at \(\epsilon=0.01\) indicated by the dashed line. (b) Same as (a), but using exact overlaps \(S_{\text{q}}\) in Eq. (10) as metric. (c) Color map for the non-local loop values \(\langle\overline{W}_{1}\rangle,\langle\overline{W}_{2}\rangle\) in the left insets of (a) and (b). (d) Different straight Wilson loops \(\hat{W}_{1,\bar{x}_{i}}\) (\(\hat{W}_{2,\bar{y}_{i}}\)) along the \(x\) (\(y\)) direction, denoted by blue (red) lines. The loop values in the color map in (c) are spatial averages over all straight-loop expectation values (as in the equations for \(\langle\overline{W}_{1}\rangle,\langle\overline{W}_{2}\rangle\)).

## IV Numerical results

We next demonstrate explicitly how the general procedure outlined above can be used to probe and analyze topological order in the toric code. We start from the pure toric code Hamiltonian defined in Eq. (16) using the variational RBM ansatz in Eq. (17). An ensemble of network parameters is generated by applying the procedure of Sec. II.1 (see also Algorithm 1) for a system size of \(N=18\) spins; the hyperparameters for ensemble generation and more details, including the form of \(u\) in Eq. (4), are given in Appendix C.
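Before presenting the results, here is a rough sketch of that ensemble-generation step. The Metropolis-type acceptance with a Boltzmann weight on the variational energy is our reading of how Eq. (5) enters; the paper's actual acceptance rule and proposal may differ in detail, and `energy` and `propose` are hypothetical callables.

```python
import numpy as np

def sample_ensemble(Lam0, energy, propose, T, n_steps, rng=None):
    """Markov-chain ensemble of network parameters, cf. Sec. II.1 / Algorithm 1.

    Parameters are accepted with weight exp(-[E(Lam') - E(Lam)]/T), so that
    the ensemble concentrates on low-energy states for small T.
    """
    rng = rng or np.random.default_rng()
    Lam, E = Lam0, energy(Lam0)
    samples = []
    for _ in range(n_steps):
        Lam_new = propose(Lam, rng)
        E_new = energy(Lam_new)
        if rng.random() < min(1.0, np.exp(-(E_new - E) / T)):
            Lam, E = Lam_new, E_new   # accept the proposed parameters
        samples.append(Lam)
    return samples
```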
From now on, we measure all energies in units of \(J_{P}\) and set \(J_{S}=J_{P}=1\). Let us first focus on the low-energy ensemble and choose \(T=0.1\) in Eq. (5). For the simple similarity measure in Eq. (19), which can be evaluated exactly in a time linear in the system size \(N\), we find the DM spectrum shown in Fig. 4(a) as a function of \(\epsilon\) in Eq. (9). We observe the hallmark feature of four superselection sectors [12]: there is a finite range of \(\epsilon\) where there are four eigenvalues exponentially close to \(1\). The association of samples (in our case states) with these four sectors is thus expected to be visible in a scatter plot of a projected subspace spanned by the first three non-trivial eigenvectors \(\psi_{1,2,3}\) [12]; note that the zeroth eigenvector \((\psi_{0})_{l}=C\) is always constant, with eigenvalue \(\lambda=1\) from probability conservation. In fact, we can see these clusters already in the first two components, see the left inset in Fig. 4(a). A standard \(k\)-means algorithm is then applied to this projected subspace to identify the cluster number for each data point. To verify that the ML algorithm has correctly clustered the states according to the four physical sectors, we compute, for each state, the expectation value of the string operators, \[\hat{W}_{1,\bar{x}}=\prod_{i\in\bar{x}}\hat{s}_{i}^{x},\quad\hat{W}_{2,\bar{y}}=\prod_{i\in\bar{y}}\hat{s}_{i}^{x}, \tag{21}\] where \(\bar{x}\) (\(\bar{y}\)) are loops defined on the dual lattice winding along the \(x\) (\(y\)) direction, shown as blue lines in Fig. 2(b). We quantify the association of a state with the physical sectors by the average over a set of straight loops \(\mathcal{X}\) (\(\mathcal{Y}\)) winding around the \(x\) (\(y\)) direction, shown as blue (red) lines in Fig. 4(d). Indicating this averaged expectation value \(\langle\overline{W}_{1}\rangle,\langle\overline{W}_{2}\rangle\) in the inset of Fig. 4(a) using the color code defined in Fig. 4(c), we indeed see that the clustering is done correctly. To demonstrate that this is not a special feature of the similarity measure in Eq. (19), we have performed the same analysis, with the result shown in Fig. 4(b), using the full quantum mechanical overlap measure in Eq. (10). Quantitative details change but, as expected, four superselection sectors are clearly identified and the clustering is done correctly. We reiterate that the evaluation of the neural-network similarity measure in Eq. (19) [exact evaluation \(\mathcal{O}(N)\)] is much faster than that of Eq. (10) [exact evaluation \(\mathcal{O}(2^{N})\), but we can compute it approximately with importance sampling] on a classical computer. Note, however, that once \(S_{n}\) is computed for all samples, the actual DM-based clustering takes the same amount of computational time for both approaches. Consequently, if a quantum simulator can efficiently measure the quantum overlap in Eq. (10), or any other viable similarity measure for that matter, we can equivalently use the "measured" similarity for an efficient clustering of the superselection sectors via the DM scheme. As a next step, we demonstrate that the superselection sectors eventually become connected if we take into account states with sufficiently high energy. To this end, we repeat the same analysis but for an ensemble with \(T=1\). As can be seen in the resulting DM spectrum in Fig. 5(a), there is no value of \(\epsilon\) where more than one eigenvalue is (exponentially) close to \(1\) and separated from the rest of the spectrum by a clear gap.
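(The clustering step used here and throughout is plain \(k\)-means in the leading non-trivial DM components; a minimal sketch, assuming scikit-learn as a stand-in for the paper's implementation, follows.)

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_sectors(psi, k=4):
    """Cluster samples in the leading non-trivial DM components.

    psi : (M, M) right eigenvectors of the transition matrix, columns
          ordered by descending eigenvalue (column 0 is the constant psi_0).
    Returns k-means labels in the (k-1)-dimensional embedding
    {psi_1, ..., psi_{k-1}} used for the scatter plots of Fig. 4.
    """
    embedding = psi[:, 1:k]            # drop the trivial constant eigenvector
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)
```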
Here we again used the simplified measure in Eq. (19), but have checked that nothing changes qualitatively when using the overlap measure. To verify that this is the correct answer for the given dataset, we again computed the expectation value of the loop operators in Eq. (21) for each state in the ensemble. This is shown in Fig. 5(b), where we also use color to indicate the energy expectation value for each state. We can clearly see that the four low-energy (blue) sectors (with \(|W_{1,2}|\simeq 1\)) are connected via high-energy (red) states (with \(|W_{1,2}|\ll 1\)). This agrees with the DM result that all states are connected within the ensemble (topological order is lost).

Figure 5: (a) DM spectrum for the high-energy ensemble at \(h=0\) and \(T=1\). The inset is the spectrum at \(\epsilon=0.03\) indicated by the dashed line in the main panel; (b) Spatially averaged straight Wilson loops \(\langle\overline{W}_{1(2)}\rangle\) [see Fig. 4(c-d)] along the two directions for the states in (a), where the color encodes the energy density \(\langle H\rangle/N\); (c) Leading DM components where the color of the dots encodes \(\langle\overline{W}_{1(2)}\rangle\) using the color map in Fig. 4(c); (d) DM spectrum for the trivial phase at \(h=1.0\) and \(T=0.1\) using the quantum metric \(S_{\rm q}\).

We can nonetheless investigate the clustering in the leading three non-trivial DM components \(\psi_{1,2,3}\). Focusing on a 2D projection in Fig. 5(c) for simplicity of the presentation, we can see that the DM reveals very interesting structure in the data: the four lobes roughly correspond to the four colors blue, red, orange, and green associated with the four superselection sectors, and the states closer to \(|W_{1,2}|=1\) (darker color) appear closer to the tips. Finally, note that the colors are arranged such that the red and green [orange and blue] lobes are on opposite ends, as expected since they correspond to \((W_{1},W_{2})\simeq(1,-1)\) and \((-1,1)\) [\((-1,-1)\) and \((1,1)\)]. Another route to destroying topological order proceeds via the application of a magnetic field. To study this, we extend the toric code Hamiltonian according to \[\hat{H}^{\prime}_{\rm tc}=\hat{H}_{\rm tc}-h\sum_{i}\hat{s}_{i}^{z}. \tag{22}\] Clearly, in the limit of \(h\to\infty\), the ground state is just a state where all spins are polarized along \(\hat{s}^{z}\) and topological order is lost. Starting from the pure toric code model (\(h=0\)) and turning on \(h\) reduces the gap of the "charge excitations" defined by flipping \(\hat{\mathcal{S}}_{S}\) from \(+1\) in the toric code ground state to \(-1\). Their condensation leads to a second-order quantum phase transition [69, 70, 71, 72]. Before addressing the transition, let us study the large-\(h\) limit. We first note that our ansatz in Eq. (17) does not need to be changed, as it can capture the polarized phase as well. For instance, denoting the "northmost" (and "southmost") spin of the plaquette \(P\) (and star \(S\)) by \(j_{0}(P)\) (and \(j_{0}(S)\)), respectively, the spin polarized state is realized for [see also Fig. 8(a) in the Appendix] \[b_{P}=b_{S}=-\frac{\pi}{4},\ \ w_{Xj}=\begin{cases}\frac{\pi}{4},&j=j_{0}(X),\\ 0,&\text{otherwise}.\end{cases} \tag{23}\] In fact, the spin polarized state has many representations within our RBM ansatz in Eq. (17), including representations that are not just related by the gauge transformations in Eq. (18).
For instance, the association \(j\to j_{0}(X)\) of a spin to a plaquette and star can be changed, e.g., by using the "easternmost" spin. As discussed in more detail in Appendix A.2, this redundancy is a consequence of the product form of \(\psi_{\rm rbm}(\mathbf{\sigma})\) in Eq. (17) and the fact that \(\psi_{\rm rbm}(\mathbf{\sigma})\) is _exactly_ zero if there is a single \(j\) with \(\sigma_{j}=-1\); consequently, it is a special feature of the simple product nature of the spin-polarized ground state. While in general there can still be additional redundancies besides the aforementioned gauge transformations, we do not expect such a structured set of redundancies to hold for generic states. There are various ways of resolving this issue. The most straightforward one is to replace the simple overlap measure \(S_{\rm n}\) in Eq. (11) by the direct overlap \(S_{\rm q}\) in Eq. (10) for a certain fraction of pairs of samples \(l\) and \(l^{\prime}\). If this fraction is large enough, the DM algorithm will be able to recognize that clusters of network parameters that might be distinct according to \(S_{\rm n}\) actually correspond to identical wave functions. We refer to Appendix A.3 where this is explicitly demonstrated. We note, however, that kernel PCA will not work anymore in this case; it will incorrectly classify connected samples as distinct, as it is based on the variance of the data rather than its connectivity. For simplicity of the presentation, we use \(S_{\rm q}\) for all states in the main text and focus on DM.

Figure 6: DM spectra for low-energy ensembles with \(T=0.3\) at finite field \(h\). (a) First 10 eigenvalues for various field values \(h=0.475,0.55,0.575,0.6,0.7\) at \(\epsilon=0.05\). The dot marker (\(h=0.475\)) shows that the eigenvalue spectra have a four-fold degeneracy, indicating a signature of topological order. In comparison, for the spectra marked by the triangular markers (\(h\geq 0.55\)), such degeneracy is absent. A transition field value \(h_{t}\simeq 0.55\) is identified by observing that a gap opens in the degenerate eigenvalue spectra. This is consistent with what we have observed in the fidelity using the same dataset [see Appendix B.1]. (b) Projected eigenvectors onto the first two components for \(h=0.475\). The color encodes \(\langle\overline{W}_{1(2)}\rangle\) with the color scheme of Fig. 4(c). The black cross marks the \(k\)-means centers. (c) Same as (b) for \(h=0.7\). (d) Expectation values of the averaged straight Wilson loops \(\langle\overline{W}_{1(2)}\rangle\) along the two directions for the states in (b). The color encodes the clustering results from \(k\)-means in the projected subspace of the eigenvectors shown in (b). (e) Same as (d) for the ensemble shown in (c).

The DM spectrum for large magnetic field, \(h=1\), and low temperature, \(T=0.1\), is shown in Fig. 5(d). Clearly, there is no value of \(\epsilon\) for which there is more than one eigenvalue close to \(1\) while exhibiting a gap to the rest of the spectrum. This shows that, as expected, the magnetic field \(h\) has led to the loss of topological order. To study the associated field-induced phase transition with our DM algorithm, we repeat the same procedure for various values of \(h\). The resulting spectra for selected \(h\) are shown in Fig. 6(a). We see that there are still four sectors for \(h=0.55\) in the data that are absent for \(h=0.575\) and larger values.
While the associated critical value of \(h\) is larger than expected [69; 70; 71], this is not a shortcoming of the DM algorithm but rather a consequence of our simple local variational ansatz in Eq. (17). By computing the fidelity as well as loop-operator expectation values, we can see that a critical value around \(h=0.55\) is the expected answer for our dataset (see Appendix B.1). More sophisticated ansätze for the wavefunction are expected to yield better values, but this is not the main focus of this work. More importantly, we see in Fig. 6(b) that the DM clustering of the states correctly reproduces the clustering according to the averaged loop-operator expectation values \(\langle\overline{W}_{j}\rangle\) (again indicated with color). Alternatively, this can be seen in Fig. 6(d), where \(\langle\overline{W}_{j}\rangle\) is indicated for the individual samples. Using four different colors for the four different clusters identified by the DM, we see that all states are clustered correctly. As expected based on the eigenvalues, there are no clear clusters anymore for larger \(h\), Fig. 6(c); nonetheless, naively applying \(k\)-means clustering in \(\psi_{1,2,3}\) manages to discover some residual structure of the wavefunctions related to \(\langle\overline{W}_{j}\rangle\), as demonstrated in Fig. 6(e).

## V Summary and discussion

In this work, we have described an unsupervised ML algorithm for quantum phases with topological order. We use neural network parameters to efficiently represent an ensemble of quantum states, which are sampled according to their energy expectation values. To uncover the structure of the superselection sectors in the quantum states, we used the dimensional-reduction technique of diffusion maps and provided a kernel defined in terms of network parameters. As opposed to a kernel based on the overlap of wavefunctions (or other quantum mechanical similarity measures of states, for that matter), this metric can be evaluated efficiently (within polynomial time) on a classical computer. We illustrated our general algorithm using a quasi-local restricted Boltzmann machine (RBM) and the toric code model in an external field; the choice of network ansatz was inspired by previous works [58; 59] showing the existence of efficient representations of the low-energy spectrum in terms of RBMs. Allowing for spatially inhomogeneous RBM networks, we identified the "gauge symmetries" of the ansatz, i.e., the set of changes in the network parameters that do not change the wavefunction apart from trivial global phase factors. We carefully designed a similarity measure that is gauge invariant, a key property since, otherwise, identical wavefunctions represented in different gauges would be falsely identified as being distinct. We showed that the resultant unsupervised diffusion-map-based embedding of the wavefunctions is consistent with the expectation values of loop operators; it correctly captures the presence of superselection sectors and topological order at low energies and fields, as well as the lack thereof when higher-energy states are involved and/or the magnetic field is increased. We also verified our results using the full quantum mechanical overlap of wavefunctions as the similarity measure.
On a more general level, our analysis highlights the importance of the following two key properties of diffusion maps. First, in the presence of different topological sectors, the leading eigenvectors of diffusion maps capture the connectivity rather than, e.g., the variance, as is the case for PCA. For this reason, the clustering is still done correctly even if a fraction of pairs of wavefunctions are incorrectly classified as being distinct due to the usage of an approximate similarity measure. This is why complementing the neural-network similarity measure, which has additional, state-specific redundancies in the large-field limit, by direct quantum mechanical overlaps for a certain fraction of pairs of states is sufficient to yield the correct classification. The second key property is that the diffusion map is a kernel technique. This means that the actual machine learning procedure does not require the full wavefunctions as input; instead, only (some measure of) the kernel of all pairs of wavefunctions in the dataset is required. We have used this to effectively remove the gauge redundancy in the RBM parametrization of the states by a proper definition of the network similarity measure in Eq. (20). Since the evaluation of full quantum mechanical similarity measures, like the wavefunction overlap, is very expensive on classical computers, an interesting future direction would be to use the emerging quantum-computing resources to evaluate a similarity measure quantum mechanically. This could then be used as input for a diffusion-map-based clustering. We finally point out that the ensemble of states we used in this work, which was based on sampling states according to their energy with respect to a Hamiltonian, is only one of many possibilities. The proposed technique of applying diffusion-map clustering using a gauge-invariant kernel in terms of the network parameters of a variational description of quantum many-body wavefunctions can, in principle, be applied to any ensemble of interest. For instance, to consider arbitrary local perturbations, one could generate an ensemble using finite-depth local unitary circuits. Alternatively, one could generate an ensemble based on (Lindbladian) time evolution to probe the stability of topological order against time-dependent perturbations or the coupling to a bath. We leave the investigation of such possibilities for future works.

## Code and data availability

The Monte Carlo simulations in this work were implemented in JAX [73]. Python code and data will be available at [https://github.com/teng10/ml_toric_code/](https://github.com/teng10/ml_toric_code/).

## Acknowledgements

Y.T. acknowledges useful discussions with Dmitrii Kochkov, Juan Carrasquilla, Khadijeh Sona Najafi, Maine Christos and Rhine Samajdar. Y.T. and S.S. acknowledge funding by the U.S. Department of Energy under Grant DE-SC0019030. M.S.S. thanks Joaquin F. Rodriguez-Nieva for a previous collaboration on DM [12]. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.

## Appendix A Variational Ansatz: Restricted Boltzmann Machine

The variational ansatz in Eq. (17) is a _further-restricted_ restricted Boltzmann machine (RBM), first introduced by Ref. [58].
An RBM is a restricted class of Boltzmann machine with an "energy" function \(E_{\text{RBM}}(\boldsymbol{\sigma},\boldsymbol{h};\Lambda)\) dependent on the network parameters \(\Lambda\), where \(\boldsymbol{\sigma}\) are physical spins and \(\boldsymbol{h}=\{h_{1},h_{2},\cdots,h_{N}\mid h_{i}=\pm 1\}\) are hidden spins (or hidden neurons) that are Ising variables. The parameters \(\Lambda\) define the coupling strengths among the physical and hidden spins. The restriction in an RBM is that the couplings are only between the physical spins \(\sigma_{i}\) and hidden spins \(h_{j}\), with strength \(-w_{ij}\), so that the "energy" function takes the form \(E_{\text{RBM}}(\boldsymbol{\sigma},\boldsymbol{h};\Lambda)=-\sum_{i}a_{i}\sigma_{i}-\sum_{i}b_{i}h_{i}-\sum_{ij}w_{ij}\sigma_{i}h_{j}\). It is a generative neural network that aims to model a probability distribution \(\mathbb{P}\) based on the Boltzmann factor, \[\mathbb{P}(\boldsymbol{\sigma};\Lambda)=\frac{1}{Z}\sum_{\boldsymbol{h}}e^{-E_{\text{RBM}}(\boldsymbol{\sigma},\boldsymbol{h};\Lambda)}, \tag{18a}\] \[\text{normalization}\quad Z=\sum_{\boldsymbol{\sigma},\boldsymbol{h}}e^{-E_{\text{RBM}}(\boldsymbol{\sigma},\boldsymbol{h};\Lambda)}. \tag{18b}\] For the task of modeling a quantum wavefunction amplitude \(\psi(\boldsymbol{\sigma};\Lambda)\), RBMs can be used as a variational ansatz by extending the parameters \(\Lambda\) to complex numbers. Further restricting the interlayer connections to the plaquette and star geometry of the toric code model [cf. Fig. 2(c)] and taking all parameters \(\Lambda\) to be purely imaginary, we recover the ansatz in Eq. (17) (up to a normalization factor \(\widetilde{Z}\)), \[\psi(\boldsymbol{\sigma};\Lambda)=\frac{1}{\widetilde{Z}}\sum_{\boldsymbol{h}}e^{-i\sum_{X}(\sum_{j\in X}w_{Xj}\sigma_{j}+b_{X})h_{X}}=\frac{1}{\widetilde{Z}}\prod_{X=P,S}\cos(\sum_{j\in X}w_{Xj}\sigma_{j}+b_{X}). \tag{19}\] The \(\cos(\cdot)\) factors come from summing over the hidden neurons, and the ansatz factorizes into the product of individual plaquette (star) terms because of the restricted connections. The estimation of physical observables of a wave function based on the RBM ansatz requires a Monte Carlo sampling procedure, which we discuss in Appendix B.

### Ground-state representation in different topological sectors

Placing the toric code model in Eq. (16) on the torus geometry, it is useful to define the loop operators, \[\hat{W}_{1}=\prod_{i\in\bar{l}_{x}}\hat{s}_{i}^{x},\quad\hat{W}_{2}=\prod_{i\in\bar{l}_{y}}\hat{s}_{i}^{x}, \tag{20a}\] \[\hat{V}_{1}=\prod_{i\in l_{x}}\hat{s}_{i}^{z},\quad\hat{V}_{2}=\prod_{i\in l_{y}}\hat{s}_{i}^{z}, \tag{20b}\] where \(l_{x,y}\) is a non-contractible loop along the \(x\), \(y\) direction, and \(\bar{l}_{x,y}\) is the analogue on the dual lattice. Note that the loop operators along the two directions do not commute with each other, as \(\left[\hat{W}_{1},\hat{V}_{2}\right]\neq 0\) and \(\left[\hat{W}_{2},\hat{V}_{1}\right]\neq 0\). However, since the Hamiltonian commutes with these loop operators, \(\left[\hat{W}_{1,2},\hat{H}_{\text{tc}}\right]=\left[\hat{V}_{1,2},\hat{H}_{\text{tc}}\right]=0\), it follows that the ground state subspace is four-fold degenerate and spanned by the eigenvectors of the loop operators. Suppose we work in the eigenbasis of \(\hat{W}_{1,2}\); we then define the four orthogonal ground states \(\left|\psi_{i}\right\rangle\) \((i=0,1,2,3)\) that span the ground-state subspace as
\[\hat{W}_{1}\ket{\psi_{0}}=\ket{\psi_{0}},\quad\hat{W}_{2}\ket{\psi_{0}}=\ket{\psi_{0}}, \tag{10a}\] \[\hat{W}_{1}\ket{\psi_{1}}=\ket{\psi_{1}},\quad\hat{W}_{2}\ket{\psi_{1}}=-\ket{\psi_{1}}, \tag{10b}\] \[\hat{W}_{1}\ket{\psi_{2}}=-\ket{\psi_{2}},\quad\hat{W}_{2}\ket{\psi_{2}}=\ket{\psi_{2}}, \tag{10c}\] \[\hat{W}_{1}\ket{\psi_{3}}=-\ket{\psi_{3}},\quad\hat{W}_{2}\ket{\psi_{3}}=-\ket{\psi_{3}}. \tag{10d}\]

Figure 7: RBM representations of the four toric code ground states in the eigenbasis [Eqs. (10a)-(10d)] of the loop operators \(\hat{W}_{1},\hat{W}_{2}\) in Eq. (20a).

The RBM ansatz in Eq. (19) can represent eigenstates of \(\hat{W}_{1,2}\) with eigenvalues \((W_{1},W_{2})=(\pm 1,\pm 1)\). Ref. [58] gave a representation of \(\ket{\psi_{3}}\) with the parameters \[w_{Pj}=\frac{\pi}{4},\quad b_{P}=0,\quad w_{Sj}=\frac{\pi}{2},\quad b_{S}=0. \tag{11a}\] On a system with an odd number of sites along the \(x\) and \(y\) directions, the other three degenerate states can be realized analogously by fixing the weights associated with the stars to be \(w_{Sj}=0,\,b_{S}=0\). The four states can then be chosen by changing \(w_{Pj}\) and \(b_{P}\) as shown in Fig. 7.

### Network parameter redundancies in the polarized phase

In Sec. III, we identified a set of gauge transformations, Eq. (18), that leave a generic wavefunction parameterized by the RBM ansatz in Eq. (17) invariant up to a global phase [Eq. (13)]. Such gauge transformations should be taken into consideration when evaluating the similarity measure \(S_{n}\). Moreover, we have numerically verified that, for states generated close to the exact toric code wave functions, \(S_{n}\) is a good proxy for the quantum measure \(S_{\rm q}\) after the explicit removal of such redundancies via Eq. (19). However, as alluded to in the discussion of the large-\(h\) limit, there are state-specific redundancies that are generally not related by the gauge transformations in Eq. (18). Let us illustrate such redundancies here for the polarized state \(\ket{\Psi}=\ket{1,\cdots,1}_{z}\), which has all spins pointing up in the \(z\)-basis. Notice that there is the same number of \(\cos(\cdot)\) factors in the wavefunction ansatz as the number of spins. As a result, we can define a "_covering_" by assigning each individual spin to a single factor and choosing the weights such that all spins are pointing up. Any such "covering" is a valid representation of the polarized state. For example, one representation is given by \[b_{P}=b_{S}=-\frac{\pi}{4},\ \ w_{Sj}=\begin{cases}\frac{\pi}{4},&j=j_{s}(S),\\ 0,&\text{otherwise},\end{cases}\quad\text{and}\ \ \ w_{Pj}=\begin{cases}\frac{\pi}{4},&j=j_{n}(P),\\ 0,&\text{otherwise},\end{cases} \tag{11b}\] where \(j_{s}(S)\) denotes the "southmost" spin in the star \(S\) and \(j_{n}(P)\) denotes the "northmost" spin in the plaquette \(P\) [see Fig. 8(a)]. Any such covering of the spins will correspond to a polarized state. For example, performing a "rotation" leads to the different covering in Fig. 8(b). In fact, because most amplitudes in the local-\(z\) basis are \(0\), there are only few constraints on the wave function amplitudes; a continuous set of weights exists to represent the polarized state, i.e., there are infinitely many redundancies for the completely polarized state. To illustrate this, let us consider the simplest example of just two spins [the boxed region in Fig. 8(c)] with the same RBM ansatz, which can be easily generalized to more spins.
For two spins, such an ansatz is given by \[\psi_{\Lambda}(\sigma_{A},\sigma_{B})=\cos(b_{S}+w_{SA}\sigma_{A}+w_{SB}\sigma_{B})\cos(b_{P}+w_{PA}\sigma_{A}+w_{PB}\sigma_{B}), \tag{12}\] where the weights \(\Lambda=\{\Lambda_{S}=\{b_{S},w_{SA},w_{SB}\},\,\Lambda_{P}=\{b_{P},w_{PA},w_{PB}\}\}\) with \(\Lambda_{Xj}\in[0,\pi)\) for \(X=S\) or \(P\) fully determine the two-qubit physical state. For example, the following two choices of weights [\(\Lambda_{1}\) and \(\Lambda_{2}\), shown pictorially in Fig. 8(c)] both parametrize the polarized state: \[\Lambda_{1}=\{b_{S}=-\frac{\pi}{4},w_{SA}=0,w_{SB}=\frac{\pi}{4},b_{P}=-\frac{\pi}{4},w_{PA}=\frac{\pi}{4},w_{PB}=0\}, \tag{10a}\] \[\Lambda_{2}=\{b_{S}=-\frac{\pi}{4},w_{SA}=\frac{\pi}{4},w_{SB}=0,b_{P}=-\frac{\pi}{4},w_{PA}=0,w_{PB}=\frac{\pi}{4}\}, \tag{10b}\] \[\psi_{\Lambda_{1,2}}=\begin{cases}1,&\sigma_{A}=\sigma_{B}=1,\\ 0,&\text{otherwise}.\end{cases} \tag{10c}\]

Figure 8: (a-b) Two RBM representations, Eq. (10), of the polarized state. (c) A path that connects the representations for two spins in (a-b), which is explicitly shown in Table 1.

Now, to illustrate the continuous redundancies, we construct a path in the parameter space that goes from \(\Lambda_{1}\) to \(\Lambda_{2}\). The path is composed of three steps [Fig. 8(c)], \[\Lambda_{1}\xrightarrow{\text{path}\,1}\Lambda_{3}\xrightarrow{\text{path}\,2}\Lambda_{4}\xrightarrow{\text{path}\,3}\Lambda_{2}, \tag{11}\] where the intermediate parameters are given by \[\Lambda_{3}=\{b_{S}=0,w_{SA}=\frac{\pi}{4},w_{SB}=-\frac{\pi}{4},b_{P}=-\frac{\pi}{4},w_{PA}=\frac{\pi}{4},w_{PB}=0\}, \tag{12}\] \[\Lambda_{4}=\{b_{S}=0,w_{SA}=\frac{\pi}{4},w_{SB}=-\frac{\pi}{4},b_{P}=-\frac{\pi}{4},w_{PA}=0,w_{PB}=\frac{\pi}{4}\}. \tag{13}\] Along each path component, referred to as paths 1 through 3 in Table 1, the parameters of \(S\) (or \(P\)) are varied while the others are held fixed, all the while remaining exactly in the polarized state. The path is continuous except at a singular point on path 1, where the wave function vanishes, at \(\Lambda_{\text{singular}}=\{b_{S}=0,w_{SA}=\frac{\pi}{4},w_{SB}=-\frac{\pi}{4},b_{P}=-\frac{\pi}{4},w_{PA}=\frac{\pi}{4},w_{PB}=0\}\).

Table 1: A path going from \(\Lambda_{1}\) to \(\Lambda_{2}\) is composed of three steps. Path 1 (\(\Lambda_{1}\rightarrow\Lambda_{3}\)) is smooth except at the point \(w_{SA}=\frac{\pi}{4},w_{SB}=-\frac{\pi}{4},b_{S}=0\), where the wavefunction vanishes; this is denoted by the arrows in the first row, first decreasing to 0 before increasing to 1. Paths 2 and 3 are both smooth. The last column illustrates that the wavefunction \(\psi=\psi_{S}\times\psi_{P}\) remains in the polarized state along the path. Blank entries (–) are not needed, since the product is already fixed by the other factor.

Path 1 (\(\Lambda_{1}\rightarrow\Lambda_{3}\)): \(w_{SB}=b_{S}+w_{SA}-\frac{\pi}{2}\), with \(w_{SA}:0\rightarrow\frac{\pi}{4}\), \(w_{SB}:\frac{\pi}{4}\rightarrow-\frac{\pi}{4}\), \(b_{S}:-\frac{\pi}{4}\rightarrow 0\); \(\Lambda_{P}\) fixed at \(w_{PA}=\frac{\pi}{4}\), \(w_{PB}=0\), \(b_{P}=-\frac{\pi}{4}\).

| Factor | \(X=S\) | \(X=P\) | \(\psi=\psi_{S}\times\psi_{P}\) |
| --- | --- | --- | --- |
| \(\cos(b_{X}+w_{XA}+w_{XB})\) | \(\neq 0\) if \(b_{S}+w_{SA}\neq\frac{n}{2}\pi\), \(n\in\mathbb{Z}\); \(\rightarrow 0\rightarrow 1\) | 1 | \(\rightarrow 0\rightarrow 1\) |
| \(\cos(b_{X}+w_{XA}-w_{XB})\) | – | – | 0 |
| \(\cos(b_{X}-w_{XA}+w_{XB})\) | \(\cos(2b_{S}-\frac{\pi}{2})\rightarrow 0\) | – | 0 |
| \(\cos(b_{X}-w_{XA}-w_{XB})\) | – | – | 0 |

Path 2 (\(\Lambda_{3}\rightarrow\Lambda_{4}\)): \(\Lambda_{S}\) fixed at \(w_{SA}=\frac{\pi}{4}\), \(w_{SB}=-\frac{\pi}{4}\), \(b_{S}=0\); \(w_{PB}=b_{P}-w_{PA}+\frac{\pi}{2}\), with \(w_{PA}:\frac{\pi}{4}\rightarrow 0\), \(w_{PB}:0\rightarrow\frac{\pi}{4}\), \(b_{P}=-\frac{\pi}{4}\).

| Factor | \(X=S\) | \(X=P\) | \(\psi=\psi_{S}\times\psi_{P}\) |
| --- | --- | --- | --- |
| \(\cos(b_{X}+w_{XA}+w_{XB})\) | 1 | 1 | 1 |
| \(\cos(b_{X}+w_{XA}-w_{XB})\) | 0 | \(\cos(2w_{PA}-\frac{\pi}{2})\rightarrow 0\) | 0 |
| \(\cos(b_{X}-w_{XA}+w_{XB})\) | 0 | – | 0 |
| \(\cos(b_{X}-w_{XA}-w_{XB})\) | – | – | 0 |

Path 3 (\(\Lambda_{4}\rightarrow\Lambda_{2}\)): \(w_{SB}=-b_{S}+w_{SA}+\frac{\pi}{2}\), with \(w_{SA}=\frac{\pi}{4}\), \(w_{SB}:-\frac{\pi}{4}\rightarrow 0\), \(b_{S}:0\rightarrow-\frac{\pi}{4}\); \(\Lambda_{P}\) fixed at \(w_{PA}=0\), \(w_{PB}=\frac{\pi}{4}\), \(b_{P}=-\frac{\pi}{4}\).

| Factor | \(X=S\) | \(X=P\) | \(\psi=\psi_{S}\times\psi_{P}\) |
| --- | --- | --- | --- |
| \(\cos(b_{X}+w_{XA}+w_{XB})\) | 1 | 1 | 1 |
| \(\cos(b_{X}+w_{XA}-w_{XB})\) | – | 0 | 0 |
| \(\cos(b_{X}-w_{XA}+w_{XB})\) | 0 | – | 0 |
| \(\cos(b_{X}-w_{XA}-w_{XB})\) | – | – | 0 |
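The two-spin example can be checked directly; the following sketch (with hypothetical helper names) evaluates Eq. (12) for \(\Lambda_{1}\) and \(\Lambda_{2}\) and confirms that both reproduce the polarized-state amplitudes of Eq. (10c):

```python
import numpy as np
from itertools import product

def psi_two_spin(sA, sB, bS, wSA, wSB, bP, wPA, wPB):
    """Two-spin RBM amplitude of Eq. (12)."""
    return (np.cos(bS + wSA * sA + wSB * sB)
            * np.cos(bP + wPA * sA + wPB * sB))

q = np.pi / 4
Lam1 = dict(bS=-q, wSA=0.0, wSB=q, bP=-q, wPA=q, wPB=0.0)   # Eq. (10a)
Lam2 = dict(bS=-q, wSA=q, wSB=0.0, bP=-q, wPA=0.0, wPB=q)   # Eq. (10b)

for Lam in (Lam1, Lam2):
    for sA, sB in product((1, -1), repeat=2):
        # expect 1 for (+1, +1) and 0 otherwise, cf. Eq. (10c)
        print(sA, sB, round(float(psi_two_spin(sA, sB, **Lam)), 6))
```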
### Resolving the special redundancies

In Appendix A.2, we explicitly showed that there can be a large set of redundancies for a polarized state. Hence, for simplicity, in the main text we have used the direct overlap \(S_{\rm q}\) in Eq. (10) as the relevant measure at finite field values. As discussed in the main text, a straightforward way to alleviate the redundancies in the network-parameter similarity measure \(S_{n}\) in Eq. (19) is to complement it with the direct overlap. By using a combination of both measures, we are able to reduce the computational cost of the direct overlap by a fraction, as the similarity is easy to compute. More specifically, we define a mixed measure \(S_{m}\) by replacing a random fraction (given by \(f\)) of the similarity measure pairs \(\left\{l,l^{\prime}\right\}\) by a rescaled overlap measure \(\widetilde{S}_{q}\), such that \[S_{m}(l,l^{\prime})=\begin{cases}\widetilde{S}_{q}(l,l^{\prime})&\text{with probability }f,\\ S_{n}(l,l^{\prime})&\text{with probability }1-f.\end{cases} \tag{38}\] The following rescaling of the overlap measure \(S_{q}\) is necessary because we want to include the two measures on an equal footing: \[\widetilde{S}_{q}=\frac{S_{q}-n_{q}}{m_{q}-n_{q}}\cdot(m_{n}-n_{n})+n_{n}, \tag{39a}\] \[m_{q}=\max(S_{q}),\quad n_{q}=\min(S_{q}), \tag{39b}\] \[m_{n}=\max(S_{n}),\quad n_{n}=\min(S_{n}). \tag{39c}\] For example, we see that the minimum of the rescaled overlap is the same as the minimum of the similarity, \(\min(\widetilde{S}_{q})=\min(S_{n})\).
In Fig. 9, we demonstrate that, by using a mixed measure with a replacement fraction of \(f=0.4\), our algorithm with DM is able to identify the presence (indicated by the shaded blue region for the smaller field values \(h=0.475\) and \(h=0.55\)) and absence (\(h=0.7\)) of superselection sectors across various field values, consistent with the predictions of the algorithm using the direct overlap (shown in Fig. 6). We note that in the case of a mixed measure, DM is the natural technique, as the algorithm looks for connectivity; kernel PCA would fail to identify such a transition (since a fraction of pairs of wave functions are incorrectly considered to be dissimilar by \(S_{n}\), the leading kernel PCA components still show four separated clusters up to the largest magnetic field, \(h=1\)).

## Appendix B Optimization with Variational Monte Carlo

To find the ground state \(\ket{\Psi(\Lambda^{0})}\propto\sum_{\mathbf{\sigma}}\psi(\mathbf{\sigma};\Lambda^{0})\ket{\mathbf{\sigma}}\), we wish to minimize the energy expectation \(\langle E\rangle=\langle\Psi|\hat{H}|\Psi\rangle/\langle\Psi|\Psi\rangle\) (omitting the variational parameters \(\Lambda^{0}\) in this section), which is bounded from below by the ground-state energy by the variational principle. An exact computation \(\langle E\rangle_{\text{exact}}\) is costly, as the summation enumerates exponentially many spin configurations \(\mathbf{\sigma}\) as the system size increases. Here we use the variational Monte Carlo (VMC) importance-sampling algorithm to estimate such expectation values. The idea is to compute relative probabilities between different configurations and to sample from the true wavefunction probability density \(|\psi(\mathbf{\sigma})|^{2}\), without having to compute \(|\psi(\mathbf{\sigma})|^{2}\) for all \(\mathbf{\sigma}\). To perform this algorithm, we initialize \(M\) random configurations \(\{\mathbf{\sigma}_{i}\}_{i=1}^{M}\) and continue each with random walks based on the previous configurations, hence forming \(M\) Markov chains. In particular, the Metropolis-Rosenbluth algorithm [74] is used to propose the next configuration \(\mathbf{\sigma}^{\prime}_{i}\), locally connected to \(\mathbf{\sigma}_{i}\), according to a proposal function \(g(\mathbf{\sigma}^{\prime}|\mathbf{\sigma})\). For the toric code model, we use two types of proposals: spin flips and vertex flips. Here, we will assume a probability of \(p\) for proposing spin flips and analogously \(1-p\) for vertex flips that are equally likely at all sites: \[g(\mathbf{\sigma}^{\prime}|\mathbf{\sigma})=\begin{cases}\frac{p}{n_{s}},&\text{for spin flips},\\ \frac{1-p}{n_{v}},&\text{for vertex flips},\end{cases} \tag{40}\] where \(n_{s}\) and \(n_{v}\) are the numbers of all possible spin and vertex flips. The acceptance of \(\mathbf{\sigma}^{\prime}\) is determined by the probability \[\mathbb{P}_{\text{accept}}(\mathbf{\sigma}\rightarrow\mathbf{\sigma}^{\prime})=\min\left(|\frac{\psi(\mathbf{\sigma}^{\prime})}{\psi(\mathbf{\sigma})}|^{2},\,1\right). \tag{41}\]

Figure 9: DM spectra for different field values \(h=0.475,0.55,0.7\) at \(T=0.3\) using a mixed similarity measure \(S_{m}\) with a fraction \(f=0.4\) in Eq. (38). The blue shaded regions highlight the existence of a range of \(\epsilon\) with a spectral gap between the degenerate eigenvalues and the decaying eigenvalues, indicating underlying superselection sectors. As the field value approaches the transition field \(h_{c}\), this region shrinks and disappears at the high field \(h=0.7\), indicating the absence of sectors.
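A minimal sketch of one Metropolis update of Eqs. (40)-(41), with `propose` standing in for the spin/vertex-flip proposal \(g\):

```python
import numpy as np

def metropolis_step(sigma, psi, propose, rng):
    """One Metropolis update, cf. Eqs. (40)-(41).

    sigma   : current spin configuration (array of +/-1)
    psi     : callable returning the (unnormalized) amplitude psi(sigma)
    propose : callable returning a candidate configuration sigma'
    """
    sigma_new = propose(sigma, rng)
    ratio = abs(psi(sigma_new) / psi(sigma)) ** 2   # Eq. (41)
    if rng.random() < min(ratio, 1.0):
        return sigma_new   # accept
    return sigma           # reject
```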
The random walks are repeated long enough that the final configurations at the tails of the chains, \(\Sigma_{\text{MC}}=\{\mathbf{\sigma}_{i}\}_{i=b}^{M}\), approximate samples drawn from the probability distribution \(|\psi(\mathbf{\sigma})|^{2}\). A certain number \(b\) of walkers in each chain are discarded to reduce the bias from the initialization of the chains. The expectation of an observable \(\hat{O}\) is then given by \[\left\langle\hat{O}\right\rangle_{\text{MC}}=\frac{\sum_{\mathbf{\sigma}}\psi(\mathbf{\sigma})^{*}\langle\mathbf{\sigma}|\hat{O}|\Psi\rangle}{\sum_{\mathbf{\sigma}}|\psi(\mathbf{\sigma})|^{2}} \tag{10a}\] \[=\frac{\sum_{\mathbf{\sigma}}|\psi(\mathbf{\sigma})|^{2}\frac{\langle\mathbf{\sigma}|\hat{O}|\Psi\rangle}{\psi(\mathbf{\sigma})}}{\sum_{\mathbf{\sigma}}|\psi(\mathbf{\sigma})|^{2}} \tag{10b}\] \[=\frac{1}{M}\sum_{\mathbf{\sigma}\in\Sigma_{\text{MC}}}\frac{\langle\mathbf{\sigma}|\hat{O}|\Psi\rangle}{\psi(\mathbf{\sigma})}. \tag{10c}\] Defining the local value of the operator \(\hat{O}\) as \[O_{\text{loc}}=\frac{\langle\mathbf{\sigma}|\hat{O}|\Psi\rangle}{\psi(\mathbf{\sigma})}, \tag{11}\] the Monte Carlo estimate is the average of the local values in the Markov chain: \(\left\langle\hat{O}\right\rangle_{\text{MC}}=\frac{1}{M}\sum_{\mathbf{\sigma}\in\Sigma_{\text{MC}}}O_{\text{loc}}\). Next, to minimize \(\langle E\rangle\), we can compute its gradient with respect to the weights \(\Lambda^{0}\) in terms of the local energy \(E_{\text{loc}}\) and the wavefunction amplitude derivative \(D_{i}\): \[\partial_{\Lambda_{i}}\langle E\rangle=\langle E_{\text{loc}}D_{i}\rangle-\langle E_{\text{loc}}\rangle\langle D_{i}\rangle, \tag{12a}\] \[E_{\text{loc}}=\frac{\langle\mathbf{\sigma}|\,H\,|\Psi\rangle}{\psi(\mathbf{\sigma})},\quad D_{i}=\frac{\partial_{\Lambda_{i}}\psi(\mathbf{\sigma})}{\psi(\mathbf{\sigma})}. \tag{12b}\] Finally, we use gradient descent with learning rate \(\lambda\), \[\Lambda_{i}\rightarrow\Lambda_{i}-\lambda\partial_{\Lambda_{i}}\langle E\rangle, \tag{13}\] to minimize the energy expectation value. The gradient descent is performed using the adaptive Adam optimizer [75]. We repeat this training step until empirical convergence. Note that the RBM ansatz can get stuck in local minima. To find the toric code ground state, we initialize the network parameters close to the analytic solutions in Eq. (10).

### Fidelity

To find the approximate ground states at finite field values \(h\) with step size \(\Delta h\), we initialize the weights to those from the previous field value \(h-\Delta h\), and then use the current optimized weights as the initialization for the next step \(h+\Delta h\). A good indicator of a quantum phase transition is the fidelity \(\mathcal{F}(h)\), defined as \[\mathcal{F}(h)=|\langle\psi(h)|\psi(h+\Delta h)\rangle|^{2}. \tag{14}\] The critical field \(h_{c}\) is identified as a dip in the fidelity, indicating an abrupt change in the ground-state wavefunction. A field value of \(h_{c}\simeq 0.57\) (at the dashed line in Fig. 10) is found for the RBM ansatz. Note that one can get a more accurate field value by including loop expectations in the ansatz, as done in Ref. [59].

## Appendix C Ensemble generation

Using the algorithm outlined in Sec. II.1, we can generate ensembles that deviate from the initial optimized parameters by setting the hyper-parameter \(T=0.1,0.3,1\).
The other choices of hyper-parameters for the ensembles are the number of independent chains, \(k=2\), the length of each chain, \(n=250\), and the number of samples kept, \(m=n\). The parameter proposal function we use either, with probability \(p_{m}\), randomly applies a minus sign or, with probability \(1-p_{m}\), randomly adds local noise at a single site \(\jmath\). More precisely, \[f(\Lambda,\xi)=\begin{cases}f_{-,\jmath},&\text{with probability}\ p_{m},\\ f_{\text{local},\jmath},&\text{with probability}\ 1-p_{m},\end{cases} \tag{15a}\] \[f_{-,\jmath}=\begin{cases}-(\Lambda)_{i},&i\in\jmath,\\ (\Lambda)_{i},&i\not\in\jmath,\end{cases} \tag{15b}\] \[f_{\text{local},\jmath}=\begin{cases}\text{uniform}(0,\xi)+(\Lambda)_{i},&i\in\jmath,\\ (\Lambda)_{i},&i\not\in\jmath.\end{cases} \tag{15c}\] In the exact toric code state, \(f_{-,\jmath}\) corresponds to acting with a \(\sigma_{x}\) operator at site \(\jmath\), creating a pair of m-particles. In the trivial phase, depending on the parametrization of the state, \(f_{-,\jmath}\) could correspond to a single spin flip at site \(\jmath\). The hyperparameters are chosen to be \(p_{m}=0.3\) and \(\xi=0.2\). In Fig. 11, we visualize the ensembles by computing their loop expectations \(\langle\overline{W}_{j}\rangle\) at different field values.

Figure 10: Fidelity \(\mathcal{F}\) as a function of the field \(h\). The red dashed line is drawn to guide the eye; the dip in the fidelity indicates the critical field value \(h_{c}\simeq 0.57\).
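A minimal sketch of the proposal function in Eq. (15), assuming the parameters are stored block-wise in a dictionary (a storage convention of ours, not specified by the paper):

```python
import numpy as np

def propose_parameters(Lam, site, xi=0.2, p_m=0.3, rng=None):
    """Parameter-space proposal of Eq. (15): with probability p_m flip the
    sign of the parameters in block `site` (f_{-,j}, Eq. (15b)); otherwise
    add uniform noise in [0, xi) to them (f_{local,j}, Eq. (15c)).

    Lam  : dict mapping a block label -> array of that block's parameters
    site : label of the block to update
    """
    rng = rng or np.random.default_rng()
    new = {k: v.copy() for k, v in Lam.items()}
    if rng.random() < p_m:
        new[site] = -new[site]
    else:
        new[site] = new[site] + rng.uniform(0.0, xi, size=new[site].shape)
    return new
```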
# Upper Limit on Correlated Current Variations in the Crab Pulsar

###### Abstract

The high energy emission of rotation powered pulsars is supposed to be produced in "gaps" in the pulsar magnetosphere where charges are accelerated and currents are produced. The rest of the magnetosphere is supposed to be mostly a "force-free" plasma without any currents. Two important currents are the main current that flows away from the pulsar, that produces the observed radiation, and the current that returns to the pulsar to maintain charge neutrality. This work attempts to study the return current in the Crab pulsar using the soft X-ray data from the _NICER_ observatory. It is assumed that the two currents vary as a function of time. This would modulate the electric fields in the "gaps", which would affect the observed X-ray flux. These flux variations will show up only in the on-pulse phases, while those caused by the Crab Nebula, instrumental effects, etc. will be present in the off-pulse phases also. This work obtains the correlation coefficient of the flux variations in the two peaks of the Crab pulsar, after removing the off-pulse flux variations. No correlation was observed; its error of 0.000012 sets an upper limit of 0.036% on the rms variation of correlated X-ray flux in the Crab pulsar. Reasons exist for the return current variations to be correlated, while the main current variations are probably uncorrelated. So the above number is considered an upper limit on correlated return current variations, which may be an important constraint for pulsar magnetospheric structure.

keywords: Stars: neutron - Stars: pulsars: general - Stars: pulsars: individual PSR J0534+2200 - Stars: pulsars: individual PSR B0531+21 - Stars: pulsars: individual PSR B0823+26 - Stars: pulsars: individual PSR B0943+10 - Stars: pulsars: individual PSR B1822-09 - X-rays: general

## 1 Introduction

Rotation powered pulsars (RPPs) have intense magnetic fields whose rotation causes intense electric fields outside the pulsar. These pull out electrons and ions from the surface of the pulsar, which are then accelerated and produce electron-positron pairs. These pairs are themselves accelerated and produce further pairs. Eventually this cascade leads to the outside of the pulsar being filled with a plasma of electrons, positrons and ions co-rotating with the pulsar - this is known as the magnetosphere. For original contributions to this subject see Goldreich & Julian (1969), Ostriker & Gunn (1969), Sturrock (1971), Ruderman & Sutherland (1975), Arons & Scharlemann (1979), Arons (1983) and Cheng et al. (1986). Setting aside pair production for a moment, consider what happens when a current of, say, electrons leaves the pulsar. If no other physics intervenes, then a positive charge builds up on the pulsar as a function of time. This would reduce the accelerating electric field and eventually inhibit the outward electron current. The pulsar may end up as a charged and magnetized globe rotating at its period, but not producing the high energy X-rays and \(\gamma\)-rays that are emitted by the Crab pulsar. This would be an inert electrosphere (Arons, 2009; Spitkovsky, 2011). Clearly pair production intervenes - the accelerated charges emit curvature or synchrotron or inverse Compton photons of high energy, which form the observed radiation. These photons also produce electron-positron pairs in the strong magnetic field of the pulsar.
The electrons of the pairs are accelerated in the same direction as the original current, i.e., away from the pulsar, while the positrons are accelerated in the reverse direction, and can reach the pulsar through a region of "force-free" plasma. But this can not be the return current - it has the wrong sign of charge. Therefore a proper return current is a vital component of the pulsar high energy emission mechanism. See Contopoulos et al. (1999), Cheng (2011), Hirotani (2011), Spitkovsky (2011), Arons (2011), Petri (2011) and Contopoulos et al. (2019) for details of the pulsar magnetosphere and the two currents and their relation to the high energy emission mechanism. To the best of my knowledge the return current was first discussed seriously by Arons & Scharlemann (1979). A brief summary of the essential features of the two currents is given below. It is an observer's perspective of the essential features of the currents in a RPP; several theoretical details are unimportant for the current purpose and are therefore ignored (see Cerutti et al. (2016) and Philippov et al. (2020) for illustration).

### The main and return currents in a RPP

Figure 1 (a) shows a cartoon of the pulsar magnetosphere and its various "gaps". It is a two dimensional projection of a three dimensional object. The RPP is defined by its rotation axis \(\Omega\) and magnetic dipole axis \(\mu\). The magnetosphere is divided into open and closed magnetic field line regions; the field line labeled "last closed field line" defines the boundary between the two. A charge in the closed field line region can not leave the magnetosphere. The rest of the magnetosphere shown consists of the open field line region, in which currents can leave and return to the pulsar. In the open field line region there are three gaps. The polar cap gap lies just above the surface of the pulsar; it was the earliest gap proposed and is currently believed to be the source of the radio radiation of the RPP. It was found unsuitable for the high energy emission because of the high magnetic opacity for photons near the surface of the pulsar. So the outer gap was proposed; it extends from a so called null surface at one end to the light cylinder at the other end, both shown as dotted lines in Fig. 1 (a), and has a thickness much smaller than its length. Its lower boundary is quite close to the last closed field line; this is the green area in the figure. Between the lower boundary of the outer gap and the last closed field line lies the slot gap, a relatively thin gap shown in orange color in the figure. It extends right from the surface of the RPP to the light cylinder. Now, the main current from the pulsar originates in these gaps and flows away from the pulsar; the charge carried away by this current depends upon whether the angle between \(\Omega\) and \(\mu\) is acute or obtuse. It is not clear whether all three gaps can operate simultaneously in the same RPP (the polar cap gap and outer gap are believed to be mutually exclusive), but at least one of them is active. See Harding & Grenier (2011), Harding (2022) and references therein for details about these gaps. The return current reaches the RPP through a very narrow bundle of magnetic field lines along the last closed field line; this would be below the slot gap and probably almost coincides with what is known as the separatrix layer, which is like a narrow slot gap that extends beyond the light cylinder. For details of the return current please refer to Arons (2011), particularly its Fig. 5, and Fig. 15.2 of Arons (2009), and figures 1 and 2 of Contopoulos et al. (2019).
5, and Fig. 15.2 of Arons (2009), and figures 1 and 2 of Contopoulos et al. (2019). The structure of both currents is a strong function of the angle \(\alpha\) between the rotation axis \(\Omega\) and the magnetic axis \(\mu\).

### Some properties of the main current

The properties of the main current obviously depend upon the gap involved. For the polar cap gap early research postulated periodic build up and breakdown of the gap on timescales of microseconds (Ruderman & Sutherland, 1975), leading to a current that sparks on microsecond time scales. This was revised later to a space charge limited quasi steady state current, due to the work function of iron ions on the surface of the neutron star. The main current in all gaps depends upon the so called "favorably curved magnetic field lines" (see Cheng (2011) and references therein). In all gaps pair production reduces the effective electric field due to screening by the pairs - then one has a space charge limited flow. In the outer gap the current is quite low in the accelerating region but much larger in the screening region (Cheng, 2011). The magnitude of the main current depends upon the gap thickness. So high-altitude slot gaps can not produce sufficient high energy flux due to their being thin (Hirotani, 2011).

### The return current and the current sheet

The last closed field line has been depicted as almost a circle in Fig. 1 (a). Modern particle-in-cell simulations show it to be actually that depicted in Fig. 1 (b). It develops a sharp projection at the point where it just touches the light cylinder; this is known as the "Y-point". Here it also connects with the so called "current sheet", shown in red in the figure (strictly the Y-point does not necessarily have to touch the light cylinder (Spitkovsky, 2011)). Most of the return current originates far away from the RPP and flows towards it in the current sheet. At the Y-point the return current splits into two streams along the last closed field line, each proceeding to one of the two magnetic poles of the RPP. Although the current sheet has been represented by a red curve of uniform width in Fig. 1 (b), it is thickest at the Y-point and reduces in thickness further away from the RPP.

### Some properties of the return current

The above is a two dimensional projection of a three dimensional object, the projection being in the plane containing the vectors \(\Omega\) and \(\mu\). So the current sheet is actually a current ring or a current torus, the Y-point is actually a Y-volume since it has finite extension in all three dimensions, and the return current is flowing down the edge of a three dimensional magnetic funnel at the polar caps of the RPP. The nature of the current sheet and the return current it provides depend upon the angle \(\alpha\). The magnitude of the return current decreases, and its magnetospheric distribution changes dramatically, with \(\alpha\) (Spitkovsky, 2011). The return current path is also expected to carry a counter-streaming current (Arons, 2011; Contopoulos et al., 2019). In this work we are concerned with the net return current, which is the sum of all counter-streaming currents, if at all they exist. It is now suspected that the high energy emission from RPPs originates due to reconnection in the current sheet, presumably at the Y-point. See Spitkovsky (2011), Arons (2011) and Petri (2011) for details.
### The philosophy of this analysis

To begin with, it is assumed that both the main and return currents together determine the maximum available electric potential in the gaps, which is related to the X-ray flux emitted by the RPP. The gaps can be thought of as electrical batteries that are charged and discharged by the two currents in a collective manner, implying higher and lower electric potential available for particle acceleration, respectively. However the dynamics of the two currents are likely to be vastly different - the main current is powered by the immense rotational inertia of the pulsar and its intense magnetic field, and depends upon the details of the gaps, the pair production process, and the high energy emission mechanism. On the other hand, the return current begins its journey somewhere far away from the RPP and depends critically upon the details of the current sheet and the pulsar magnetosphere. Next it is assumed that both currents vary as a function of time. Clearly a steady, unchanging current is not logical, given the highly energetic and almost explosive environment in the pulsar magnetosphere. What is not known is the time scale of such variations. It is assumed here that all possible time scales of variation exist in the currents. Next, one notes regarding the main current that (1) its variations are unlikely to be correlated at the two poles, since the emission at each pole is expected to be independent, and (2) the emission at each pole may or may not be correlated across different phases in the folded light curve (FLC) of the RPP, which are equivalent to different regions of emission in the gaps. On the other hand, variations imposed upon the return current beyond the Y-point are certainly correlated not only at the two poles but also at all phases in the FLC, although one can not rule out de-correlating variations being imposed on the return current between the Y-point and the surface of the RPP. The purpose of this work is to study correlated flux variations, if any, in the Crab pulsar. The basic premise of this work is that the return current variations (beyond the Y-point) should be imprinted on the X-ray flux of the RPP, while the main current variations may or may not be imprinted. Further, these variations will exist in the pulsar flux, and not in that from the nebula. So the technique used here is to divide the FLC of the Crab pulsar into three regions of phase - the main peak and the second peak (forming the so called on-pulse region), and the off-pulse region. The nebular flux and instrumental effects will exist at all phases while the current variations are expected only in the on-pulse. The idea is to correlate the X-ray flux in the two peaks of the Crab pulsar, after estimating the nebular flux variations and/or instrumental effects from the off-pulse phases and removing them from the on-pulse phases. At best this number will be determined only by the return current variations; however one can not rule out correlated main current variations altogether.
Incidentally, variations of currents in a RPP can also manifest as variations of electromagnetic torque on the RPP, leading to fluctuations in its rotation period. This is commonly known as timing noise. This work addresses a particular component of it, that which is correlated across the FLC of the RPP. As mentioned earlier in this section, only the return current may display such a correlation. Given the possibility of de-correlating effects before the Y-point, it is possible that the correlated return current variations may be a very small fraction of the total current variations in the RPP. In such a situation the correlated return current variations may not leave a measurable imprint on the timing noise of a RPP. In footnote 6 on page 387 Arons (2009) states that observational study of pulsar currents has been an untouched subject. This work attempts to redress this issue.

## 2 Observations and Analysis

The details of the _NICER_ observations used here and their preliminary analysis are given in Vivekanand (2020, 2021). The top panel of Fig. 2 is the long time light curve (LTLC) of the Crab pulsar with a bin size of 10 s in the energy range 1 to 10 keV. This consists of 137161 s of data, equivalent to \(\approx 4.06\) million periods. It is similar to Fig. 1 of Vivekanand (2021) except that a small amount of initial data is not included, for reasons stated there, and the energy range is different. The bottom panel of Fig. 2 is the same data with a bin size of 300 periods, which is approximately 10.125 s. The binning is in number of periods because this analysis depends upon the phase in the FLC, so the period is the natural unit for binning. It also eliminates the error introduced by fractional periods of data at the beginning and the end of a time bin. By plotting the data in terms of the sequence number of the bins one eliminates the large gaps in epoch in the top panel, leading to a better appreciation of the variability of the X-ray flux of the Crab pulsar.

Figure 2: (a) Top panel: Long time light curve (LTLC) of the Crab pulsar from _NICER_ data in the energy range \(1-10\) keV with a bin size of 10 s, using the 27 observations specified in Vivekanand (2021); the abscissa is exactly the same as in its Fig. 1; the ordinate is in units of thousand photons per second. (b) Bottom panel: LTLC using the same data but with a bin size of 300 periods, which is \(\approx 10.125\) s; the abscissa is the sequence number of the bins while the ordinate is in photons per period. The green and blue dashed lines mark the same epochs respectively in both panels.

The green and blue dashed lines in the top panel represent the epochs 2018.25 and 2019.25 respectively. The corresponding lines in the bottom panel mark the partitioning of the data in terms of these epochs. Thus the sequence numbers in the bottom panel just before and just after the green dashed line correspond to the time bins just before and just after the green dashed line in the top panel. The same is true for the blue dashed line in both panels. However the correspondence is approximate since the bins in both panels are slightly different in terms of time. Further, the period bins in the bottom panel of Fig. 2 are slightly larger in terms of time towards later epochs since the period of the Crab pulsar increases with epoch. The bottom panel of Fig. 2 was obtained by first searching for a new period at the start of each good time interval (GTI), then accumulating photon counts over 300 contiguous periods for each bin. The incomplete bin at the end of the GTI is discarded. The mean and rms of the counts are 253.79 and 1.79 respectively; the rms is a fraction \(1.79/253.79=0.00705\) of the mean, or \(\approx 0.71\%\). As shown later, this variation of X-ray flux is present at all phases in the FLC. Therefore this must be due either to the Crab nebula or to instrumental effects such as pointing variations of _NICER_.
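The 300-period binning described above is straightforward to sketch in code. The following is a minimal illustration with synthetic stand-in numbers (the event times, count rate and period value are hypothetical, not _NICER_ data), assuming the barycentred photon list of one GTI and the period found for that GTI are at hand:

```python
import numpy as np

# Stand-in for one GTI: barycentred photon arrival times (s) and the period
# found at the start of the GTI; both are hypothetical values for illustration.
rng = np.random.default_rng(0)
P = 0.0337                                   # approximate Crab period, s
t = np.sort(rng.uniform(0.0, 3037.5, 2_000_000))

pulse = (t // P).astype(np.int64)            # rotation number of each photon
n_per = 300                                  # periods per bin (~10.125 s)
n_complete = int(pulse.max()) // n_per       # incomplete last bin is discarded
counts = np.bincount(pulse // n_per)[:n_complete]

per_period = counts / n_per                  # ordinate of Fig. 2, bottom panel
print(per_period.mean(), per_period.std())  # mean and rms, cf. the ~0.71% figure
```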
Now, this number is consistent with the \(1^{\prime}\) pointing errors of _NICER_ producing \(2\%-4\%\) flux variations.1 These variations have to be removed from the data before estimating those that exist only in the on-pulse phases.

Footnote 1: [https://heasarc.gsfc.nasa.gov/docs/nicer/data_analysis/nicer_analysis_tips.html](https://heasarc.gsfc.nasa.gov/docs/nicer/data_analysis/nicer_analysis_tips.html)

The analysis begins by defining three phase ranges in the period of the Crab pulsar - first peak, second peak and off-pulse, labeled P1, P2 and P3 respectively, with respect to Fig. 1 of Vivekanand (2022). The three phase ranges were finalized after some experimentation as 0.1015625 to 0.40625, 0.40625 to 0.796875, and the rest of the phase range, respectively, the specific numbers being chosen so as to correspond to integer multiples of \(1/128\) phase, and also to maximize the off-pulse phase range without compromising on the on-pulse phase range. The final results of this work are not sensitive to the exact choice of the above phase range limiters. P1 and P2 together form the on-pulse phase range while P3 is the off-pulse. Throughout this work it will be assumed that component P3 contains no pulsar flux, in spite of Tennant et al. (2001) discovering off-pulse non-thermal X-ray emission from the Crab pulsar at much softer X-ray energies, because their Fig. 1 shows that this flux is about two orders of magnitude smaller than the flux at the first peak of the Crab pulsar. The next section implements the cross-correlation analysis upon the data of P1, P2 and P3 to reproduce the \(\approx 0.71\%\) flux variation in all three components. This is to validate our analysis method. In the following section these variations are estimated in the P3 component and removed from all three components. Cross-correlation of the modified P1 with the modified P2 should give us the correlated variations that exist only in the on-pulse, which will be attributed to current variations in the Crab pulsar. Correspondingly, cross-correlation of the modified P1 and P2 with the modified P3 should result in almost zero correlation.

## 3 Analysis of raw data

Figure 3 is the same as the bottom panel of Fig. 2 but for each of the components P1, P2 and P3 of the FLC; the sum of these three rates is exactly equal to the corresponding rate in the bottom panel of Fig. 2. The flux variation of Fig. 2 exists in all three components of the FLC. It is known from Fig. 2 that the fractional flux variation is \(\approx 0.71\%\) of the mean flux. This will now be derived by cross correlating the fluxes in the three components, so as to validate the procedure for the next section. In Fig. 3 the X-ray flux data has been averaged over 300 periods for convenience of visualization and plotting.

Figure 3: Same as the bottom panel of Fig. 2 but for each of the components P1, P2 and P3 of the FLC (top, middle and bottom panels respectively).
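The phase partition and the per-period cross correlation can be sketched as below. The per-period counts here are synthetic stand-ins, and the estimator is assumed to be the standard Pearson coefficient (the exact estimator and its error are in Appendix A, which is not reproduced here); note that the correlation is computed on un-averaged, single-period data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_periods = 4_060_000               # ~137161 s of data at the Crab period

# Phase boundaries from the text are exact integer multiples of 1/128:
assert 13 / 128 == 0.1015625 and 52 / 128 == 0.40625 and 102 / 128 == 0.796875

# Stand-ins for per-period counts in P1 and P2 (pure Poisson noise here):
p1 = rng.poisson(120.0, n_periods).astype(float)
p2 = rng.poisson(100.0, n_periods).astype(float)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

print(pearson(p1, p2))              # ~0 for independent noise
print(1.0 / np.sqrt(n_periods))     # ~0.0005, the size of the errors in Table 1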
However the cross correlation of the data must be estimated using the raw (un-averaged) data, since averaging the data before cross correlating it biases it to larger values. This is because averaging the data decreases the denominator while maintaining approximately the same numerator in the formula for cross correlation (Appendix A). In a larger context this problem is related to what is known as the "ecological fallacy"2, in which correlation of aggregate data is used to interpret possible correlation among individual data, which can be misleading. In our case therefore the cross correlation must be done with the count rate data of a single period.

Footnote 2: [https://en.wikipedia.org/wiki/Ecological_fallacy](https://en.wikipedia.org/wiki/Ecological_fallacy)

Table 1 lists the estimated cross correlation \(R\) between pairs of components in the FLC of the Crab pulsar, along with the derived fractional rms variation \(\sigma_{c}\) of the correlated X-ray flux (see Appendix A, which also explains the error on the cross correlation). The \(\sigma_{c}\) values in Table 1 and the upper and lower limits of errors have not been rounded off, to bring out the fact that a uniform error of 0.0004 or 0.0005 is a good enough approximation instead of the upper and lower limits. Results for the three \(\sigma_{c}\) values are 0.70(4)%, 0.66(5)% and 0.69(4)%, which are consistent with each other. Their weighted mean is 0.69(2)%, which is consistent with the value of \(\approx 0.71\%\) estimated in the previous section. Our analysis is therefore validated for use in the following section. Appendix B discusses the so called "ecological fallacy" and the variation of the cross correlation \(R\) for data averaged over \(M\) samples.

\begin{table} \begin{tabular}{c c c} \hline \hline Component pairs & \(R\) (Correlation) & \(\sigma_{c}\) (Fractional rms) \\ \hline \hline (P1, P2) & \(R_{12}=0.0044(5)\) & \(0.00701^{+0.00039}_{-0.00041}\) \\ \hline (P1, P3) & \(R_{13}=0.0033(5)\) & \(0.00656^{+0.00048}_{-0.00052}\) \\ \hline (P2, P3) & \(R_{23}=0.0041(5)\) & \(0.00691^{+0.00041}_{-0.00044}\) \\ \hline \hline \end{tabular} \end{table}

Table 1: Cross correlation \(R\) of X-ray flux data binned at 1 (one) period between pairs of phase components P1, P2 and P3. \(\sigma_{c}\) is the resulting rms flux variation of the correlated component as a fraction of the mean counts. The error in the last digit of \(R\) is given in brackets. See Appendix A for computational details.

## 4 Analysis of filtered data

In this section one will first estimate the smooth version of the flux in component P3 (bottom panel of Fig. 3), and use that to remove the corresponding flux variation in components P1 and P2 before cross correlation. There are several algorithms for smoothing data; see any graduate level textbook on filtering in digital signal processing. To ensure that the results of this section are independent of the smoothing algorithm, three filters were used: the Savitzky-Golay3 filter, the moving average filter4, and the Wiener filter5.

Footnote 3: [https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter](https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter)

Footnote 4: [https://en.wikipedia.org/wiki/Moving_average](https://en.wikipedia.org/wiki/Moving_average)

Footnote 5: [https://en.wikipedia.org/wiki/Wiener_filter](https://en.wikipedia.org/wiki/Wiener_filter)

All three filters have one common parameter, known as the smoothing window \(N_{sw}\), in units of data samples. In addition, the Savitzky-Golay filter uses a polynomial of degree L, which is usually low (\(\approx 2-3\)). It smooths the data by representing it in each segment of \(N_{sw}\) samples by a polynomial of degree L. The moving average filter smooths the data by representing each data point by the average of \(N_{sw}\) adjacent samples. The Wiener filter assumes that it knows the signal and noise power spectra, and applies low weightage to data segments (of width \(N_{sw}\) samples) where the signal to noise ratio is low. Fig. 4 shows an example of smoothing the data using the Savitzky-Golay filter with \(N_{sw}=30\) and \(L=2\).
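The Savitzky-Golay step can be sketched on synthetic off-pulse data as below (scipy's `savgol_filter` needs an odd window length, so 31 is used in place of 30). The normalisation used here to subtract the P3 trend from an on-pulse component is only a plausible guess, since the actual prescription is in Appendix C; the closing line checks the \(\sigma_{c}\propto\sqrt{R}\) scaling that connects the numbers of Tables 1 and 2:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
n = 400_000
trend = 1.0 + 0.007 * np.sin(np.arange(n) / 5000.0)   # slow shared variation
p3 = rng.poisson(35.0 * trend).astype(float)          # off-pulse counts/period

smooth = savgol_filter(p3, window_length=31, polyorder=2)
residual = p3 - smooth
print(residual.std(), np.sqrt(p3.mean()))   # residual ~ white Poisson noise

# A plausible (assumed) way to remove the P3 trend from an on-pulse component:
p1 = rng.poisson(120.0 * trend).astype(float)
p1_clean = p1 - (p1.mean() / p3.mean()) * (smooth - smooth.mean())

# If the correlated variance enters R linearly, sigma_c scales as sqrt(R):
print(0.0070 * np.sqrt(1.2e-5 / 0.0044))    # ~3.7e-4, i.e. the ~0.036% limit
```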
The top panel displays the filtered data while the bottom panel displays the difference between the original and filtered data, which looks like white noise, as it should. Once again it is mentioned that although Fig. 4 shows X-ray flux data averaged over 300 periods, the cross correlation is done using un-averaged data. Moreover, the cross correlation coefficient \(R\) now depends upon the filter window width \(N_{sw}\). The details of this analysis are given in Appendix C and the results are given in Table 2.

Figure 4: (a) Top panel: Data of phase component P3 (bottom panel of Fig. 3) filtered by a Savitzky-Golay algorithm using a window width of 30 samples and a polynomial of degree 2. (b) Bottom panel: Difference between the original and filtered data.

Table 2 gives the cross correlation coefficient \(R\) for all three pairs of phase components for all three filters. Each \(R\) in Table 2 has been estimated with reference to a "calibration" \(R\) that should ideally be zero; this is to account for any uncertainty in removing the smooth version of the flux of component P3 (see Appendix C for details of the analysis). All \(R\) are either smaller than or comparable to their errors, so they are essentially zero. This is because they are the difference between two numbers that have very similar values (Appendix C). Since no significant cross correlation has been estimated, the largest error on the cross correlations in Table 2 (\(1.2\times 10^{-5}\)) sets the upper limit for \(\sigma_{c}\), which turns out to be 0.036%. Thus the correlated X-ray flux in the phase components P1 and P2 of the Crab pulsar is less than this value. In principle this includes variations from both the main and the return currents. Therefore this number is the upper limit on the correlated variation of the return current in the Crab pulsar.

\begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Filter} \\ \hline \(R\) (\(\times 10^{5}\)) & SG & MA & WI \\ \hline \(R_{12}\) & \(1.2\pm 1.2\) & \(0.40\pm 0.59\) & \(0.40\pm 0.59\) \\ \hline \(R_{13}\) & \(0.10\pm 0.12\) & \(-0.30\pm 0.19\) & \(-0.30\pm 0.24\) \\ \hline \(R_{23}\) & \(-0.20\pm 0.19\) & \(0.20\pm 0.12\) & \(0.20\pm 0.12\) \\ \hline \hline \end{tabular} \end{table}

Table 2: Cross correlation \(R\) (in units of \(10^{-5}\)) of flux binned at 1 (one) period between pairs of phase components P1, P2 and P3, after estimating the flux trend in P3 and removing it from all three components. Three filters were used for estimating the trend in P3 – Savitzky-Golay (SG), moving average (MA) and Wiener (WI). See Appendix C for computational details.

## 5 Correlation of X-ray flux from the two poles

In section 1.5 it was argued that the return current variations beyond the Y-point are correlated at the two poles. In this section an attempt will be made to test this hypothesis, although it was acknowledged in section 1.5 that variations imposed upon the return current before the Y-point may lead to de-correlation. The observed X-ray radiation from a RPP is emitted in the gaps in the pulsar magnetosphere. Within the gaps the radiation is emitted with its momentum vector tangent to the local magnetic field line; this radiation is observed whenever the line of sight to the RPP is parallel to this tangent. Now in Fig. 1 (a) only one half of the magnetosphere is drawn - a symmetrical gap system exists at the other magnetic pole. Therefore it is possible that one may observe radiation from both poles, depending upon the angle \(\alpha\) between the rotation and magnetic axes, and the angle between the rotation axis and the line of sight. It is easier to visualize this in Fig.
1 (b), where magnetic field lines are drawn at both poles. Now, there will be a delay in the arrival time of radiation from the pole farther away from the observer, which would typically be \(\approx\) a light cylinder distance farther than the closer pole. This distance would take the radiation about \(P/(2\pi)\) s of travel time, where \(P\) is the rotation period of the RPP; this number may be a bit larger depending upon the above mentioned two angles. So the Crab pulsar's X-ray flux may have correlated flux variations with a delay of a fraction of the period, which would translate to an equivalent phase delay in the FLC. This effect should be studied by using phase bins of much higher resolution than those of the previous section. This would naturally decrease the number of photons in these bins, making the estimation of cross correlation more difficult. In this section the FLC was divided into 128 phase bins; the number of photons per phase bin (per period) reduced to typically \(1-2\) photons, which is very small. The attempt was to cross correlate the data of the bin at the first peak of the FLC of the Crab pulsar with all other bins except those in the off-pulse. Any correlated variations from the two poles should show up as a secondary peak of cross correlation about \(1/(2\pi)\) phase away from the first peak. As in the previous section, the variations in the off-pulse have to be removed before correlating. The analysis of the previous section was repeated using the Savitzky-Golay filter, and the results are presented in Table 3, which shows, for the purpose of illustration, the result of correlating the data at phase bin number 17 (at the first peak) and at bin number 69 (at the second peak), using the same off-pulse phase range as earlier (P3).

\begin{table} \begin{tabular}{c c} \hline \hline & Filter \\ \hline \(R\) (\(\times 10^{5}\)) & SG \\ \hline \(R_{17,69}\) & \(0.4\pm 0.1\) \\ \hline \(R_{17,P3}\) & \(-5.4\pm 0.2\) \\ \hline \(R_{69,P3}\) & \(1.6\pm 0.3\) \\ \hline \hline \end{tabular} \end{table}

Table 3: Cross correlation \(R\) (in units of \(10^{-5}\)) of flux binned at 1 (one) period at phase bins 17 and 69 in the FLC with a resolution of 128 bins; the off-pulse (same P3 of the previous section) flux trend was removed from all three components. The Savitzky-Golay (SG) filter was used for trend estimation.

Although the correlations in the three rows are larger than their formal errors, the correlations of the data at bin numbers 17 and 69 with the data of P3 (bottom two rows of Table 3) are much larger than the correlation of the data of the two bins (top row of Table 3). This implies that the data trend of P3 has not been removed completely from the data of bins 17 and 69. Clearly the correlations are consistent with no correlation, probably because the number of photons per phase bin is too small for this exercise.
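The delay search described above can be sketched as follows; the photon counts here are synthetic (so no secondary peak appears), and in the real analysis the P3 trend is removed before correlating. The expected offset of \(1/(2\pi)\) in phase corresponds to about 20 of the 128 bins:

```python
import numpy as np

n_bins, n_periods = 128, 50_000
print(n_bins / (2 * np.pi))          # expected two-pole offset: ~20.4 bins

rng = np.random.default_rng(3)
# counts[k, i]: photons in phase bin k during rotation i (~1-2 in the real data)
counts = rng.poisson(1.5, size=(n_bins, n_periods)).astype(float)

ref = counts[17] - counts[17].mean()            # bin 17: the first peak
bins = np.arange(13, 102)                       # on-pulse bins only
r = np.empty(bins.size)
for j, k in enumerate(bins):
    x = counts[k] - counts[k].mean()
    r[j] = (ref @ x) / np.sqrt((ref @ ref) * (x @ x))
r[bins == 17] = 0.0                             # exclude the reference bin itself
print(bins[np.argmax(np.abs(r))])               # candidate secondary-peak bin
```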
## 6 Discussion

The summary of this work is that it attempts to estimate correlated soft X-ray flux variations in the on-pulse phases of the Crab pulsar, after removing flux variations due to the Crab nebula and/or instrumental effects. No correlation was detected; the measurement error on the correlation sets an upper limit of 0.036% on the rms variation of correlated flux. In principle this could be due to both the main as well as the return currents; therefore the above number is an upper limit on the correlated variations of the return current in the Crab pulsar. A few caveats will be stated before proceeding further. In this work it has been assumed that variations of the return current beyond the Y-point are carried back to the star. However Arons (2009) cautions that it is yet to be demonstrated whether this is possible. Next, the return current has details that have been ignored in this work; for example 20% of it flows to the pulsar along a different but close path; see Arons (2009) for details. In order to study the return current variations one should be able to decouple them from the main current variations. This depends upon the time scales on which the two are coupled, but currently this is not known. For illustration consider an extreme example - let the source of the return current be at half the distance to the Crab nebula, which has a radius of \(\approx 2^{\prime}\) in the sky. Assuming that the Crab pulsar is at a distance of \(\approx 2000\) parsec, the source of the return current is \(\approx 1/60\times\pi/180\times 2000\approx 0.58\) parsec from the pulsar. So any change at the source of the return current will take at least \(\approx 1.9\) years to be felt at the pulsar, and a consequent change in the main current will take another \(\approx 1.9\) years to be registered back at the source of the return current. Under these circumstances variations in the two currents can be considered to be decoupled and can be discussed separately, even though one does not know their relative strengths. On the other hand, if the above time scale is much shorter then variations in the two currents are closely coupled, and can not be discussed separately; they might form what is known in electronics as a feedback system. An appropriate example here is a radio receiver with feedback - a strong negative feedback will kill the output, a mild negative feedback will reduce the receiver noise power, a mild positive feedback will convert the receiver into an oscillator, and a strong positive feedback will push the receiver into saturation. The above should be kept in mind while discussing the return current.
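The light-travel arithmetic above is easy to verify numerically; the distances are of course only the assumed illustrative values:

```python
import numpy as np

arcmin = (1.0 / 60.0) * (np.pi / 180.0)   # one arcminute in radians
d_pc = 2000.0                             # assumed distance to the Crab pulsar
r_pc = arcmin * d_pc                      # ~1' (half the ~2' nebula) at 2000 pc
print(r_pc)                               # ~0.58 pc
print(r_pc * 3.2616)                      # 1 pc = 3.2616 ly -> ~1.9 yr one way,
                                          # so a round trip takes ~3.8 yr
```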
One of the direct consequences of variations of the currents is the simultaneous variation in observed emission at all energies. This would require a campaign of multi-wavelength observations with high time resolution and high sensitivity, which are not available. However simultaneous radio and X-ray observations are available for three pulsars -- PSR B0823+26 (Sobey et al., 2015; Hermsen et al., 2018), PSR B0943+10 (Hermsen et al., 2013; Mereghetti et al., 2016) and PSR B1822-09 (Hermsen et al., 2017). All three pulsars exhibit two main modes of emission - a radio B (bright) mode in which the pulsars are bright in the radio, and a radio Q (quiescent) mode in which they are much weaker. Together these three RPPs make an interesting contradiction, which the return current may or may not be able to resolve. In PSR B0823+26 the X-ray flux in the B mode has a \(\approx 20\%\) flux variation, which the authors call a new kind of behavior (Hermsen et al., 2018). They speculate that the Q mode is not a true "null" mode - they believe some residual emission exists but, being weak, it is not observed at Earth. They believe the mode changes in this pulsar are entirely different from those in the other two pulsars. They believe that B0823+26 is accreting material from its external environment, either from the interstellar medium, or from an accretion disk. Now, any accreted material would eventually be ionized and must be funneled to the RPP along the same path as taken by the return current - along other paths the currents are only allowed to leave the pulsar. In such a scenario, the return current can serve the same purpose as the accreted material, which will no longer be needed for the above model. Clearly several details have to be worked out in this scenario - how much extra return current is required, can it be supplied by the current sheet, can the additional return current explain the observed spectral and temporal features as well as the accreted material can, and so on; but this possibility is worth exploring. Arons (2009) mentions that fluctuations of the currents at the light cylinder can probably explain the flickering nature of pulsars, although his context may have been different. If the return current is varying in PSR B0823+26 as speculated above, then one should also speculate whether it is operating similarly in the other two pulsars; here one encounters some difficulties. The soft X-ray emission is correlated with the radio emission in B0823+26 - in the B mode both radio and X-rays are observed, while both are not observed in the Q mode. The return current can offer a simple explanation for this - it is larger in the Q mode than in the B mode, so the maximum electric potential available for particle acceleration in the gaps is smaller in the Q mode. Since this affects the basic emission mechanism of the RPP, radiation at all wavelengths can be expected to decrease. However, the soft X-ray emission is anti-correlated with the radio emission in PSR B0943+10 - the radio emission is weaker in the Q mode while the X-ray emission is weaker in the B mode by a factor of \(\approx 2\). Clearly the same return current that can explain the broad band behavior of B0823+26 will fail in the case of B0943+10. One can attempt to salvage the situation by noting that the return current behavior can be different in the two pulsars, because B0943+10 is a nearly aligned rotator (\(\alpha\approx 0^{\circ}\)) while B0823+26 is a nearly orthogonal rotator (\(\alpha\approx 90^{\circ}\)), and the return current is a strong function of the angle \(\alpha\) between the rotation and the magnetic axis. Even then, to explain the anti-correlation a new element has to be brought in - the formation of coherent bunches of particles. Suppose that in PSR B0943+10 a high return current causes a higher number of coherent bunches of particles; this would imply a higher level of radio emission even though the maximum available electric potential has decreased.
But this is not sufficient - one has to come up with a reason why this effect does not operate in PSR B0823+26. Clearly one is reaching the limits of reasonable speculation. This point is further emphasized by the fact that in PSR B1822-09, which is an orthogonal rotator, the radio and X-ray emission are uncorrelated. To summarize the discussion so far, the return current can explain (in principle) the behavior of PSR B0823+26, but fails to explain the behavior of PSR B0943+10 and PSR B1822-09. See the references above for much greater detail on the behavior of these RPPs. It is not clear what causes these behaviors, but the return current may be a partial explanation. So far one has focused on what may be called the traditional physics of RPPs, in which the gaps are the sites of pair production as well
2304.10459
Long-Lived Singlet State in an Oriented Phase and its Survival across the Phase Transition Into an Isotropic Phase
Long-lived singlet states (LLS) of nuclear spin pairs have been extensively studied and utilized in the isotropic phase via liquid state NMR. However, there are hardly any reports of LLS in the anisotropic phase that allows contribution from the dipolar coupling in addition to the scalar coupling, thereby opening many exciting possibilities. Here we report observing LLS in a pair of nuclear spins partially oriented in the nematic phase of a liquid crystal solvent. The spins are strongly interacting via the residual dipole-dipole coupling. We observe LLS in the oriented phase living up to three times longer than the usual spin-lattice relaxation time constant ($T_1$). Upon heating, the system undergoes a phase transition from nematic into isotropic phase, wherein the LLS is up to five times longer lived than the corresponding $T_1$. Interestingly, the LLS prepared in the oriented phase can survive the transition from the nematic to the isotropic phase. As an application of LLS in the oriented phase, we utilize its longer life to measure the small translational diffusion coefficient of solute molecules in the liquid crystal solvent. Finally, we propose utilizing the phase transition to lock or unlock access to LLS.
Vishal Varma, T S Mahesh
2023-04-20T17:00:53Z
http://arxiv.org/abs/2304.10459v3
# Long-lived singlet state in an oriented phase and its survival across the phase transition into an isotropic phase

###### Abstract

Long-lived singlet states (LLS) of nuclear spin pairs have been extensively studied and utilized in the isotropic phase via liquid state NMR. However, there are hardly any reports of LLS in the anisotropic phase that allows contribution from the dipolar coupling in addition to the scalar coupling, thereby opening many exciting possibilities. Here we report observing LLS in a pair of nuclear spins partially oriented in the nematic phase of a liquid crystal solvent. The spins are strongly interacting via the residual dipole-dipole coupling. We observe LLS in the oriented phase living up to three times longer than the usual spin-lattice relaxation time constant (\(T_{1}\)). Upon heating, the system undergoes a phase transition from nematic into isotropic phase, wherein the LLS is up to five times longer lived than the corresponding \(T_{1}\). Interestingly, the LLS prepared in the oriented phase can survive the transition from the nematic to the isotropic phase. As an application of LLS in the oriented phase, we utilize its longer life to measure the small translational diffusion coefficient of solute molecules in the liquid crystal solvent. Finally, we propose utilizing the phase transition to lock or unlock access to LLS.

Long-lived state, Singlet state, Partially oriented system, Phase-transition, NMR

## I Introduction

In nuclear magnetic resonance (NMR) spectroscopy, the longitudinal relaxation time constant (\(T_{1}\)) of nuclear spin states constrains the time during which a physical process can be studied. The discovery of long-lived singlet states (LLS), which have considerably longer lifetimes than \(T_{1}\), has opened many novel applications [1; 2]. They include estimating slow diffusion rates of large biomolecules [3; 4; 5; 6], storing hyperpolarized spin order [7], detecting weak interactions between ligands and their binding sites for targeted drug deliveries [8; 9; 10], studying slow chemical exchange [11], observing signals from hyperpolarized metabolites in magnetic resonance imaging [12], and initialization of quantum registers for quantum information processing [13]. Various methods for LLS generation, efficient storage, precise manipulation, and robust detection have been described extensively in the literature [14; 2; 15]. The long lifetime of LLS arises from its immunity to intra-pair dipole-dipole relaxation, which is the major source of relaxation in NMR [16; 17]. The transition from the antisymmetric singlet state to symmetric triplet states under the dipole-dipole interaction is symmetry forbidden. This gives LLS its extraordinarily long lifetime, even as long as an hour [18; 2]. One generally exploits some form of asymmetry to populate and detect LLS. This asymmetry may be due to the chemical inequivalence of the spin-pair [19; 20] or the magnetic inequivalence arising from differential coupling strengths with ancillary spins [21; 22]. Various methods exist that generate LLS in different spin systems, such as the Carravetta-Levitt (CL) sequence for a weakly coupled spin-pair [19], the M2S-S2M and SLIC sequences in the case of a strongly coupled or chemically equivalent spin pair [20; 21; 22; 23; 24], adiabatic methods [25], and their variations [26]. Discovering new methods for faster and more efficient generation of LLS [27; 28; 29; 30; 31; 32], as well as synthesizing designer spin-systems that can sustain LLS for extraordinarily long durations [18], are active areas of research.
In the isotropic phase (IP) of a liquid sample, the dipole-dipole interactions between two nuclear spins are completely averaged out due to the fast molecular tumbling, while the nonvanishing scalar coupling is generally small compared to the chemical shift difference. Although it is easy to prepare LLS in such a weakly coupled spin system with strong chemical inequivalence, it needs to be sustained with the help of a symmetry-imposing spin-lock sequence [33]. This limits the storage time because of the heating caused by the spin-locking sequence. However, in a strongly coupled spin pair, LLS closely resembles the system Hamiltonian's eigenstate and can be sustained without spin-lock [34; 20; 35]. Although LLS has mostly been demonstrated in IP, we may also look for it in anisotropic phases. The dipolar couplings of a spin-pair embedded in a crystalline lattice are too strong, which adds to the spectroscopic complexity [36]. The partially oriented phase (POP) offers an excellent middle ground between the extreme cases of IP and crystalline phases. Nagashima et al [37] have reported observing LLS in CH\({}_{2}\) protons of a tripeptide fused with a hydrogel. Alternatively, the POP of a nematic liquid crystal solvent provides a convenient and controllable way to realize strongly coupled homonuclear spin-pairs. Liquid crystals have long been used as solvents in NMR spectroscopy for obtaining high-resolution spectra while partially retaining anisotropic interactions [38; 39; 40]. The residual dipole-dipole coupling of a spin system in POP can range from a few hundred Hz to a few kHz, depending on the order parameter of the liquid crystal matrix, which can be controlled via the sample temperature [36]. Occurrences of POP in many biological systems, such as cell membranes, DNA, etc., have also been known for a long time [41; 42; 43]. Since NMR is a versatile tool for studying molecular transport in biological systems, the possibility of generating and sustaining LLS in such systems may significantly enhance the capability of NMR experiments. In this work, we first show that the LLS of a two-spin system can be prepared and sustained in both phases, POP as well as IP, of a nematic liquid crystal solvent. In NMR, the nuclear spin dynamics is mostly carried out in a particular thermodynamical phase of a sample. The finite lifetime of nuclear spin coherences is a hurdle for switching between thermodynamical phases of the physical system during a single transient. Our goal is to see if LLS prepared in POP can survive the phase transition into IP brought about by heating the sample. Interestingly, we find that LLS can survive the POP to IP phase transition of the liquid crystal solvent. For this purpose, we first prepare LLS in POP, initiate the phase transition into IP by heating the sample, and finally convert LLS into observable magnetization in IP. We also utilize the long lifetime of LLS in these phases to estimate the translational diffusion coefficient at a set of temperatures. This article is organized as follows. Sec. II describes the theory, where we introduce the NMR Hamiltonian of a spin-pair in POP and explain the quantum state evolution during LLS generation, storage, and detection. In the experimental section of Sec. III.1, we first discuss the experimental procedure to measure LLS lifetimes (\(T_{LLS}\)) at different temperatures and compare them with the corresponding \(T_{1}\).
Subsequently, we describe the results of experiments on the survival of LLS during the POP to IP phase transition. We then explain the experimental procedure and results for the estimation of the diffusion coefficient via LLS in POP as well as IP. Finally, in Sec. IV, we discuss the significance of the experiments and make concluding remarks.

## II Theory

### The POP Hamiltonian

Under a strong magnetic field \(B_{0}\), the secular Hamiltonian in the rotating frame for a nuclear spin-pair with scalar coupling \(J\) as well as residual dipolar coupling \(\mathcal{D}\) is given by (in \(\hbar=1\) units) [36]

\[H=-\pi\Omega I_{1z}+\pi\Omega I_{2z}+H_{12},\ \text{where}\]
\[H_{12}=2\pi J\mathbf{I}_{1}\cdot\mathbf{I}_{2}+2\pi\mathcal{D}\left(3I_{1z}I_{2z}-\mathbf{I}_{1}\cdot\mathbf{I}_{2}\right). \tag{1}\]

Here \(\Omega\) is the chemical shift difference between the two nuclear spins and \(\mathbf{I}_{i}\) are the spin angular momentum operators with components \(I_{i\alpha}\) \(\left(\alpha=x,y,z\right)\). The residual dipolar coupling \(\mathcal{D}\) is the full dipolar coupling scaled by the order parameter of the POP, i.e., \(\mathcal{S}=\left\langle 3\cos^{2}(\Theta)-1\right\rangle/2\), wherein \(\Theta\) is the angle between the inter-nuclear vector and the magnetic field \(B_{0}\). The average \(\langle\ \rangle\) is taken over all possible orientations. In IP, the molecules exhibit random isotropic reorientations so that \(\mathcal{S}\), and therefore \(\mathcal{D}\), vanish. If the two spins are chemically equivalent (i.e., \(\Omega=0\)), either due to inherent symmetry or due to the suppression of the chemical shift by a spin-lock sequence such as WALTZ-16 [44], then the Hamiltonian is simply \(H_{12}\). Consider the total spin angular momentum operator \(\mathbf{S}=\mathbf{I}_{1}+\mathbf{I}_{2}\) with quantum number \(S\), and its z-component \(S_{z}\) with the magnetic quantum number \(m_{S}\). In the eigenbasis \(\{|0\rangle,|1\rangle\}\) of the Pauli \(\sigma_{z}\) operator, the eigenstates of the Hamiltonian \(H_{12}\) are

\[|T_{+1}\rangle =|S=1,m_{S}=1\rangle=|00\rangle,\]
\[|T_{0}\rangle =|S=1,m_{S}=0\rangle=\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\]
\[|T_{-1}\rangle =|S=1,m_{S}=-1\rangle=|11\rangle,\]
\[|S_{0}\rangle =|S=0,m_{S}=0\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle). \tag{2}\]

Singlet and triplet states have different exchange symmetries: \(|S_{0}\rangle\) is anti-symmetric w.r.t. exchange, while \(|T_{m_{S}}\rangle\) are symmetric. No symmetry-preserving operation can connect triplet and singlet states. For example, the intra-pair dipolar interaction commutes with the exchange operator and therefore can not induce transitions from \(|S_{0}\rangle\) to \(|T_{m_{S}}\rangle\). Hence \(|S_{0}\rangle\) gets _disconnected_ from the rest of the triplet states and becomes the long-lived singlet state (LLS). In order to access LLS in terms of observable NMR magnetization, the intra-pair symmetry has to be broken. This breaking of symmetry is usually achieved by choosing a chemically inequivalent spin-pair (\(\Omega\neq 0\)).

Figure 1: LLS can not only be prepared in POP (left) but also can be carried across the phase transition into IP (right).

Figure 2: Rotations in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace using the CPMG echo sequence. (a) CPMG echo sequence. The delay \(\tau\) and the number of iterations of the CPMG echo are chosen to match the resonance condition (Eqs. 5 and 6). (b) Evolution from the initial state \(|T_{0}\rangle\) under the resonant echo sequence.
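As a quick consistency check, the sketch below builds the Hamiltonian of Eq. (1) with numpy and rewrites it in the singlet-triplet basis; it should reproduce, entry by entry, the matrix given as Eq. (3) below. The numerical values of \(\Omega\), \(J\) and \(\mathcal{D}\) are purely illustrative (they anticipate the CAN values of Sec. III):

```python
import numpy as np

# Spin-1/2 operators (hbar = 1) and their two-spin Kronecker products
E = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I1 = [np.kron(s, E) for s in (sx, sy, sz)]
I2 = [np.kron(E, s) for s in (sx, sy, sz)]

def H(Omega, J, D):                     # Eq. (1), in rad/s
    dot = sum(a @ b for a, b in zip(I1, I2))
    return (-np.pi * Omega * I1[2] + np.pi * Omega * I2[2]
            + 2 * np.pi * J * dot
            + 2 * np.pi * D * (3 * I1[2] @ I2[2] - dot))

# Singlet-triplet basis in the ordering {|T+1>, |T0>, |S0>, |T-1>}
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
T_p = np.kron(up, up)
T_0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)
S_0 = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
T_m = np.kron(dn, dn)
B = np.column_stack([T_p, T_0, S_0, T_m])

Hst = B.T @ H(Omega=46.6, J=3.1, D=640.0) @ B
print(np.round(Hst.real / (np.pi / 2), 1))   # compare with Eq. (3)
```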
In a homonuclear spin-pair, the residual dipole-dipole coupling \(\mathcal{D}\) is generally larger than or comparable to the chemical shift difference \(\Omega\). The Hamiltonian \(H\) for such a strongly coupled spin-pair can be conveniently expressed in the singlet-triplet basis as

\[H=\frac{\pi}{2}\begin{bmatrix}J+2\mathcal{D}&0&0&0\\ 0&J-4\mathcal{D}&-2\Omega&0\\ 0&-2\Omega&-3J&0\\ 0&0&0&J+2\mathcal{D}\end{bmatrix}, \tag{3}\]

written in the ordered basis \(\{|T_{+1}\rangle,|T_{0}\rangle,|S_{0}\rangle,|T_{-1}\rangle\}\). Two of the eigenstates of the Hamiltonian \(H\) are simply the triplet states \(|T_{\pm 1}\rangle\), whereas the other two eigenstates are linear combinations of \(|S_{0}\rangle\) and \(|T_{0}\rangle\) [36; 45]. Thus, the states \(\{|S_{0}\rangle,|T_{0}\rangle\}\) form an effective two-level subspace which can be conveniently represented by the Bloch sphere as shown in Fig. 2. The coupling term \(-\pi\Omega\) can be used to interchange populations between the \(|T_{0}\rangle\) and \(|S_{0}\rangle\) states.

### Rotations in Singlet-Triplet subspace

The inversion of populations in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace is analogous to a single spin-\(1/2\) system under RF drive [21]. In this analogy, (i) the difference of the two diagonal terms of the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace in Eq. (3) corresponds to an effective Larmor frequency \(2\pi(J-\mathcal{D})\) and (ii) the sum of the off-diagonal terms corresponds to an effective RF field with amplitude \(2\pi\Omega\) [20; 21]. The effective field in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace makes an angle

\[\theta=\tan^{-1}\frac{\Omega}{J-\mathcal{D}} \tag{4}\]

with the \(\hat{z}\) axis (see Fig. 2 (b)). Under this effective field of magnitude \(\nu_{\text{eff}}=\sqrt{\Omega^{2}+(J-\mathcal{D})^{2}}\), the state vector, initially pointed along \(|T_{0}\rangle\), rotates around a cone by an angle of \(2\theta\) in duration

\[\tau=\frac{1}{2\nu_{\text{eff}}}. \tag{5}\]

If a \(\pi\) pulse is applied at this moment, the state flips to the other side of the Bloch sphere as indicated in Fig. 2 (b). In the next \(\tau\) duration, the state vector precesses along a new cone and reaches a maximum angle of \(4\theta\) from the \(\hat{z}\) axis. Continuing this way, a \(\pi\) rotation in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace can now be achieved by a resonant spin-echo transfer [20; 21]. This inversion process is illustrated in Fig. 2. It involves repeated cycles of \(\tau-\pi\) elements. Thus after the \(n\)th iteration, the state vector is rotated by \(2n\theta\).

Figure 3: Evolution of singlet-triplet basis states under various elements of the M2S pulse sequence (see Fig. 4). The column titles are the states that evolve under the operations listed in the row titles. Here \(\tau\) is given by Eq. (5). The iteration numbers \(n_{1}=\lfloor\pi/(2\theta)\rceil\) and \(n_{2}=\lfloor\pi/(4\theta)\rceil\) respectively correspond to the resonant conditions for \(\pi\) and \(\pi/2\) rotations in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace. The global phases are indicated by \(\phi\), \(\phi^{\prime}\), \(\phi^{\prime\prime}\), and \(\phi^{\prime\prime\prime}\).
The total number \(n_{1}\) of iterations required to achieve inversion is given by

\[n_{1}=\left\lfloor\frac{\pi}{2\theta}\right\rceil, \tag{6}\]

where \(\lfloor\cdot\rceil\) denotes rounding to the nearest integer. In practice, one generally employs a CPMG sequence with repeated cycles of \((\tau/2-\pi-\tau/2)\) elements, which also refocuses the chemical shift. Often the \(\pi\) pulses in the CPMG echo sequence are replaced by composite \(\pi\) pulses that are robust against offset errors and RF inhomogeneity [20; 21; 23]. Some important basis state evolutions in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace are illustrated in Fig. 3.

### Preparation, Storage, and Detection of LLS

The thermal equilibrium state of the homonuclear spin-pair of Larmor frequency \(\omega_{0}\) is described by the Boltzmann distribution

\[\rho_{\text{eq}}\approx\frac{1}{2^{n}}\mathbb{1}+\frac{\omega_{0}\beta}{2^{n}}\left(I_{1z}+I_{2z}\right) \tag{7}\]

where \(\beta=1/(k_{B}T)\). The identity part of the thermal density matrix can be ignored as this is invariant under any unitary operation [36]. Therefore, the thermal density matrix in traceless deviation form is

\[\rho_{0}=I_{1z}+I_{2z}. \tag{8}\]

The NMR pulse sequence for LLS preparation, storage, and detection is shown in Fig. 4 (a). A \(\left(\pi/2\right)_{y}\) pulse on \(\rho_{0}\) creates single quantum coherence between the states \(|T_{\pm 1}\rangle\) and \(|T_{0}\rangle\), i.e.,

\[\rho_{1}=I_{1x}+I_{2x}=\frac{|T_{+1}\rangle+|T_{-1}\rangle}{\sqrt{2}}\langle T_{0}|+\texttt{h.c.}, \tag{9}\]

where h.c. indicates the Hermitian conjugate term. So far, the coherences are within the triplet subspace. Now the CPMG echo train \((\tau/2-\pi-\tau/2)\) produces a \(\pi\) rotation in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace as described earlier. Using the transformations in Fig. 3 we obtain

\[\rho_{2}=I_{1y}-I_{2y}=\frac{|T_{+1}\rangle+|T_{-1}\rangle}{\sqrt{2}}\langle S_{0}|+\texttt{h.c.} \tag{10}\]

The state \(\rho_{2}\) has coherences of \(|S_{0}\rangle\) with \(|T_{\pm 1}\rangle\). Now a \(\left(\pi/2\right)_{x}\) pulse transfers \((|T_{+1}\rangle+|T_{-1}\rangle)/\sqrt{2}\) into \(|T_{0}\rangle\) as shown in Fig. 3. This leads to a coherence between the \(|S_{0}\rangle\) and \(|T_{0}\rangle\) states, i.e.,

\[\rho_{3}=|T_{0}\rangle\langle S_{0}|+|S_{0}\rangle\langle T_{0}|=I_{1z}-I_{2z}. \tag{11}\]

Figure 4: (a) M2S-S2M pulse sequence for preparing, storing, and detecting LLS. Here \(\tau\), \(n_{1}\), and \(n_{2}\) were calculated using Eqs. 5 and 6. The two PFGs and the \(\pi/2\) pulse in between are used to suppress artifacts. After storing the LLS for duration T, it is converted to detectable \(I_{1x}+I_{2x}\) magnetization using the S2M sequence. (b) Simulated evolution of the populations in the states \(\rho_{1}=I_{1x}+I_{2x}\), \(\rho_{2}=I_{1y}-I_{2y}\), \(\rho_{3}=I_{1z}-I_{2z}\), \(|T_{0}\rangle\langle T_{0}|\), and \(|S_{0}\rangle\langle S_{0}|\) during the M2S-S2M pulse sequence, assuming no relaxation and parameters \(\Omega=50\) Hz, \(J=10\) Hz, \(\mathcal{D}=600\) Hz, T \(=0.03\) s. Note that the LLS (\(\rho_{5}\)) persists throughout the storage interval without any spin-lock and is converted back to \(\rho_{1}\) by the S2M sequence.
A further delay \(\tau/2\) generates a relative phase shift in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace which is equivalent to a zero quantum coherence

\[\rho_{4}=i\left(|T_{0}\rangle\langle S_{0}|-|S_{0}\rangle\langle T_{0}|\right)=2I_{1y}I_{2x}-2I_{1x}I_{2y}. \tag{12}\]

Finally, a second CPMG echo train with \(n_{2}=\lfloor\pi/(4\theta)\rceil\) elements produces a \((\pi/2)\) rotation in the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) subspace, converting the above coherence into the population difference

\[\rho_{5}=|S_{0}\rangle\langle S_{0}|-|T_{0}\rangle\langle T_{0}|, \tag{13}\]

which is the desired LLS. A subsequent spoiling PFG (pulsed-field gradient), which is also a symmetry-preserving operation, can dephase unwanted coherences without affecting LLS. As mentioned earlier, the LLS of a strongly coupled spin pair can be sustained in POP without any symmetry imposing spin-lock, which avoids unwanted sample heating. The LLS detection is achieved by converting it back into observable magnetization using S2M, the chronologically reversed M2S sequence. Before S2M, a \((\pi/2)_{y}\) pulse followed by a spoiling PFG is applied to remove any recovered longitudinal magnetization [21]. Fig. 4 (b) shows a numerical simulation of the evolution of different target states under the M2S-storage-S2M pulse sequence. During the second CPMG echo train of M2S, the LLS (population difference between \(|T_{0}\rangle\) and \(|S_{0}\rangle\)) starts to build up. Assuming no relaxation, the LLS persists during the storage interval and is converted back to observable magnetization by the subsequent S2M sequence.

## III Experiments and results

In this work, our register involves the two proton spins of 2-Chloroacrylonitrile (CAN) (see Fig. 5 (a)). The sample consists of a 137 mM solution of CAN in the nematic liquid crystal N-(4-Methoxybenzylidene)-4-butylaniline (MBBA). We observed the solution to be in the POP below 298 K and to undergo a transition into the isotropic phase at around 302 K. All experiments were performed on a 500 MHz Bruker AVANCE-III NMR spectrometer operating with a static magnetic field of strength 11.7 Tesla. Fig. 5 (a) shows the one-pulse NMR spectrum of CAN at 294 K, while it is in the POP, in which the two large peaks correspond to the two outer peaks of a strongly dipolar-coupled spin pair [36]. The two middle peaks are undetectably small and are lost in the liquid crystal background signal. Fig. 5 (b) shows the one-pulse spectrum at 305 K when the solution is in IP. From this spectrum, we estimated the chemical shift difference \(\Omega=46.6\) Hz and the indirect scalar coupling \(J=3.1\) Hz.

### Establishing LLS in POP and IP

We used the M2S-S2M [21; 23] pulse sequence shown in Fig. 4 to prepare, store, and detect LLS in POP. The resulting spectrum, shown in Fig. 5 (c), consists of four almost equally spaced lines with characteristic intensities corresponding to the singlet state, from which the estimated residual dipole-dipole coupling constant is \(\mathcal{D}\approx 640\) Hz. The spin pair is strongly coupled in POP, but as we raise the sample temperature, \(\mathcal{D}\) decreases, and finally, as the solution undergoes a phase transition into IP at around 302 K, \(\mathcal{D}\) averages out completely, and the spins become weakly coupled under the scalar coupling \(J\) with \(|J|\ll|\Omega|\).

Figure 5: (a,b) One-pulse \({}^{1}\)H NMR spectrum of 2-Chloroacrylonitrile (CAN; molecular structure shown in the inset of (a)) in (a) POP at 294 K and (b) IP at 305 K (inset shows the full spectrum including the solvent peaks). (c,d) LLS spectrum in (c) POP at 294 K after a storage time of 7 s and (d) IP at 305 K after a storage time of 45 s.
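With the POP values just quoted (\(\Omega=46.6\) Hz, \(J=3.1\) Hz, \(\mathcal{D}\approx 640\) Hz), Eqs. (4)-(6) fix the echo spacing and iteration numbers, and the resonant inversion can be checked numerically. The sketch below is restricted to the \(\{|T_{0}\rangle,|S_{0}\rangle\}\) pseudo-spin subspace and ignores \(|T_{\pm 1}\rangle\), pulse imperfections and relaxation; in this subspace a hard \(\pi\) pulse acts, up to a global phase, as a sign flip of \(|S_{0}\rangle\) relative to \(|T_{0}\rangle\):

```python
import numpy as np
from scipy.linalg import expm

Omega, J, D = 46.6, 3.1, 640.0               # Hz, POP values from the text

theta = np.arctan2(Omega, abs(J - D))        # magnitude of the angle in Eq. (4)
nu_eff = np.hypot(Omega, J - D)              # Hz
tau = 1.0 / (2.0 * nu_eff)                   # Eq. (5)
n1 = round(np.pi / (2 * theta))              # Eq. (6)
n2 = round(np.pi / (4 * theta))
print(f"tau = {tau*1e6:.0f} us, n1 = {n1}, n2 = {n2}")

# Pseudo-spin Hamiltonian from Eq. (3): splitting 2*pi*(J-D), drive -pi*Omega
sz = np.diag([1.0 + 0j, -1.0])
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.pi * (J - D) * sz - np.pi * Omega * sx        # rad/s

U_half = expm(-1j * H * tau / 2)
cycle = U_half @ sz @ U_half                 # one (tau/2 - pi - tau/2) element

psi = np.array([1.0 + 0j, 0.0])              # start along |T0>
for _ in range(n1):
    psi = cycle @ psi
print(f"singlet population after n1 echoes: {abs(psi[1])**2:.3f}")   # ~0.99
```

For these numbers the sketch gives \(\tau\approx 783\ \mu\)s, \(n_{1}=22\) and \(n_{2}=11\), and the echo train transfers essentially all of the \(|T_{0}\rangle\) population to \(|S_{0}\rangle\), as expected from the resonance condition.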
To prepare and read LLS in the IP, we used the CL-CL pulse sequence given in [19], which results in a characteristic antiphase signal as shown in Fig. 5 (d). Since the singlet is no longer an eigenstate of the weakly coupled Hamiltonian, we used a 1 kHz WALTZ-16 spin lock during the storage interval. The measured \(T_{1}\), \(T_{LLS}\), and the ratio \(T_{LLS}/T_{1}\) at various temperatures spanning both POP and IP are plotted in Fig. 6 and summarized in Table 1. The results clearly establish the long-livedness of the singlet state not only in IP but also in POP.

### Survival of LLS across the phase transition

As explained in Sec. II.3, M2S [23] efficiently transfers the longitudinal magnetization to LLS in a strongly coupled spin pair in POP, whereas the CL sequence [19] is efficient for converting LLS back to the observable magnetization of the weakly coupled spin pair in IP. Thus in this experiment we introduce a hybrid M2S-CL sequence, as shown in Fig. 7 (a). In addition, we introduce two important improvements as described below. Firstly, to remove spurious contributions to the final signal, we need an efficient phase-cycling scheme. Since the phase transition is not rapidly reversible, it has to be a single-scan phase cycle. To this end, we incorporate a stimulated-echo sequence with the help of two PFG pairs, one during preparation before the M2S sequence and the other during detection after the CL sequence. Together, they filter in only the signal that arises from the LLS and suppress all artifacts created during storage or at other times. We call this sequence _M2S-CL STELLAR (STimulated Echo filtered Long-Lived state Accessing and Reading)_. A vectorial illustration of this sequence is shown in Fig. 7 (b). Secondly, the rapid nonuniform heating and the associated phase transition render the solution highly inhomogeneous, resulting in the broadening of the spectral lines. Consequently, the characteristic anti-phase spectral lines of Fig. 5 (d) vanish under line broadening.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Phase} & \multirow{2}{*}{Temp. (K)} & \multirow{2}{*}{\(T_{1}\) (s)} & \multirow{2}{*}{\(T_{LLS}\) (s)} & \multicolumn{2}{c|}{\(D\) (\(\times 10^{-10}\) m\({}^{2}\) s\({}^{-1}\))} \\ \cline{5-6} & & & & STE & LLS \\ \hline \multirow{4}{*}{POP} & 294 & 1.1\(\pm\)0.1 & 3.7\(\pm\)0.1 & 1.29\(\pm\)0.17 & 1.32\(\pm\)0.10 \\ \cline{2-6} & 296 & 1.2\(\pm\)0.1 & 3.9\(\pm\)0.1 & 1.34\(\pm\)0.12 & 1.34\(\pm\)0.14 \\ \cline{2-6} & 297 & 1.3\(\pm\)0.1 & 4.3\(\pm\)0.1 & 1.37\(\pm\)0.13 & 1.37\(\pm\)0.11 \\ \cline{2-6} & 298 & 1.6\(\pm\)0.1 & 4.6\(\pm\)0.1 & 1.55\(\pm\)0.21 & 1.45\(\pm\)0.12 \\ \hline IP & 305 & 1.5\(\pm\)0.1 & 8.1\(\pm\)0.1 & 1.81\(\pm\)0.03 & 1.92\(\pm\)0.13 \\ \hline \end{tabular} \end{table}

Table 1: Summary of experimental results for CAN at various temperatures spanning POP and IP.

Figure 6: Measured values of \(T_{1}\), \(T_{LLS}\), and their ratio \(T_{LLS}/T_{1}\) for CAN at different temperatures across both phases. \(T_{LLS}\) was obtained by systematically varying the storage delay T in Fig. 4 (a), while \(T_{1}\) was measured using the standard inversion-recovery method [44]. In POP, \(T_{LLS}\) is approximately three times longer than \(T_{1}\), whereas it is five times longer in IP.

Figure 7: (a) Pulse sequence for preparing LLS in POP using the M2S sequence shown in Fig. 4 and detecting it after the phase transition into IP using a modified CL sequence. Before LLS preparation in POP, the first bipolar-PFG introduces a z-dependent phase shift, which is refocused by the second bipolar-PFG after the phase transition into IP. This way, the 'stimulated echo' signal is solely due to the LLS that survived the phase transition. In all experiments, sinusoidal PFGs of duration \(\delta=320\ \mu\)s were used with strength 2.5 G/cm. Since the system is transitioning from strong coupling to weak coupling, we use a 2 kHz WALTZ-16 spin-lock to sustain LLS during storage. (b-d) Illustrating spin states (b) after the first \((\pi/2)_{y}\) pulse when both spins are pointing along \(\hat{x}\), (c) after the first bipolar-PFG, and (d) the \(I_{1y}-I_{2y}\) state after the phase transition into IP.
4 and detecting it after the phase transition into IP using modified CL sequence. Before LLS preparation in POP, the first bipolar-PFG introduces a z-dependent phase shift, which is refocused by the second bipolar-PFG after the phase transition into IP. This way, the ’stimulated echo’ signal is solely due to the LLS that survived the phase transition. In all experiments, sinusoidal PFG of duration \(\delta=320\mu\) s were used with strength 2.5 G/cm. Since the system is transitioning from strong coupling to weak coupling, we use a 2 kHz WALTZ-16 spin-lock to sustain LLS during storage. (b-d) Illustrating spin states (b) after the first \((\pi/2)_{y}\) pulse when both spins are pointing along \(\hat{x}\), (c) after the first bipolar-PFG, and (d) \(I_{1y}-I_{2y}\) state after the phase transition into IP. of Fig. 5 (d) vanish under line broadening. Therefore, we introduce a spin-echo sequence in the detection part to convert the anti-phase magnetization \(I_{1x}I_{2z}-I_{1z}I_{2x}\) into in-phase magnetization \(I_{1y}-I_{2y}\). Fig. 8 (a) shows the spectra corresponding to \(I_{1y}-I_{2y}\) magnetization obtained from LLS that has survived the phase transition occurred during storage intervals of different durations. Fig. 8 (b) plots the decay of the survived LLS signal versus storage time. In this trans-phase storage, we obtained an effective LLS lifetime of 6.3 s, which lies in between the values obtained for individual phases. ### Estimating diffusion coefficient in POP and IP DOSY (diffusion-ordered spectroscopy) is an established NMR technique to study molecular diffusion from small molecules to large polymers [46]. Its principle can be explained by considering an ensemble of molecules, each with a single spin-1/2 nucleus. After preparing \((|0\rangle+|1\rangle)/\sqrt{2}\) state by an initial \(\pi/2\) pulse, a PFG introduces a local phase shift to prepare \((|0\rangle+e^{i\phi(z)}|1\rangle)/\sqrt{2}\). A reverse PFG is applied after a sufficiently long diffusion interval. In the absence of diffusion, the local phase shift is completely reversed, and one obtains a strong echo signal. In the presence of translational diffusion, the phase reversal is inefficient and the echo signal is damped. For a fixed diffusion interval \(\Delta\), PFG strength \(G\), and duration \(\delta\), the signal ratio is given by [47] \[S(G)/S(0)=\exp\left(-D\kappa^{2}\Delta\right). \tag{14}\] Here \(D\) is the diffusion coefficient, \(\kappa=\gamma qG\delta s\) with \(\gamma\) being the gyromagnetic ratio, \(q\) being the coherence order, and \(s\) being PFG shape-factor [44]. Thus, \(D\) can be estimated by measuring \(S(G)/S(0)\) for varying \(G\) and fitting with the Gaussian function above. In practice, the precision of this method is limited by the hardware bound on \(G\) and the coherence-time bound on \(\Delta\). Cavadini et al [3] proposed the LLS method for studying slow diffusion, which was later applied also to strongly coupled systems [48; 49]. Indeed, for slow diffusion studies with strongly coupled spin-pairs, such as those in POP, LLS is ideal because it can be sustained over long diffusion intervals without spin-lock. Here we measure \(D\) for CAN in MBBA at a range of temperatures spanning both POP and IP using LLS as well as the conventional stimulated echo (STE) methods. For POP, we use the pulse sequence shown in Fig. 9 (a), which was referred to as P Figure 8: (a) Spectra corresponding to LLS that has survived the phase transition at various storage intervals as mentioned. 
Here the top trace is reference, same as in Fig. 5 (b). (b) Decay of LLS signal intensities in (a) versus storage interval \(T\). Figure 9: (a) Pulse sequence used for estimation of diffusion coefficient using LLS. The parameters \(\tau\), \(n_{1}\) and \(n_{2}\) are optimized at each temperature for maximum final signal. Position encoding and decoding are realized by bipolar-PFGs (of duration \(\delta=320\mu\) s) to minimize eddy currents. For STE diffusion experiment at each temperature, PFG strength \(G\) was varied from 2.5 G/cm to 47.5 G/cm in 19 steps, with diffusion interval \(\Delta=3.3\) s. For LLS diffusion experiment in POP, \(\Delta=10\) s and \(G\) was same as in STE; whereas in IP, \(\Delta=30\) s and \(G\) was varied from 1 G/cm to 20 G/cm 20 steps. (b) The measured translational diffusion coefficient of CAN in MBBA plotted versus temperature using STE sequence [3] and LLS sequence shown in (a). we use the CL-CL based DOSY sequence by Cavadini et al [3]. For comparison, we also measured the diffusion coefficient using the conventional STE method at all temperatures. The results shown in Fig. 9 (b) and also summarized in Tab. 1 indicate a gradual increase in the diffusion coefficient with temperature, as expected. ## IV Discussions and conclusions Since their discovery two decades ago, long-lived singlet states (LLS) have opened a plethora of applications, from precision spectroscopy to medical imaging. However, LLS have been mostly observed in isotropic phases, wherein the anisotropic interactions such as dipolar couplings are averaged out. Here we reported the observation of LLS in a spin-pair of a solute in the partially oriented phase (POP) of a liquid crystal solvent. To observe LLS in such a strongly dipolar coupled system, we used the M2S-S2M pulse sequence, originally designed for strongly \(J\)-coupled systems in isotropic phase (IP). We analyzed the related spin dynamics by constructing rotation elements of the singlet-triplet basis states as well as their populations under each element of the pulse sequence. In the particular spin pair that we studied, the LLS lifetime in POP was observed to be three times longer than the longitudinal relaxation time constant \(T_{1}\). Heating the solution takes it to IP, rendering the spin pair weakly coupled. Using the relevant CL-CL sequence, we observed LLS and measured its lifetime, which was found to be about five times longer than \(T_{1}\). The observation of LLS in POP naturally raises an interesting question of whether LLS survives a phase transition of the solution from POP to IP. To investigate this point, we introduced the M2S-CL STELLAR sequence, a hybrid of M2S and CL, with single-scan LLS filtering via stimulated echo technique. Using this sequence, we have been able to prepare LLS in POP, store it during the phase transition, and sensitively detect it in IP. The experimental results not only revealed the survival of LLS across the phase transition but also yielded an effective LLS time-constant which happens to be about four times the average \(T_{1}\) value. Finally, we demonstrated an application of LLS by efficiently measuring the translational diffusion coefficient at various temperatures spanning both phases. The presence of large yet manageable residual dipolar couplings without severely compromising coherence times makes spin systems in POP attractive for a variety of spectroscopic as well as quantum information processing applications. 
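For illustration, the diffusion fit of Eq. (14) is straightforward to carry out with standard tools. Below is a minimal sketch with SciPy on synthetic data; the sine-gradient shape factor \(s=2/\pi\), coherence order \(q=1\), and the noise model are assumptions of the sketch, not values taken from the experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

gamma = 2.675e8        # 1H gyromagnetic ratio (rad s^-1 T^-1)
q, s = 1, 2 / np.pi    # coherence order and sine-shaped PFG shape factor (assumed)
delta = 320e-6         # gradient duration (s)
Delta = 10.0           # diffusion interval (s), as in the POP LLS experiment

def attenuation(G, D):
    """Eq. (14): S(G)/S(0) = exp(-D * kappa^2 * Delta), kappa = gamma*q*G*delta*s."""
    kappa = gamma * q * G * delta * s
    return np.exp(-D * kappa**2 * Delta)

# Gradient strengths 2.5-47.5 G/cm in 19 steps (1 G/cm = 1e-2 T/m)
G = np.linspace(2.5, 47.5, 19) * 1e-2

# Synthetic data generated around D = 1.3e-10 m^2/s with 2% noise (illustrative)
rng = np.random.default_rng(0)
S_ratio = attenuation(G, 1.3e-10) * (1 + 0.02 * rng.standard_normal(G.size))

(D_fit,), _ = curve_fit(attenuation, G, S_ratio, p0=[1e-10])
print(f"fitted D = {D_fit:.2e} m^2/s")
```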
## IV Discussions and conclusions

Since their discovery two decades ago, long-lived singlet states (LLS) have opened a plethora of applications, from precision spectroscopy to medical imaging. However, LLS have mostly been observed in isotropic phases, wherein anisotropic interactions such as dipolar couplings are averaged out. Here we reported the observation of LLS in a spin pair of a solute in the partially oriented phase (POP) of a liquid crystal solvent. To observe LLS in such a strongly dipolar-coupled system, we used the M2S-S2M pulse sequence, originally designed for strongly \(J\)-coupled systems in isotropic phase (IP). We analyzed the related spin dynamics by constructing rotation elements of the singlet-triplet basis states as well as their populations under each element of the pulse sequence. In the particular spin pair that we studied, the LLS lifetime in POP was observed to be three times longer than the longitudinal relaxation time constant \(T_{1}\). Heating the solution takes it to IP, rendering the spin pair weakly coupled. Using the relevant CL-CL sequence, we observed LLS and measured its lifetime, which was found to be about five times longer than \(T_{1}\). The observation of LLS in POP naturally raises an interesting question of whether LLS survives a phase transition of the solution from POP to IP. To investigate this point, we introduced the M2S-CL STELLAR sequence, a hybrid of M2S and CL, with single-scan LLS filtering via the stimulated-echo technique. Using this sequence, we have been able to prepare LLS in POP, store it during the phase transition, and sensitively detect it in IP. The experimental results not only revealed the survival of LLS across the phase transition but also yielded an effective LLS time constant of about four times the average \(T_{1}\) value. Finally, we demonstrated an application of LLS by efficiently measuring the translational diffusion coefficient at various temperatures spanning both phases. The presence of large yet manageable residual dipolar couplings, without severely compromised coherence times, makes spin systems in POP attractive for a variety of spectroscopic as well as quantum information processing applications. Many biological systems, such as membrane proteins, are also found in POP in their cellular conditions [50]. Therefore, the observation of LLS in POP may extend the breakthroughs of precision spectroscopy beyond IP. Strong dipolar couplings in POP are helpful not only for preparing LLS via polarization transfer or algorithmic cooling [51; 52], but also for sustaining LLS without external fields. We may also envisage other novel applications, such as investigating dynamics during phase transitions as well as hybridizing the merits of the two phases for spectroscopy. For example, we may ask whether LLS prepared in POP can be safely locked in IP. This possibility, which we refer to as PADLOCK (Phase transition Assisted Detection and LOCKing), is illustrated in Fig. 10.

## V Acknowledgements

We acknowledge valuable discussions with V. R. Krithika, Priya Batra, and Dr. Sandeep Mishra. The funding from DST/ICPS/QuST/2019/Q67 is gratefully acknowledged. We also thank the National Mission on Interdisciplinary Cyber Physical Systems for funding from the DST, Government of India through the I-HUB Quantum Technology Foundation, IISER-Pune.
2308.01681
NBIAS: A Natural Language Processing Framework for Bias Identification in Text
Bias in textual data can lead to skewed interpretations and outcomes when the data is used. These biases could perpetuate stereotypes, discrimination, or other forms of unfair treatment. An algorithm trained on biased data may end up making decisions that disproportionately impact a certain group of people. Therefore, it is crucial to detect and remove these biases to ensure the fair and ethical use of data. To this end, we develop a comprehensive and robust framework NBIAS that consists of four main layers: data, corpus construction, model development and an evaluation layer. The dataset is constructed by collecting diverse data from various domains, including social media, healthcare, and job hiring portals. As such, we applied a transformer-based token classification model that is able to identify bias words/ phrases through a unique named entity BIAS. In the evaluation procedure, we incorporate a blend of quantitative and qualitative measures to gauge the effectiveness of our models. We achieve accuracy improvements ranging from 1% to 8% compared to baselines. We are also able to generate a robust understanding of the model functioning. The proposed approach is applicable to a variety of biases and contributes to the fair and ethical use of textual data.
Shaina Raza, Muskan Garg, Deepak John Reji, Syed Raza Bashir, Chen Ding
2023-08-03T10:48:30Z
http://arxiv.org/abs/2308.01681v3
# Nbias: A Natural Language Processing Framework for Bias Identification in Text

###### Abstract

Bias in textual data can lead to skewed interpretations and outcomes when the data is used. These biases could perpetuate stereotypes, discrimination, or other forms of unfair treatment. An algorithm trained on biased data may end up making decisions that disproportionately impact a certain group of people. Therefore, it is crucial to detect and remove these biases to ensure the fair and ethical use of data. To this end, we develop a comprehensive and robust framework Nbias that consists of four main layers: data, corpus construction, model development and an evaluation layer. The dataset is constructed by collecting diverse data from various domains, including social media, healthcare, and job hiring portals. As such, we applied a transformer-based token classification model that is able to identify bias words/phrases through a unique named entity _BIAS_. In the evaluation procedure, we incorporate a blend of quantitative and qualitative measures to gauge the effectiveness of our models. We achieve accuracy improvements ranging from 1% to 8% compared to baselines. We are also able to generate a robust understanding of the model functioning. The proposed approach is applicable to a variety of biases and contributes to the fair and ethical use of textual data.

keywords: Bias detection, Dataset, Token classification, Nbias

## 1 Introduction

The recent surge in Natural Language Processing (NLP) applications, encompassing fields from recommendation systems to social justice and employment screening, has sparked a critical concern: the emergence of bias within these systems [1]. Instances of racial and gender bias have been increasingly reported [2], indicating an urgent need for scrutiny. These biases often originate from the training data used in NLP models, and a majority of these large datasets harbor inherent biases. Regrettably, many NLP practitioners lack the necessary awareness or knowledge to effectively identify and address these biases, highlighting a significant gap in the field. Furthermore, there is a notable lack of discussion of data specifics - their origin, generation, and pre-processing - in many NLP publications. Given these circumstances, the importance of addressing biases in NLP applications cannot be overstated. These biases, if unchecked, not only compromise the validity of the models but can also have detrimental consequences. The objective of this research is to provide insights into the detection of bias in NLP datasets, contributing to the development of more equitable and unbiased Artificial Intelligence (AI) systems.

Bias in text data is a pervasive and deeply rooted issue, often stemming from cognitive predispositions that influence our dialogues, views, and understanding of information [3]. This bias can be explicit, as is often seen in discriminatory language targeting certain racial or ethnic groups [4] on social media. Implicit bias [5], on the other hand, subtly perpetuates prejudice through unintentional language use but is equally harmful. The necessity for unbiased, trustworthy text data has grown across sectors like healthcare [6], social media [4; 7], and recruitment [8].
This data is essential for training NLP models for various downstream tasks, like formulating healthcare diagnoses and treatment plans, handling discriminatory language on social media, and promoting fair recruitment practices. Figure 1 illustrates the complexities of biases in text data in various domains, including job hiring, social media, and healthcare. These biases are primarily conveyed through lexical choices [9] and demand sophisticated detection methods, motivating this research.

Figure 1: Visual Representation of Implicit and Explicit Biases in Textual Data: Examples from Job Hiring, Social Media, and Healthcare.

The primary aim of this study is to further foundational research on the fairness and reliability of textual data. Although NLP has advanced considerably, state-of-the-art techniques [2; 10; 11] often concentrate on bias detection in specific domains and lack generalizability. To address this, our research offers a generalizable bias detection method proven effective across various domains. We present Nbias, a comprehensive framework for detecting bias in text data. This involves data preparation where bias-indicative terms are marked using a transformer-based token classification method like Named Entity Recognition (NER). Current NER solutions can manage general [12], biomedical [13], and social media [14] entities, but often neglect _BIAS_ as a separate entity. To address this, we introduce a new entity type, _BIAS_, to identify biased terms in text data. In this context, bias refers to unfair and often harmful favoritism or prejudice towards a particular group, person, or idea, which can manifest through profanity, unjustified criticism, or discriminatory language.

A key contribution of this study is the development of the first comprehensive framework for bias detection in text data. This framework is based on the latest language model technology and incorporates four crucial layers: data gathering, corpus construction, model development, and rigorous evaluation. The specific contributions of the work are as follows:

1. _Development of Annotated Datasets_: Acknowledging the scarcity of bias annotations in text-based data, we designed a solution by generating multiple annotated datasets. Our work fills a critical gap in the available resources, thereby providing a solid foundation for future research in the realm of bias detection.
2. _Semi-Autonomous Labeling_: To alleviate the time-intensive manual annotation process, we pioneered a novel methodology termed "semi-autonomous labeling". This strategy provides a faster and more efficient way of annotating bias-related terms within textual content. This innovative approach has significant implications for improving the speed and accuracy of bias detection.
3. _Unique Entity Type - BIAS_: In an effort to enhance the precision of bias identification within text, we introduced a unique entity type, _BIAS_. This new entity type is specifically designed for detecting biased words and phrases within the text data. This has the potential to dramatically improve the process of bias identification and quantification in text-based analysis.
4. _Comprehensive Evaluation Process_: We subjected our proposed framework to a thorough evaluation process, utilizing both quantitative and qualitative analysis methods. The results confirm the reliability and efficiency of our approach, supporting its application in real-world scenarios.
This rigorous evaluation sets a benchmark for assessing the efficacy of bias detection methodologies.

## 2 Related Work

### Identifying Bias in NLP

One of the key challenges associated with NLP systems lies in the presence of bias, a manifestation of unfair and systematic discrimination observed in their outcomes [15]. Moreover, past studies [16; 10; 11; 2; 17] have shown that societal and cultural prejudices are deeply embedded within training data. As such, these biases, whether explicit or implicit, can significantly impact the functionality of NLP systems, leading to skewed results and perpetuating existing societal biases. Thus, the detection and mitigation of these biases are crucial to promoting fairness and inclusiveness within NLP systems [7; 11].

Researchers have proposed and implemented various strategies to identify bias, including employing statistical methods to discover patterns of bias within the training data [2; 18]. Under this approach, specific words or phrases that appear to be disproportionately associated with certain demographic groups, such as genders or races, are identified. For example, certain adjectives might be used more frequently in descriptions of women than men [2], or vice versa. The identification and debiasing of such patterns can highlight areas of potential bias, providing a starting point for efforts to eliminate these biases [19].

The field of bias detection in NLP has seen a surge of innovative methods in recent years, primarily leveraging advanced machine learning techniques. One such study considered the combination of a hate-speech detection system with an explanatory method to identify potential bias [20]. In this method, not only is the system trained to detect instances of hate speech, but it also provides a rationale or explanation for its classification decisions. Another area of research that has attracted considerable attention is the investigation of bias in event detection datasets and models [21]. Event detection tasks involve identifying and classifying real-world events within text data. These tasks can be susceptible to a range of bias-related issues, including data sparsity, labeling, and annotation quality.

Additionally, NLP techniques have been employed to address various aspects of bias. For instance, a related study [22] quantified gender bias and sentiment towards political leaders in the news using word embeddings and sentiment analysis. Another work focused on investigating ableist bias in NLP systems, particularly at the intersection of gender, race, and disability [23]. Similarly, a methodology was proposed to eliminate gender bias from word embeddings [24]. Furthermore, marked attribute bias in natural language inference was identified and analyzed, with an evaluation of existing methods for bias mitigation [9]. These studies provide a deep understanding of the social and cultural factors that contribute to bias identification. Another work [25] presents bias analysis in NLP beyond demographic bias, focusing on predicting interpersonal group relationships using fine-grained emotions. A related study [26] evaluates gender bias in NLP research, highlighting the lack of explicit gender theorization. In another work, the authors [27] introduce an effective bias-conflicting scoring method and gradient alignment strategy to identify and mitigate dataset biases. Overall, these studies underscore the importance of continuous efforts in identifying and mitigating biases in models to ensure fairness and equity.

### Named Entity Recognition (NER)

Named Entity Recognition (NER) is a token classification task in NLP aimed at identifying and classifying named entities, such as individuals, organizations, and locations, within a given text. In the past, many traditional methods have been employed for NER, each with its unique characteristics and benefits.

* Rule-based methods rely on predefined sets of rules to identify named entities [28]. This method usually employs regular expressions or dictionary-based techniques to extract entities. Although rule-based methods can be effective for well-defined and specific contexts, their performance can decrease in the face of variability and ambiguity in language usage.
* Supervised learning methods leverage annotated data to train a model for NER [14; 29]. These methods use statistical models such as Support Vector Machines (SVM), Conditional Random Fields (CRF), and others to classify the named entities. The performance of supervised learning methods can be impressive, given sufficient high-quality annotated data.
* Deep learning methods, which are more contemporary approaches, utilize complex architectures like recurrent neural networks (RNNs) and transformer-based language models to extract named entities [30; 13]. These methods have shown promising results in NER tasks, owing to their capacity to capture intricate language patterns and contextual information.

A recent study introduced a contrastive learning-based approach for multimodal NER [31]. This approach leverages both textual and non-textual data to identify and classify named entities, harnessing the complementary information offered by different modalities to improve the model's performance. Another research work investigated event detection from social media posts, evaluating the effectiveness of a pre-trained NER model followed by graph-based spectral clustering [32]. The study also explored transformer-based methods to weight the edges of the graph for event detection, further refining the detection process. A span-based NER model eliminates the need for label dependency [32]. This approach addresses the issue of cascade label misclassifications, a common challenge in traditional NER models that depend on label sequences. While our work on token classification is inspired by these studies, we identify a notable gap in the literature: the existing seminal work does not recognize _BIAS_ as an entity. In this work, we detect biased expressions within unstructured texts, designating them under the 'BIAS' entity label.

### Data Annotation

Data annotation is a crucial task in NLP as it involves labeling and categorizing information to extract valuable insights from text data [33]. By enriching text data with relevant metadata, such as part-of-speech tags, named entity tags, and sentiment tags, data annotation provides contextual information that is essential for subsequent analysis [34]. Quality annotated data enhances model learning, boosting prediction accuracy. In contrast, inadequate annotations impede learning, resulting in subpar performance.
Various methods of data annotation cater to different requirements of speed, quality, and computational resources:

* Manual annotation is carried out by human annotators who carefully review and label the data. This method typically yields high-quality results, given the nuanced understanding that humans have of language. However, manual annotation is often time-consuming and labor-intensive, and its feasibility may be limited by the availability of qualified annotators and financial resources [28].
* Semi-automatic annotation combines manual efforts with automated tools to accelerate the annotation process and minimize human error. These tools can range from rule-based systems to pre-trained machine learning models [35]. While semi-automatic annotation can improve efficiency, its accuracy may still depend on the quality of the automated tools and the manual review process.
* Automatic annotation leverages machine learning models and algorithms to annotate text data without human intervention [36]. Although automatic annotation can process vast amounts of data in a relatively short time, its accuracy may be compromised, particularly for complex or ambiguous texts. Therefore, a common practice is to combine automatic annotation with manual review to ensure data quality.

Various strategies have been developed to address these challenges and optimize the annotation process. One study presents a comprehensive comparison of different annotation tools, highlighting their strengths and limitations [37]. Another research work proposes a method for automatically generating high-quality labeled data for NER tasks by leveraging existing knowledge bases [38]. A similar study has developed an annotation framework that combines statistical machine translation and human annotation to create a parallel corpus [39]. Other researchers have investigated methods for improving the reliability and consistency of manual annotations, such as developing guidelines and protocols for annotation tasks [40] or implementing quality control mechanisms to ensure data quality. Ultimately, the choice of annotation method and tools will depend on the specific requirements of a project, such as the desired level of accuracy, the available resources, and the nature of the data being annotated. To this end, we employ a _semi-automatic annotation_ strategy, integrating human proficiency with semi-supervised learning methodologies.

## 3 Proposed Framework for Bias Identification in Texts

In this section, we present Nbias, an innovative framework designed to detect biases within textual data, as illustrated in Figure 2. The Nbias framework is structured into four distinct layers: (i) the data collection layer, (ii) the corpus construction layer, (iii) the model development layer, and (iv) the evaluation layer. Each layer is designed to collaborate seamlessly with the others, providing an effective and comprehensive approach for detecting biases in textual content.

### Data Layer

The Data Layer serves as the framework's primary interface with the data for analysis. It handles data collection, pre-processing and data consolidation from a variety of sources, such as social media, online articles, and databases. This layer ensures adaptability and high performance for the entire framework.

_Data Gathering._ Our study adopts a methodological data collection approach, incorporating diverse sources from various domains.
To analyze biases in medical narratives and healthcare, we include data from two important clinical text databases: MIMIC-III [41] and MACCROBAT [37]. The MIMIC-III dataset is a publicly available database with de-identified health data from over 40,000 ICU patients. It offers rich clinical narratives, including nursing notes, radiology reports, and discharge summaries, enabling a deep understanding of biases in healthcare communication. The textual data were primarily obtained from the _NOTEEVENTS_ table. The MACCROBAT dataset provides valuable pediatric critical care data, including admission notes and medical summaries. It contains 200 original documents along with corresponding annotated versions centered around clinical case reports.

To detect bias in news articles and social media streams, we use the BABE (Bias Annotations By Experts) dataset [10]. This dataset includes 3700 articles and tweets, offering a comprehensive perspective on linguistic bias in media and public opinion. It features marked statements, enabling recognition of bias at both granular (word-level) and broader (sentence-level) scopes, covering diverse topics.

Figure 2: Nbias: A Natural Language Processing Framework for Bias Identification.

To examine biases in employment practices, we incorporate the Job Hiring/Recruitment dataset [42], comprising 20,000 job posts with titles, descriptions, and associated tags from various businesses. Each advertisement includes job details and tags manually assigned by recruiters, suggesting jobs to potential candidates with analogous skills.

_Data Consolidation._ After gathering and pre-processing data from various sources, all datasets are harmonized into a single consolidated dataframe. This dataframe includes the following columns:

* **Dataset**: Specifies the source dataset, such as MIMIC-III, MACCROBAT, Job Hiring, or BABE.
* **Text**: Contains the actual textual data extracted from the respective datasets, including clinical notes, case reports, job descriptions, or annotated statements.
* **Biased Words**: Includes the words or phrases identified as biased in the text, crucial for granular bias detection.
* **Aspect of Bias**: Denotes the specific type or aspect of bias present in the text, categorized by gender, racial, or age biases, to understand the nature of the biases detected.
* **Label**: Indicates whether the text is biased or non-biased, serving as the target variable for the token classifier and for evaluation purposes.

A sample record in JSON format is shown below:

{
  "Record": {
    "Dataset": "MIMIC-III",
    "Text": "Clinical notes of patient XYZ indicate history of superficial hypertension due to overly emotional personality.",
    "BiasedWords": "superficial, overly emotional personality",
    "AspectOfBias": "age",
    "Label": "biased"
  }
}

In the consolidated dataframe, each row represents a unique sample from the original dataset, supplying information for bias detection and assessment. Further pre-processing is conducted to prepare the data for subsequent layers of the Nbias framework, particularly the NLP model performing token classification.
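To make the consolidation step concrete, a minimal pandas sketch of the dataframe described above is shown below; the second record is a hypothetical illustration, not an actual sample from the Job Hiring dataset:

```python
import pandas as pd

records = [
    {
        "Dataset": "MIMIC-III",
        "Text": "Clinical notes of patient XYZ indicate history of superficial "
                "hypertension due to overly emotional personality.",
        "BiasedWords": "superficial, overly emotional personality",
        "AspectOfBias": "age",
        "Label": "biased",
    },
    {   # hypothetical record, for illustration only
        "Dataset": "Job Hiring",
        "Text": "Looking for young and energetic candidates for this role.",
        "BiasedWords": "young and energetic",
        "AspectOfBias": "age",
        "Label": "biased",
    },
]

# One row per sample, with the five columns described above
df = pd.DataFrame.from_records(records)
print(df[["Dataset", "AspectOfBias", "Label"]])
```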
_Data Pre-processing._ The pre-processing of textual data involves a series of sequential operations to refine and structure the data for machine learning algorithms. This includes tokenization, which involves breaking raw text into meaningful tokens (words or subwords) for semantic understanding and subsequent NLP tasks; text cleaning, which involves removing punctuation, numbers, and special characters, and converting text to lowercase to ensure uniformity and clarity; and handling missing values, which involves identifying and appropriately managing missing data to avoid bias and improve model performance. These pre-processing steps convert raw text into a clean, structured format, enabling the NLP token classification model in the subsequent layer.

### Corpus Construction

Our group, consisting of three seasoned professionals from the disciplines of healthcare, computer science, and journalism, was joined by two diligent students to perform the task of detecting and labeling bias in our dataset. Their collective role centered around the critical task of carefully annotating bias within our dataset. This endeavor is important to ensure the integrity and fairness of any subsequent analysis or research. The foundation for this task was based on comprehensive guidelines that clearly delineated the concept of bias in this context. Bias, as per the instructions, was defined as any terminology or phraseology that could potentially provoke prejudiced comprehension or induce stereotyping, consistent with most of the literature [11; 7; 10]. The factors from which biases could stem were identified as gender, race, socioeconomic status, age, or disability for this NLP work. Such biases could inadvertently skew the dataset and, consequently, the results derived from it. Thus, the identification and annotation of such biases is of high importance to uphold the accuracy and reliability of our dataset. Highlighting both explicit and implicit biases was emphasized as a critical part of our work.

_Annotation Scheme._ In light of these guidelines, our team proceeded by using a carefully compiled list of terms and phrases, known as "bias-indicative" lexicons. These lexicons provided a comprehensive guide to potential areas where bias could lurk within our dataset. A portion of this list is exhibited in Table 1 for reference. This bias-indicative lexicon served as a navigational tool for our team to identify and mark "BIAS" entities scattered within our textual data. These entities can be individual words or phrases that express or imply bias. This systematic approach ensured that we could account for most biases that exist in the data. We adopted the Inside-Outside-Beginning (IOB) annotation scheme [43] to classify and annotate 'BIAS' entities. This technique categorizes tokens in the text as the beginning (B), inside (I), or outside (O) of a bias entity: 'B' marks the first token of a bias entity, 'I' the tokens inside the entity, and 'O' the tokens not part of any bias entity. This approach ensured consistent and precise annotations, enhancing the reliability and accuracy of our study.
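A minimal sketch of how lexicon-driven IOB pre-tagging of this kind can be implemented is given below; the `bio_tags` helper and the toy lexicon are illustrative, not the exact tooling used in our pipeline:

```python
from typing import List, Tuple

def bio_tags(tokens: List[str], lexicon: List[str]) -> List[Tuple[str, str]]:
    """Assign B-BIAS / I-BIAS / O tags by matching (multi-word) lexicon phrases."""
    tags = ["O"] * len(tokens)
    lowered = [t.lower() for t in tokens]
    for phrase in lexicon:
        words = phrase.lower().split()
        n = len(words)
        for i in range(len(tokens) - n + 1):
            if lowered[i:i + n] == words:
                tags[i] = "B-BIAS"          # first token of the bias entity
                for j in range(i + 1, i + n):
                    tags[j] = "I-BIAS"      # remaining tokens inside the entity
    return list(zip(tokens, tags))

sentence = "The overpriced product from the highly successful company was surprisingly popular ."
print(bio_tags(sentence.split(), ["overpriced", "highly successful", "surprisingly popular"]))
```

On the working-instance sentence used below, this reproduces the annotations of Table 2.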
\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Bias Dimension** & **Biased Words/Phrases** \\ \hline Gender & ‘hysterical’, ‘emotional’, ‘weak’, ‘bossy’, ‘fragile’, ‘nagging’, ‘man up’, ‘tomboy’ \\ \hline Race & ‘inner city’, ‘illegal alien’, ‘thug’, ‘exotic’, ‘uncivilized’, ‘model minority’, ‘white trash’ \\ \hline Social Status & ‘trailer park’, ‘lazy’, ‘freeloader’, ‘welfare queen’, ‘ghetto’, ‘lazy bum’, ‘filthy rich’ \\ \hline Age & ‘senile’, ‘slow’, ‘old-fashioned’, ‘whippersnapper’, ‘elderly’, ‘young and naive’, ‘generation gap’ \\ \hline Disability & ‘handicapped’, ‘crippled’, ‘invalid’, ‘sufferer’, ‘differently-abled’, ‘victim’ \\ \hline Religion & ‘radical’, ‘terrorist’, ‘infidel’, ‘heathen’, ‘fanatic’, ‘holy roller’ \\ \hline Profession & ‘greedy’, ‘dishonest’, ‘corrupt politician’, ‘crooked lawyer’, ‘greedy CEO’, ‘lazy government worker’ \\ \hline National & ‘unpatriotic’, ‘alien’, ‘foreigner’, ‘outsider’, ‘immigrant’, ‘nationalist’ \\ \hline Education & ‘uneducated’, ‘illiterate’, ‘dropout’, ‘underachiever’, ‘overachiever’, ‘smarty-pants’ \\ \hline Body Size & ‘fat’, ‘slob’, ‘skinny’, ‘lardass’, ‘beanpole’, ‘plus-sized’ \\ \hline \end{tabular} \end{table} Table 1: Bias Dimensions with Sample Biased Words/Phrases (only a few shown for brevity)

_Annotation Approach._ We leveraged semi-supervised learning methodologies [35; 13; 33] to enhance both the efficiency and precision of the annotation process. The integration of BERT (Bidirectional Encoder Representations from Transformers), known for its superior text comprehension abilities, substantially improved our approach. Our annotation process began with the manual tagging of 20% of the complete dataset. This critical yet time-consuming task was strategically limited to a subset of data, ensuring a balance between accuracy and efficiency. The "BIAS" entities were carefully annotated in compliance with our predefined guidelines. This annotated subset was then fed into our BERT model, serving as training data for the token-classification task. Once sufficiently trained, the model was assigned the task of predicting "BIAS" entities within the remaining 80% of the data. The extensive dataset was effectively managed by breaking it down into 20% increments, a process we refer to as "semi-autonomous labelling". Expert reviews cross-verified the "BIAS" entities labelled by the model. This combination of semi-supervised learning with expert validation enabled us to create an annotation process that is both optimized and trustworthy.

_Working Instance._ To demonstrate our annotation scheme, we consider the example sentence: _"The overpriced product from the highly successful company was surprisingly popular"_. Table 2 presents the corresponding BIO format annotations for this sentence. Assuming the term "overpriced" holds potential bias, it would be tagged as "B" in the BIO scheme, indicating the start of a bias entity. All other tokens not part of a bias entity would be labeled "O". This example illustrates the annotation process applied across our dataset and allows us to quantify and comprehend biases in a consistent manner.

_Resolving Discrepancies._ An integral part of our process was addressing discrepancies between annotators, a common challenge in multi-person annotation tasks. We implemented a consensus-driven approach to uphold consistency and reliability in our annotations.
Any disagreement was discussed collectively, considering each annotator's viewpoint and reaching a unified decision based on the predefined annotation guidelines. This process ensured collective agreement on all annotations, minimizing potential bias or error and boosting reliability. This consensus strategy was uniformly applied across all data sources, including the BABE, MIMIC, MACCROBAT, and Job Hiring datasets.

_FAIR Principles._ After reaching consensus on all annotations, we saved the final annotated data in the widely accepted CoNLL-2003 format [44]. This format represents data in a tab-separated manner, associating each word with its part-of-speech tag, chunk tag, and named entity tag. Sentences are separated by an empty line, and each row corresponds to a token with its annotation. The CoNLL-2003 format offers multiple benefits. It ensures compatibility with existing NLP tools and models, facilitating future analysis and model training. Additionally, it promotes collaboration and peer review by allowing easy data sharing and comprehension among researchers. Lastly, it enhances the reproducibility of our study, enabling others to use our data for model validation and findings replication. By adhering to the FAIR principles, our dataset is made **F**indable, **A**ccessible, **I**nteroperable, and **R**eusable, enhancing the transparency, accessibility, and reliability of our research.

\begin{table} \begin{tabular}{|c|c|} \hline **Word** & **Bias Annotation** \\ \hline The & O \\ \hline overpriced & B-BIAS \\ \hline product & O \\ \hline from & O \\ \hline the & O \\ \hline highly & B-BIAS \\ \hline successful & I-BIAS \\ \hline company & O \\ \hline was & O \\ \hline surprisingly & B-BIAS \\ \hline popular & I-BIAS \\ \hline. & O \\ \hline \end{tabular} \end{table} Table 2: Bias Annotation using the BIO scheme

_Inter-Annotator Agreement._ In our research, we placed considerable emphasis on establishing rigorous protocols to guarantee the reliability and consistency of the data annotations. Two independent reviewers were assigned to carefully assess the annotated data, promoting objective evaluation devoid of influence from the initial annotators. Rather than relying on subjective judgment, we quantified their agreement through Cohen's Kappa coefficient, a statistical measure common in categorical data studies that accounts for potential chance agreement. Scores over 0.6 denote "substantial" agreement and scores above 0.8 represent "almost perfect" agreement. Our reviewers attained a Cohen's Kappa score of 78%, demonstrating high concordance on the annotations. This high score substantiates the uniformity, consistency, and quality of our annotations. Moreover, it demonstrates the objectivity of the assessment process, highlighting the robustness of our annotated data. This, in turn, enhances the trustworthiness of prospective findings drawn from this dataset.
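Inter-annotator agreement of this kind can be computed directly from paired token labels; a minimal sketch with scikit-learn, using toy labels for illustration only:

```python
from sklearn.metrics import cohen_kappa_score

# Token-level annotations from two independent reviewers (toy example)
reviewer_a = ["O", "B-BIAS", "O", "O", "B-BIAS", "I-BIAS", "O", "O"]
reviewer_b = ["O", "B-BIAS", "O", "O", "B-BIAS", "O", "O", "O"]

# Cohen's kappa corrects raw agreement for agreement expected by chance
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```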
### Model Development Layer

In this layer, we leverage the BERT language model for token classification and adapt it for the task of NER. The choice of BERT is motivated by its powerful capability of understanding both the left and right context of a word, and its effectiveness in recognizing and classifying multi-word phrases. These features make it particularly well-suited for the complex task of bias detection and annotation in our text data. The advantage of using BERT in Nbias model development lies in its more effective token-level bias identification. Nbias incorporates enhancements to the standard BERT architecture, such as modifications in the attention mechanism, loss function, and fine-tuning approaches, specifically tailored for better capturing biases in complex text data. The subsequent section provides a detailed explanation of the model development.

The token classifier architecture (shown as the middle component in Figure 2) consists of a multi-layer bidirectional transformer encoder that captures contextual information from both directions. Given an input sequence \(X=\{x_{1},x_{2},...,x_{n}\}\), the words are tokenized and embedded as shown in Equation (1): \[E(X)=\{e(x_{1}),e(x_{2}),\ldots,e(x_{n})\} \tag{1}\] where \(E(X)\) represents the set of embedded representations for an input sequence \(X\), \(X\) consists of \(n\) words \(\{x_{1},x_{2},...,x_{n}\}\), and \(e(x_{i})\) is the embedding function that maps each word \(x_{i}\) from the input sequence to a continuous vector representation. The embedded input sequence is then passed through the transformer layers. BERT employs self-attention mechanisms to weigh the importance of different words in the input sequence, enabling it to better identify and understand complex relationships between words. The self-attention score \(Att\) between word \(i\) and word \(j\) is computed as shown in Equation (2): \[Att(i,j)=\text{Softmax}\left(\frac{Q(e(x_{i}))\cdot K(e(x_{j}))^{T}}{\sqrt{d_{k}}}\right) \tag{2}\] where \(Q\), \(K\) are the query and key matrices, and \(d_{k}\) is the key dimension. Following the transformer encoder, the output after applying self-attention and passing through the bidirectional transformer encoder is represented as shown in Equation (3): \[R(X)=\{r(x_{1}),r(x_{2}),...,r(x_{n})\} \tag{3}\] where \(R(X)\) represents the set of contextualized representations for an input sequence \(X\), and \(r(x_{i})\) is the representation function that maps each word \(x_{i}\) from the input sequence to a continuous vector representation after passing through the transformer encoder.

A linear layer with a softmax activation function is added for entity classification. This layer transforms the representations generated by the transformer encoder into a probability distribution over the possible output classes. To simplify our annotation and prediction task, we have merged the 'B' (Beginning) and 'I' (Inside) tags from the standard BIO tagging scheme into a single 'BIAS' tag. The 'BIAS' tag represents any part of a bias entity, while 'O' represents a non-entity. The probability distribution is calculated as shown in Equation (4): \[P(y|x)=\text{Softmax}(W\cdot r(x)+b) \tag{4}\] where \(W\) is the weight matrix, \(b\) is the bias vector, \(r(x)\) is the contextualized representation from Equation (3), and \(P(y|x)\) is the probability distribution over the output classes 'BIAS' and 'O'. The final output of the model indicates the presence of biased words or phrases within the input sequence by labeling them as 'BIAS'. This simplification enables our model to recognize biased phrases more effectively, without differentiating between their start and continuation. We show an example of the model output on a sample from the test set in Figure 3.

Figure 3: BIAS Annotation on a Piece of Text

The pseudocode for the Nbias model development is given in Algorithm 1. As seen in Algorithm 1, the Nbias model, built on BERT, tokenizes and contextualizes input text using transformer encoders.
Through self-attention mechanisms, it weighs relationships between words and classifies each token as biased or unbiased using a softmax-activated linear layer.

```
Require: Text sequence \(X=\{x_{1},x_{2},\ldots,x_{n}\}\)
1: Initialize BERT with token-classification architecture
2: Tokenize input sequence \(X\)
3: Embed input sequence: \(E(X)=\{e(x_{1}),e(x_{2}),\ldots,e(x_{n})\}\)
4: for each token in \(E(X)\) do
5:   Compute self-attention:
6:   \(Att(i,j)=\text{Softmax}\left(\frac{Q(e(x_{i}))\cdot K(e(x_{j}))^{T}}{\sqrt{d_{k}}}\right)\)
7: end for
8: Pass \(E(X)\) through bidirectional transformer encoder: \(R(X)=\{r(x_{1}),r(x_{2}),\ldots,r(x_{n})\}\)
9: for each token representation in \(R(X)\) do
10:   Compute probability distribution: \(P(y|x_{i})=\text{Softmax}(W\cdot r(x_{i})+b)\)
11: end for
12: for each token in \(X\) do
13:   if the highest-probability class is BIAS then
14:     label as 'BIAS'
15:   else
16:     label as 'O'
17:   end if
18: end for
19: return the labeled sequence
```
**Algorithm 1** Nbias Model Development

### Evaluation Layer

The evaluation layer plays a critical role in assessing the performance of our model. This layer encompasses both quantitative and qualitative evaluation methods, providing a comprehensive perspective on the model's performance.

_Quantitative Evaluation._ The quantitative evaluation is typically statistical in nature and involves the use of various metrics to numerically measure the model's performance. Metrics such as F1-score, AUC-ROC, and accuracy are commonly used in this context. The F1-score balances precision (the ability of the model to correctly identify positive instances) and recall (the ability of the model to identify all relevant instances), providing a single measure of the model's overall performance.

_Qualitative Evaluation._ In addition to these numerical measures, we also conduct a qualitative evaluation. This type of evaluation concerns the quality, relevance, and usefulness of the model's output. It involves an expert review of a subset of the model's predictions to measure how well the model is performing in practical terms. Factors such as the model's ability to correctly identify complex or subtle bias entities, and the interpretability of its output, are examined in the qualitative evaluation. In our study, we focus on qualitative evaluations, specifically assessing model robustness and conducting perpetuation tests. Our robustness analysis [45] explores the model's stability under various conditions, including adversarial inputs and data variations. Perpetuation tests [46] help us understand whether the model inadvertently reinforces or introduces societal biases. We also consider a human evaluation to assess the model's performance in real-world conditions.

## 4 Experimental Setup

In this section, we detail the settings, evaluation metrics, baselines, and hyperparameters of our experimental design for replication and validation.

### Dataset

Our study uses diverse datasets: MIMIC-III [41], MACCROBAT [37], BABE [10], and Job Hiring [42]. After annotation (detailed in Sections 3.1 and 3.2), each dataset is split into training, validation, and test sets using an 80-10-10 ratio. The division allows for efficient model training, validation, and testing. Modifications are made for the MACCROBAT dataset to maintain balance despite its limited entries. Table 3 presents the detailed dataset information.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline **Data Source** & **Domain** & **train** & **dev** & **test** & **Total** \\ \hline BABE & News/Social Media & 15,300 & 1,700 & 1,700 & 18,700 \\ \hline MIMIC (Clinical) & Healthcare & 1,800 & 200 & 200 & 2,200 \\ \hline MACCROBAT & Healthcare & 160 & – & 40 & 200 \\ \hline Job Hiring & Occupational & 16,000 & 2,000 & 2,000 & 20,000 \\ \hline **Total** & & **33,260** & **3,900** & **3,940** & **41,100** \\ \hline \end{tabular} \end{table} Table 3: Dataset Details with Training (train), Development (dev), and Test (test) Sets and Total Samples
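For reference, the two-class ('O' vs. 'BIAS') token-classification interface described in the Model Development layer can be exercised with the Hugging Face transformers library. The sketch below loads the generic `bert-base-uncased` checkpoint with the label mapping of Equation (4); without fine-tuning on our annotated corpus, its predictions are only illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
    id2label={0: "O", 1: "BIAS"},
    label2id={"O": 0, "BIAS": 1},
)

text = "The overpriced product was surprisingly popular."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # shape (1, seq_len, 2), cf. Eq. (4)
pred = logits.argmax(dim=-1).squeeze(0)   # predicted label id per subword token

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
for tok, label_id in zip(tokens, pred.tolist()):
    print(f"{tok:>15s}  {model.config.id2label[label_id]}")
```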
### Hardware Settings

The experiments conducted in this study were performed on a dedicated research server with specific hardware configurations. The server was equipped with an Intel Xeon CPU E5-2690 v4 running at 2.60 GHz, 128 GB of RAM, and a powerful NVIDIA GeForce RTX 3090 GPU. The operating system installed on the server was Ubuntu 18.04 LTS. These hardware settings provided substantial computational power, enabling us to efficiently execute resource-intensive tasks, such as training complex machine learning algorithms and deep learning models.

### Time measurements

Time measurements during the training, validation, and testing phases were recorded for our models across the diverse datasets. Utilizing our hardware setup, we ensured peak performance with minimized hardware-induced delays. Specifically, the BABE dataset took 4.5 hours for training, with 30 minutes each for validation and testing. The MIMIC dataset required 2 hours of training and 10 minutes for both validation and testing. For the smaller MACCROBAT dataset, training was completed in 0.5 hours, with validation and testing taking 5 minutes each. Lastly, the Job Hiring dataset took the longest, at 5 hours for training and 40 minutes each for validation and testing.

### Baselines

To compare token classification performance, we consider a range of diverse baseline approaches. These include BiLSTM-CRF, which combines BiLSTM and CRF [29]; BERT-CRF, a blend of BERT and CRF [47]; RoBERTa, an optimized variant of BERT [48]; BART-NER, an application of the BART model for NER [49]; CNN-NER, a CNN-based method for capturing named entities [50]; and TENER, an NER model that utilizes an adapted Transformer Encoder for character- and word-level features [51]. We also consider few-shot NER [52], model-agnostic meta-learning (MAML) [53], and zero-shot named entity typing (NET) [54] models. The selected baselines represent a collection of different architectures, such as BiLSTM, BERT, RoBERTa, BART, CNN, and Transformer Encoder, each combined with either the CRF or NER task. These models were chosen because they represent the state of the art and constitute a robust set of baselines for comparing token classification model performance.

### Hyperparameter Settings

The chosen hyperparameters for our token classifier are provided in Table 4.
\begin{table} \begin{tabular}{|l|l|} \hline **Parameter/Method** & **Details/Value** \\ \hline Model & bert-base-uncased \\ \hline Optimizer & Adam \\ \hline Learning Rate & \(1\times 10^{-2}\) \\ \hline Momentum & 0.5 \\ \hline Weight Decay & 0.01 \\ \hline Epochs & 5 \\ \hline Batch Sizes & 4, 8, 16, 32, 64 \\ \hline Batch Size (training) & 16 \\ \hline Input Sequence Length & 128 subword tokens \\ \hline Dropout & Applied on input and hidden layers \\ \hline Convergence Criteria & Negligible decrease in validation loss \\ \hline Validation Strategy & Hold-out \\ \hline Early Stopping & Implemented \\ \hline Training Environment & Google Colab Pro \\ \hline Hardware & NVIDIA Tesla T4 GPU \\ \hline \(\beta_{1}\) & 0.9 \\ \hline \(\beta_{2}\) & 0.999 \\ \hline Epsilon & \(1\times 10^{-8}\) \\ \hline Hidden Units & (Leaky) Rectified Linear Units (ReLUs) \\ \hline \end{tabular} \end{table} Table 4: Hyperparameter Settings and Training Details

In the comparative experiments with the baselines, the models were optimized using a learning rate between 1e-5 and 5e-5 over several training epochs, typically 3 to 10. The batch size varied between 16 and 64, based on memory constraints, and the input sequence length was limited to 512 tokens. To prevent overfitting, we used regularization techniques such as dropout and weight decay. We generally employed the Adam or AdamW optimizer. All hyperparameters were fine-tuned according to the specific task requirements and dataset characteristics.

## 5 Results

### Overall Performance

Table 5 presents a comprehensive comparison of our proposed method, Nbias, with various baseline models on the token-classification task across three distinct categories: Social Media Bias, Health-related, and Job Hiring. Due to space constraints, we report only the F1-scores in this overall comparison; the F1-score, the harmonic mean of precision and recall, is a commonly used single metric that combines both. The F1-scores are expressed as percentages, accompanied by the standard deviation (\(\pm\)) to indicate the variability in scores across five separate runs. The highest F1-score in each category is highlighted in bold to easily identify the best-performing model.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Model** & **Social Media** & **Health-related** & **Job Hiring** \\ \hline Rule-based [55] & 65.4 \(\pm\) 1.4 & 70.2 \(\pm\) 0.7 & 72.3 \(\pm\) 0.9 \\ \hline BiLSTM-CRF [29] & 72.6 \(\pm\) 1.0 & 75.8 \(\pm\) 0.9 & 78.1 \(\pm\) 0.8 \\ \hline BERT-CRF [47] & 80.7 \(\pm\) 1.3 & 82.3 \(\pm\) 0.7 & 83.5 \(\pm\) 0.6 \\ \hline RoBERTa [48] & 82.8 \(\pm\) 0.7 & 83.6 \(\pm\) 0.9 & 80.5 \(\pm\) 0.5 \\ \hline CNN-NER [50] & 76.2 \(\pm\) 1.1 & 78.1 \(\pm\) 0.0 & 73.4 \(\pm\) 0.9 \\ \hline BART-NER [49] & 84.7 \(\pm\) 0.9 & 84.2 \(\pm\) 0.7 & 82.0 \(\pm\) 0.8 \\ \hline TENER [51] & 85.7 \(\pm\) 0.5 & 86.4 \(\pm\) 0.6 & 85.1 \(\pm\) 0.5 \\ \hline Few-shot NER [52] & 70.2 \(\pm\) 3.4 & 73.1 \(\pm\) 2.9 & 69.2 \(\pm\) 1.7 \\ \hline NET [54] & 70.1 \(\pm\) 1.4 & 72.2 \(\pm\) 1.2 & 67.1 \(\pm\) 1.2 \\ \hline MAML [53] & 62.1 \(\pm\) 1.8 & 65.3 \(\pm\) 1.2 & 60.5 \(\pm\) 2.5 \\ \hline Nbias & **86.9 \(\pm\) 0.2** & **89.1 \(\pm\) 0.8** & **90.3 \(\pm\) 0.4** \\ \hline \end{tabular} \end{table} Table 5: Comparison of Token Classification Models on Three Different Categories: Social Media Bias, Health-related, and Occupational. The performance metric is the F1-score (harmonic mean of precision and recall), expressed as a percentage, accompanied by the standard deviation (\(\pm\)) indicating the variability in scores across 5 runs. The best score is highlighted in bold.
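For reference, span-level scores of this kind are conventionally computed over BIO sequences; a minimal sketch with the seqeval library is shown below, using toy sequences and the full B-/I- scheme rather than our merged 'BIAS' tag:

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Gold and predicted tag sequences for two toy sentences
y_true = [["O", "B-BIAS", "I-BIAS", "O", "B-BIAS"],
          ["B-BIAS", "O", "O"]]
y_pred = [["O", "B-BIAS", "I-BIAS", "O", "O"],
          ["B-BIAS", "O", "O"]]

# seqeval scores whole entities (contiguous B-/I- spans), not single tokens
print("precision:", precision_score(y_true, y_pred))  # 1.0
print("recall:   ", recall_score(y_true, y_pred))     # 2/3
print("F1:       ", f1_score(y_true, y_pred))         # 0.8
```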
The results presented in Table 5 demonstrate the strong performance of the Nbias model in all tested scenarios. In the Social Media category, the Nbias model achieved an F1-score of 86.9% with a small deviation of \(\pm\) 0.2. In the Health-related category, it performs even better, with an F1-score of 89.1% and a deviation of \(\pm\) 0.8, meaning the scores ranged between 88.3% and 89.9%. In the Job Hiring category, the model achieved an F1-score of 90.3%, with scores ranging between 89.9% and 90.7%. These small deviations show that the model's performance is consistent across different runs. Among the baseline models, TENER performs best. The BERT-CRF and RoBERTa models, on the other hand, exhibit good performance. Both the CNN-NER and BART-NER models also display satisfactory performance, although they come behind the Nbias and TENER models. In contrast, the Rule-based model underperforms compared to the transformer- and BiLSTM-based baselines. The Few-shot NER, NET, and MAML models showed average performance. Even though few-shot models can work well with just a few examples, the results show there is room for improvement, which could be achieved by creating custom training methods or tasks specific to a certain domain.

Overall, the Nbias model emerges as the most effective across all categories. While other BERT-based baselines may also attempt bias identification, Nbias outperforms them due to its custom-designed model features optimized for this specific purpose. The performance gain could be in terms of better debiasing results, increased fairness in predictions, or improved overall model accuracy in scenarios where bias reduction is critical. These findings provide valuable insights for the future development and selection of token classification models across different domains.

_Accuracy Analysis of Token Classification Models._ Figure 4 shows how different models perform in classifying tokens over the different test sets. As depicted in Figure 4, the Nbias model exhibits superior performance, achieving accuracy scores of 88.4% on Social Media Bias, 90.6% on Health-related texts, and 91.8% on Job Hiring texts. Following closely are the TENER and BART-NER models in terms of accuracy. While other models such as RoBERTa, BERT-CRF, BiLSTM-CRF, and CNN-NER also demonstrate commendable performance, they fall short of the scores attained by Nbias, TENER, and BART-NER in this experiment. Models like Few-shot NER, NET, and MAML, although not scoring the best, exhibit promising potential. Lastly, the Rule-based model, which relies on predefined rules rather than learning from the data, still manages to perform above 60%. Overall, these results underscore the enhanced capability of the latest transformer-based models like BART and TENER to extract contextual information from text data. Moreover, they affirm that a model carefully designed for bias detection, such as ours, can indeed yield highly effective results.

### Performance Analysis using ROC Curves and AUC Scores

In this study, we compare the performance of different models on token classification tasks using Receiver Operating Characteristic (ROC) curves and the corresponding Area Under the Curve (AUC) scores on the Social Media, Health-related, and Job Hiring data.
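As a concrete illustration of how such curves are produced, a minimal scikit-learn sketch over token-level BIAS probabilities is shown below; the scores are toy values, not our actual model outputs:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Gold token labels (1 = BIAS) and predicted probabilities for the BIAS class
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_score = np.array([0.10, 0.65, 0.80, 0.70, 0.20, 0.90, 0.40, 0.60, 0.20, 0.15])

# Sweep the decision threshold to trace the ROC curve, then integrate
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.3f}")
```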
Figures 5 (a), 5 (b), and 5 (c) display the AUC-ROC curves for all the baseline models and our Nbias token classification model. The results presented in Figure 5 show the superior capability of the Nbias model, as evidenced by its better true positive rates at minimal false positive rates. While models like the Rule-based, RoBERTa, and zero- and few-shot NER models demonstrated low-to-moderate performance, others such as TENER, BiLSTM-CRF, CNN-NER, and BART-NER yielded commendable results, particularly in the early segments of their respective curves. All these models also exhibited better performance specifically on the health and job hiring datasets.

Figure 4: Comparative Accuracy Scores of Token Classification Models across Three Different Categories: Social Media, Health-related, and Job Hiring for Bias Text Identification

Figure 5: ROC Curves and AUC Scores for Various Datasets.

Overall, these findings suggest that some models excel in specific domains. This could be attributed to several factors, including but not limited to:

1. Training on analogous data points that make the model more aware of the specific features of a domain.
2. The architecture of the model being inherently better suited for certain types of data.
3. Hyperparameter choices that resonate better with specific data characteristics.
4. Preprocessing and feature engineering steps that align closely with the requirements of a domain.

Thus, choosing the optimal model for a specific domain is important for achieving the best performance.

### Confusion Matrix and Error Analysis

We present the results of the BIAS entity identification task for "Health-related Bias", "Social Media Bias", and "Occupational Bias" using Nbias. The model's performance is evaluated based on confusion matrices and error analysis (Table 6), providing insights into the model's strengths and limitations.

_Health-related Bias:_ Nbias exhibits strong performance in identifying "healthy lifestyle" entities, achieving a precision of 89.1%. However, it missed 5 instances of this entity, leading to false negatives. For "medical advancements", the precision is lower at 60.2%, and the model identified 56 false positives, misclassifying non-biased terms as biased. On the other hand, the model achieved a relatively high precision of 83.7% for "research findings", yet it missed 2 instances, resulting in false negatives. These findings suggest that the model performs well for more explicit health-related biases, but subtle biases and rare terms might pose challenges.

_Social Media:_ Nbias demonstrates high precision in identifying "biased news source" entities (91.8%), correctly capturing biased sources. However, it produced a few false positives, misclassifying some non-biased sources as biased. For "political affiliation" entities, the precision is 93.1%, indicating reliable performance, though some false positives occurred, classifying neutral statements as biased based on political association. For "political agenda" entities, the model achieved a precision of 86.0%, although it misclassified a few non-biased mentions as biased. These results highlight the model's ability to detect explicit political biases but also suggest room for improvement in handling ambiguous language.
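The precision values in Table 6 below follow directly from the reported counts, as the following spot-check illustrates:

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp)

print(f"{precision(98, 12):.1%}")   # healthy lifestyle    -> 89.1%
print(f"{precision(85, 56):.1%}")   # medical advancements -> 60.3% (Table 6 rounds to 60.2%)
print(f"{precision(112, 10):.1%}")  # biased news source   -> 91.8%
```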
### Confusion Matrix and Error Analysis

We present the results of the BIAS entity identification task for "Health-related Bias", "Political Bias", and "Occupational Bias" using Nbias. The model's performance is evaluated based on confusion matrices and error analysis (Table 6), providing insights into the model's strengths and limitations.

| Dataset | Bias type | TP | FP | TN | FN | Precision |
|---|---|---|---|---|---|---|
| Health | healthy lifestyle | 98 | 12 | 145 | 5 | 89.1% |
| Health | medical advancements | 85 | 56 | 142 | 15 | 60.2% |
| Health | research findings | 98 | 19 | 138 | 2 | 83.7% |
| Social Media | biased news source | 112 | 10 | 157 | 8 | 91.8% |
| Social Media | political affiliation | 95 | 7 | 162 | 16 | 93.1% |
| Social Media | political agenda | 86 | 14 | 154 | 16 | 86.0% |
| Occupational | gender bias in hiring | 63 | 5 | 172 | 8 | 92.7% |
| Occupational | ethnicity bias in hiring | 49 | 4 | 173 | 11 | 92.5% |
| Occupational | age bias in hiring | 45 | 8 | 170 | 13 | 84.2% |

Table 6: Confusion Matrix and Error Analysis for BIAS Entity Identification using Nbias: the table presents the True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN) for various bias types identified in the dataset, along with the precision in percentage. The categorization of biases is based on a predefined analysis of the content and context in which they appear.

_Health-related Bias:_ Nbias exhibits strong performance in identifying "healthy lifestyle" entities, achieving a precision of 89.1%. However, it missed 5 instances of this entity, leading to false negatives. For "medical advancements", the precision is lower at 60.2%, and the model identified 56 false positives, misclassifying non-biased terms as biased. On the other hand, the model achieved a relatively high precision of 83.7% for "research findings", yet it missed 2 instances, resulting in false negatives. These findings suggest that the model performs well for more explicit health-related biases, but subtle biases and rare terms might pose challenges.

_Social Media:_ Nbias demonstrates high precision in identifying "biased news source" entities (91.8%), correctly capturing biased sources. However, it produced a few false positives, misclassifying some non-biased sources as biased. For "political affiliation" entities, the precision is 93.1%, indicating reliable performance, although some false positives occurred, classifying neutral statements as biased based on political association. For "political agenda" entities, the model achieved a precision of 86.0%, although it misclassified a few non-biased mentions as biased. These results highlight the model's ability to detect explicit political biases but also suggest room for improvement in handling ambiguous language.

_Occupational Bias:_ In the "Occupational Bias" category, Nbias exhibits strong precision for identifying "gender bias in hiring" entities (92.7%), effectively capturing biased terms. However, it produced a few false positives, misclassifying neutral statements as biased based on gender. For "ethnicity bias in hiring" entities, the precision is 92.5%, indicating accurate identification; still, a few false positives occurred, misclassifying non-biased mentions as biased. The model achieved a precision of 84.2% for "age bias in hiring" entities, but some neutral statements were misclassified as biased, revealing areas for enhancement. These findings suggest that the model can effectively identify biased occupational entities, but improvements are needed to reduce false positives.

_Actionable Insights:_

* The proposed NER model demonstrates robust precision in identifying biased entities for all three categories with clear biases.
* Addressing false positives can enhance the model's discrimination between biased and non-biased entities. Fine-tuning the model to better understand nuanced language can be beneficial.
* Augmenting the training data with diverse instances of subtle biased entities can improve recall and help detect rare biased terms.
* Considering context-aware models, such as transformer-based models, might help tackle challenges arising from sarcasm and subtle biases more effectively.

Overall, these results provide valuable insights into the strengths of Nbias and areas for improvement in identifying biased entities across different categories.
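The precision column of Table 6 follows directly from the reported counts via precision = TP / (TP + FP); the short sketch below recomputes a few rows from the raw counts.

```python
# Re-deriving headline precision figures in Table 6 from its raw (TP, FP) counts.
counts = {
    "healthy lifestyle":        (98, 12),
    "biased news source":       (112, 10),
    "political affiliation":    (95, 7),
    "ethnicity bias in hiring": (49, 4),
}
for entity, (tp, fp) in counts.items():
    print(f"{entity}: precision = {100*tp/(tp + fp):.1f}%")
```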
### Ablation Study on the Nbias Model

To understand the importance of different components in the Nbias model, we conducted an ablation study on the combined dataframe from all the data sources. We systematically removed or replaced elements/components of the model to observe their influence on bias detection performance. The study assessed the following model variants:

* Nbias-Full: Original model with all features intact.
* Nbias-NoAttn: Exclusion of the self-attention mechanism.
* Nbias-GloVe: GloVe embeddings replace the BERT defaults.
* Nbias-HalfBERT: A version with half the transformer layers.
* Nbias-RandInit: Trained without leveraging the pre-trained BERT weights.

Table 7 illustrates the outcomes of the ablation study for the F1-score, precision, and recall metrics on the combined dataframe.

| Model Variant | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|
| Nbias-Full | **94.8** | **95.6** | **95.2** |
| Nbias-NoAttn | 89.5 | 91.0 | 90.1 |
| Nbias-GloVe | 93.0 | 92.6 | 92.8 |
| Nbias-HalfBERT | 93.7 | 93.3 | 93.5 |
| Nbias-RandInit | 87.8 | 89.2 | 88.4 |

Table 7: Ablation study results for Nbias. Bold means best score.

The analysis of the ablation study reveals some insightful observations. From Table 7, it is evident that the fully featured Nbias-Full model outperforms all other variants, with the highest F1-score of 95.2%, highlighting the combined effect of all its components working together. The significant performance drop observed in the Nbias-NoAttn model, which does not incorporate the self-attention mechanism, shows the role that self-attention plays in capturing the contextual relationships in the text needed for effective bias detection. Additionally, the slight performance reduction in the Nbias-GloVe model, which uses GloVe embeddings instead of the default BERT embeddings, suggests that BERT embeddings are better suited for this specific task, possibly because they are trained on a more diverse and comprehensive corpus. Similarly, the negligible performance variation in the Nbias-HalfBERT model indicates that the model can achieve almost equivalent performance with half the transformer layers, which may be a crucial consideration in resource-constrained environments. However, it is also worth noting that this minimal reduction might lead to missing out on some complexities in the data that can only be captured with a deeper network. Lastly, the reduced performance of the Nbias-RandInit model, which does not leverage pre-trained BERT weights, highlights the significant benefits of transfer learning and the importance of initializing the model with pre-trained weights to achieve optimal performance. This is particularly important as it reduces the need for a large amount of labeled data and leverages the knowledge gained from pre-training on a large corpus. In conclusion, the Nbias model, with its full set of features, proves to be the most effective model for bias detection.
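For concreteness, two of the variants above can be instantiated with the HuggingFace transformers API roughly as follows. This is our sketch under assumed settings (the `bert-base-uncased` checkpoint and a 6-of-12 layer split are illustrative choices), not the released Nbias code.

```python
from transformers import AutoConfig, AutoModel

# Nbias-HalfBERT: keep the pre-trained weights of the first 6 of BERT's 12 layers.
half_bert = AutoModel.from_pretrained("bert-base-uncased", num_hidden_layers=6)

# Nbias-RandInit: identical architecture, but randomly initialized weights.
cfg = AutoConfig.from_pretrained("bert-base-uncased")
rand_init = AutoModel.from_config(cfg)
```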
### Robustness Testing

Robustness testing is a type of evaluation used to assess the performance and resilience of a system or model against various inputs or scenarios [56]. In the context of our testing, we programmatically measure the robustness of Nbias along four key dimensions: _case sensitivity_, _semantics_, _context_, and _spelling_. In Table 8, we showcase the robustness testing on a sample of 5 examples (for brevity) from the test set.

| Original | Test Case | Expected | Actual | Test Type | Result |
|---|---|---|---|---|---|
| Conservative views are _superior_, _unlike_ liberal ideas. | Conservative views are _superrior_, _un like_ liberal ideas. | Identifies misspellings. | Maps "superrior" to "superior", fails with "un like". | Spelling | Partial |
| Women tend to _exaggerate_ pain due to their _emotional nature_. | Females tend to _amplify_ pain due to their _emotional nature_. | Identifies synonyms. | Correctly maps "females" to "women" and "exaggerate" to "amplify". | Semantics | Pass |
| Asians are _more prone_ to diabetes due to lifestyles. | ASIANS have a _higher susceptibility_ to diabetes, due to lifestyle. | Case insensitive and identifies synonyms. | Maps "ASIANS" to "Asians" and "more prone" to "higher susceptibility". | Case Sensitivity, Semantics | Pass |
| Women are _not fit_ for leadership roles. | Females are _not suitable_ for leadership roles. | Identifies synonyms. | Maps "females" to "women" and "not fit" to "not suitable". | Semantics | Pass |
| Senior employees _resist_ change and _severely lack_ technological skills. | Older employees are _resistant_ to change and _lack_ technological skills. | Recognizes context and severity change. | Maps "senior" to "older", "resist" to "resistant", fails to capture change in severity. | Context | Partial |

Table 8: Robustness test on Nbias predictions, evaluating the model's performance in identifying misspellings, synonyms, case sensitivity, semantics, and context. Each case is assessed as Pass, Partial, or Fail.

The results of the robustness testing in Table 8 are summarized as follows:

* _Spelling_: The model partially passed the test. It was able to map "superrior" to "superior" but failed with "un like".
* _Semantics_: The model successfully mapped "females" to "women" and "exaggerate" to "amplify", thus passing the test.
* _Case Sensitivity and Semantics_: The model was case insensitive and identified synonyms, mapping "ASIANS" to "Asians" and "more prone" to "higher susceptibility", passing the test.
* _Semantics_: The model passed another semantics test by mapping "females" to "women" and "not fit" to "not suitable".
* _Context_: The model partially passed the test by mapping "senior" to "older" and "resist" to "resistant", but it failed to capture the change in severity from "severely lack" to "lack".

Overall, the model showed strengths in identifying synonyms, being case insensitive, and recognizing some contextual changes, but had limitations in identifying misspellings and capturing changes in severity.

We also conducted robustness testing on a set of 300 samples manually prepared by our team of 5 members to evaluate the robustness of our system; the results are reported in Figure 6. As observed in Figure 6, the Nbias model appears to improve over time in all four robustness test categories: spelling, semantics, case sensitivity, and context. This is evident as the scores increase with each successive run, demonstrating the model's adaptability and the improvement in its learning approach. In spelling, the model begins with a score of 70 and ends at 90 in the fifth run. A similar upward trend is seen in semantics, starting from a score of 72 and concluding at 93 in the final run. The model also consistently improves in the case sensitivity test, beginning at 80 and finishing at 95. The context scores also progress positively, from an initial score of 70 to a final score of 90.
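A minimal sketch of how such probes can be generated programmatically is given below. The perturbation rules are our illustration of the four test dimensions, not the exact test harness used here; unmatched patterns simply leave the text unchanged.

```python
def probes(sentence: str) -> dict:
    """Generate one perturbed variant of the sentence per robustness dimension."""
    return {
        "spelling":         sentence.replace("superior", "superrior"),
        "semantics":        sentence.replace("unlike", "in contrast to"),
        "case_sensitivity": sentence.upper(),
        "context":          sentence.replace("severely lack", "lack"),
    }

for kind, text in probes("Conservative views are superior, unlike liberal ideas.").items():
    print(f"{kind}: {text}")
```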
Figure 6: Robustness Test. Each plot illustrates the performance of the Nbias model across 5 development runs in the robustness tests: spelling, semantics, case sensitivity, and context. The x-axis represents the different test instances used in each run, while the y-axis displays the corresponding scores (the 'Pass Count') achieved by the model on these tests.

The Nbias model shows the highest performance in case sensitivity, reaching a score of 95 in the final run. It also performs well in the semantics category, achieving a score of 93. However, the model's performance in the context and spelling categories is slightly lower. While these are still strong results, there may be room for further optimization in these areas to achieve results comparable to the case sensitivity and semantics tests.

### Perpetuation Bias Tests for Bias Detection

To assess whether our model unintentionally perpetuates biases present in its training data, we conducted perpetuation bias tests. These tests evaluated the performance of our model in identifying and labeling potentially biased words or phrases as the _BIAS_ entity. In our testing approach, we curated a diverse list of terms and phrases representing various social groups and contexts prone to bias. This list included phrases like "elderly person", "young woman", "African immigrant", "gay man", and "blue-collar worker". We inserted these phrases into neutral sentences to evaluate the model's perception of potential bias. Upon processing the sentences through our model, we observed the following pattern:

The person was described as a [Phrase].

* Ethnicity:
  * African immigrant (Flagged: 25 out of 30 times, 83%)
  * Asian immigrant (Flagged: 20 out of 30 times, 67%)
  * European immigrant (Flagged: 10 out of 30 times, 33%)
* Gender:
  * young woman (Flagged: 10 out of 30 times, 33%)
  * young man (Flagged: 5 out of 30 times, 17%)
  * elderly man (Flagged: 5 out of 30 times, 17%)
* Occupation:
  * blue-collar worker (Flagged: 15 out of 30 times, 50%)
  * white-collar worker (Flagged: 8 out of 30 times, 27%)
* Age:
  * elderly person (Flagged: 5 out of 30 times, 17%)
  * young adult (Flagged: 3 out of 30 times, 10%)

The data above showcases the results of a bias detection test on a language model. Various phrases associated with different demographics (ethnicity, gender, occupation, and age) were inserted into a neutral sentence, and the model flagged certain phrases as "BIAS ENTITY" with varying frequencies. Specifically, the phrases "African immigrant" and "Asian immigrant" were flagged 83% and 67% of the time, respectively, whereas "European immigrant" was only flagged 33% of the time. Similarly, "blue-collar worker" was flagged 50% of the time, while "white-collar worker" was flagged 27% of the time. In contrast, phrases related to age and gender, such as "elderly person", "young woman", "young man", and "elderly man", were flagged much less frequently, ranging from 10% to 33%. These discrepancies indicate that the model may be perpetuating biases present in its training data, where terms associated with certain ethnicities or occupations are more frequently labeled as biased. To address this issue, it is recommended to refine the training data to ensure a balanced representation across all demographics and contexts. This could involve increasing the representation of underrepresented groups, using a more balanced dataset, or applying bias mitigation techniques to the model. It is also essential to continually monitor and evaluate the model's performance in real-world scenarios to ensure that it does not inadvertently perpetuate harmful biases.
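In sketch form, the probe amounts to inserting each phrase into the neutral template and counting how often the tagger flags it. Here `flags_bias` is a hypothetical stand-in for a call to the trained Nbias model (a random stub so the sketch runs).

```python
import random

template = "The person was described as a {}."
phrases = ["African immigrant", "European immigrant", "blue-collar worker"]

def flags_bias(sentence: str) -> bool:
    # Hypothetical stand-in for the trained Nbias tagger.
    return random.random() < 0.5

for phrase in phrases:
    hits = sum(flags_bias(template.format(phrase)) for _ in range(30))
    print(f"{phrase}: flagged {hits}/30 times ({100*hits/30:.0f}%)")
```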
### Human Evaluation

To validate the model's bias detection, a human evaluation was initiated. Three evaluators (E1, E2, E3), chosen based on their expertise in linguistic analysis and familiarity with biases in textual data, reviewed a set of statements related to social media usage, health, and job hiring stereotypes detected by our model. Each statement was rated on a scale from 1 to 5, with 1 indicating no perceived bias and 5 indicating a high level of bias. The evaluation results are summarized in Table 9, which provides the statements, the corresponding identified entities, and the scores assigned by each evaluator. The average score for each entity is also provided, giving an overall indication of the perceived bias.

| Text | Identified Entity | E1 | E2 | E3 | Avg |
|---|---|---|---|---|---|
| Everyone using social media is just seeking attention. | seeking attention | 4 | 4 | 3 | 3.67 |
| People who don't exercise are lazy. | lazy | 5 | 5 | 5 | 5.0 |
| This job role is more suitable for men. | more suitable for men | 5 | 5 | 5 | 5.0 |
| She posts too much on social media, she must be narcissistic. | narcissistic | 4 | 3 | 4 | 3.67 |
| Overweight people simply lack self-control. | Overweight, lack self-control | 5 | 5 | 5 | 5.0 |
| Only people with an engineering background are best fit in this role. | best fit | 3 | 2.5 | 3 | 2.83 |

Table 9: Evaluation of perceived bias in various statements by three evaluators (E1, E2, E3): the evaluators scored the identified entities from 1 (no perceived bias) to 5 (high level of bias); the average (Avg) score is also shown.

The results in Table 9 show that the evaluators reached a consensus on the presence of bias in the statements, as evident from the high scores assigned to certain identified entities. Entities such as 'seeking attention', 'lazy', 'more suitable for men', 'narcissistic', and 'overweight people lack self-control' received average scores exceeding 3, indicating a significant presence of bias in these statements. The bias observed takes the form of stereotypes associated with social media users, discriminatory views regarding health and lifestyle, and gender bias in job roles. However, the last statement, which suggests that only people with an engineering background are the best fit for a role, received a lower bias score compared to the others: the identified entity in this statement obtained an average score of 2.83. This suggests that the evaluators perceived this statement more as a job-specific requirement than a biased statement.

## 6 Discussion

### Performance Analysis

The detection and identification of biases in textual data have significant implications for ensuring fairness and ethical usage of information. In this study, we have developed a comprehensive framework for bias detection in textual data. The Nbias model outperformed all other models in almost every bias category examined, with accuracy scores of 88.4%, 90.6%, and 91.8% in the Social Media Bias, Health-related, and Job Hiring text analyses, respectively. The model exhibited a strong capability in diverse token classification tasks,
as evidenced by high AUC values of 0.74, 0.90, and 0.91 across the respective domains. The model's high F1-scores (86.9%, 89.1%, and 90.3% in the same domains) further underscore its efficacy. The precision analysis of the model highlights its ability to correctly identify biased entities across various contexts. However, there remains scope for reducing false positives. Nbias's robustness was demonstrated through its steady performance in multiple tests covering spelling, semantics, case sensitivity, and context. Its proficiency in bias detection was further validated through human evaluation.

### Theoretical Impact

The Nbias framework offers a novel approach to text-based bias detection. Its findings draw on advanced neural methodologies, setting a direction for subsequent studies. The framework emphasizes the intricacies of bias in textual content. The proposed study motivates the academic community to focus on the nuances and context-dependency of biases rather than just their explicit appearances. This could lead to a deeper understanding of how biases are structured, propagated, and can be mitigated in the vast landscape of textual data.

### Practical Impact

Nbias's practical uses are vast and diverse. It can serve many sectors aiming to introspect on and rectify inherent biases. Its ability to uncover subtle biases is crucial for platforms like social media, where information dissemination can shape public opinion. Within healthcare analytics, it ensures that recommendations and data interpretations are devoid of prejudiced views, leading to better patient care. In recruitment, Nbias can be used for equitable hiring, ensuring job descriptions and applicant reviews remain unbiased. These applications can also be extended to more conscious, bias-free decision-making across various industries.

### Limitations

While our work represents a significant step forward in identifying biases in text-based data, aiming to contribute to a more inclusive and unbiased information landscape, it has some limitations.

_Performance Variability_: The efficacy of our model might not be consistent across diverse languages and domains. Textual differences between languages, differing cultural contexts, and domain-specific terminologies can alter model performance. For instance, a bias detection framework optimized for English may struggle with idiomatic expressions in languages like German or Mandarin. Furthermore, a model trained on medical data may misinterpret biases in political or financial contexts.

_Extent of Bias Detection_: While our model excels at identifying isolated biased terms or phrases, its performance might fluctuate when faced with biases embedded in longer narrative structures spread across paragraphs.

_Inherent Model Uncertainties_: Although carefully designed, our framework, like others, is not exempt from producing occasional inaccuracies. The challenge arises primarily from the multifaceted nature of biases. Biases can enter text in context-specific ways, leading to potential false positives (where neutral phrases are incorrectly flagged) or false negatives (where real biases remain unnoticed) [57, 7].

_Adaptability_: While our current framework provides a foundation for bias detection, adapting and fine-tuning it for specific linguistic and domain nuances remains crucial. This adaptability challenge necessitates continued research, iterative model improvements, and extensive validation across varied contexts.
By highlighting these limitations, we aim to open a dialogue and invite collaboration toward further refinements for unbiased text analysis.

### Future Directions

Recognizing the potential of Nbias and considering the highlighted limitations, we recommend several directions for future research to enhance bias detection capabilities in textual data:

_Incorporating Multilingual Support_: Bias is not confined to any particular language. Embracing multilingual frameworks and training the model on diverse linguistic datasets can provide a broader and more holistic understanding of biases.

_Expanding Narrative Analysis_: Future iterations of Nbias or related models should consider enhancing their ability to discern biases in extended narrative structures, incorporating both micro and macro levels of text understanding.

_Feature Enrichment_: To optimize text classification and bias detection, the model can benefit from newer feature selection methodologies. Specifically, the integration of methods based on frequent and correlated items, as illustrated in the related papers [58] and [59], can add substantial value.

_Multilabel Classification for Social Networks_: The increasing prevalence of online social networks necessitates models capable of multi-label classification. Adapting Nbias in line with the frameworks discussed in [60] can lead to better bias detection in rapidly changing online environments.

_Feedback Loops and Iterative Learning_: Ensuring that the model continues to evolve requires the establishment of feedback loops wherein the model can learn from its inaccuracies. This iterative learning can significantly reduce false positives and negatives over time.

_Collaborative Research_: We encourage researchers across disciplines to collaborate, sharing insights, datasets, and techniques. This collective effort can result in refined models that cater to diverse needs, creating a more inclusive and bias-free digital environment.

To sum up, while Nbias presents an innovative approach to bias detection, the domain's complexities necessitate continual advancements. By integrating the recommendations mentioned above and pursuing interdisciplinary collaborations, we believe we can achieve comprehensive and robust bias detection in textual data.

## 7 Conclusion

This paper presents a comprehensive framework for the detection and identification of biases in textual data. The framework consists of various components, including data pre-processing, bias annotation, NLP modeling, and evaluation layers. By leveraging NLP techniques and advanced models such as BERT, the framework can effectively capture and analyze textual data for bias detection. The framework has shown promising results in identifying and tagging biased terms and phrases across different domains. The performance of the framework may vary depending on the language and domain of the textual data, and further research and refinements are needed to adapt the framework to different contexts and improve its overall performance.

**CRediT authorship contribution statement**

**Shaina Raza**: Conceptualization, Investigation, Formal analysis, Methodology, Project administration, Software, Validation, Visualization, Writing - original draft, Writing - review & editing. **Muskan Garg**: Investigation, Formal analysis, Validation, Writing - review & editing. **Deepak John Reji**: Methodology, Writing - review & editing.
**Syed Raza Bashir**: Methodology, Formal analysis, Writing - review & editing, Project administration. **Chen Ding**: Formal analysis, Writing - review & editing, Supervision.

**Declaration of competing interest**

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

**Data availability**

Data will be made available on request.

**Acknowledgments**

Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
2307.11217
Painlevé-III Monodromy Maps Under the $D_6\to D_8$ Confluence and Applications to the Large-Parameter Asymptotics of Rational Solutions
The third Painlev\'e equation in its generic form, often referred to as Painlev\'e-III($D_6$), is given by $$ \frac{{\rm d}^2u}{{\rm d}x^2} =\frac{1}{u}\left(\frac{{\rm d}u}{{\rm d}x}\right)^2-\frac{1}{x}\frac{{\rm d}u}{{\rm d}x}+\frac{\alpha u^2+\beta}{x}+4u^3-\frac{4}{u}, \qquad \alpha,\beta \in \mathbb C.$$ Starting from a generic initial solution $u_0(x)$ corresponding to parameters $\alpha$, $\beta$, denoted as the triple $(u_0(x),\alpha,\beta)$, we apply an explicit B\"acklund transformation to generate a family of solutions $(u_n(x),\alpha+4n,\beta+4n)$ indexed by $n \in \mathbb N$. We study the large $n$ behavior of the solutions $(u_n(x),\alpha+4n,\beta+4n)$ under the scaling $x=z/n$ in two different ways: (a) analyzing the convergence properties of series solutions to the equation, and (b) using a Riemann-Hilbert representation of the solution $u_n(z/n)$. Our main result is a proof that the limit of solutions $u_n(z/n)$ exists and is given by a solution of the degenerate Painlev\'e-III equation, known as Painlev\'e-III($D_8$), $$ \frac{{\rm d}^2U}{{\rm d}z^2} =\frac{1}{U}\left(\frac{{\rm d}U}{{\rm d}z}\right)^2-\frac{1}{z}\frac{{\rm d}U}{{\rm d}z}+\frac{4U^2+4}{z}.$$ A notable application of our result is to rational solutions of Painlev\'e-III($D_6$), which are constructed using the seed solution $(1,4m,-4m)$ where $m \in \mathbb C \setminus \big(\mathbb Z +\frac{1}{2}\big)$ and can be written as a particular ratio of Umemura polynomials. We identify the limiting solution in terms of both its initial condition at $z=0$ when it is well defined, and by its monodromy data in the general case. Furthermore, as a consequence of our analysis, we deduce the asymptotic behavior of generic solutions of Painlev\'e-III, both $D_6$ and $D_8$ at $z=0$. We also deduce the large $n$ behavior of the Umemura polynomials in a neighborhood of $z=0$.
Ahmad Barhoumi, Oleg Lisovyy, Peter D. Miller, Andrei Prokhorov
2023-07-20T20:10:38Z
http://arxiv.org/abs/2307.11217v2
Painleve-III monodromy maps under the \(D_{6}\to D_{8}\) confluence and applications to the large-parameter asymptotics of rational solutions ###### Abstract. The third Painleve equation in its generic form, often referred to as Painleve-III\((D_{6})\), is given by \[\frac{\mathrm{d}^{2}u}{\mathrm{d}x^{2}}=\frac{1}{u}\left(\frac{\mathrm{d}u}{\mathrm{d}x}\right)^{2}-\frac{1}{x}\frac{\mathrm{d}u}{\mathrm{d}x}+\frac{\alpha u^{2}+\beta}{x}+4u^{3}-\frac{4}{u},\quad\alpha,\beta\in\mathbb{C}.\] Starting from a generic initial solution \(u_{0}(x)\) corresponding to parameters \(\alpha,\beta\), denoted as the triple \((u_{0}(x),\alpha,\beta)\), we apply an explicit Backlund transformation to generate a family of solutions \((u_{n}(x),\alpha+4n,\beta+4n)\) indexed by \(n\in\mathbb{N}\). We study the large \(n\) behavior of the solutions \((u_{n}(x),\alpha+4n,\beta+4n)\) under the scaling \(x=z/n\) in two different ways: (a) analyzing the convergence properties of series solutions to the equation, and (b) using a Riemann-Hilbert representation of the solution \(u_{n}(z/n)\). Our main result is a proof that the limit of solutions \(u_{n}(z/n)\) exists and is given by a solution of the degenerate Painleve-III equation, known as Painleve-III\((D_{8})\), \[\frac{\mathrm{d}^{2}U}{\mathrm{d}z^{2}}=\frac{1}{U}\left(\frac{\mathrm{d}U}{\mathrm{d}z}\right)^{2}-\frac{1}{z}\frac{\mathrm{d}U}{\mathrm{d}z}+\frac{4U^{2}+4}{z}.\] A notable application of our result is to rational solutions of Painleve-III\((D_{6})\), which are constructed using the seed solution \((1,4m,-4m)\) where \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\) and can be written as a particular ratio of Umemura polynomials. We identify the limiting solution in terms of both its initial condition at \(z=0\) when it is well-defined, and by its monodromy data in the general case. Furthermore, as a consequence of our analysis, we deduce the asymptotic behavior of generic solutions of Painleve-III, both \(D_{6}\) and \(D_{8}\), at \(z=0\). We also deduce the large \(n\) behavior of the Umemura polynomials in a neighborhood of \(z=0\).
Key words and phrases: Painleve-III equation, Riemann-Hilbert analysis, Umemura polynomials, Large-parameter asymptotics. 2020 Mathematics Subject Classification: Primary 34M55; Secondary 34E05, 34M50, 34M56, 33E17.

###### Contents

* 1 Introduction
  * 1.1 Backlund transformations and rational solutions of Painleve-III
  * 1.2 Results
  * 1.3 Overview of the paper
* 2 Identifying the limiting solution of Painleve-III\((D_{8})\) using Maclaurin series
* 3 Asymptotic behavior of Umemura polynomials
  * 3.1 Painleve-III tau functions, the Toda lattice, and expressing \(s_{n}(x;m)\) in terms of \(u_{n+1}(x;m)\)
  * 3.2 The ratio \(s_{n}(x;m)/s_{n}(0;m)\) for large \(n\) and small \(x\)
  * 3.3 Asymptotic behavior of \(s_{n}(0;m)\) for large \(n\)
  * 3.4 Connection with the Fredholm determinant of the Bessel kernel
  * 3.5 Connection with \(2j-k\) determinants
* 4 General monodromy data: Painleve-III\((D_{6})\)
  * 4.1 Lax pair for Painleve-III\((D_{6})\)
  * 4.2 Riemann-Hilbert problem for Painleve-III\((D_{6})\)
  * 4.3 Monodromy parameters \((e_{1},e_{2})\)
  * 4.4 Parametrization of Stokes multipliers and connection matrix
  * 4.5 Example: rational solutions of Painleve-III\((D_{6})\)
  * 4.6 Monodromy manifold
* 5 General monodromy data: Painleve-III\((D_{8})\)
  * 5.1 Lax pair for Painleve-III\((D_{8})\)
  * 5.2 Riemann-Hilbert problem for Painleve-III\((D_{8})\)
  * 5.3 Lax pair equations for \(\mathbf{\Omega}(\lambda,z)\)
  * 5.4 Monodromy manifold
* 6 Schlesinger transformation and proof of Proposition 2
* 7 Asymptotics for large \(n\) and small \(x\) and proof of Theorem 3
  * 7.1 Opening the lenses
  * 7.2 Parametrix for \(\mathbf{\Phi}_{n}(\lambda)\) near \(\lambda=\infty\)
  * 7.3 Parametrix for \(\mathbf{\Phi}_{n}(\lambda)\) near \(\lambda=0\)
  * 7.4 An equivalent Riemann-Hilbert problem on the unit circle
  * 7.5 The limit \(n\to\infty\)
  * 7.6 Transformations of the limiting Riemann-Hilbert problem
* 8 Small \(x\) asymptotics and proof of Proposition 1
* 9 Alternative Riemann-Hilbert problem for Painleve-III\((D_{8})\)
  * 9.1 Fabry-type transformation and existence of \(\widehat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\)
  * 9.2 Relationship between \(U^{\mathrm{even}},U^{\mathrm{odd}}\)
  * 9.3 Solutions of Suleimanov

## 1. Introduction

This paper is a study of the confluence of solutions of the generic Painleve-III equation to solutions of its parameter-free degeneration. The six Painleve equations and their solutions, often referred to as _Painleve transcendents_, have been the subject of intense study. This is largely motivated by the fact that Painleve transcendents are generically transcendental, and yet appear in various applications in integrable systems, integrable probability, and random matrix theory, to name a few.

### Backlund transformations and rational solutions of Painleve-III

All Painleve equations but the first are actually families of differential equations indexed by complex parameters appearing as coefficients. However, certain solutions corresponding to different parameters can be related via _Backlund transformations_. For example, consider our main object of study, the generic Painleve-III equation, known as \(\mathrm{PIII}(D_{6})\): \[\frac{\mathrm{d}^{2}u}{\mathrm{d}x^{2}}=\frac{1}{u}\left(\frac{\mathrm{d}u}{\mathrm{d}x}\right)^{2}-\frac{1}{x}\,\frac{\mathrm{d}u}{\mathrm{d}x}+\frac{\alpha u^{2}+\beta}{x}+4u^{3}-\frac{4}{u},\quad\alpha,\beta\in\mathbb{C}.\tag{1}\]
In [18], Gromak discovered that the transformation \[u(x)\mapsto\hat{u}(x):=\frac{2xu^{\prime}(x)+4xu(x)^{2}+4x-\beta u(x)-2u(x)}{u(x)\cdot\left(2xu^{\prime}(x)+4xu(x)^{2}+4x+\alpha u(x)+2u(x)\right)}\tag{2}\] maps solutions of (1) with parameters \((\alpha,\beta)\) to solutions of (1) with parameters \((\alpha+4,\beta+4)\). With this, one can construct from a given seed solution \((u_{0},\alpha,\beta)\) a family of solutions \((u_{n},\alpha+4n,\beta+4n)\) by iterating the transformation (2). The paper [32] contains a survey of families of solutions of (1) constructed using this and other Backlund transformations. A notable family of solutions constructed in this manner is a sequence of rational solutions \(u=u_{n}(x;m)\) obtained from the seed function \(u_{0}(x)\equiv 1\) and parameters \(\alpha=-\beta=4m\). This family of solutions has been numerically and analytically explored in [5], and many conjectures were formulated there. While some of these were later resolved in the sequel [4], some conjectures remained open; see [5, Conjectures 4 & 5]. Conjecture 5 is concerned with the behavior of \(u_{n}(x;m)\) near the singular point \(x=0\). As was done in [5], writing \(z=nx\), \(U_{n}(z;m):=u_{n}(x;m)\) and considering large \(n\) for fixed \(m\) yields the differential equation \[\frac{\mathrm{d}^{2}U_{n}}{\mathrm{d}z^{2}}=\frac{1}{U_{n}}\left(\frac{\mathrm{d}U_{n}}{\mathrm{d}z}\right)^{2}-\frac{1}{z}\,\frac{\mathrm{d}U_{n}}{\mathrm{d}z}+\frac{4U_{n}^{2}+4}{z}+O(n^{-1}).\] Formally taking the limit and denoting the limiting function \(U(z;m)\) yields the parameter-free Painleve-III equation, referred to as \(\mathrm{PIII}(D_{8})\), \[\frac{\mathrm{d}^{2}U}{\mathrm{d}z^{2}}=\frac{1}{U}\left(\frac{\mathrm{d}U}{\mathrm{d}z}\right)^{2}-\frac{1}{z}\,\frac{\mathrm{d}U}{\mathrm{d}z}+\frac{4U^{2}+4}{z}.\tag{3}\] The content of Conjecture 5 is that this convergence holds at the level of solutions, not just equations.

### Results

To begin with, we prove Conjecture 5 from [5] in this work; to be more precise, we establish the following theorem.

**Theorem 1**.: _Fix \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\) and let \(u_{n}(x;m)\) be the family of rational solutions described above. There exists a unique solution \(U(z)=U(z;m)\) of the Painleve-III\((D_{8})\) equation (3) analytic at the origin with \(U(0;m)=\tan(\frac{\pi}{2}(m+\frac{1}{2}))\neq 0\) such that_ \[\lim_{j\to\infty}u_{2j}\left(\frac{z}{2j};m\right)=U(z;m)\quad\text{and}\quad\lim_{j\to\infty}u_{2j+1}\left(\frac{z}{2j+1};m\right)=-1/U(z;m),\quad z\notin\Sigma(m),\tag{4}\] _where \(\Sigma(m)\) denotes the union of all poles and zeros of \(z\mapsto U(z;m)\)._

We illustrate this theorem in Figure 1. The pictures are made using the code from [11], which was generously provided by the authors.

Figure 1: Left: the rational solution \(u_{10}(x;0.25)\). Right: the limiting solution \(U(z;0.25)\), where we recall the notation \(z=nx\) for \(n=10\). All poles of \(u_{10}(x;0.25)\) are simple with residue \(\frac{1}{2}\)/\(-\frac{1}{2}\), indicated in the plot with red/yellow circles. Likewise, all zeros of \(u_{10}(x;0.25)\) are simple with derivative \(2\)/\(-2\), indicated in the plot with pink/green squares. On the other hand, all poles and zeros of \(U(z;0.25)\) have multiplicity \(2\) and are marked with red circles and green squares, respectively.

In Section 2, we study the Maclaurin series solutions of (1); this characterizes the limiting solution of (3) via its initial conditions and produces a local version of Theorem 1; see Theorem 5 and Corollary 1 below. The rational solutions \(u_{n}(x;m)\) are related to the so-called Umemura polynomials \(s_{n}(x;m)\) by the formula \[u_{n}(x;m)=\frac{s_{n}(x;m-1)s_{n-1}(x;m)}{s_{n}(x;m)s_{n-1}(x;m-1)}.\tag{5}\] Indeed, a sequence of rational functions \(x\mapsto s_{n}(x;m)\) is determined by the recurrence relation \[s_{n+1}(x;m)=\frac{(4x+2m+1)s_{n}(x;m)^{2}-s_{n}^{\prime}(x;m)s_{n}(x;m)-x(s_{n}^{\prime\prime}(x;m)s_{n}(x;m)-s_{n}^{\prime}(x;m)^{2})}{2s_{n-1}(x;m)}\tag{6}\] with initial conditions \[s_{-1}(x;m)=1,\quad s_{0}(x;m)=1.\tag{7}\] It was shown in [8, 41] that the rational functions \(s_{n}(x;m)\) are actually polynomials.
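Both the Backlund step (2) and the recurrence (6)-(7) are easy to run symbolically. The following sympy sketch (our illustration, not the authors' code) generates the first members of the rational family from the seed \(u_{0}\equiv 1\), \(\alpha=-\beta=4m\), and verifies the identity (5) for small \(n\).

```python
import sympy as sp

x, m = sp.symbols('x m')

def backlund(u, alpha, beta):
    """Gromak's transformation (2): (u, alpha, beta) -> (u_hat, alpha+4, beta+4)."""
    num = 2*x*sp.diff(u, x) + 4*x*u**2 + 4*x - beta*u - 2*u
    den = u*(2*x*sp.diff(u, x) + 4*x*u**2 + 4*x + alpha*u + 2*u)
    return sp.cancel(num/den)

def umemura(n, mu):
    """Umemura polynomials s_n(x; mu) from the recurrence (6) with seeds (7)."""
    s_prev, s_cur = sp.Integer(1), sp.Integer(1)          # s_{-1}, s_0
    for _ in range(n):
        s_next = ((4*x + 2*mu + 1)*s_cur**2 - sp.diff(s_cur, x)*s_cur
                  - x*(sp.diff(s_cur, x, 2)*s_cur - sp.diff(s_cur, x)**2))/(2*s_prev)
        s_prev, s_cur = s_cur, sp.expand(sp.cancel(s_next))
    return s_cur

u, a, b = sp.Integer(1), 4*m, -4*m        # seed of the rational family
for n in range(1, 3):
    u = backlund(u, a, b)
    a, b = a + 4, b + 4
    ratio = umemura(n, m - 1)*umemura(n - 1, m)/(umemura(n, m)*umemura(n - 1, m - 1))
    assert sp.cancel(u - ratio) == 0      # identity (5)
    print(f"u_{n}(x; m) =", sp.factor(u))
```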
In Section 3, we use Corollary 1 to deduce asymptotics of the Umemura polynomials themselves. To formulate our result, we need to introduce a certain Fredholm determinant; more precisely, let \(K_{r}:L^{2}[0,r]\to L^{2}[0,r]\) denote the integral operator with the continuous Bessel kernel \[K(x,y)=\frac{\sqrt{x}J_{1}(\sqrt{x})J_{0}(\sqrt{y})-J_{0}(\sqrt{x})\sqrt{y}J_{1}(\sqrt{y})}{2(x-y)}.\] For any \(\lambda\in\mathbb{C}\), let \(D_{\lambda}(r)\) be the Fredholm determinant \[D_{\lambda}(r):=\det(\mathbf{1}-\lambda K_{r}).\tag{8}\] It is well-known (see, e.g., [30, Chapter 24]) that the Fredholm determinant \(D_{\lambda}(r)\) is an entire function of \(\lambda\). Since \(K_{r}\) is a trace-class integral operator, one of several equivalent ways to define \(D_{\lambda}(r)\) is via the Plemelj-Smithies formula \[D_{\lambda}(r)=\exp\left(-\sum_{\ell=1}^{\infty}\operatorname{Tr}K_{r}^{\ell}\frac{\lambda^{\ell}}{\ell}\right).\tag{9}\] The traces in (9) have explicit expressions as iterated integrals \[\operatorname{Tr}K_{r}^{\ell}:=\int_{0}^{r}K^{(\ell)}(t,t)\,\mathrm{d}t,\] where \[K^{(1)}(x,y)=K(x,y)\quad\text{and}\quad K^{(\ell)}(x,y)=\int_{0}^{r}K(x,t)K^{(\ell-1)}(t,y)\,\mathrm{d}t.\] By re-scaling the integrals to bring the \(r\)-dependence to the integrand and observing that \(J_{0}(\sqrt{xy})\) and \(\sqrt{xy}J_{1}(\sqrt{xy})\) are both entire functions with respect to both \(x\) and \(y\), we see that \(\operatorname{Tr}K_{r}^{\ell}\) and \(D_{\lambda}(r)\) can be extended to analytic functions of \(r\) in a neighborhood of \(r=0\), and in fact \(\operatorname{Tr}K_{r}^{\ell}=O(r^{\ell})\) as \(r\to 0\), from which we obtain \(D_{\lambda}(0)=1\).
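The determinant (8) is also convenient to evaluate numerically. The sketch below is ours; it uses a Gauss-Legendre Nystrom discretization instead of the trace series (9), with real sample values of \(\lambda\) and \(r\), and checks \(D_{\lambda}(0)=1\).

```python
import numpy as np
from scipy.special import j0, j1

def K(x, y):
    """Continuous Bessel kernel from the text (off-diagonal form)."""
    return (np.sqrt(x)*j1(np.sqrt(x))*j0(np.sqrt(y))
            - j0(np.sqrt(x))*np.sqrt(y)*j1(np.sqrt(y)))/(2*(x - y))

def fredholm_det(lam, r, n=80):
    """Nystrom approximation of D_lambda(r) = det(1 - lam K_r) on [0, r]."""
    t, w = np.polynomial.legendre.leggauss(n)
    t, w = 0.5*r*(t + 1), 0.5*r*w            # Gauss-Legendre nodes/weights on [0, r]
    A = K(t[:, None], t[None, :] + 1e-9)     # tiny shift skirts the removable diagonal
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(n) - lam*(sw[:, None]*A*sw[None, :]))

# lambda(m) = 1/(1 + exp(2*pi*i*m)) equals 1/2 at m = 0, and D_lambda(0) = 1.
print(fredholm_det(0.5, 1e-8))   # ~ 1.0
print(fredholm_det(0.5, 4.0))
```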
We are now ready to state our second theorem.

**Theorem 2**.: _Fix \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\). The Umemura polynomials admit the following limits along the even and odd subsequences for \(z\) in a neighborhood of the origin:_ \[\lim_{j\to\infty}\frac{s_{2j}\left(\frac{z}{2j+1};m\right)}{s_{2j}(0;m)}=\mathrm{e}^{2\mathrm{i}z}\left(\frac{U(z;m)}{U(0;m)}\right)^{-\frac{1}{4}}\sqrt{D_{\lambda(m)}\left(32\mathrm{i}z\right)},\tag{10}\] _and_ \[\lim_{j\to\infty}\frac{s_{2j-1}\left(\frac{z}{2j};m\right)}{s_{2j-1}(0;m)}=\mathrm{e}^{2\mathrm{i}z}\left(\frac{U(z;m)}{U(0;m)}\right)^{\frac{1}{4}}\sqrt{D_{\lambda(m)}\left(32\mathrm{i}z\right)},\tag{11}\] _where \(\lambda(m)=1/(1+\mathrm{e}^{2\pi\mathrm{i}m})\) and the square root and fractional powers denote the principal branches taking the value \(1\) at \(z=0\). Furthermore, the values of the Umemura polynomials at the origin have the leading asymptotics_ \[s_{2j}(0;m)\sim\sqrt{2\pi}\,\mathrm{e}^{4\zeta^{\prime}(-1)}\,\frac{j^{2j^{2}+j+\frac{m^{2}}{2}+\frac{m}{2}+\frac{1}{12}}\,\mathrm{e}^{-3j^{2}-j}\,2^{2j^{2}+2j}\,(-\cos(\pi m))^{j}}{G(\frac{5}{4}+\frac{m}{2})G(\frac{5}{4}-\frac{m}{2})G(\frac{7}{4}+\frac{m}{2})G(\frac{3}{4}-\frac{m}{2})},\quad j\to\infty,\tag{12}\] \[s_{2j-1}(0;m)\sim\mathrm{e}^{4\zeta^{\prime}(-1)}\,\frac{j^{2j^{2}-j+\frac{m^{2}}{2}+\frac{m}{2}+\frac{1}{12}}\,\mathrm{e}^{-3j^{2}+j}\,2^{2j^{2}}\,(\cos(\pi m))^{j}}{G(\frac{3}{4}+\frac{m}{2})G(\frac{3}{4}-\frac{m}{2})G(\frac{5}{4}+\frac{m}{2})G(\frac{1}{4}-\frac{m}{2})},\quad j\to\infty,\tag{13}\] _in which \(G\) denotes the Barnes \(G\)-function and \(\zeta\) denotes the Riemann zeta function._

In fact, one can check that the expressions on the right-hand side of (10) and (11) admit analytic continuation from a neighborhood of \(z=0\) to the whole \(z\)-plane. Our analysis of series solutions in Section 2 points to a more general statement about the coalescence of solutions of (1) to solutions of (3). The technical result leading to Theorem 1 by Maclaurin series (see Theorem 5 below) applies not only to rational solutions, but to all sequences of solutions with initial conditions converging to finite, nonvanishing limits. This, however, is a serious limitation, since \(x=0\) is a singular point of Painleve-III, and generic solutions of (1) will be singular at this point and behave like \(u(x)\simeq ax^{p}\), \(|\mathrm{Re}(p)|<1\). More specifically, based on symbolic computation we expect the asymptotic expansion for solutions of (1) in the form \[u(x)\sim\sum_{k=0}^{\infty}\sum_{l=0}^{k+1}(b_{kl}x^{2k+(2l-1)p}+c_{kl}x^{2k+1+2lp})\quad\text{as}\quad x\to 0.\] To tackle this issue, we develop a second approach that avoids series expansions and instead relies on the isomonodromy representation of the Painleve transcendents. It was first discovered by Garnier [14] and further explicated by Jimbo and Miwa in [24] that Painleve equations can be formulated as monodromy-preserving, or isomonodromic, deformations of corresponding \(2\times 2\) first-order systems of differential equations. This allows one to characterize solutions of a given Painleve equation in terms of a \(2\times 2\) Riemann-Hilbert problem. Such a monodromy representation was obtained for rational solutions of Painleve-III\((D_{6})\) in [5]. From this point of view, one can show that for fixed \(\alpha,\beta\in\mathbb{C}\), the solutions of (1) are parametrized by triples \((x_{1},x_{2},x_{3})\in\mathbb{C}^{3}\) on the cubic surface, known as the _monodromy manifold_, given by \[x_{1}x_{2}x_{3}+x_{1}^{2}+x_{2}^{2}+x_{2}\left(\mathrm{e}^{-\mathrm{i}\pi\alpha/4}-\mathrm{e}^{-\mathrm{i}\pi\beta/4}\right)+x_{1}\left(1-\mathrm{e}^{-\mathrm{i}\pi(\alpha+\beta)/4}\right)-\mathrm{e}^{-\mathrm{i}\pi(\alpha+\beta)/4}=0.\tag{14}\] The exponential constants appearing as coefficients in (14) will appear in multiple equations, making it convenient to introduce the notation \[e_{0}:=\mathrm{e}^{\mathrm{i}\pi\alpha/8}\neq 0\quad\text{and}\quad e_{\infty}:=\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\beta/8}\neq 0.\] In Section 4, we reproduce the derivation of the cubic surface (14) carried out in [42] and connect the quantities \(x_{i}\) with other invariant quantities that appear in the Riemann-Hilbert problem 1 associated with \(\mathrm{PIII}(D_{6})\).
In Section 5, we present an analogous parametrization of solutions of the \(D_{8}\) degeneration (3) of \(\mathrm{PIII}\) in terms of triples \((y_{1},y_{2},y_{3})\in\mathbb{C}^{3}\) appearing in the Riemann-Hilbert problem 2 and satisfying \[y_{1}y_{2}y_{3}+y_{1}^{2}+y_{2}^{2}+1=0.\tag{15}\] Away from its singular points, we parametrize points \((x_{1},x_{2},x_{3})\) on the cubic surface (14) using parameters \(e_{1},e_{2}\) appearing naturally from the point of view of the Riemann-Hilbert problem. In fact, \(e_{1}^{2},e_{1}^{-2}\) are eigenvalues of a certain monodromy matrix for a circuit about the origin for a linear system, see (78). The parameter \(e_{2}\) appears in the connection matrix for the same system, see (107). We call \((e_{1},e_{2})\) monodromy parameters.

**Definition 1** (See Section 4 for details).: _We say the monodromy parameters \((e_{1},e_{2})\) are **generic** if_

1. \(e_{1}^{4}\neq 1\),
2. \(e_{1}e_{2}\neq 0\),
3. \(e_{1}^{2}\neq e_{\infty}^{\pm 2}\) _and_ \(e_{1}^{2}\neq e_{0}^{\pm 2}\).

Before moving on, we pause to make a few observations.

* Condition (ii) implies that generic monodromy parameters are nonvanishing, so we may write \[e_{1}=\mathrm{e}^{\mathrm{i}\pi\mu}\quad\text{and}\quad e_{2}=\mathrm{e}^{\mathrm{i}\pi\eta}.\tag{16}\]
* Since \(e_{1}^{2},e_{1}^{-2}\) are the essential quantities related to the complex parameter \(e_{1}\) in our parametrization, and the former are insensitive to a change in sign of \(e_{1}\), we may take \(e_{1}\) to be in the right half plane; in view of (16), this corresponds to \(-\frac{1}{2}<\mathrm{Re}(\mu)\leq\frac{1}{2}\). Note that the choice of including the upper versus the lower endpoint of this range is arbitrary.
* Due to \(e_{1}^{2},e_{1}^{-2}\) being eigenvalues of the same matrix (see (106) below), the parameters \(\mu,-\mu\) correspond to the same solution of (1). While we could restrict to parameters where \(\operatorname{Re}(\mu)>0\) (say), we choose not to, and in turn arrive at slightly simpler formulas; see Remark 6 below.
* It turns out that the parameter \(e_{2}\) is determined up to a sign as well; see Remark 2 below. As such, we take it to be in the right half plane as well, or \(-\frac{1}{2}<\operatorname{Re}(\eta)\leq\frac{1}{2}\).

With this in mind, we can now state a more general theorem.
**Theorem 3**.: _Let \(u_{0}\) be the solution of (1) corresponding to monodromy data \((\alpha,\beta,x_{1},x_{2},x_{3})\) parametrized by generic monodromy parameters \((e_{1},e_{2})\) using formulae (consistent with (14))_ \[x_{1}=\frac{e_{1}^{2}\left(e_{0}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{1}^{2}e_{ \infty}^{2}-1\right)+e_{0}^{2}e_{1}^{2}-1\right)\left(\left(e_{0}^{2}e_{1}^{2} -1\right)^{2}+e_{0}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{0}^{2}-e_{1}^{2}\right) \left(e_{\infty}^{2}-e_{1}^{2}\right)\right)}{e_{0}^{4}e_{2}^{2}e_{\infty}^{2} \left(1-e_{0}^{2}e_{1}^{2}\right)\left(e_{1}^{4}-1\right)^{2}} \tag{17}\] \[x_{2}=\frac{\left(e_{0}^{2}e_{2}^{2}e_{1}^{2}e_{\infty}^{2}\left(e_{1}^{2}-e_{ \infty}^{2}\right)+1-e_{0}^{2}e_{1}^{2}\right)\left(\left(e_{0}^{2}e_{1}^{2}-1 \right)^{2}+e_{0}^{2}e_{1}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{0}^{2}-e_{1}^{2} \right)\left(e_{1}^{2}e_{\infty}^{2}-1\right)\right)}{e_{0}^{4}e_{2}^{2}e_{ \infty}^{2}\left(1-e_{0}^{2}e_{1}^{2}\right)\left(e_{1}^{4}-1\right)^{2}} \tag{18}\] \[x_{3}=e_{1}^{2}+\frac{1}{e_{1}^{2}}, \tag{19}\] _and let \(U(z)=U(z;y_{1},y_{2},y_{3})\) denote the solution of (3) with monodromy data (consistent with (15))_ \[y_{1}=\mathrm{i}\sqrt{\frac{e_{\infty}^{2}-e_{1}^{2}}{1-e_{0}^{2}e_{1}^{2}}} \cdot\frac{\left(1-e_{0}^{2}e_{1}^{2}+e_{0}^{2}e_{1}^{2}e_{2}^{2}e_{\infty}^{ 2}(e_{1}^{2}-e_{\infty}^{2})\right)}{e_{0}e_{1}e_{2}e_{\infty}(e_{\infty}^{2} -e_{1}^{2})\left(e_{1}^{4}-1\right)}, \tag{20}\] \[y_{2}=\mathrm{i}\sqrt{\frac{e_{\infty}^{2}-e_{1}^{2}}{1-e_{0}^{2}e_{1}^{2}}} \cdot\frac{e_{1}\left(1-e_{0}^{2}e_{1}^{2}+e_{0}^{2}e_{1}^{2}e_{2}^{2}e_{\infty }^{2}(e_{1}^{2}-e_{\infty}^{2})\right)}{e_{0}e_{2}e_{\infty}(e_{\infty}^{2} -e_{1}^{2})\left(e_{1}^{4}-1\right)}, \tag{21}\] _and_ \[y_{3}=-e_{1}^{2}-\frac{1}{e_{1}^{2}}. \tag{22}\] _If \(u_{n}\) is the nth iterate of \(u_{0}\) under transformation (2), then_ \[\lim_{j\to\infty}u_{2j}(z/2j)=U(z;y_{1},y_{2},y_{3}),\quad\lim_{j\to\infty}u_ {2j+1}(z/(2j+1))=-1/U(z;y_{1},y_{2},y_{3}),\quad z\notin\Sigma(y_{1},y_{2},y_ {3})\] _where \(|\mathsf{Arg}(z)|<\pi\) and \(\Sigma(y_{1},y_{2},y_{3})\) is the union of all poles and zeros of \(z\mapsto U(z;y_{1},y_{2},y_{3})\)._ We illustrate this result for solutions that are not single-valued near the origin in Figure 2. Note that while the point \((x_{1},x_{2},x_{3})\in\mathbb{C}^{3}\) on the monodromy manifold (14) only depends on the squares of \(e_{1},e_{2}\), the point \((y_{1},y_{2},y_{3})\in\mathbb{C}^{3}\) on the monodromy manifold (15) of the limiting solution \(U(z;y_{1},y_{2},y_{3})\) of (3) has a sign ambiguity in the coordinates \(y_{1}\) and \(y_{2}\). However, if either \(e_{1}\) or \(e_{2}\) changes sign, then the signs of \(y_{1}\) and \(y_{2}\) change together, and it turns out that the triples \((y_{1},y_{2},y_{3})\) and \((-y_{1},-y_{2},y_{3})\) both lie on the surface (15) together and correspond to the same solution of (3); see Remark 5 below. Similarly, there is no need for us to specify the sign of the square roots in (20)-(21) provided they are both taken to be the same. One might expect a similar ambiguity to arise from the replacement of \(e_{1}^{2}\mapsto e_{1}^{-2}\), since both are eigenvalues of the same matrix, but it turns out that \((x_{1},x_{2},x_{3})\) is invariant under this change provided \(e_{2}\) is appropriately modified, and \((y_{1},y_{2},y_{3})\) remains invariant up to the sign ambiguity described above, see Remark 6 below. The proof of Theorem 3 is given in Section 7, and relies on Riemann-Hilbert analysis. 
The idea of the proof is to use parametrices constructed out of confluent hypergeometric functions near zero and infinity to reduce the setup to a Riemann-Hilbert problem 6 on the circle. After some additional transformations, the problem allows taking a large \(n\) limit, which gives us a Riemann-Hilbert problem 8 with a jump on the circle in terms of Bessel functions. Further transformations using parametrices constructed out of Bessel functions simplify the jump, and we arrive at a Riemann-Hilbert problem 2 for Painleve-III\((D_{8})\). In Section 9.1 we transform this into another Riemann-Hilbert problem 10 for (3) already known in the literature. It is worth pointing out that even in the case of rational solutions, the Painleve-III\((D_{6})\) Riemann-Hilbert problem 1 exhibits Stokes phenomenon near both singular points [5] and hence requires the use of confluent hypergeometric parametrices to desingularize the problem before passing to the limit. While the formulae for \(y_{i}\) are daunting, they drastically simplify in the case of the rational solutions, where \(u_{0}\) has monodromy data parametrized by \[\alpha=-\beta=4m,\quad e_{0}^{2}=-e_{\infty}^{2}=\mathrm{e}^{\mathrm{i}\pi m},\quad e_{1}^{2}=\mathrm{i},\quad\text{and}\quad e_{2}=\sqrt{\mathrm{e}^{-2\pi\mathrm{i}m}\frac{1-\mathrm{i}\mathrm{e}^{\pi\mathrm{i}m}}{1+\mathrm{i}\mathrm{e}^{\pi\mathrm{i}m}}},\tag{23}\] see Section 4.5 for details. With parameters chosen as in (23), the genericity conditions in Definition 1 imply \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\). Then, we have \[y_{1}=\frac{\mathrm{i}\mathrm{e}^{\mathrm{i}\pi m}}{\sqrt{1+\mathrm{e}^{2\pi\mathrm{i}m}}},\quad y_{2}=\frac{\mathrm{i}}{\sqrt{1+\mathrm{e}^{2\pi\mathrm{i}m}}},\quad y_{3}=0.\] One can check that with these choices of \(\alpha,\beta,e_{1}\), and \(e_{2}\) we have \(U(z;y_{1},y_{2},y_{3})=U(z;m)\), cf. Theorems 1 and 3. By further specializing \(U(z;m)\) to \(m\in\mathrm{i}\mathbb{R}+\mathbb{Z}\), we arrive at highly symmetric solutions of \(\mathrm{PIII}(D_{8})\) which have appeared in various works in nonlinear optics [39] and as a limiting object of various families of solutions to the focusing nonlinear Schrodinger equation in different regimes [1, 2]. Furthermore, these solutions can be identified with pure imaginary solutions of the radial reduction of the sine-Gordon equation, see e.g. [12, Chapter 13]. It is interesting that they are related to another limiting object appearing in random matrix theory: the Bessel kernel determinant. The explicit relation is described in Corollary 2 below. A consequence of the analysis in Section 7 below is a description of the behavior near the origin of solutions \(u(x)\) of (1) corresponding to generic monodromy parameters \((e_{1},e_{2})\).

Figure 2: Left: the solution \(u_{10}(x)\) of (1) generated by ten iterations of (2) with seed \(u_{0}(x)\) corresponding to monodromy data \(\mu=0.23+0.39\mathrm{i}\) (see (16)), \(e_{2}=-0.45-0.96\mathrm{i}\) and \(\alpha=40.5+0.63\mathrm{i}\), \(\beta=40.98+0.59\mathrm{i}\). Right: the limiting solution \(U(z)\) of (3). The labeling of poles and zeros is the same as in Figure 1. Note that both \(u_{10}(x)\) and \(U(z)\) are branched at the origin.

**Proposition 1**.: _Let \(u(x)\) be the solution of the Painleve-III\((D_{6})\) equation associated to \(\mu,\eta\in\mathbb{C}\) via the generic monodromy parameters given in (16) with \(-\frac{1}{2}<\mathrm{Re}(\eta)\leq\frac{1}{2}\)._
_If \(0<|\mathrm{Re}(\mu)|<\frac{1}{2}\), then it holds that_ \[u(x)=-\frac{\Gamma(1-2\epsilon\mu)^{2}\Gamma\left(\epsilon\mu-\frac{\alpha}{8}\right)\Gamma\left(\epsilon\mu+\frac{\beta}{8}+\frac{1}{2}\right)}{\Gamma(2\epsilon\mu)^{2}\Gamma\left(-\epsilon\mu-\frac{\alpha}{8}+1\right)\Gamma\left(-\epsilon\mu+\frac{\beta}{8}+\frac{1}{2}\right)}\,x^{4\epsilon\mu-1}\left(1+\mathcal{O}(x^{\delta})\right)\left(\frac{e_{0}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{0}^{2}-e_{1}^{2}\right)\left(e_{1}^{2}-e_{\infty}^{2}\right)}{\left(e_{0}^{2}e_{1}^{2}-1\right)^{2}}\right)^{\epsilon}\tag{24}\] _as \(x\to 0\) with \(|\mathrm{Arg}(x)|<\pi\), where \(\delta=\min(1,2-4\mathrm{Re}(\mu))\) and \(\epsilon=\mathrm{sign}(\mathrm{Re}(\mu))\)._

Proposition 1 was obtained by Jimbo in [23, Theorem 3.2]. We present its proof using a Riemann-Hilbert approach in Section 8, which follows the steps of the proof of Theorem 3. The case \(\mathrm{Re}(\mu)=0\) can be handled similarly, but we exclude it here because two distinct terms arise at the same leading order, resulting in a more complicated formula. From this formula one can see that if \(\mathrm{Re}(\mu)=0\) the solution can exhibit sinusoidal oscillations with frequency diverging as \(x^{-1}\), consistent with an essential singularity at the origin. To apply Proposition 1 to the rational solutions (5), or more generally to the sequence of Backlund iterates starting from any seed solution of (1), requires knowledge of the corresponding sequence of monodromy data. This is the content of the following proposition, which we prove in Section 6.

**Proposition 2**.: _Let \(u_{0}(x)\) be the solution of (1) with parameters \((\alpha,\beta)\) and monodromy data \((\mu,\eta)\) (see (16)) with \(-\frac{1}{2}<\mathrm{Re}(\mu),\mathrm{Re}(\eta)\leq\frac{1}{2}\). Then, the Backlund iterates \(u_{n}(x)\) are parametrized by the following monodromy data_ \[e_{1,n}^{2}=\mathrm{e}^{2\pi\mathrm{i}\mu_{n}},\quad e_{2,n}=e_{2},\quad e_{0,n}=\mathrm{e}^{\mathrm{i}\pi(\alpha+4n)/8},\quad e_{\infty,n}=\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi(\beta+4n)/8},\] _where_1

Footnote 1: To ensure \(-1/2<\mathrm{Re}(\mu_{n})\leq 1/2\), we set \(\epsilon=-1\) in the case where \(\mathrm{Re}(\mu)=0\).

One notable application of Propositions 1 and 2 is the case corresponding to the rational solutions of \(\mathrm{PIII}(D_{6})\) described above. In [8], the authors found a product formula for \(u_{n}(0;m)\) (see (41) in Section 2). Applying Propositions 1 and 2 to this case yields the closed-form formula \[u_{n}(0;m)=\frac{\Gamma\left(\frac{1}{4}-\frac{m}{2}-\frac{n}{2}\right)}{\Gamma\left(\frac{1}{4}-\frac{m}{2}+\frac{n}{2}\right)}\frac{\Gamma\left(\frac{3}{4}-\frac{m}{2}+\frac{n}{2}\right)}{\Gamma\left(\frac{3}{4}-\frac{m}{2}-\frac{n}{2}\right)}.\tag{25}\]
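Formula (25) can be cross-checked against direct iteration of the Backlund transformation (2). The sympy sketch below is ours, with the sample value \(m=1/4\); it confirms the agreement for the first few \(n\).

```python
import sympy as sp

x = sp.symbols('x')
m = sp.Rational(1, 4)                    # sample parameter, as in Figure 1

def backlund(u, alpha, beta):
    """Gromak's transformation (2): (u, alpha, beta) -> (u_hat, alpha+4, beta+4)."""
    num = 2*x*sp.diff(u, x) + 4*x*u**2 + 4*x - beta*u - 2*u
    den = u*(2*x*sp.diff(u, x) + 4*x*u**2 + 4*x + alpha*u + 2*u)
    return sp.cancel(num/den)

def u_at_zero(n):
    """Right-hand side of the closed-form formula (25)."""
    g = sp.gamma
    return (g(sp.Rational(1, 4) - m/2 - sp.Rational(n, 2))
            * g(sp.Rational(3, 4) - m/2 + sp.Rational(n, 2))
            / (g(sp.Rational(1, 4) - m/2 + sp.Rational(n, 2))
               * g(sp.Rational(3, 4) - m/2 - sp.Rational(n, 2))))

u, a, b = sp.Integer(1), 4*m, -4*m       # seed of the rational family
for n in range(1, 5):
    u = backlund(u, a, b)
    a, b = a + 4, b + 4
    assert abs(sp.N(u.subs(x, 0) - u_at_zero(n))) < 1e-12
    print(f"u_{n}(0; 1/4) =", u.subs(x, 0))
```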
Another observation is that the expression on the right-hand side of (24) in Proposition 1, evaluated at the \(n\)-dependent monodromy data from Proposition 2 and at argument \(x=\frac{z}{n}\), has a finite limit along even and odd subsequences of \(n\). The limiting expressions relate to the behavior of \(U(z;y_{1},y_{2},y_{3})\), which we can take from the literature:

**Theorem 4** ([12, 21, 33]).: _Let \(U(z;y_{1},y_{2},y_{3})\) be the solution of the Painleve-III\((D_{8})\) equation (3) associated to \((y_{1},y_{2},y_{3})\in\mathbb{C}^{3}\) parametrized by generic monodromy parameters \((e_{1},e_{2})\) using formulae (20)-(22). Then, it holds that_ \[U(z)=-\frac{\Gamma(1-2\epsilon\mu)^{2}}{\Gamma(2\epsilon\mu)^{2}\,2^{4\epsilon\mu-1}}\,z^{4\epsilon\mu-1}\left(1+\mathcal{O}(z)\right)\left(\frac{e_{0}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{\infty}^{2}-e_{1}^{2}\right)}{\left(e_{0}^{2}e_{1}^{2}-1\right)}\right)^{\epsilon}\] _as \(z\to 0\) with \(|\mathrm{Arg}(z)|<\pi\)._

We pause to note that coalescence between Painleve equations has a long history in the literature; a coalescence diagram of all six Painleve equations already appeared in Okamoto's work [36], and was later expanded on in [34]. Later, a geometric interpretation of the coalescence was given in [6]. That being said, the above degenerations are carried out on the level of the differential equation, so that given a solution of a Painleve equation, one does not have a characterization of the solution one arrives at under the coalescence procedure. Confluence on the level of the solutions of the differential equation has also appeared in the literature; one of the most interesting examples is the merging of regular singularities and the corresponding creation of an irregular singularity. This process was studied in the works [17, 29]. In the PhD thesis [19], the confluence was studied in more detail in the cases Painleve VI \(\to\) Painleve V and Painleve V \(\to\) Painleve III\((D_{6})\). In the works [27, 28], the authors considered a transition from Painleve II \(\to\) Painleve I that is different in nature.

### Overview of the paper

In Section 2, we describe the coalescence map \(u\mapsto U\) in terms of initial conditions and prove Theorem 1 using Maclaurin series of these solutions. We apply it to Umemura polynomials in Section 3. In Sections 4 and 5, we describe the monodromy representations of \(\mathrm{PIII}(D_{6})\) and \(\mathrm{PIII}(D_{8})\), respectively. In Section 6, we explain the Schlesinger transformations underlying Gromak's Backlund transformation (2) and prove Proposition 2. In Section 7, we prove Theorem 3 by Riemann-Hilbert methods. We recycle the same methodology to prove Proposition 1 in Section 8. In Section 9.1, we perform a Fabry-type2 transformation to the Painleve-III\((D_{8})\) Riemann-Hilbert problem naturally arising from our limit process to put it in more canonical form and justify its solvability.

Footnote 2: Named after Eugène Fabry for his work in [10]; see also [20, Chapter 17.53].

**Acknowledgements.** We would like to thank Roozbeh Gharakhloo and Deniz Bilman for bringing to our attention applications of our work to \(2j-k\) determinants and the Suleimanov solutions, respectively. We would like to thank Marco Fasondini for providing us the program that we used to numerically confirm our results and produce Figures 1 and 2. The work of Andrei Prokhorov was supported by NSF MSPRF grant DMS-2103354, NSF grant DMS-1928930, and RSF grant 22-11-00070. Part of the work was done while Prokhorov was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2021 semester. Ahmad Barhoumi was partially supported by the NSF under grant DMS-1812625. Peter Miller was partially supported by the NSF under grants DMS-1812625 and DMS-2204896.

## 2. Identifying the limiting solution of Painleve-III\((D_{8})\) using Maclaurin series
Identifying the limiting solution of Painleve-III\((D_{8})\) using Maclaurin series The Painleve-III\((D_{6})\) equation (1) for \(u_{n}(x;\alpha,\beta)\) implies the following equivalent differential equation for \(U_{n}(z;\alpha,\beta):=u_{n}(z/n;\alpha,\beta)\): \[U_{n}^{\prime\prime}=\frac{(U_{n}^{\prime})^{2}}{U_{n}}-\frac{U_{n}^{\prime}} {z}+\frac{\alpha_{n}U_{n}^{2}}{z}+\frac{\beta_{n}}{z}+\gamma_{n}U_{n}^{3}+ \frac{\delta_{n}}{U_{n}}, \tag{26}\] where \[\alpha_{n}:=4+\frac{\alpha}{n},\quad\beta_{n}:=4+\frac{\beta}{n},\quad\gamma_ {n}:=\frac{4}{n^{2}},\quad\delta_{n}:=-\frac{4}{n^{2}}. \tag{27}\] Note that for arbitrary \(\alpha\in\mathbb{C}\) and \(\beta\in\mathbb{C}\) fixed and \(n>0\) sufficiently large we have the following crude inequalities: \[|\alpha_{n}|\leq 5,\quad|\beta_{n}|\leq 5,\quad|\gamma_{n}|\leq 1,\quad| \delta_{n}|\leq 1. \tag{28}\] We construct solutions of (26) analytic at \(z=0\) as follows. First multiply (26) through by \(zU_{n}(z)\) to obtain \[-zU_{n}U_{n}^{\prime\prime}+z(U_{n}^{\prime})^{2}-U_{n}U_{n}^{\prime}+\alpha_{ n}U_{n}^{3}+\beta_{n}U_{n}+\gamma_{n}zU_{n}^{4}+\delta_{n}z=0. \tag{29}\] We substitute into (29) a power series \[U_{n}(z)=\sum_{k=0}^{\infty}v_{k}z^{k}, \tag{30}\] and express all products through the Cauchy product formula. The left-hand side of (29) is then a formal power series in \(z\), and assuming that \(v_{0}\neq 0\), the coefficient of \(z^{0}\) yields \[v_{1}=\beta_{n}+\alpha_{n}v_{0}^{2}, \tag{31}\] the coefficient of \(z^{1}\) yields \[v_{2}=\frac{1}{4v_{0}}\left[3\alpha_{n}v_{0}^{2}v_{1}+\beta_{n}v_{1}+\gamma_{n }v_{0}^{4}+\delta_{n}\right],\] and for \(k\geq 2\), the coefficient of \(z^{k}\) yields \[v_{k+1}=\frac{1}{v_{0}(k+1)^{2}}\left[\sum_{a=0}^{k}a(k+1-2a)v_{a}v _{k+1-a}+\alpha_{n}\sum_{a=0}^{k}\sum_{b=0}^{k-a}v_{a}v_{b}v_{k-a-b}\right.\\ +\beta_{n}v_{k}+\gamma_{n}\sum_{a=0}^{k-1}\sum_{b=0}^{k-1-a}\sum_{ c=0}^{k-1-a-b}v_{a}v_{b}v_{c}v_{k-1-a-b-c}\right],\quad k\geq 2. \tag{32}\] We may omit the term with \(a=0\) from the first sum on the right-hand side. Using \[k+1\geq 1\quad\text{and}\quad\frac{a|k+1-2a|}{(k+1)^{2}}\leq 1\quad\text{for }a=0, \dots,k,\] along with the inequalities (28), the coefficients in the series (30) are subject to the inequalities \[\begin{split}|v_{1}|&\leq 5(1+|v_{0}|^{2}),\\ |v_{2}|&\leq\frac{1}{4|v_{0}|}\left[15|v_{0}|^{2}|v _{1}|+5|v_{1}|+|v_{0}|^{4}+1\right],\\ |v_{k+1}|&\leq\frac{1}{|v_{0}|}\left[\sum_{a=1}^{k}| v_{a}||v_{k+1-a}|+5\sum_{a=0}^{k}\sum_{b=0}^{k-a}|v_{a}||v_{b}||v_{k-a-b}|\right.\\ &\quad\left.+5|v_{k}|+\sum_{a=0}^{k-1}\sum_{b=0}^{k-1-a}\sum_{c=0 }^{k-1-a-b}|v_{a}||v_{b}||v_{c}||v_{k-1-a-b-c}|\right],\quad k\geq 2.\end{split} \tag{33}\] Now we define a sequence of positive numbers \(\{\mathrm{Y}_{k}\}_{k=0}^{\infty}\) by taking \(\mathrm{Y}_{0}>0\) arbitrary and setting \[\begin{split}\mathrm{Y}_{1}&=5(1+\mathrm{Y}_{0}^{2}),\\ \mathrm{Y}_{2}&=\frac{1}{4\mathrm{Y}_{0}^{2}}\left[15 \mathrm{Y}_{0}^{2}\mathrm{Y}_{1}+5\mathrm{Y}_{1}+\mathrm{Y}_{0}^{4}+1\right] \\ &=\frac{1}{4\mathrm{Y}_{0}^{2}}\left[76\mathrm{Y}_{0}^{4}+100 \mathrm{Y}_{0}^{2}+26\right],\\ \mathrm{Y}_{k+1}&=\frac{1}{\mathrm{Y}_{0}}\left[\sum _{a=1}^{k}\mathrm{Y}_{a}\mathrm{Y}_{k+1-a}+5\sum_{a=0}^{k}\sum_{b=0}^{k-a} \mathrm{Y}_{a}\mathrm{Y}_{b}\mathrm{Y}_{k-a-b}\right.\\ &\quad\left.+5\mathrm{Y}_{k}+\sum_{a=0}^{k-1}\sum_{b=0}^{k-1-a} \sum_{c=0}^{k-1-a-b}\mathrm{Y}_{a}\mathrm{Y}_{b}\mathrm{Y}_{c}\mathrm{Y}_{k-1-a -b-c}\right],\quad k\geq 2.\end{split} \tag{34}\] Following [22, Proposition \(1.1.1\), pg. 
\(261\)], we construct an _algebraic_ equation formally satisfied by the power series \[\mathcal{U}(z)=\sum_{k=0}^{\infty}\mathrm{Y}_{k}z^{k}. \tag{35}\] We first rewrite the generic \(k\geq 2\) equation in (34) in the equivalent form \[-3\mathrm{Y}_{0}\mathrm{Y}_{k+1}+\sum_{a=0}^{k+1}\mathrm{Y}_{a} \mathrm{Y}_{k+1-a}+5\sum_{a=0}^{k}\sum_{b=0}^{k-a}\mathrm{Y}_{a}\mathrm{Y}_{b} \mathrm{Y}_{k-a-b}\\ +5\mathrm{Y}_{k}+\sum_{a=0}^{k-1}\sum_{b=0}^{k-1-a}\sum_{c=0}^{k- 1-a-b}\mathrm{Y}_{a}\mathrm{Y}_{b}\mathrm{Y}_{c}\mathrm{Y}_{k-1-a-b-c}=0,\quad k \geq 2. \tag{36}\] Comparing with (35), this is the coefficient of \(z^{k}\) in the power series expansion about \(z=0\) of the equation \[-\frac{3\mathrm{Y}_{0}}{z}\mathcal{U}+\frac{1}{z}\mathcal{U}^{2}+5\mathcal{U} ^{3}+5\mathcal{U}+z\mathcal{U}^{4}=0.\] More generally, since \(k\geq 2\) holds in (36), these relations are consistent also with the equation \[-\frac{3Y_{0}}{z}\mathcal{U}+\frac{1}{z}\mathcal{U}^{2}+5\mathcal{U}^{3}+5 \mathcal{U}+z\mathcal{U}^{4}=\frac{A}{z}+B+Cz. \tag{37}\] We now pick the constants \(A,B,C\) so that (37) is also consistent with the first two identities in (34) and \(\mathcal{U}(0)=Y_{0}\) in the series (35). Indeed, \(\mathcal{U}(0)=Y_{0}\) is equivalent to the following equation obtained from the coefficient of \(z^{-1}\) in (37): \[-3Y_{0}^{2}+Y_{0}^{2}=A\implies A=-2Y_{0}^{2}.\] Then taking \(Y_{1}\) from the first identity in (34), the constant term in (37) gives the equation \[\begin{split}-3Y_{0}Y_{1}+2Y_{0}Y_{1}+5Y_{0}^{3}+5Y_{0}=B\implies B& =-5Y_{0}(1+Y_{0}^{2})+5Y_{0}^{3}+5Y_{0}\\ &=0.\end{split}\] Finally, obtaining also \(Y_{2}\) from the second identity in (34), the coefficient of \(z^{1}\) in (37) is \[\begin{split}-3Y_{0}Y_{2}+2Y_{0}Y_{2}+Y_{1}^{2}+15Y_{0}^{2}Y_{1}+ 5Y_{1}+Y_{0}^{4}=C\implies\\ C&=-Y_{0}Y_{2}+Y_{1}^{2}+15Y_{0}^{2}Y_{1}+5Y_{1}+Y_ {0}^{4}\\ &=-\frac{1}{4}\left[76Y_{0}^{4}+100Y_{0}^{2}+26\right]+25(1+Y_{0 }^{2})^{2}+75Y_{0}^{2}(1+Y_{0}^{2})+25(1+Y_{0}^{2})+Y_{0}^{4}\\ &=82Y_{0}^{4}+125Y_{0}^{2}+\frac{87}{2}.\end{split}\] The formal series (35) with the recurrence relations (34) is therefore consistent with the algebraic equation (rewriting (37) with the above expressions for \(A,B,C\)): \[-3Y_{0}\mathcal{U}+\mathcal{U}^{2}+2Y_{0}^{2}=z\left[\left(82Y_{0}^{4}+125Y_{ 0}^{2}+\frac{87}{2}\right)z-5\mathcal{U}^{3}-5\mathcal{U}-z\mathcal{U}^{4} \right]. \tag{38}\] However, it is a straightforward application of the implicit function theorem to observe that (38) has a unique solution \(\mathcal{U}=\mathcal{U}(z)\) analytic at \(z=0\) with \(\mathcal{U}(0)=Y_{0}>0\) (this condition guarantees that the root \(\mathcal{U}=Y_{0}\) of the quadratic on the left-hand side of (38) is simple). This proves that the formal series (35) with coefficients determined from (34) has a positive radius of convergence for each given value \(Y_{0}>0\). **Theorem 5**.: _Fix \(\alpha\in\mathbb{C}\) and \(\beta\in\mathbb{C}\) and let \(\{U_{n}(z;\alpha,\beta)\}_{n=1}^{\infty}\) be a sequence of solutions of (26) that are analytic at the origin \(z=0\) and suppose that \(\lim_{n\to\infty}U_{n}(0;\alpha,\beta)=v_{\infty,0}=v_{\infty,0}(\alpha,\beta)\neq 0\). 
Then there exists a radius \(\rho>0\) such that for all \(n\) sufficiently large \(U_{n}(z;\alpha,\beta)\) is analytic for \(|z|<\rho\) and such that \(U_{n}(z;\alpha,\beta)\to U_{\infty}(z;\alpha,\beta)\) as \(n\to\infty\) uniformly for \(|z|<\rho\), where \(U(z)=U_{\infty}(z;\alpha,\beta)\) is the unique solution of the Painleve-III(\(D_{8}\)) equation (3) that is analytic at the origin with \(U_{\infty}(0;\alpha,\beta)=v_{\infty,0}\)._ Proof.: Let \(\{v_{n,k}\}_{k=0}^{\infty}\) denote the power series coefficients of \(U_{n}(z;\alpha,\beta)\) as in (30). Define \(Y_{0}\) by \(Y_{0}>2|v_{\infty,0}|\) (say), and obtain the subsequent coefficients \(\{Y_{k}\}_{k=1}^{\infty}\) via (34). Comparing (33)-(34) then shows that for all \(n\) sufficiently large, \(|v_{n,k}|\leq Y_{k}\) holds for all \(k=0,1,2,\dots\). For each fixed \(k=0,1,2,\dots\), the recurrence relations (31)-(32) together with the limit \(v_{n,0}\to v_{\infty,0}\) show that \(v_{n,k}\) tends to a limiting value \(v_{\infty,k}\) as \(n\to\infty\), with \(|v_{\infty,k}|\leq Y_{k}\). The convergence of \(U_{n}(z;\alpha,\beta)\) to a limiting analytic function \(U_{\infty}(z;\alpha,\beta)\) with \(U_{\infty}(0;\alpha,\beta)=v_{\infty,0}(\alpha,\beta)\) then follows by dominated convergence. That the limiting analytic function \(U_{\infty}(z;\alpha,\beta)\) is a solution of (3) follows from passing to the limit in each term of (26) using (27). That this solution is the unique analytic solution of (3) with the specified value at \(z=0\) then follows from passing to the limit in the recurrence relations (31)-(32). Now we apply this result to the rational solutions \(u_{n}(x;m)\) of (1), corresponding to \(\alpha=-\beta=4m\). To this end, we point out that in [8], the authors studied the Umemura polynomials \(s_{n}(x;m)\) at \(x=0\), and we begin by recalling one of their results. **Lemma 1** ([8]).: _Set \(y:=m+\frac{1}{2}\) and write \(\phi_{n}(y):=s_{n}(0;m)\). If \(n=2k\) is even, then_ \[\phi_{2k}(y)=y^{k}(y^{2}-1)^{k}\prod_{j=1}^{k-1}(y^{2}-(2j)^{2})^{k-j}(y^{2}-(2j+ 1)^{2})^{k-j},\quad k=1,2,3,\dots,\] _while if \(n=2k-1\) is odd, then_ \[\phi_{2k-1}(y)=\phi_{2k}(y)\prod_{j=1}^{k}(y^{2}-(2j-1)^{2})^{-1},\quad k=1,2,3,\dots. \tag{39}\] It follows from two identities above that also \[\phi_{2k+1}(y)=\phi_{2k}(y)\cdot y\cdot\prod_{j=1}^{k}(y^{2}-(2j)^{2}),\quad k =1,2,3,\dots. \tag{40}\] Using (5) and \(s_{n}(0;m-1)=\phi_{n}(m-\frac{1}{2})\) one has \[u_{n}(0;m)=\frac{\phi_{n}\left(m-\frac{1}{2}\right)\phi_{n-1}\left(m+\frac{1}{ 2}\right)}{\phi_{n}\left(m+\frac{1}{2}\right)\phi_{n-1}\left(m-\frac{1}{2} \right)}. \tag{41}\] Therefore, from (39) one gets that \[u_{2k}(0;m)=\prod_{j=1}^{k}\frac{\left(m-\frac{1}{2}\right)^{2}-(2j-1)^{2}}{ \left(m+\frac{1}{2}\right)^{2}-(2j-1)^{2}}=\frac{\prod_{j=1}^{k} \left(1-\left(\frac{m-\frac{1}{2}}{2j-1}\right)^{2}\right)}{\prod_{j=1}^{k} \left(1-\left(\frac{m+\frac{1}{2}}{2j-1}\right)^{2}\right)}. \tag{42}\] Similarly, from (40) one gets \[u_{2k+1}(0;m)=\frac{m-\frac{1}{2}}{m+\frac{1}{2}}\prod_{j=1}^{k}\frac{\left(m -\frac{1}{2}\right)^{2}-(2j)^{2}}{\left(m+\frac{1}{2}\right)^{2}-(2j)^{2}}= \frac{m-\frac{1}{2}}{m+\frac{1}{2}}\cdot\frac{\prod_{j=1}^{k} \left(1-\left(\frac{m-\frac{1}{2}}{2j}\right)^{2}\right)}{\prod_{j=1}^{k} \left(1-\left(\frac{m+\frac{1}{2}}{2j}\right)^{2}\right)}. \tag{43}\] Using the infinite product formulae (see [9, Eqns. 
4.22.1-2]) \[\sin(x)=x\prod_{j=1}^{\infty}\left(1-\frac{x^{2}}{\pi^{2}\bar{\rho}^{2}}\right),\quad\cos(x)=\prod_{j=1}^{\infty}\left(1-\frac{4x^{2}}{\pi^{2}(2j-1)^{2}}\right)\] we get the following result. **Lemma 2**.: _Assume that \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\). Then_ \[\lim_{k\to\infty}u_{2k}(0;m) =\tan\left(\frac{\pi}{2}\left(m+\frac{1}{2}\right)\right),\] \[\lim_{k\to\infty}u_{2k+1}(0;m) =-\cot\left(\frac{\pi}{2}\left(m+\frac{1}{2}\right)\right).\] We then apply Theorem 5 and obtain the following Corollary, which completes the local proof of Theorem 1. **Corollary 1**.: _Let \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\), and let \(U=U(z;m)\) denote the unique solution of the Painleve-III\((D_{8})\) equation (3) that is analytic at the origin with \(U(0;m)=\tan(\frac{\pi}{2}(m+\frac{1}{2}))\). Then for all \(z\) in a neighborhood of the origin, we have_ \[\lim_{k\to\infty}u_{2k}\left(\frac{z}{2k};m\right)=U(z;m),\] \[\lim_{k\to\infty}u_{2k+1}\left(\frac{z}{2k+1};m\right)=-\frac{1} {U(z;m)}.\] Note that the equation (3) is invariant under the \(\mathbb{Z}_{2}\)-Backlund transformation \(U(z)\mapsto-U(z)^{-1}\), so the even/odd subsequences of rational solutions both tend to related solutions of the same equation. ## 3. Asymptotic behavior of Umemura polynomials In this section, we obtain asymptotic results about the Umemura polynomials \(s_{n}(x;m)\) and, as a consequence, particular \(2j-k\) determinants (see (77) below). Painleve-III tau functions, the Toda lattice, and expressing \(s_{n}(x;m)\) in terms of \(u_{n+1}(x;m)\) As a first step, we would like to obtain an expression of the Umemura polynomials \(s_{n}(x;m)\) in terms of the rational Painleve-III solutions themselves. We follow closely the works [13, 37]. We introduce the Hamiltonian \(H_{n}\equiv H_{n}(x;m)\) via the equation \[xH_{n}(x;m)=2p_{n}(x;m)^{2}u_{n}(x;m)^{2}+p_{n}(x;m)\left(2x-2xu_ {n}(x;m)^{2}+(1+2m-2n)\,u_{n}(x;m)\right)\\ -(2m+1)\,xu_{n}(x;m). \tag{44}\] where the momentum, \(p_{n}\equiv p_{n}(x;m)\), is given by \[p_{n}=\frac{x}{4u_{n}^{2}}\,\frac{\mathrm{d}u_{n}}{\mathrm{d}x}+\frac{x}{2}- \frac{x}{2u_{n}^{2}}-\frac{2m-2n+1}{4u_{n}}. \tag{45}\] In other words, the canonical system \[\frac{\mathrm{d}u_{n}}{\mathrm{d}x}=\frac{\partial H_{n}}{\partial p_{n}}\quad \text{and}\quad\frac{\mathrm{d}p_{n}}{\mathrm{d}x}=-\frac{\partial H_{n}}{ \partial u_{n}}\] is equivalent to the definition (45) of \(p_{n}\) and the PIII\((D_{6})\) equation (1) for \(u=u_{n}(x;m)\) where \(\alpha=4(n+m)\) and \(\beta=4(n-m)\). The tau function \(\tau_{n}(x;m)\) can be defined up to a constant of integration by the relation \[\frac{\mathrm{d}}{\mathrm{d}x}\ln(\tau_{n}(x;m))=H_{n}(x;m)+\frac{1}{x}u_{n}(x ;m)p_{n}(x;m). \tag{46}\] We would like to fix the constant in this definition by choosing a path of integration going to \(x=\infty\) in the sector \(|\mathrm{Arg}(x)|<\pi\). To this end, it was shown in [5] that the rational functions \(u_{n}(x;m)\) behave at infinity as \(u_{n}(x;m)=1+\mathcal{O}(x^{-1})\). 
In fact, using this in the Painleve-III\((D_{6})\) equation (1) with \(\alpha=4(n+m)\) and \(\beta=4(n-m)\) gives the more refined asymptotics \[u_{n}(x;m)=1-\frac{n}{2x}+\frac{n(2m+n)}{8x^{2}}-\frac{n(4m^{2}+4mn+1)}{32x^{ 3}}+\mathcal{O}(x^{-4})\quad\text{as}\quad x\to\infty,\] which, together with (45) implies that the right-hand side of (46) satisfies \[H_{n}(x;m)+\frac{1}{x}u_{n}(x;m)p_{n}(x;m)=-2m-1-\frac{(2m+1)(2 m-4n+3)}{8x}+\frac{(1+2m)(1-n)n}{8x^{2}}\\ +\frac{(1+2m)^{2}(n-1)n}{32x^{3}}+\mathcal{O}(x^{-4})\quad\text{ as}\quad x\to\infty. \tag{47}\] Now, every pole \(x_{0}\neq 0\) of \(u_{n}(x;m)\) is simple with residue \(\pm\frac{1}{2}\), and moreover directly from (1), we find that \[u_{n}(x;m)=\pm\frac{1}{2}(x-x_{0})^{-1}-\frac{1}{2x_{0}}\left(n+m\pm\frac{1}{ 2}\right)+O(x-x_{0}),\quad x\to x_{0}.\] Similarly, all zeros \(x_{0}\neq 0\) of \(u_{n}(x;m)\) are simple, with \(u_{n}^{\prime}(x_{0};m)=\pm 2\), and again from (1) we have \[u_{n}(x;m)=\pm 2(x-x_{0})+\frac{2}{x_{0}}\left(\pm\frac{1}{2}+m-n\right)(x-x_{0}) ^{2}+O((x-x_{0})^{3}),\quad x\to x_{0}.\] These expansions can be differentiated with respect to \(x\) to obtain corresponding expansions of \(p_{n}(x;m)\) via (45) and then of \(H_{n}(x;m)\) via (44). These expansions show that the only possible singularities \(x_{0}\neq 0\) of the right-hand side of (46) are simple poles of residue \(1\) that occur at simple zeros of \(u_{n}(x;m)\) with \(u_{n}^{\prime}(x_{0};m)=-2\). Furthermore, if \(m\not\in\mathbb{Z}+\frac{1}{2}\), then \(u_{n}(x;m)\) is analytic and nonzero at \(x=0\), and it follows that the right-hand side of (46) has a simple pole at the origin with residue \(-\frac{1}{8}(4(m-n+1)^{2}-1)\). Therefore, arbitrarily fixing an integration constant, the tau function \(\tau_{n}(x;m)\) then can be defined for \(m\not\in\mathbb{Z}+\frac{1}{2}\) and \(|\mathsf{Arg}(x)|<\pi\) by \[\tau_{n}(x;m)=\mathrm{e}^{-(2m+1)x}x^{-\frac{(2m+1)(2m-4n+3)}{8}}\\ \exp\left(-\int_{x}^{\infty}\left(H_{n}(y;m)+\frac{u_{n}(y;m)p_{n }(y;m)}{y}+2m+1+\frac{(2m+1)(2m-4n+3)}{8y}\right)\mathrm{d}y\right), \tag{48}\] where the power function denotes the principal branch, the path of integration lies in the sector \(|\mathsf{Arg}(y)|<\pi\) avoiding all poles of the meromorphic integrand, and then the integral is independent of path modulo \(2\pi\mathrm{i}\). It then follows from (47) that \(\tau_{n}(x;m)\) admits the expansion \[\tau_{n}(x;m)=\mathrm{e}^{-(2m+1)x}x^{-\frac{(2m+1)(2m-4n+3)}{8}} \left(1+\frac{(2m+1)(n-1)n}{8x}+\frac{(2m+1)^{2}(n^{2}-1)(n-2)n}{128x^{2}}+ \mathcal{O}(x^{-3})\right),\] \[\text{as}\quad x\to\infty,\quad|\mathsf{Arg}(x)|<\pi, \tag{49}\] and that \(\tau_{n}(x;m)x^{(4(m-n+1)^{2}-1)/8}\) extends to a neighborhood of \(x=0\) as an analytic nonvanishing function. From the point of view of the function \(\tau_{n}(x;m)\) the recurrence (6), which defines the Umemura polynomials, is equivalent to the Toda equation. More precisely, if we define the function \[h_{n}(x;m)=H_{n}(x;m)+\frac{u_{n}(x;m)p_{n}(x;m)}{x}-2x+\frac{n^{2}}{x} \tag{50}\] then using Gromak's Backlund transformation (2) with \(u=u_{n}(x;m)\), \(\hat{u}=u_{n+1}(x;m)\), and \(\alpha=4(n+m)\), \(\beta=4(n-m)\), we can check that \(h_{n}\) satisfies the identity \[h_{n+1}(x;m)-h_{n}(x;m)=-\frac{2u_{n}(x;m)p_{n}(x;m)}{x}+\frac{2n+1}{x}. 
\tag{51}\] Similarly, using the inverse of Gromak's transformation (2): \[\hat{u}(x)\mapsto u(x)=\frac{2x\hat{u}^{\prime}(x)-4x\hat{u}(x)^{2}-4x+(\beta +4)\hat{u}(x)-2\hat{u}(x)}{\hat{u}(x)\cdot\left(2x\hat{u}^{\prime}(x)-4x\hat{u }(x)^{2}-4x-(\alpha+4)\hat{u}(x)+2\hat{u}(x)\right)},\] in which \(u(x)\) solves (1) and \(\hat{u}(x)\) solves the same equation with parameters \((\alpha,\beta)\) replaced by \((\alpha+4,\beta+4)\), one can check the identity \[h_{n-1}(x;m)-h_{n}(x;m)=-\frac{2u_{n}(x;m)p_{n}(x;m)}{x}-\frac{2m+1}{x}+\frac{ 1-2n}{x-p_{n}(x;m)} \tag{52}\] Combining (51) and (52) we get \[h_{n+1}(x;m)+h_{n-1}(x;m)-2h_{n}(x;m)=-\frac{4u_{n}(x;m)p_{n}(x;m)}{x}+\frac{ 2(n-m)}{x}+\frac{1-2n}{x-p_{n}(x;m)}\] Differentiating (50), we can notice that \[\frac{\mathrm{d}}{\mathrm{d}x}\ln\left(x\,\frac{\mathrm{d}}{\mathrm{d}x}\, \big{(}xh_{n}(x;m)\big{)}\right)=h_{n+1}(x;m)+h_{n-1}(x;m)-2h_{n}(x;m).\] Given any \(K_{n}(m)\in\mathbb{C}\), if we now define the function \[\hat{\tau}_{n}(x;m)=K_{n}(m)\mathrm{e}^{-x^{2}}x^{n^{2}}\tau_{n}(x;m),\] then \(h_{n}(x;m)=\frac{\mathrm{d}}{\mathrm{d}x}\ln\hat{\tau}_{n}(x;m)\) and we see that \(\hat{\tau}_{n}(x;m)\) satisfies the Toda equation \[x\,\frac{\mathrm{d}}{\mathrm{d}x}\,x\,\frac{\mathrm{d}}{\mathrm{d}x}\ln\hat{ \tau}_{n}(x;m)=C_{n}(m)\frac{\hat{\tau}_{n+1}(x;m)\hat{\tau}_{n-1}(x;m)}{\hat{ \tau}_{n}(x;m)^{2}} \tag{53}\] with some constants \(\{C_{n}(m)\}_{n=0}^{\infty}\) depending on the \(\{K_{n}(m)\}_{n=0}^{\infty}\). We now choose the constants \(K_{n}(m)\) so as to have \(C_{n}(m)=1\). To this end, using the detailed asymptotics (49), one can check that the leading term of both sides of (53) as \(x\to\infty\) is proportional to \(x^{2}\) and equating those coefficients under the assumption that \(C_{n}(m)=1\) yields the equation \[-4=\frac{K_{n+1}(m)K_{n-1}(m)}{K_{n}(m)^{2}},\] of which we choose a particular solution \(K_{n}(m)=(2{\rm i})^{n^{2}}\), which yields the expression \[\hat{\tau}_{n}(x;m)=(2{\rm i})^{n^{2}}{\rm e}^{-x^{2}}x^{n^{2}}\tau_{n}(x;m). \tag{54}\] Now if we put \[s_{n}(x;m)={\rm i}^{-(n+1)^{2}}2^{-\frac{(n+1)(n+2)}{2}}{\rm e}^{(2m+1)x+x^{2}} x^{\frac{4m^{2}-4n^{2}-8m-16n-9}{8}}\hat{\tau}_{n+1}(x;m),\] then it follows from (53) with \(C_{n}(m)=1\) that \(s_{n}(x;m)\) satisfies the Umemura recurrence relation (6). Moreover, using \[u_{0}(x;m)=1\quad\text{and}\quad u_{1}(x;m)=\frac{8x+4m-2}{8x+4m+2}\] shows that the integrand in the exponent of \(\tau_{0}(x;m)\) and \(\tau_{1}(x;m)\) vanishes identically, from which it follows that the initial conditions (7) are satisfied as well. Since the recurrence relation and initial conditions together have a unique solution, using (48) and (54), the Umemura polynomials are given by \[s_{n}(x;m)=2^{\frac{n(n+1)}{2}}{\rm e}^{(2m+1)x}x^{\frac{4(m-n)^{ 2}-1}{8}}\tau_{n+1}(x;m)\] \[=(2x)^{\frac{n(n+1)}{2}}\exp\left(-\int_{x}^{\infty}\left(H_{n+1} (y;m)+\frac{u_{n+1}(y;m)p_{n+1}(y;m)}{y}+2m+1+\frac{(2m+1)(2m-4n-1)}{8y}\right) \,{\rm d}y\right) \tag{55}\] This formula achieves the goal of explicitly expressing \(s_{n}(x;m)\) in terms of \(u_{n+1}(x;m)\). 
Since \[\frac{{\rm d}}{{\rm d}x}\ln(\tau_{n}(x;m))=H_{n}(x;m)+\frac{1}{x}u_{n}(x;m)p_ {n}(x;m)=-\frac{4(m-n+1)^{2}-1}{8x}+\mathcal{O}(1)\quad\text{as}\quad x\to 0\] we can also use analyticity of \(s_{n}(x;m)\) at the origin and the first line of (55) to write the alternative expression \[s_{n}(x;m)=s_{n}(0;m){\rm e}^{(2m+1)x}\exp\left(\int\limits_{0}^{x}\left(H_{n +1}(y;m)+\frac{u_{n+1}(y;m)p_{n+1}(y;m)}{y}+\frac{4(m-n)^{2}-1}{8y}\right){\rm d }y\right). \tag{56}\] We could equally well have used (56) to derive the Toda equation instead of (55), but it is nice to have two different formulae for Umemura polynomials. ### The ratio \(s_{n}(x;m)/s_{n}(0;m)\) for large \(n\) and small \(x\) The representation (56) can be combined with Theorem 1 to obtain a limiting formula for \(s_{n}(x;m)/s_{n}(0;m)\) as \(n\to\infty\) and \(x\to 0\) at related rates. First, we note that with the notation \(U_{n}(z;m):=u_{n}(z/n;m)\), from (45) we obtain \[p_{n}\left(\frac{z}{n};m\right)=\frac{n}{2U_{n}(z;m)}\left[1+\left(\frac{zU_{ n}^{\prime}(z;m)}{2U_{n}(z;m)}-m-\frac{1}{2}\right)\frac{1}{n}+\left(zU_{n}(z;m)- \frac{z}{U_{n}(z;m)}\right)\frac{1}{n^{2}}\right]. \tag{57}\] Next, note that by Theorem 5 we can differentiate the limit in Theorem 1 for \(z\) near the origin, and hence for small \(z\) and \(n\) even we have \(U_{n}(z;m)\to U(z;m)\) and \(U_{n}^{\prime}(z;m)\to U^{\prime}(z;m)\), while for \(n\) odd we have instead \(U_{n}(z;m)\to-U(z;m)^{-1}\) and \(U_{n}^{\prime}(z;m)\to U(z;m)^{-2}U^{\prime}(z;m)\). Therefore, we have the following limit: \[\lim_{n\to\infty}\frac{1}{n}\left(\left.H_{n}(x;m)+\frac{2}{x}u_{ n}(x;m)p_{n}(x;m)+\frac{4(m-n+1)^{2}-1}{8x}\right|_{x=z/n}\right)\\ =\frac{zU^{\prime}(z;m)^{2}}{8U(z;m)^{2}}\pm\frac{U^{\prime}(z;m) }{4U(z;m)}-U(z;m)+\frac{1}{U(z;m)}, \tag{58}\] where we take the plus sign for \(n\) even and the minus sign for \(n\) odd, and the convergence is uniform for \(|z|\) sufficiently small. It follows that if \(x=z/(n+1)\) in (56), by the corresponding substitution \(y\mapsto y/(n+1)\) \[\lim_{j\to\infty}\frac{s_{2j-1}\left(\frac{z}{2j};m\right)}{s_{2j-1}(0;m)}= \exp\left(\int\limits_{0}^{z}\left(\frac{yU^{\prime}(y;m)^{2}}{8U(y;m)^{2}}+ \frac{U^{\prime}(y;m)}{4U(y;m)}-U(y;m)+\frac{1}{U(y;m)}\right)\mathrm{d}y\right), \tag{59}\] and \[\lim_{j\to\infty}\frac{s_{2j}\left(\frac{z}{2j+1};m\right)}{s_{2j}(0;m)}=\exp \left(\int\limits_{0}^{z}\left(\frac{yU^{\prime}(y;m)^{2}}{8U(y;m)^{2}}-\frac{ U^{\prime}(y;m)}{4U(y;m)}-U(y;m)+\frac{1}{U(y;m)}\right)\mathrm{d}y\right), \tag{60}\] with the limits being uniform for \(|z|\) sufficiently small. To reduce the right-hand side in each case to the corresponding formula presented in Theorem 2 we refer to Section 3.4 below. ### Asymptotic behavior of \(s_{n}(0;m)\) for large \(n\) We now compute the large \(n\) asymptotics of \(s_{n}(0;m)=\phi_{n}(m+\frac{1}{2})\). First we write the formula for \(\phi_{n}(y)\) from Lemma 1 in terms of Gamma functions \[\phi_{n}(y)=\begin{cases}\prod\limits_{j=1}^{\frac{n}{2}}\frac{\Gamma(y+2j)}{ \Gamma(y+1-2j)},&\quad n\text{ even}\\ \prod\limits_{j=1}^{\frac{n+1}{2}}\frac{\Gamma(y+2j-1)}{\Gamma(y+2-2j)},&\quad n \text{ odd}.\end{cases}\] Since we are interested in asymptotics for large \(n\), we need to use the reflection formula for the Gamma function [9, Eq. 
5.5.3] in the denominator: \[\phi_{n}(y)=\begin{cases}\left(-\frac{\sin(\pi y)}{\pi}\right)^{\frac{n}{2}} \prod\limits_{j=1}^{\frac{n}{2}}\Gamma(2j+y)\Gamma(2j-y),&\quad n\text{ even}\\ \left(\frac{\sin(\pi y)}{\pi}\right)^{\frac{n+1}{2}}\prod\limits_{j=1}^{\frac {n+1}{2}}\Gamma(2j-1+y)\Gamma(2j-1-y),&\quad n\text{ odd}.\end{cases}\] Next, we use the Gamma duplication formula [9, Eq. 5.5.5] and get \[\phi_{n}(y)=\begin{cases}\left(-\frac{\sin(\pi y)}{\pi^{2}}\right)^{\frac{n}{ 2}}2^{\frac{n}{2}}\prod\limits_{j=1}^{\frac{n}{2}}\Gamma\left(j+\frac{y}{2} \right)\Gamma\left(j-\frac{y}{2}\right)\Gamma\left(j+\frac{1}{2}+\frac{y}{2} \right)\Gamma\left(j+\frac{1}{2}-\frac{y}{2}\right),&\quad n\text{ even}\\ \left(\frac{\sin(\pi y)}{\pi^{2}}\right)^{\frac{n+1}{2}}2^{\frac{n^{2}-1}{2}} \prod\limits_{j=1}^{\frac{n+1}{2}}\Gamma\left(j+\frac{y}{2}\right)\Gamma \left(j-\frac{y}{2}\right)\Gamma\left(j-\frac{1}{2}+\frac{y}{2}\right)\Gamma \left(j-\frac{1}{2}-\frac{y}{2}\right),&\quad n\text{ odd}.\end{cases}\] Now we can rewrite \(\phi_{n}(y)\) in terms of the Barnes \(G\)-function: \[\phi_{n}(y)=\begin{cases}\left(-\frac{\sin(\pi y)}{\pi^{2}}\right)^{\frac{n}{2 }}2^{\frac{n}{2}}\frac{G\left(\frac{n}{2}+1+\frac{y}{2}\right)G\left(\frac{n}{ 2}+1-\frac{y}{2}\right)G\left(\frac{n}{2}+\frac{3}{2}+\frac{y}{2}\right)G \left(\frac{n}{2}+\frac{3}{2}-\frac{y}{2}\right)}{G\left(1+\frac{y}{2}\right) G\left(1-\frac{y}{2}\right)G\left(\frac{3}{2}+\frac{y}{2}\right)G\left(\frac{3}{2}- \frac{y}{2}\right)},&\quad n\text{ even}\\ \left(\frac{\sin(\pi y)}{\pi^{2}}\right)^{\frac{n+1}{2}}2^{\frac{n^{2}-1}{2}} \frac{G\left(\frac{n+1}{2}+1+\frac{y}{2}\right)G\left(\frac{n+1}{2}+1-\frac{y} {2}\right)G\left(\frac{n+1}{2}+\frac{1}{2}+\frac{y}{2}\right)G\left(\frac{n+1 }{2}+\frac{1}{2}-\frac{y}{2}\right)}{G\left(1+\frac{y}{2}\right)G\left(1-\frac{ y}{2}\right)G\left(\frac{1}{2}+\frac{y}{2}\right)G\left(\frac{1}{2}-\frac{y}{2} \right)},&\quad n\text{ odd}.\end{cases}\] Using the large argument asymptotics of the Barnes G-function [9, Eqn. 5.17.5] we get \[\phi_{n}(y)\sim\begin{cases}\frac{n^{\frac{n^{2}+n}{2}}\mathrm{e}^{-\frac{3n^{2} }{4}-\frac{n}{2}}(-\sin(\pi y))^{\frac{n}{2}}2^{\frac{n}{2}}n^{-\frac{1}{2}+ \frac{v^{2}}{2}}\sqrt{\pi}\mathrm{e}^{\frac{1}{2}}\frac{\gamma-\frac{v^{2}}{2 }}{2t^{2}-\frac{v^{2}}{2}}}{A^{4}G\left(1+\frac{v}{2}\right)G\left(1-\frac{v} {2}\right)G\left(\frac{3}{2}+\frac{v}{2}\right)G\left(\frac{3}{2}-\frac{v}{2} \right)},&n\text{ even}\\ \frac{n^{\frac{n^{2}+n}{2}}\mathrm{e}^{-\frac{3n^{2}}{4}-\frac{n}{2}}(\sin(\pi y ))^{\frac{n+1}{2}}2^{\frac{n}{2}}n^{-\frac{1}{2}+\frac{v^{2}}{2}}\mathrm{e}^{ \frac{1}{2}}\frac{1}{2t^{2}-\frac{v^{2}}{2}}}{\sqrt{\pi}A^{4}G\left(1+\frac{v} {2}\right)G\left(1-\frac{v}{2}\right)G\left(\frac{1}{2}+\frac{v}{2}\right)G \left(\frac{1}{2}-\frac{v}{2}\right)},&n\text{ odd}\end{cases}\] as \(n\to\infty\), where \(A=\mathrm{e}^{\frac{1}{12}-\zeta^{\prime}(-1)}\) is Glaisher's constant [9, Eqn. 5.17.6]. Recalling \(y=m+\frac{1}{2}\) we complete the proof of the formulae (12)-(13). ### Connection with the Fredholm determinant of the Bessel kernel We have already seen how the \(\mathrm{P}\mathrm{III}(D_{8})\) equation (3) can be obtained from \(\mathrm{P}\mathrm{III}(D_{6})\) equation (1) by confluence. There exists another, less known relation between the two equations -- namely, a quadratic transformation mapping the solutions of \(\mathrm{P}\mathrm{III}(D_{8})\) to solutions of \(\mathrm{P}\mathrm{III}(D_{6})\) with special parameter values. 
Moreover, for precisely this parameter choice the relevant \(\mathrm{P}\mathrm{III}(D_{6})\) admits a family of transcendental analytic solutions that can be expressed in terms of Fredholm determinants of the continuous Bessel kernel. Under quadratic transformations, they are mapped to solutions of \(\mathrm{P}\mathrm{III}(D_{8})\) analytic at \(z=0\). This allows one to give yet another characterization of the \(\mathrm{P}\mathrm{III}(D_{8})\) transcendent describing the large order asymptotics of the rational \(\mathrm{P}\mathrm{III}(D_{6})\) solutions. Indeed, let \(U(z)\) be an arbitrary solution of the \(\mathrm{P}\mathrm{III}(D_{8})\) equation (3). It is then straightforward to check that the function \(\sigma(r)\) defined by \[\sigma(r):=\frac{z^{2}U^{\prime}(z)^{2}}{4U(z)^{2}}-2z\left(U(z)-\frac{1}{U(z) }\right)-4\mathrm{i}z,\quad r=32\mathrm{i}z, \tag{61}\] satisfies the \(\sigma\)-form of a particular \(\mathrm{P}\mathrm{III}(D_{6})\) equation, namely3 Footnote 3: Observe that our definition of \(\sigma\) differs from that of [40] by a negative sign. \[\left(r\sigma^{\prime\prime}(r)\right)^{2}=\sigma^{\prime}(r)\left(4\sigma^{ \prime}(r)+1\right)\left(\sigma(r)-r\sigma^{\prime}(r)\right). \tag{62}\] Indeed, letting \[\varsigma(t):=\sigma(4t)+t \tag{63}\] transforms equation (62) to \[\left(t\varsigma^{\prime\prime}(t)\right)^{2}=4\varsigma^{\prime}(t)(\varsigma ^{\prime}(t)-1)(\varsigma(t)-t\varsigma^{\prime}(t)). \tag{64}\] The latter appears in [23, Eq. (3.13)] and [35, Eq. \(E_{\mathrm{III}^{\prime}}\)]. These relate to (1) via the following transformations; letting \[q(t):=-\frac{t\varsigma^{\prime\prime}(t)}{2\varsigma^{\prime}(t)(\varsigma ^{\prime}(t)-1)}\] yields (a special case of) the so-called "prime" version of Painleve-III \[\frac{\mathrm{d}^{2}q}{\mathrm{d}t^{2}}=\frac{1}{q}\left(\frac{\mathrm{d}q}{ \mathrm{d}t}\right)^{2}-\frac{1}{t}\,\frac{\mathrm{d}q}{\mathrm{d}t}+\frac{1}{t ^{2}}q^{3}+\frac{1}{t}-\frac{1}{q}.\] Next, letting \(t=x^{2}\) and \(q(t)=xu(x)\) yields (1) with parameters \(\alpha=0\) and \(\beta=4\). Combining the transformations \(U(z)\mapsto\sigma(r)\mapsto\varsigma(t)\mapsto q(t)\mapsto u(x)\) yields an explicit formula for \(u(x)\) in terms of \(U(z)\): \[u(x)=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}x}\log\left(\frac{1+\mathrm{i}U( \frac{x^{2}}{\mathrm{SI}})}{1-\mathrm{i}U(\frac{x^{2}}{\mathrm{SI}})}\right).\] Correcting for a typo4 Footnote 4: The relevant equation in [2] should be corrected to read \[u(x):=-\frac{8}{x}\left[\left.\frac{\mathrm{d}}{\mathrm{d}x}\log\left(X\frac{W^{ \prime}(X)}{W(X)}\right)\right|_{X\to-\frac{1}{2}x^{2}}\right]^{-1}.\] Here \(W(X)\) is related to a solution \(U(z)\) of (3) by \(W(X)=\mathrm{i}U(\mathrm{i}x)\)., this is equivalent to [2, Eq. (112)]. Note that if \(U(z)\) is a solution of (3) that is analytic at \(z=0\) with \(U(0)\neq 0\), and hence also from (3) \(U^{\prime}(0)=4(1+U(0)^{2})\), then (61) implies that \(\sigma(r)\) is analytic at \(r=0\) with \(\sigma(0)=0\), and in fact \[\sigma(r)=\left(\frac{\mathrm{i}}{16}\left(U(0)-\frac{1}{U(0)}\right)-\frac{1 }{8}\right)r+\frac{1}{256}\left(U(0)+\frac{1}{U(0)}\right)^{2}r^{2}+O(r^{3}), \quad r\to 0. \tag{65}\] Also, differentiating (61) and using (3) to eliminate \(U^{\prime\prime}(z)\) yields the relation \[-16\mathrm{i}\sigma^{\prime}(r)=U(z)-\frac{1}{U(z)}+2\mathrm{i},\quad r=32 \mathrm{i}z, \tag{66}\] which can be regarded as an algebraic equation expressing \(U(z)\) in terms of \(\sigma^{\prime}(32\mathrm{i}z)\). 
Conversely, any solution \(\sigma(r)\) of (62) different from an affine function \(ar+b\) can be mapped to a pair of solutions \(U\left(z\right)\) of \(\mathrm{P}\mathrm{III}(D_{8})\) related by the \(\mathbb{Z}_{2}\) Backlund transformation \(U(z)\mapsto-1/U(z)\) with the help of the formula (66). To see this, one first uses (62) to explicitly express \(\sigma(r)\) in terms of its derivatives and \(r\), and then differentiates the resulting expression with respect to \(r\). Each term of the resulting equation has a common factor of \(r\sigma^{\prime\prime}(r)\). Hence if \(\sigma(r)\) is non-affine, one may cancel this factor, and then \(\sigma^{\prime}(r)\), \(\sigma^{\prime\prime}(r)\), and \(\sigma^{\prime\prime\prime}(r)\) can be eliminated from the reduced equation using (66) and its derivatives. This implies that either \(U(z)^{2}+1=0\) or \(U(z)\) is a solution of (3), and the latter admits precisely the constant solutions \(U(z)=\pm\mathrm{i}\) so we may conclude that any meromorphic function \(U(z)\) obtained from a non-affine solution of (62) via (66) is a solution of (3). Now recall a classical result of Tracy and Widom [40, Eq. (1.21) with \(\alpha=0\)]. **Proposition 3**.: _The logarithmic derivative_ \[\sigma(r)=r\left.\frac{\mathrm{d}}{\mathrm{d}r}\ln D_{\lambda}(r)\right. \tag{67}\] _satisfies the \(\sigma\)-PIII\((D_{6})\) equation (62)._ The Bessel kernel can be equivalently written as \[K(x,y)=\frac{1}{4}\int_{0}^{1}J_{0}(\sqrt{xz})J_{0}(\sqrt{yz})\,\mathrm{d}z= \sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{\left(-1\right)^{m+n}2^{-2\left(m +n+1\right)}}{\left(m!\right)^{2}\left(n!\right)^{2}\left(m+n+1\right)}\ x^{m}y^{n}. \tag{68}\] The first of these identities follows from the easily verified differentiation formula \[\frac{\mathrm{d}}{\mathrm{d}z}\left(zK(xz,yz)\right)=\frac{1}{4}J_{0}(\sqrt{ xz})J_{0}(\sqrt{yz}),\] whereas the second one is obtained by substituting into the integral expression the standard series representation of \(J_{0}(\cdot)\)[9, Eq. (10.2.2)]. Using (9) along with representation (68) then enables one to compute the traces of powers of \(K_{r}\) in the form of a series in \(r\). It yields \[\ln D_{\lambda}(r)=-\sum_{\ell=1}^{\infty}\frac{\lambda^{\ell}r^{\ell}}{2^{2 \ell}\ell}\sum_{n_{1}=0}^{\infty}\cdots\sum_{n_{2\ell}=0}^{\infty}\frac{\left( -r/4\right)^{\sum_{k=1}^{2\ell}n_{k}}}{\prod_{k=1}^{2\ell}\left(n_{k}!\right)^{ 2}\left(n_{k}+n_{k+1}+1\right)},\qquad n_{2\ell+1}=n_{1}. \tag{69}\] Expansions of such form are known for Fredholm determinants appearing in random matrix theory, see [31, Section 20.5]. Let us record explicitly the few first terms of (69): \[\ln D_{\lambda}(r)= -\frac{\lambda r}{4}\left(1-\frac{r}{8}+\frac{r^{2}}{96}-\frac{5r^ {3}}{9216}\right)-\left.\frac{\lambda^{2}r^{2}}{32}\left(1-\frac{r}{4}+\frac{4 1r^{2}}{1152}\right)\right.\] \[-\left.\frac{\lambda^{3}r^{3}}{192}\left(1-\frac{3r}{8}\right)- \frac{\lambda^{4}r^{4}}{1024}+O(r^{5}),\quad r\to 0,\] which implies that the Bessel determinant solution of (62) guaranteed by Proposition 3 has the asymptotics \[\begin{split}\sigma(r)=r\frac{\mathrm{d}}{\mathrm{d}r}\ln D_{\lambda }(r)=&-\frac{\lambda r}{4}+\frac{1}{16}\left(\lambda-\lambda^{2} \right)r^{2}+\frac{1}{128}\left(-2\lambda^{3}+3\lambda^{2}-\lambda\right)r^{3} \\ &+\frac{\left(-36\lambda^{4}+72\lambda^{3}-41\lambda^{2}+5\lambda \right)r^{4}}{9216}+O(r^{5}),\quad r\to 0,\end{split} \tag{70}\] This expression is of course consistent with the differential equation (62). 
On the other hand, if \(U(z)=U(z;m)\) is the particular solution of (3) relevant to Theorem 1, which for \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\) is analytic at the origin with \(U(0;m)=\tan(\frac{\pi}{2}(m+\frac{1}{2}))\in\mathbb{C}\), then according to (65), the corresponding solution of (62) analytic at the origin satisfies \[\sigma(r)=-\frac{\lambda(m)}{4}r+\frac{r^{2}}{64\cos^{2}(\pi m)}+O(r^{3}), \quad r\to 0,\quad\lambda(m):=\frac{1}{1+\mathrm{e}^{2\pi im}}. \tag{71}\] Note that \(\lambda(m)\) is necessarily finite for \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\) and there are only two values it never takes for any \(m\): \(\lambda(m)\neq 0,1\). Also, the coefficient of \(r^{2}\) cannot vanish for any \(m\in\mathbb{C}\). Now we need the following result. **Proposition 4**.: _Let \(\sigma_{1}(r)\) and \(\sigma_{2}(r)\) denote two non-affine solutions of (62) both analytic at the origin and both satisfying \(\sigma_{j}(r)=-\frac{1}{4}\lambda r+O(r^{2})\) as \(r\to 0\) with \(\lambda\neq 0,1\). Then \(\sigma_{1}(r)=\sigma_{2}(r)\) on a neighborhood of \(r=0\)._ Proof.: If \(\sigma(r)=\sigma_{1,2}(r)\) is a solution of (62) analytic at the origin with \(\sigma^{\prime}(0)=-\frac{1}{4}\lambda\), then it has a locally-convergent Taylor series \[\sigma(r)=-\frac{\lambda}{4}r+\sum_{k=2}^{\infty}s_{k}r^{k},\quad|r|<\rho\] for some \(\rho>0\). Using this in the differential equation (62), from the coefficient of \(r^{2}\) one obtains \[4s_{2}^{2}-\frac{1}{4}\lambda(1-\lambda)s_{2}=0, \tag{72}\] whereas from the coefficient of \(r^{k}\) for \(k\geq 3\), \[\left(4ks_{2}-\frac{1}{4}\lambda(1-\lambda)\right)s_{k}\\ =\frac{1}{k-1}\left(\sum_{\ell=2}^{k-1}\ell(k-\ell)s_{\ell}s_{k- \ell+1}-\sum_{\ell=3}^{k-1}(\ell-1)\ell(k-\ell+1)(k-\ell+2)s_{\ell}s_{k-\ell+2 }\\ +\sum_{\ell=2}^{k-1}\sum_{j=0}^{\ell-1}(j+1)(\ell-j)(k-\ell)s_{j+1 }s_{\ell-j}s_{k-\ell+1}\right),\quad k\geq 3, \tag{73}\] where on the right-hand side, \(s_{1}:=-\frac{1}{4}\lambda\). Now, (72) implies that either \(s_{2}=0\) or \(s_{2}=\frac{1}{16}\lambda(1-\lambda)\neq 0\). Suppose first that \(s_{2}=0\). Then setting \(k=3\) in (73) gives \[-\frac{1}{4}\lambda(1-\lambda)s_{3}=0\implies s_{3}=0, \tag{74}\] since \(\lambda\neq 0,1\). We now use (74) as the base case for an inductive argument. Suppose \(s_{2}=s_{3}=\cdots=s_{k}=0\). Using (73) for the coefficient of \(r^{k+1}\) we obtain \[-\frac{1}{4}\lambda(1-\lambda)s_{k+1}=0\implies s_{k+1}=0,\] from which it follows that \(\sigma(r)=-\frac{1}{4}\lambda r\) exactly. This is a contradiction, because \(\sigma(r)\) is not affine. Therefore \(s_{2}\neq 0\). Taking \(s_{2}=\frac{1}{16}\lambda(1-\lambda)\neq 0\) as necessary, we note that in (73) for \(k\geq 3\), \(s_{k}\) appears only on the left-hand side with coefficient \[4ks_{2}-\frac{1}{4}\lambda(1-\lambda)=\frac{1}{4}(k-1)\lambda(1-\lambda)\neq 0,\] while the right-hand side only involves \(s_{1},\ldots,s_{k-1}\). Therefore all subsequent coefficients \(s_{k}\), \(k\geq 3\) are uniquely determined by the recurrence, implying that \(\sigma_{1}(r)=\sigma_{2}(r)\). _Remark 1_.: It is worth noting that \(D_{1}(r)=\mathrm{e}^{-r/4}\), for which \(\sigma(r)\) defined by (67) is an affine function. See [40]. Since the analytic solutions with expansions given in (70) and (71) have the same leading term if \(\lambda=\lambda(m)\neq 0,1\), and neither solution is an affine function, they coincide for small \(|r|\). 
Because the function \(U(z;m)\) is then determined up to the involution \(U\mapsto-U^{-1}\) by (66), we have proved the following result. **Corollary 2**.: _Let \(m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})\). The function \(U(z;m)\) appearing in the asymptotics (4) of Theorem 1 is related to the continuous Bessel kernel determinant by_ \[U(z;m)-\frac{1}{U(z;m)}=-2\mathrm{i}-\frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d }z}\,z\,\frac{\mathrm{d}}{\mathrm{d}z}\,\ln D_{\lambda(m)}(32\mathrm{i}z),\] _with \(\lambda(m)=1/\big{(}1+\mathrm{e}^{2\pi\mathrm{i}m}\big{)}\). In particular, the expansion of \(U(z;m)\) in powers of \(z\) can be read off from the series representation (69)._ Furthermore, using (62) and (67) shows that the integrand in (59), (60) is given by \[\begin{split}\frac{zU^{\prime}(z;m)^{2}}{8U(z;m)^{2}}\pm\frac{U^{ \prime}(z;m)}{4U(z;m)}-U(z;m)+\frac{1}{U(z;m)}&=\frac{1}{2z} \sigma(32\mathrm{i}z)+2\mathrm{i}\pm\frac{U^{\prime}(z;m)}{4U(z;m)}\\ &=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}z}\ln D_{\lambda(m)}(32 \mathrm{i}z)+2\mathrm{i}\pm\frac{U^{\prime}(z;m)}{4U(z;m)}\end{split} \tag{75}\] and we get Theorem 2. ### Connection with \(2j-k\) determinants On the other hand, the Umemura polynomials admit the following Wronskian determinant representation [26] \[s_{n}(x;m)=\prod_{k=1}^{n}(2k-1)!!\det\left(L_{2i-j}^{(m+1/2-2i+j)}(-2x) \right)_{i,j=1}^{n}\] where for parameter \(\alpha\in\mathbb{C}\) and index \(k\in\mathbb{Z}\), \(L_{k}^{(\alpha)}(x)\) are generalized Laguerre polynomials for \(k\geq 0\), while \(L_{k}^{(\alpha)}(x)=0\) for \(k<0\). Expressions like that on the right-hand side are called Wronskian Appell polynomials in [3]; similar formulas hold for rational solutions of other Painleve equations as well. Wronskian determinants of generalized Laguerre polynomials were also studied in [7]. The generalized Laguerre polynomials admit the following integral representation [9, Eqn. 18.10.8] \[L_{k}^{(\alpha)}(x)=\frac{\mathrm{e}^{x}x^{-\alpha}}{2\pi\mathrm{i}}\int \limits_{|z-x|=\varepsilon}z^{k}\frac{z^{\alpha}\mathrm{e}^{-z}}{(z-x)^{k+1}} \mathrm{d}z \tag{76}\] for \(|\mathrm{Arg}(x)|<\pi\) (analytically continuable to \(x\in\mathbb{C}\)) where \(\varepsilon>0\) is small enough so that the branch cut \(z\leq 0\) is outside the contour of integration. Making the transformation \(z=x+\frac{yx}{2}\) in (76) yields \[L_{k}^{(\alpha)}(x)=\frac{1}{2^{\alpha}}\int\limits_{|y|=\varepsilon}y^{-k}(y+ 2)^{k+\alpha}\mathrm{e}^{-\frac{yx}{2}}\frac{\mathrm{d}y}{2\pi\mathrm{i}y}.\] We denote \[w(y;x,m):=(y+2)^{m+\frac{1}{2}}\mathrm{e}^{yx}\quad\text{and}\quad w_{k}(x,m) :=\int\limits_{|y|=1}y^{-k}w(y;x,m)\frac{\mathrm{d}y}{2\pi\mathrm{i}y}.\] Using this notation we obtain \[s_{n}(x;m)=\left[\prod_{\ell=1}^{n}(2\ell-1)!!\right]2^{mn-\frac{n^{2}}{2}} \det\left(w_{2j-k}(x,m)\right)_{j,k=1}^{n}. \tag{77}\] Similar "\(2j-k\)" determinants have appeared in various works in the literature, see, e.g., [16]. 
Denoting \[\mathfrak{D}_{n}(x;m)=\det\left(w_{2j-k}(x,m)\right)_{j,k=1}^{n},\] it immediately follows from (60), (59) that in the limit \(j\to\infty\), \[\mathfrak{D}_{2j}\left(\frac{z}{2j+1};m\right)\sim\frac{s_{2j}(0;m)2^{2j^{2}-2 mj}}{\prod_{\ell=1}^{2j}(2\ell-1)!!}\exp\left(\int\limits_{0}^{z}\left(\frac{yU^{ \prime}(y;m)^{2}}{8U(y;m)^{2}}-\frac{U^{\prime}(y;m)}{4U(y;m)}-U(y;m)+\frac{1}{ U(y;m)}\right)\mathrm{d}y\right)\] and \[\mathfrak{D}_{2j-1}\left(\frac{z}{2n};m\right)\] \[\sim\frac{s_{2j-1}(0;m)2^{2j^{2}-2(m+1)j+m+\frac{1}{2}}}{\prod_{ \ell=1}^{2j-1}(2\ell-1)!!}\exp\left(\int\limits_{0}^{z}\left(\frac{yU^{ \prime}(y;m)^{2}}{8U(y;m)^{2}}+\frac{U^{\prime}(y;m)}{4U(y;m)}-U(y;m)+\frac{1} {U(y;m)}\right)\mathrm{d}y\right).\] To write the analog of Theorem 2 for \(\mathfrak{D}_{n}(x;m)\) we need to compute the asymptotic behavior of \(\prod_{k=1}^{n}(2k-1)!!\) We use [9, Eqn. 5.4.2] to get \[\prod_{\ell=1}^{2n-1}(2\ell-1)!!=\pi^{-\frac{n}{2}}2^{\frac{n(n+ 1)}{2}}\prod_{\ell=1}^{n}\Gamma\left(\ell+\frac{1}{2}\right) =\pi^{-\frac{n}{2}}2^{\frac{n(n+1)}{2}}\frac{G\left(n+\frac{3}{2} \right)}{G\left(\frac{3}{2}\right)}\] \[\sim n^{\frac{n^{2}}{2}+\frac{n}{2}}\mathrm{e}^{-\frac{3n^{2}}{4 }-\frac{n}{2}}2^{\frac{n^{2}}{2}+n}n^{\frac{1}{24}}2^{\frac{5}{24}}\mathrm{e} ^{-\frac{\ell^{\prime}(-1)}{2}},\quad n\to\infty.\] Combining this with formulae (12), (13) and using (75) we get \[\mathfrak{D}_{2j}\left(\frac{z}{2j+1};m\right)\sim\frac{2^{-j(2m+1)}(-\cos( \pi m))^{j}j^{\frac{n^{2}}{2}+\frac{n}{2}}2^{\frac{1}{4}}\sqrt{\pi}\mathrm{e} ^{\frac{9\ell^{\prime}(-1)}{2}}}{G\left(\frac{3}{4}-\frac{n}{2}\right)G\left( \frac{5}{4}-\frac{n}{2}\right)G\left(\frac{5}{4}+\frac{n}{2}\right)G\left( \frac{7}{4}+\frac{n}{2}\right)}\mathrm{e}^{2\mathrm{i}z}\left(\frac{U(z;m)}{ U(0;m)}\right)^{-\frac{1}{4}}\sqrt{D_{\lambda(m)}\left(32\mathrm{i}z\right)},\] both in the limit \(j\to\infty\). ## 4. General monodromy data: Painleve-III(\(D_{6}\)) ### Lax pair for Painleve-III(\(D_{6}\)) Following Jimbo and Miwa [24], we use the fact that each Painleve equation can be recast as an isomonodromic deformation condition for a \(2\times 2\) system of linear ODEs with rational coefficients. The case of Painleve-III (\(D_{6}\)) corresponds to the situation where the coefficient matrix for the equation in the spectral variable, \(\lambda\), has exactly two irregular singularities at \(\lambda=0\) and \(\lambda=\infty\), at each of which the leading term is diagonalizable. After some normalization, the differential equation can be written in the form5 Footnote 5: Henceforth, we use bold capital letters to denote matrices, with the only exceptions being the identity matrix, denoted \(\mathbf{I}\) and the Pauli matrices, denoted \(\sigma_{k},\ k=1,2,3\). \[\frac{\partial\boldsymbol{\nabla}}{\partial\lambda}=\boldsymbol{\Lambda}^{(6) }(\lambda,x)\boldsymbol{\nabla}(\lambda,x), \tag{78}\] where \[\boldsymbol{\Lambda}^{(6)}(\lambda,x)=\frac{\mathrm{i}x}{2}\sigma_{3}+\frac{1} {2\lambda}\begin{bmatrix}-\Theta_{\infty}&2y\\ 2v&\Theta_{\infty}\end{bmatrix}+\frac{1}{2\lambda^{2}}\begin{bmatrix}\mathrm{i} x-2\mathrm{i}st&2\mathrm{i}s\\ -2\mathrm{i}t(st-x)&-\mathrm{i}x+2\mathrm{i}st\end{bmatrix}, \tag{79}\] In this case, the deformation equation is \[\frac{\partial\boldsymbol{\nabla}}{\partial x}=\boldsymbol{X}(\lambda,x) \boldsymbol{\nabla}(\lambda,x). 
\tag{80}\] where \[\mathbf{X}(\lambda,x)=\frac{\mathrm{i}\lambda}{2}\sigma_{3}+\frac{1}{x}\begin{bmatrix} 0&y\\ v&0\end{bmatrix}-\frac{1}{2\lambda x}\begin{bmatrix}\mathrm{i}x-2\mathrm{i}st&2 \mathrm{i}s\\ -2\mathrm{i}t(st-x)&-\mathrm{i}x+2\mathrm{i}st\end{bmatrix}.\] In the expressions for \(\boldsymbol{\Lambda}^{(6)}(\lambda,x),\boldsymbol{\chi}(\lambda,x)\), \(\Theta_{\infty}\) is a complex parameter and \(s=s(x),t=t(x),v=v(x)\), \(y=y(x)\). The equations (78) and (80) constitute an over-determined system with compatibility condition \[\frac{\partial\boldsymbol{\Lambda}^{(6)}}{\partial x}\left(\lambda,x\right)- \frac{\partial\mathbf{X}}{\partial\lambda}\left(\lambda,x\right)+\left[ \boldsymbol{\Lambda}^{(6)}(\lambda,x),\mathbf{X}(\lambda,x)\right]= \boldsymbol{0}\] where \([\boldsymbol{\Lambda}^{(6)},\boldsymbol{\chi}]\) is the commutator. This boils down to the scalar equations \[\begin{split} x\,\frac{\mathrm{d}y}{\mathrm{d}x}& =-2xs+\Theta_{\infty}y,\\ x\,\frac{\mathrm{d}v}{\mathrm{d}x}&=-2xt(st-x)- \Theta_{\infty},\\ x\,\frac{\mathrm{d}s}{\mathrm{d}x}&=(1-\Theta_{ \infty})s-2xy+4yst,\\ x\,\frac{\mathrm{d}t}{\mathrm{d}x}&=\Theta_{ \infty}t-2yt^{2}+2v.\end{split} \tag{81}\] If we let \[u(x):=-\frac{y(x)}{s(x)}, \tag{82}\] then it follows from (81) that \[x\,\frac{\mathrm{d}u}{\mathrm{d}x}=2x-(1-2\Theta_{\infty})u+4stu^{2}-2xu^{2},\] which can be seen to be equivalent to (1) by taking another \(x\)-derivative and using (81) again, after which the quantity \[I(x):=\frac{2\Theta_{\infty}}{x}st-\Theta_{\infty}-\frac{2yt}{x}(st-x)+\frac{ 2sv}{x},\] appears. However, from (81) it follows that \(I^{\prime}(x)=0\), so denoting the constant value of \(I\) by \(\Theta_{0}\), we arrive at (1) with parameters \[\Theta_{0}=\frac{\alpha}{4},\quad\Theta_{\infty}=1-\frac{\beta}{4}. \tag{83}\] The constants \(\Theta_{0},\Theta_{\infty}\) can be naturally interpreted on the level of the \(2\times 2\) system (78), which we now explore. For all the calculations that follow, we assume for simplicity that \(x>0\). The system (78) admits formal solutions near the singular points 6 Footnote 6: Here, we use the standard notation \(f^{c_{3}}:=\mathrm{diag}(f,f^{-1})\). \[\boldsymbol{\nabla}^{(\infty)}_{\mathrm{formal}}(\lambda,x)=\left(\mathds{I }+\Xi^{(6)}(x)\lambda^{-1}+\mathcal{O}(\lambda^{-2})\right)\mathrm{e}^{ \mathrm{i}x\lambda\sigma_{3}/2}\lambda^{-\Theta_{\infty}\sigma_{3}/2}\quad \text{as}\quad\lambda\to\infty, \tag{84}\] and \[\boldsymbol{\nabla}^{(0)}_{\mathrm{formal}}(\lambda,x)=\left(\boldsymbol{ \Lambda}^{(6)}(x)+\mathcal{O}(\lambda)\right)\mathrm{e}^{-\mathrm{i}x\lambda ^{-1}\sigma_{3}/2}\lambda^{\Theta_{0}\sigma_{3}/2}\quad\text{as}\quad\lambda \to 0. \tag{85}\] Here \(\boldsymbol{\Delta}^{(6)}(x)\) is an (invertible) eigenvector matrix of the coefficient of \(\lambda^{-2}\) in (79), so the leading term of \(\boldsymbol{\Delta}^{(6)}(x)^{-1}\boldsymbol{\Lambda}^{(6)}(\lambda,x) \boldsymbol{\Delta}^{(6)}(x)\) at \(\lambda=0\) is diagonal. 
For \(k=1,2,3\), we define the Stokes sectors, \[S^{(\infty)}_{k} =\left\{\lambda\in\mathds{C}\;:\;|\lambda|>R,\quad k\pi-2\pi< \mathsf{Arg}(\lambda)<k\pi\right\},\] \[S^{(0)}_{k} =\left\{\lambda\in\mathds{C}\;:\;|\lambda|<r,\quad k\pi-2\pi< \mathsf{Arg}(\lambda)<k\pi\right\}.\] It follows from the classical theory of linear systems that there exist _canonical solutions_\(\boldsymbol{\Psi}^{(\infty)}_{k},\boldsymbol{\Psi}^{(0)}_{k}\) analytic for \(\lambda\in S^{(\infty)}_{k}\) and \(\lambda\in S^{(0)}_{k}\) respectively and determined uniquely by the asymptotic condition \[\boldsymbol{\Psi}^{(v)}_{k}(\lambda,x)=\boldsymbol{\Psi}^{(v)}_{\text{formal}}( \lambda,x),\quad\lambda\in S^{(v)}_{k},\;\nu\in\{0,\infty\},\;k=1,2,3. \tag{86}\] In these asymptotic conditions, the meaning of the power functions in (84) and (85) is determined from the range of \(\mathsf{Arg}(\lambda)\) in the definition of \(S^{(v)}_{k}\). The canonical solutions in consecutive Stokes sectors are related to one another by multiplication on the right with _Stokes matrices_, i.e. \[\boldsymbol{\Psi}^{(\infty,0)}_{2}(\lambda,x) =\boldsymbol{\Psi}^{(\infty,0)}_{1}(\lambda,x)\boldsymbol{S}^{ \infty,0}_{1}, \tag{88}\] \[\boldsymbol{\Psi}^{(\infty,0)}_{3}(\lambda,x) =\boldsymbol{\Psi}^{(\infty,0)}_{2}(\lambda,x)\boldsymbol{S}^{ \infty,0}_{2}, \tag{87}\] where for some _Stokes multipliers_\(s^{\infty,0}_{j}\in\mathbb{C}\), \(j=1,2\), \[\boldsymbol{S}^{\infty,0}_{1}=\begin{bmatrix}1&s^{\infty,0}_{1}\\ 0&1\end{bmatrix},\quad\boldsymbol{S}^{\infty,0}_{2}=\begin{bmatrix}1&0\\ s^{\infty,0}_{2}&1\end{bmatrix}. \tag{89}\] Likewise, by uniqueness and the different interpretation of the multi-valued powers in the formal solutions on the otherwise identical sectors \(S^{(\infty,0)}_{1}\) and \(S^{(\infty,0)}_{3}\), we have the identities \[\boldsymbol{\Psi}^{(\infty)}_{3}(\lambda,x) =\boldsymbol{\Psi}^{(\infty)}_{1}(\mathrm{e}^{-2\pi\mathrm{i}} \lambda,x)e^{-2\sigma_{3}}_{2}, \tag{91}\] \[\boldsymbol{\Psi}^{(0)}_{3}(\lambda,x) =\boldsymbol{\Psi}^{(0)}_{1}(\mathrm{e}^{-2\pi\mathrm{i}} \lambda,x)e^{2\sigma_{3}}_{0}, \tag{90}\] where, combining (15) with (83) gives \[e_{0}=\mathrm{e}^{\mathrm{i}\pi\Theta_{0}/2},\quad e_{\infty}=\mathrm{e}^{ \mathrm{i}\pi\Theta_{\infty}/2}. \tag{92}\] Canonical solutions in, say \(S^{(\infty)}_{k}\) admit analytic continuation into \(S^{(0)}_{k}\) and since both canonical solutions solve (78) in the same domain, they must be related by multiplication on the right by a constant _connection matrix_, which we define using \[\boldsymbol{\Psi}^{(0)}_{1}(\lambda,x) =\boldsymbol{\Psi}^{(\infty)}_{1}(\lambda,x)\boldsymbol{C}^{+}_{ 0\infty}, \tag{94}\] \[\boldsymbol{\Psi}^{(0)}_{2}(\lambda,x) =\boldsymbol{\Psi}^{(\infty)}_{2}(\lambda,x)\boldsymbol{C}^{-}_{ 0\infty}. \tag{93}\] The condition that the coefficients \(y,v,s,t\) in the matrix \(\boldsymbol{\Lambda}^{(6)}(\lambda,x)\) depend on \(x\) as a solution of (81) implies simultaneous solvability of (78) and (80), and the latter system implies that the Stokes matrices and connection matrices are, like \(\Theta_{0}\) and \(\Theta_{\infty}\), independent of \(x\). We show below in Sections 4.3 and 4.6 that the four Stokes multipliers and the elements of the two connection matrices are determined from just two essential _monodromy parameters_ that we denote by \(e_{1}\) and \(e_{2}\). 
### Riemann-Hilbert problem for Painleve-III(\(D_{6}\)) Using the canonical solutions, we define the following sectionally-analytic function \[\boldsymbol{\Psi}(\lambda,x)=\left\{\begin{array}{ll}\boldsymbol{\Psi}^{( \infty)}_{1}(\lambda,x),&|\lambda|>1\quad\text{and}\quad\text{Re}(\lambda)>0, \\ \boldsymbol{\Psi}^{(\infty)}_{2}(\lambda,x),&|\lambda|>1\quad\text{and}\quad \text{Re}(\lambda)<0,\\ \boldsymbol{\Psi}^{(0)}_{1}(\lambda,x),&|\lambda|<1\quad\text{and}\quad\text{ Re}(\lambda)>0,\\ \boldsymbol{\Psi}^{(0)}_{2}(\lambda,x),&|\lambda|<1\quad\text{and}\quad\text{Re}( \lambda)<0.\end{array}\right.\] Then, it follows from the asymptotic conditions (86) and the relations (87)-(91) and (93)-(94) that \(\boldsymbol{\Psi}\) solves the following \(2\times 2\) Riemann-Hilbert problem. Let \(\lambda^{p}_{\boldsymbol{\Xi}}\) denote the branch of the power function analytic in \(\mathbb{C}\setminus\mathrm{i}\mathbb{R}_{-}\) with argument chosen so that \[-\frac{\pi}{2}<\arg_{\boldsymbol{\Xi}}(\lambda)<\frac{3\pi}{2}. \tag{95}\] The notation reminds us that the branch cut of these functions is the contour carrying lower triangular Stokes matrices. **Riemann-Hilbert Problem 1**.: _Fix generic monodromy parameters \((e_{1},e_{2})\) determining the Stokes and connection matrices, and \(x>0\). We seek a \(2\times 2\) matrix function \(\lambda\mapsto\mathbf{\Psi}(\lambda,x)\) satisfying:_ * _Analyticity:_ \(\mathbf{\Psi}(\lambda,x)\) _is analytic in_ \(\mathbf{C}\setminus L^{(6)}\)_, where_ \(L^{(6)}=\{|\lambda|=1\}\cup\mathrm{i}\mathbb{R}\) _is the jump contour shown in Figure_ 3_._ * _Jump condition:_ \(\mathbf{\Psi}(\lambda,x)\) _has continuous boundary values on_ \(L^{(6)}\setminus\{0\}\) _from each component of_ \(\mathbf{C}\setminus L^{(6)}\)_, which satisfy_ \(\mathbf{\Psi}_{+}(\lambda,x)=\mathbf{\Psi}_{-}(\lambda,x)\mathbf{J}_{\mathbf{ \Psi}}(\lambda)\)_, where_ \(\mathbf{J}_{\mathbf{\Psi}}(\lambda)\) _is as shown in Figure_ 3 _and where the_ \(+\) _(resp.,_ \(-\)_) subscript denotes a boundary value taken from the left (resp., right) of an arc of_ \(L^{(6)}\)_._ * _Normalization:_ \(\mathbf{\Psi}\) _satisfies the asymptotic conditions_ (96) \[\mathbf{\Psi}(\lambda,x)=\left(\mathbb{I}+\Xi^{(6)}(x)\lambda^{-1}+\mathcal{ O}(\lambda^{-2})\right)\mathrm{e}^{\mathrm{i}x\lambda\varphi_{3}/2}\lambda_{ \blacksquare}^{-\Theta_{\infty}\sigma_{3}/2}\quad\text{as}\quad\lambda\to\infty,\] _and_ (97) \[\mathbf{\Psi}(\lambda,x)=\left(\mathbf{\Delta}^{(6)}(x)+\mathcal{O}(\lambda) \right)\mathrm{e}^{-\mathrm{i}x\lambda^{-1}\sigma_{3}/2}\lambda_{\blacksquare}^{ \Theta_{0}\varphi_{3}/2}\quad\text{as}\quad\lambda\to 0,\] _where_ \(\mathbf{\Delta}^{(6)}(x)\) _is a matrix determined from_ \(\mathbf{\Psi}(\lambda,x)\) _having unit determinant._ Observe that if \(\mathbf{\Psi}\) solves Riemann-Hilbert Problem 1, then the following limit exists: \[\Xi^{(6)}(x):=\lim_{\lambda\to\infty}\lambda\left[\mathbf{\Psi}(\lambda,x) \mathrm{e}^{-\mathrm{i}x\lambda\varphi_{3}/2}\lambda_{\blacksquare}^{\Theta_{0} \varphi_{3}/2}-\mathbb{I}\right]. \tag{98}\] Existence of a solution \(\mathbf{\Psi}(\lambda,x)\) to Riemann-Hilbert Problem 1 which is meromorphic in \(x\) on a covering of the plane is well-established; see, e.g., [12, Theorem 5.4]. Furthermore, it follows from the Riemann-Hilbert problem that \(\det(\mathbf{\Psi}(\lambda;x))=1\). 
The solution of the Painleve-III\((D_{6})\) equation for the initial data that generated the matrices for the inverse monodromy problem is given by \[u(x)=\frac{-\mathrm{i}\Xi_{12}^{(6)}(x)}{\Delta_{11}^{(6)}(x)\Delta_{12}^{(6)} (x)}, \tag{99}\] where \(\mathbf{\Delta}^{(6)},\Xi^{(6)}\) are as in (97), (98), respectively. To study the direct monodromy problem and obtain the jump matrices given just the values of \(u\) and \(u^{\prime}\) at an initial point \(x_{0}\), it is necessary to introduce artificial initial values of the auxiliary functions \(s,t,v,y\) at \(x_{0}\) in way consistent with the definition (82) of \(u(x)\). Different consistent choices lead to different jump matrices, but the jump matrices determine the same function \(u(x)\) via (99). This symmetry is reflected at the level of \(\boldsymbol{\nabla}(\lambda,x)\) by the conjugation \(\boldsymbol{\nabla}(\lambda,x)\mapsto\delta^{-\sigma_{3}}\boldsymbol{\nabla}( \lambda,x)\delta^{\sigma_{3}}\) for any \(\delta\neq 0\). Another symmetry that also leaves \(u(x)\) invariant but changes the jump matrices \(\mathbf{C}^{\pm}_{0\infty}\) is multiplication of \(\boldsymbol{\nabla}(\lambda,x)\) on the right for \(|\lambda|<1\) only by a unit-determinant diagonal matrix. Therefore, having obtained the jump matrices for the inverse monodromy problem via a direct monodromy calculation, after the fact we may introduce an arbitrary transformation of \(\boldsymbol{\nabla}(\lambda,x)\) of the form \[\boldsymbol{\nabla}(\lambda,x)\mapsto\widetilde{\boldsymbol{\nabla}}(\lambda, x):=\begin{cases}\delta^{-\sigma_{3}}\boldsymbol{\nabla}(\lambda,x)\gamma^{ \sigma_{3}},&|\lambda|<1\\ \delta^{-\sigma_{3}}\boldsymbol{\nabla}(\lambda,x)\delta^{\sigma_{3}},&| \lambda|>1\end{cases} \tag{100}\] without changing \(u(x)\). This transformation modifies the Stokes matrices as follows: \[\mathbf{S}^{\infty}_{1,2}\mapsto\widetilde{\mathbf{S}}^{\infty}_{1,2}:= \delta^{-\sigma_{3}}\mathbf{S}^{\infty}_{1,2}\delta^{\sigma_{3}}\quad\text{ and}\quad\mathbf{S}^{0}_{1,2}\mapsto\widetilde{\mathbf{S}}^{0}_{1,2}:=\gamma^{- \sigma_{3}}\mathbf{S}^{0}_{1,2}\gamma^{\sigma_{3}} \tag{101}\] and it modifies the connection matrices as \[\mathbf{C}^{\pm}_{0\infty}\mapsto\widetilde{\mathbf{C}}^{\pm}_{0\infty}:= \delta^{-\sigma_{3}}\mathbf{C}^{\pm}_{0\infty}\gamma^{\sigma_{3}}. \tag{102}\] ### Monodromy parameters \((e_{1},e_{2})\) The cyclic products of the jump matrices for the inverse monodromy problem about the two non-singular self-intersection points of the jump contour \(\lambda=\pm\mathrm{i}\) read \[\begin{split}\text{about }\lambda=+\mathrm{i}:& (\mathbf{C}^{-}_{0\infty})^{-1}(\mathbf{S}^{\infty}_{1})^{-1}\mathbf{C}^{+}_{0 \infty}\mathbf{S}^{0}_{1}=\mathbb{I}\\ \text{about }\lambda=-\mathrm{i}:&\mathbf{S}^{\infty}_{ 2}e^{2\sigma_{3}}_{\infty}\mathbf{C}^{+}_{0\infty}e^{2\sigma_{3}}_{0}(\mathbf{ S}^{0}_{2})^{-1}(\mathbf{C}^{-}_{0\infty})^{-1}=\mathbb{I}.\end{split} \tag{103}\] We can use the second relation to explicitly write \(\mathbf{C}^{+}_{0\infty}\) in terms of two Stokes matrices and the other connection matrix: \[\mathbf{C}^{+}_{0\infty}=e^{-2\sigma_{3}}_{\infty}(\mathbf{S}^{\infty}_{2})^ {-1}\mathbf{C}^{-}_{0\infty}\mathbf{S}^{0}_{2}e^{-2\sigma_{3}}_{0}. \tag{104}\] This identity is an analog of [23, Eq. (3.17)]. Under the condition that \(\det\mathbf{C}^{-}_{0\infty}=1\), we immediately get that \(\det\mathbf{C}^{+}_{0\infty}=1\). 
Furthermore, using (104) we eliminate \(\mathbf{C}^{+}_{0\infty}\) from the first equation of (103) to obtain the identity \[(\mathbf{S}^{0}_{1})^{-1}e^{2\sigma_{3}}_{0}(\mathbf{S}^{0}_{2})^{-1}=( \mathbf{C}^{-}_{0\infty})^{-1}(\mathbf{S}^{\infty}_{1})^{-1}e^{-2\sigma_{3}} _{\infty}(\mathbf{S}^{\infty}_{2})^{-1}\mathbf{C}^{-}_{0\infty}. \tag{105}\] In other words, \((\mathbf{S}^{\infty}_{1})^{-1}e^{-2\sigma_{3}}_{\infty}(\mathbf{S}^{\infty}_ {2})^{-1}\) and \((\mathbf{S}^{0}_{1})^{-1}e^{2\sigma_{3}}_{0}(\mathbf{S}^{0}_{2})^{-1}\) are similar unit-determinant matrices. Note that this is merely reflective of the fact that both products are monodromy matrices, possibly expressed in terms of different bases of fundamental solutions, for a simple circuit about the origin for solutions of the system (78). Let us assume that they have distinct eigenvalues that we will denote \(e_{1}^{\pm 2}\). Then, both products are diagonalizable, so there exist unit-determinant eigenvector matrices \(\mathbf{E}^{\infty}\) and \(\mathbf{E}^{0}\) such that \[(\mathbf{S}^{\infty}_{1})^{-1}e^{-2\sigma_{3}}_{\infty}(\mathbf{S}^{\infty}_ {2})^{-1}\mathbf{E}^{\infty}=\mathbf{E}^{\infty}e_{1}^{2\sigma_{3}}\quad\text{ and}\quad(\mathbf{S}^{0}_{1})^{-1}e^{2\sigma_{3}}_{0}(\mathbf{S}^{0}_{2})^{-1} \mathbf{E}^{0}=\mathbf{E}^{0}e_{1}^{2\sigma_{3}}. \tag{106}\] To specify the eigenvector matrices \(\mathbf{E}^{\infty}\), \(\mathbf{E}^{0}\) uniquely, we agree that their (2,2) entries are both equal to \(1\). Using (106) in (105) gives a homogeneous linear equation on \(\mathbf{C}^{-}_{0\infty}\) that can be written in commutator form as \[\left[e_{1}^{2\sigma_{3}},(\mathbf{E}^{\infty})^{-1}\mathbf{C}^{-}_{0\infty} \mathbf{E}^{0}\right]=\mathbf{0}.\] The diagonal matrix \(e_{1}^{2\sigma_{3}}\) can be written in the form \[e_{1}^{2\sigma_{3}}=f\mathbb{I}+g\sigma_{3},\quad f:=\frac{1}{2}\left(e_{1}^{2} +e_{1}^{-2}\right),\quad g:=\frac{1}{2}\left(e_{1}^{2}-e_{1}^{-2}\right).\] Under the assumption \(e_{1}^{4}\neq 1\) we already invoked to obtain diagonalizability, \(g\neq 0\) so the commutator equation implies that \((\mathbf{E}^{\infty})^{-1}\mathbf{C}^{-}_{0\infty}\mathbf{E}^{0}\) is a diagonal unit-determinant matrix that we may write in the form \(e_{2}^{\sigma_{3}}\). Thus we have the identity \[\mathbf{C}^{-}_{0\infty}=\mathbf{E}^{\infty}e_{2}^{\sigma_{3}}(\mathbf{E}^{0})^ {-1}. \tag{107}\] _Remark 2_.: Changing the sign of \(e_{2}\) changes the sign of the connection matrix. This corresponds to multiplication of the solution of Riemann-Hilbert Problem 1 by \(-1\) inside of unit disc. Looking at formula (99) we see that solution \(u(x)\) does not change after such transformation. Therefore we can assume \(-\frac{\pi}{2}<\mathsf{Arg}(e_{2})\leq\frac{\pi}{2}\). Using (107) in (104) then gives the equivalent representations \[\mathbf{C}^{+}_{0\infty} =e_{\infty}^{-2\sigma_{3}}(\mathbf{S}^{\infty}_{2})^{-1}\mathbf{E}^ {\infty}e_{2}^{\rho_{3}}(\mathbf{E}^{0})^{-1}\mathbf{S}^{0}_{2}e_{0}^{-2\sigma _{3}}\] \[=\mathbf{S}^{\infty}_{1}\mathbf{E}^{\infty}e_{2}^{\sigma_{3}}( \mathbf{E}^{0})^{-1}\left(\mathbf{S}^{0}_{1}\right)^{-1} \tag{108}\] ### Parametrization of Stokes multipliers and connection matrix Taking the trace of (105) we get \[e_{1}^{2}+\frac{1}{e_{1}^{2}}=e_{\infty}^{2}+\frac{1}{e_{\infty}^{2}}+s_{1}^{ \infty}s_{2}^{\infty}e_{\infty}^{2}=e_{0}^{2}+\frac{1}{e_{0}^{2}}+\frac{s_{1}^ {0}s_{2}^{0}}{e_{0}^{2}}. 
\tag{109}\] It is clear that one can solve for the products \(s_{1}^{\infty}s_{2}^{\infty}\) and \(s_{1}^{0}s_{2}^{0}\) in terms of \((e_{1}^{2},e_{\infty}^{2})\) and \((e_{1}^{2},e_{0}^{2})\) respectively. Using transformation (101) we can use a particular solutions of this relation as Stokes multipliers: \[s_{1}^{\infty}=\frac{e_{\infty}^{2}-e_{1}^{2}}{e_{1}^{2}e_{\infty}^{4}},\quad s _{2}^{\infty}=1-e_{1}^{2}e_{\infty}^{2},\quad s_{1}^{0}=\frac{e_{1}^{2}-e_{0} ^{2}}{e_{1}^{2}},\quad s_{2}^{0}=e_{1}^{2}e_{0}^{2}-1. \tag{110}\] With the Stokes matrices specified in this way, the eigenvector matrices \(\mathbf{E}^{\infty}\) and \(\mathbf{E}^{0}\) are uniquely specified as mentioned earlier by taking the \((2,2)\) entry to be \(1\) in each case, which yields \[\mathbf{E}^{\infty}=\left[\begin{array}{cc}\frac{e_{1}^{2}\left(e_{1}^{2}-e _{\infty}^{2}\right)}{e_{1}^{4}-1}&-\frac{1}{e_{1}^{2}e_{\infty}^{2}}\\ \frac{e_{1}^{2}e_{\infty}^{2}\left(e_{1}^{2}e_{\infty}^{2}-1\right)}{e_{1}^{4 }-1}&1\end{array}\right], \tag{111}\] \[\mathbf{E}^{0}=\left[\begin{array}{cc}\frac{e_{1}^{2}\left(e_{0}^{2}e_{1}^{ 2}-1\right)}{e_{0}^{2}\left(e_{1}^{4}-1\right)}&\frac{e_{1}^{2}-e_{0}^{2}}{e_{ 1}^{2}\left(e_{0}^{2}e_{1}^{2}-1\right)}\\ \frac{e_{1}^{2}\left(1-e_{0}^{2}e_{1}^{2}\right)}{e_{0}^{2}\left(e_{1}^{4}-1 \right)}&1\end{array}\right]. \tag{112}\] After making such choices, we obtain the formulae (17)-(19). At this point, it can be directly checked that our choices are consistent with the full equation (105) with \(\mathbf{C}^{-}_{0\infty}\) given by (107). One can think of fixing the (2,2) entry in the following way: the eigenvector matrices \(\mathbf{E}^{\infty}\) and \(\mathbf{E}^{0}\) represent "internal degrees of freedom" that have an additional symmetry, namely arbitrary scalings of the eigenvectors that preserve determinants. In other words, while (100) induces a conjugation symmetry on the eigenvector matrices, there is an additional symmetry for each involving multiplication on the right by an arbitrary unit-determinant diagonal matrix. Thus, the matrices \(\mathbf{E}^{\infty}\) and \(\mathbf{E}^{0}\) undergo the transformations \[\mathbf{E}^{\infty}\mapsto\widetilde{\mathbf{E}}^{\infty}:=\delta^{-\sigma_{3 }}\mathbf{E}^{\infty}\delta^{\sigma_{3}}e_{\infty}^{\sigma_{3}}\quad\text{ and}\quad\mathbf{E}^{0}\mapsto\widetilde{\mathbf{E}}^{0}:=\gamma^{-\sigma_{3}} \mathbf{E}^{0}\gamma^{\sigma_{3}}e_{0}^{\sigma_{3}}\] for some arbitrary nonzero quantities \(\epsilon_{\infty}\) and \(\epsilon_{0}\). Note that these transformations along with (102) and \(e_{2}^{\sigma_{3}}=(\mathbf{E}^{\infty})^{-1}\mathbf{C}^{-}_{0\infty}\mathbf{E} ^{0}\) imply that \[e_{2}\mapsto\widetilde{e}_{2}:=\frac{\gamma\epsilon_{0}}{\delta\epsilon_{ \infty}}e_{2}. \tag{113}\] By contrast, \(e_{1}^{2}\mapsto\widetilde{e}_{1}^{2}:=e_{1}^{2}\) is a symmetry invariant. _Remark 3_.: In the case where one is interested in values of \(x\in\mathbb{C}\) with \(|\mathsf{Arg}(x)|<\pi\), the analogue of Figure 3 is shown in Figure 4 below, where the nonsingular self-intersection points are at \(\lambda=\pm\mathrm{ie}^{\pm\mathrm{i}\mathsf{Arg}(x)}\). The angles of the rays in the contour \(L^{(6)}\) are chosen so that \(\mathrm{i}\lambda x\in\mathbb{R}\) on the rays extending to \(\lambda=\infty\), and \(\mathrm{i}\lambda^{-1}x\in\mathbb{R}\) on the rays extending to \(\lambda=0\). 
Similar to Section 4.2, one can formulate a Riemann-Hilbert problem for a sectionally analytic function \(\lambda\mapsto\boldsymbol{\nabla}(\lambda,x)\) off of the contour \(L^{(6)}\) and one finds _four_ connection matrices instead of two, denoted \(\mathbf{C}_{1}\) through \(\mathbf{C}_{4}\), defined on corresponding arcs of the unit circle as shown in Figure 4. These satisfy cyclic conditions similar to (103), namely \[\text{about }\lambda=\mathrm{i}\mathrm{e}^{-\mathrm{i}\mathbf{Arg}(x)} :\quad\mathbf{C}_{1}^{-1}\mathbf{S}_{1}^{\infty}\mathbf{C}_{2}= \mathbf{I},\] \[\text{about }\lambda=-\mathrm{i}\mathrm{e}^{\mathrm{i}\mathbf{Arg}(x)} :\quad\mathbf{C}_{1}(\mathbf{S}_{2}^{0}e_{0}^{-2\gamma_{3}})^{-1} \mathbf{C}_{4}^{-1}=\mathbf{I},\] \[\text{about }\lambda=-\mathrm{i}\mathrm{e}^{-\mathrm{i}\mathbf{Arg}(x)} :\quad\mathbf{C}_{3}^{-1}\mathbf{S}_{2}^{\infty}e_{\infty}^{2\gamma _{3}}\mathbf{C}_{4}=\mathbf{I},\] \[\text{about }\lambda=\mathrm{i}\mathrm{e}^{\mathrm{i}\mathbf{Arg}(x)} :\quad\mathbf{C}_{3}(\mathbf{S}_{1}^{0})^{-1}\mathbf{C}_{2}^{-1}= \mathbf{I}.\] Eliminating all but \(\mathbf{C}_{3}\) from the above identities yields the analog of (105), namely \[(\mathbf{S}_{1}^{0})^{-1}e_{0}^{2\sigma_{3}}(\mathbf{S}_{2}^{0})^{-1}=\mathbf{ C}_{3}^{-1}(\mathbf{S}_{1}^{\infty})^{-1}e_{\infty}^{-2\sigma_{3}}(\mathbf{S}_{2 }^{\infty})^{-1}\mathbf{C}_{3}.\] Reasoning similar to that of Section 4.3 yields \[\mathbf{C}_{3}=\mathbf{E}^{\infty}e_{2}^{\sigma_{3}}(\mathbf{E}^{0})^{-1}, \tag{114}\] which, in turn, yields \[\mathbf{C}_{1} =\mathbf{S}_{1}^{\infty}\mathbf{E}^{\infty}e_{2}^{\sigma_{3}}( \mathbf{E}^{0})^{-1}(\mathbf{S}_{1}^{0})^{-1},\] \[\mathbf{C}_{2} =\mathbf{E}^{\infty}e_{2}^{\sigma_{3}}(\mathbf{E}^{0})^{-1}( \mathbf{S}_{1}^{0})^{-1},\] \[\mathbf{C}_{4} =e_{\infty}^{-2\sigma_{3}}(\mathbf{S}_{2}^{\infty})^{-1}\mathbf{ E}^{\infty}e_{2}^{\sigma_{3}}(\mathbf{E}^{0})^{-1}=\mathbf{S}_{1}^{\infty} \mathbf{E}^{\infty}e_{1}^{2\gamma_{3}}e_{2}^{\sigma_{3}}(\mathbf{E}^{0})^{-1}. \tag{115}\] In this setting, we must adjust our choice of the branch \(\lambda\mapsto\arg_{\blacksquare}(\lambda)\), and we choose a branch which satisfies (cf. (95) when \(\operatorname{Arg}(x)=0\)) \[-\frac{\pi}{2}-\operatorname{Arg}(x)<\arg_{\blacksquare}(\lambda)<\frac{3\pi}{2}- \operatorname{Arg}(x),\quad|\lambda|\to\infty,\] and, \[-\frac{\pi}{2}+\operatorname{Arg}(x)<\arg_{\blacksquare}(\lambda)<\frac{3\pi}{2}+ \operatorname{Arg}(x),\quad|\lambda|\to 0.\] A concrete branch cut is chosen later, see Remark 8 below. ### Example: rational solutions of Painleve-III(\(D_{6}\)) One can check that the Painleve-III(\(D_{6}\)) equation with parameters related by \(\Theta_{0}=\Theta_{\infty}-1\) admits the constant solution \(u(x)\equiv 1\). Its monodromy data was calculated in [5, Section 4] by taking advantage of the fact that the compatible \(x\)-equation (80) in the Lax pair has simple coefficients. Denoting \(m=\Theta_{0}=\Theta_{\infty}-1\) gives \(e_{0}^{-2}=\mathrm{e}^{-\mathrm{i}\pi m}\) and \(e_{\infty}^{2}=-\mathrm{e}^{\mathrm{i}\pi m}\). Choosing Figure 4. The analogue of the contour \(L^{(6)}\) in Figure 3 when \(|\operatorname{Arg}(x)|\neq 0\). 
\(\gamma,\delta\) in (100) satisfying \[\delta^{2}=\mathrm{e}^{-2\pi\mathrm{i}m}(1-\mathrm{i}\mathrm{e}^{\pi\mathrm{i}m} )\frac{\Gamma(\frac{1}{2}-m)}{\sqrt{2\pi}}\quad\text{and}\quad\gamma^{2}=(1+ \mathrm{i}\mathrm{e}^{\pi\mathrm{i}m})\frac{\Gamma(\frac{1}{2}-m)}{\sqrt{2\pi}}, \tag{116}\] one obtains \[s_{1}^{0}=\frac{\sqrt{2\pi}}{\Gamma(\frac{1}{2}-m)},\quad s_{1}^{\infty}=- \frac{\sqrt{2\pi}}{\Gamma(\frac{1}{2}-m)},\quad s_{2}^{0}=-\mathrm{e}^{\mathrm{ i}\pi m}\frac{\sqrt{2\pi}}{\Gamma(\frac{1}{2}+m)},\quad s_{2}^{\infty}= \mathrm{e}^{-\mathrm{i}\pi m}\frac{\sqrt{2\pi}}{\Gamma(\frac{1}{2}+m)}.\] With this choice of \(\gamma,\delta\) and \(\mathbf{E}^{\infty},\mathbf{E}^{0}\) chosen as in (111), (112) (that is, we insist that the (2,2) entry of \(\mathbf{E}^{\infty},\mathbf{E}^{0}\) is \(1\) by setting \(\epsilon_{0}=\epsilon_{\infty}=1\)), the connection matrices are \[\mathbf{C}^{+}_{0\infty}=\begin{bmatrix}1&-\frac{\sqrt{2\pi}}{\Gamma(\frac{1} {2}-m)}\\ 0&1\end{bmatrix},\quad\mathbf{C}^{-}_{0\infty}=\begin{bmatrix}1&\frac{\sqrt{2 \pi}}{\Gamma(\frac{1}{2}-m)}\\ 0&1\end{bmatrix},\] and \[e_{1}^{2}=\mathrm{i}\quad\text{and}\quad e_{2}=1.\] _Remark 4_.: The above gauge is only needed to match our setup with that of [5]; in the sequel we will be working with \(\gamma=\delta=1\). Formula (107) then implies \[e_{2}^{2}=\mathrm{e}^{-2\pi\mathrm{i}m}\frac{1-\mathrm{i}\mathrm{e}^{\pi \mathrm{i}m}}{1+\mathrm{i}\mathrm{e}^{\pi\mathrm{i}m}}.\] This is important to note when, for example, one tries to verify that (235) below reduces to (25). Before beginning to study the large \(n\) behavior of \(u_{n}\), we must first establish a similar monodromy representation of the limiting solution of Painleve-III\((D_{8})\), which we do in Section 5 below. ### Monodromy manifold It is known that the monodromy manifold for Painleve-III\((D_{6})\) can be given by a cubic equation (see, e.g. [42]), which can be recovered from our point of view as follows. Denote \[\mathbf{C}^{-}_{0\infty}=\begin{bmatrix}\ell_{1}&\ell_{2}\\ \ell_{3}&\ell_{4}\end{bmatrix}\] and \[\mathbf{S}^{0}_{2}e_{0}^{-2\sigma_{3}}\mathbf{S}^{0}_{1}=e_{0}^{-2}\begin{bmatrix} 1&s_{1}^{0}\\ s_{2}^{0}&\left(e_{0}^{4}+s_{1}^{0}s_{2}^{0}\right)\end{bmatrix}=\begin{bmatrix} m_{1}&m_{2}\\ m_{3}&m_{4}\end{bmatrix}.\] Then, inverse of the cyclic relation (105) allows us to solve for \(s_{1}^{\infty},s_{2}^{\infty}\) in terms of parameters \(m_{i},\ell_{i}\), and imposes the constraint \[e_{\infty}^{2}=\ell_{1}\ell_{4}m_{1}-\ell_{1}\ell_{3}m_{2}+\ell_{2}\ell_{4}m_{ 3}-\ell_{2}\ell_{3}m_{4}. \tag{117}\] Hence, we are left with these eight parameters subject to the constraint (117) and the unit-determinant conditions \[\ell_{1}\ell_{4}-\ell_{2}\ell_{3}=1,\quad m_{1}m_{4}-m_{2}m_{3}=1. 
\tag{118}\] We may define coordinates which are invariant under the transformation (100): \[I_{1}:=\ell_{1}\ell_{4},\quad I_{2}:=m_{2}\ell_{1}\ell_{3},\quad I_{3}:=m_{3 }\ell_{2}\ell_{4},\quad I_{4}:=m_{4},\quad I_{5}:=m_{1}.\] Equations (117), (118) imply \[e_{\infty}^{2}=e_{0}^{-2}I_{1}-I_{2}+I_{3}-I_{4}(I_{1}-1),\quad I_{2}I_{3}-I_{ 1}(e_{0}^{-2}I_{4}-1)(I_{1}-1)=0.\] We eliminate \(I_{3}\) and get \[-I_{1}+e_{0}^{-2}I_{1}I_{4}+I_{1}^{2}-e_{0}^{-2}I_{4}I_{1}^{2}+e_{\infty}^{2}I_ {2}-I_{2}I_{4}-e_{0}^{-2}I_{1}I_{2}+I_{1}I_{2}I_{4}+I_{2}^{2}=0.\] Introducing new variables \[x_{1}:=I_{1}-1,\quad x_{2}:=-e_{0}^{-2}I_{1}+I_{2},\quad x_{3}:=I_{4}+e_{0}^{-2} \tag{119}\] yields the following equation, which defines the _monodromy manifold_ for the problem: \[x_{1}x_{2}x_{3}+x_{1}^{2}+x_{2}^{2}+x_{2}(e_{0}^{-2}+e_{\infty}^{2})+x_{1}(1+e_{0} ^{-2}e_{\infty}^{2})+e_{0}^{-2}e_{\infty}^{2}=0. \tag{120}\] This matches (14) upon using (83) and (92). Using (119) we obtain formulas (17)-(19). To find the singularities of (120), we adjoin to (120) the three equations obtained by setting to zero the components of the gradient vector of the left-hand side of (120) with respect to \((x_{1},x_{2},x_{3})\). There is therefore at most one singularity: \[\text{for }e_{0}^{-2}=e_{\infty}^{2}: (x_{1},x_{2},x_{3})=(0,-e_{0}^{-2},e_{0}^{2}+e_{0}^{-2}), \tag{122}\] \[\text{for }e_{0}^{2}=e_{\infty}^{2}: (x_{1},x_{2},x_{3})=(-1,0,e_{0}^{2}+e_{0}^{-2}). \tag{121}\] In particular, if neither \(e_{0}^{-2}=e_{\infty}^{2}\) nor \(e_{0}^{2}=e_{\infty}^{2}\), then the monodromy manifold is a smooth curve with no singular points. Notice that we can use \((x_{1},x_{2})\) as parameters for the generic collection of points on monodromy manifold (120) for which \(x_{1}x_{2}\neq 0\), because \(x_{3}\) can be explicitly expressed in terms of the other coordinates. The points satisfying (120) with \(x_{1}=0\) form a \(1\)-dimensional variety consisting in general of two distinct lines: \[(x_{1},x_{2},x_{3})=(0,-e_{0}^{-2},x_{3})\quad\text{or}\quad(x_{1},x_{2},x_{3 })=(0,-e_{\infty}^{2},x_{3})\] each parametrized by \(x_{3}\in\mathbb{C}\). If \(e_{0}^{-2}=e_{\infty}^{2}\), the two lines coincide and pass through the critical point (121) of (120). Likewise there are generally two lines on (120) along which \(x_{2}=0\) each parametrized by \(x_{3}\in\mathbb{C}\): \[(x_{1},x_{2},x_{3})=(-1,0,x_{3})\quad\text{or}\quad(x_{1},x_{2},x_{3})=(-e_{0} ^{-2}e_{\infty}^{2},0,x_{3})\] and if \(e_{0}^{2}=e_{\infty}^{2}\), the two lines again coincide and pass through the critical point (122) of (120). ## 5. General monodromy data: Painleve-III\((D_{8})\) ### Lax pair for Painleve-III\((D_{8})\) The Painleve-III\((D_{8})\) equation (3) can also be formulated as an isomonodromic deformation of a linear system. In this case we need two ramified irregular singularities at \(\lambda=0\) and \(\lambda=\infty\), i.e. 
we consider the system \[\frac{\partial\mathbf{\Omega}}{\partial\lambda}(\lambda,z) =\mathbf{\Lambda}^{(8)}(\lambda,z)\mathbf{\Omega}(\lambda,z), \tag{124}\] \[\frac{\partial\mathbf{\Omega}}{\partial z}(\lambda,z) =\mathbf{Z}(\lambda,z)\mathbf{\Omega}(\lambda,z), \tag{123}\] where \[\mathbf{\Lambda}^{(8)}(\lambda,z)=\begin{bmatrix}0&\mathrm{i}z\\ 0&0\end{bmatrix}+\frac{1}{4\lambda}\begin{bmatrix}V(z)&W(z)\\ 2&-V(z)\end{bmatrix}+\frac{1}{\lambda^{2}}\begin{bmatrix}X(z)&-2\mathrm{i}X(z )^{2}U(z)\\ -\mathrm{i}/(2U(z))&-X(z)\end{bmatrix},\] and \[\mathbf{Z}(\lambda,z)=\lambda\begin{bmatrix}0&\mathrm{i}\\ 0&0\end{bmatrix}+\frac{1}{4z}\begin{bmatrix}V(z)&W(z)\\ 2&-V(z)\end{bmatrix}-\frac{1}{z\lambda}\begin{bmatrix}X(z)&-2\mathrm{i}X(z)^{ 2}U(z)\\ -\mathrm{i}/(2U(z))&-X(z)\end{bmatrix},\] and functions \(U(z),V(z),W(z),X(z)\) satisfy the identities \[W(z)+4zU(z)+4\mathrm{i}U(z)V(z)X(z)+8U(z)^{2}X(z)^{2}=0, \tag{125}\] \[U(z)V(z)^{2}-4U(z)V(z)+2U(z)W(z)+3U(z)+8z=0. \tag{126}\] Note the characteristic feature that the leading terms of \(\mathbf{\Lambda}^{(8)}(\lambda,z)\) and of \(\mathbf{Z}(\lambda,x)\) at the singular points \(\lambda=0,\infty\) are singular and nondiagonalizable matrices. Since \(\mathbf{\Omega}(\lambda,z)\) is a simultaneous fundamental solution matrix of the Lax system (123), (124), the zero-curvature compatibility condition for that system is therefore satisfied: \[\frac{\partial\mathbf{\Lambda}^{(8)}}{\partial z}(\lambda,z)-\frac{\partial \mathbf{Z}}{\partial\lambda}(\lambda,z)+[\mathbf{\Lambda}^{(8)}(\lambda,z), \mathbf{Z}(\lambda,z)]=\mathbf{0}.\] Equating to zero the coefficients of different powers of \(\lambda\) on the left-hand side gives a first-order system of four differential equations on the four functions \(U(z)\), \(V(z)\), \(W(z)\), and \(X(z)\): \[\begin{split} zU^{\prime}(z)&=V(z)U(z)-U(z)-4 \mathrm{i}X(z)U(z)^{2}\\ V^{\prime}(z)&=\frac{4}{U(z)}\\ W^{\prime}(z)&=-16\mathrm{i}X(z)\\ zX^{\prime}(z)&=X(z)+2\mathrm{i}U(z)X(z)^{2}-\frac{ \mathrm{i}W(z)}{4U(z)}.\end{split} \tag{127}\] It is possible to express the functions \(W(z)\), \(X(z)\), and \(V(z)\) in terms of \(U(z)\) and \(U^{\prime}(z)\) using (125), (126), and (127), but since we do not use these formulas, we do not present them here. Using (127) to repeatedly eliminate all derivatives, it is straightforward to obtain the following identity \[U^{\prime\prime}(z)-\frac{U^{\prime}(z)^{2}}{U(z)}+\frac{U^{\prime }(z)}{z}-\frac{4U(z)^{2}+4}{z}\\ &=-\frac{U(z)}{z^{2}}\left[W(z)+4zU(z)+4\mathrm{i}U(z)V(z)X(z)+8U (z)^{2}X(z)^{2}\right].\end{split}\] Of course the right-hand side vanishes as a result of the identity (125). Hence \(U(z)\) is a solution of (3), the Painleve-III(\(D_{8}\)) equation. For all the calculations that follow, we assume for simplicity that \(z>0\). 
The system (123) admits formal solutions near the singular points \[\mathbf{\Omega}^{(\infty)}_{\mathrm{formal}}(\lambda,z)=\left(\mathbb{I}+ \frac{\mathbf{\Xi}^{(8)}(z)}{\lambda}+\mathcal{O}(\lambda^{-2})\right)\rho_{ \infty}^{\sigma_{3}/2}\frac{1}{\sqrt{2}}\begin{bmatrix}\mathrm{i}&-1\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\mathrm{i}\varphi_{\infty}\sigma_{3}} \quad\text{as}\quad\lambda\to\infty, \tag{128}\] and \[\mathbf{\Omega}^{(0)}_{\mathrm{formal}}(\lambda,z)=\mathbf{\Delta}^{(8)}(z) \left(\mathbb{I}+\mathbf{\Pi}(z)\lambda+\mathcal{O}(\lambda^{2})\right)\rho_{ 0}^{\sigma_{3}/2}\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4}\frac{1}{\sqrt{2}} \begin{bmatrix}\mathrm{i}&-1\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\mathrm{i}\varphi_{0}\sigma_{3}}\quad \text{as}\quad\lambda\to 0, \tag{129}\] where \[\rho_{\infty}=\sqrt{-2\mathrm{i}z\lambda},\quad\rho_{0}=\sqrt{2\mathrm{i}z \lambda^{-1}}\] and \(\lambda^{p}\) denotes the principal branch of the power function. The function \(\mathbf{\Delta}^{(8)}(z)\) satisfies the identity \[\mathbf{\Delta}^{(8)}(z)\begin{bmatrix}0&-\mathrm{i}z\\ 0&0\end{bmatrix}\mathbf{\Delta}^{(8)}(z)^{-1}=\begin{bmatrix}X(z)&-2\mathrm{i} X(z)^{2}U(z)\\ -\mathrm{i}/\left(2U(z)\right)&-X(z)\end{bmatrix}, \tag{130}\] and hence the solution \(U(z)\) can be expressed as \[U(z):=-\frac{1}{2z\Delta^{(8)}_{21}(z)^{2}}. \tag{131}\] For \(k=1,2\), we define the Stokes sectors, \[\mathcal{S}^{(\infty)}_{k} =\left\{\lambda\in\mathbb{C}\;:\;|\lambda|>R,\quad 2\pi k-\frac{7 \pi}{2}<\mathsf{Arg}(\lambda)<2\pi k+\frac{\pi}{2}\right\},\] \[\mathcal{S}^{(0)}_{k} =\left\{\lambda\in\mathbb{C}\;:\;|\lambda|<r,\quad 2\pi k-\frac{5\pi}{2}< \mathsf{Arg}(\lambda)<2\pi k+\frac{3\pi}{2}\right\}.\] It follows from the classical theory of linear systems that there exist canonical solutions \(\mathbf{\Omega}^{(\infty)}_{k},\mathbf{\Omega}^{(0)}_{k}\) determined uniquely by the asymptotic condition \[\mathbf{\Omega}^{(v)}_{k}(\lambda,z)=\mathbf{\Omega}^{(v)}_{\mathrm{formal}}( \lambda,z),\quad\lambda\in\mathcal{S}^{(v)}_{k},\,v\in\{0,\infty\}. \tag{132}\] The canonical solutions in consecutive Stokes sectors at \(\lambda=0,\infty\) are related to one another by multiplications on the right with Stokes matrices, i.e. \[\boldsymbol{\Omega}_{2}^{(\infty)}(\lambda,z) =\boldsymbol{\Omega}_{1}^{(\infty)}(\lambda,z)\mathbf{S}_{1}^{ \infty},\quad\lambda\in\mathcal{S}_{1}^{(\infty)}\cap\mathcal{S}_{2}^{(\infty)} \tag{134}\] \[\boldsymbol{\Omega}_{1}^{(0)}(\lambda,z) =\boldsymbol{\Omega}_{0}^{(0)}(\lambda,z)\mathbf{S}_{0}^{0}, \quad\lambda\in\mathcal{S}_{0}^{(0)}\cap\mathcal{S}_{1}^{(0)}\] (135) \[\boldsymbol{\Omega}_{2}^{(\infty)}(\lambda,z) =\boldsymbol{\Omega}_{1}^{(\infty)}(\mathrm{e}^{-2\pi\mathrm{i} }\lambda,z)(-\mathrm{i}\sigma_{2}),\] (136) \[\boldsymbol{\Omega}_{1}^{(0)}(\lambda,z) =\boldsymbol{\Omega}_{0}^{(0)}(\mathrm{e}^{-2\pi\mathrm{i}} \lambda,z)(\mathrm{i}\sigma_{2}), \tag{133}\] where \[\mathbf{S}_{1}^{\infty}=\begin{bmatrix}1&t_{1}^{\infty}\\ 0&1\end{bmatrix},\quad\mathbf{S}_{0}^{0}=\begin{bmatrix}1&0\\ t_{0}^{0}&1\end{bmatrix}. \tag{137}\] Canonical solutions in, say, \(\mathcal{S}_{k}^{(\infty)}\) admit analytic continuation into \(\mathcal{S}_{k}^{(0)}\) and since both canonical solutions solve (78) in the same domain, they must be related by multiplication on the right by a constant connection matrix, which we define using \[\boldsymbol{\Omega}_{0}^{(0)}(\lambda,z)=\boldsymbol{\Omega}_{1}^{(\infty)}( \lambda,z)\mathbf{C}_{0\infty}. 
\tag{138}\] ### Riemann-Hilbert problem for Painleve-III(\(D_{8}\)) In a fashion similar to Section 4.2, we now formulate a \(2\times 2\) Riemann-Hilbert problem for a sectionally-analytic function \(\boldsymbol{\Omega}\) defined by \[\boldsymbol{\Omega}(\lambda,z)=\left\{\begin{array}{ll}\boldsymbol{\Omega} _{1}^{(\infty)}(\lambda,z),&|\lambda|>1\quad\text{and}\quad-\frac{\pi}{2}< \mathsf{Arg}(\lambda)<\frac{3\pi}{2},\\ \boldsymbol{\Omega}_{0}^{(0)}(\lambda,z),&|\lambda|<1\quad\text{and}\quad- \frac{\pi}{2}<\mathsf{Arg}(\lambda)<\frac{3\pi}{2}.\end{array}\right.\] Then, it follows from the asymptotic conditions (132) and the relations (133)-(136) and (138) that \(\boldsymbol{\Omega}\) solves the following \(2\times 2\) Riemann-Hilbert problem. **Riemann-Hilbert Problem 2**.: _Fix monodromy data \((t_{0^{\prime}}^{0},t_{1}^{\infty})\) and \(z>0\). We seek a \(2\times 2\) matrix function \(\lambda\mapsto\boldsymbol{\Omega}(\lambda,z)\) satisfying:_ _Analyticity:_ \(\boldsymbol{\Omega}(\lambda,z)\) _is analytic in_ \(\mathbb{C}\setminus L^{(8)}\)_, where_ \(L^{(8)}=\{|\lambda|=1\}\cup\mathrm{i}\mathbb{R}_{-}\) _is the jump contour shown in Figure_ 5_._ _Jump condition:_ \(\boldsymbol{\Omega}(\lambda,z)\) _has continuous boundary values on_ \(L^{(8)}\setminus\{0\}\) _which satisfy_ \[\boldsymbol{\Omega}_{+}(\lambda,z)=\boldsymbol{\Omega}_{-}(\lambda,z) \mathbf{J}_{\boldsymbol{\Omega}}(\lambda),\] _where_ \(\mathbf{J}_{\boldsymbol{\Omega}}(\lambda)\) _is as shown in Figure_ 5_._ _Normalization:_ \(\boldsymbol{\Omega}(\lambda,z)\) _satisfies the asymptotic conditions_ \[\boldsymbol{\Omega}(\lambda,z)=\left(\mathbb{I}+\mathcal{O}(\lambda^{-1}) \right)\rho_{\infty}^{\sigma_{3}/2}\frac{1}{\sqrt{2}}\begin{bmatrix}\mathrm{i} &-1\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\mathrm{i}\rho_{\infty}\sigma_{3}}\quad \text{as}\quad\lambda\to\infty,\] _and_ \[\boldsymbol{\Omega}(\lambda,z)=\left(\boldsymbol{\Delta}^{(8)}(z)+\mathcal{O} (\lambda)\right)\rho_{0}^{\sigma_{3}/2}\frac{\mathrm{e}^{-\frac{\mathrm{i} \sigma_{3}}{4}}}{\sqrt{2}}\begin{bmatrix}\mathrm{i}&-1\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\rho_{0}\sigma_{3}}\quad\text{as}\quad \lambda\to 0,\] _where_ \(\boldsymbol{\Delta}^{(8)}(z)\) _is a matrix determined from_ \(\boldsymbol{\Omega}(\lambda,z)\) _having unit determinant._ Solvability of Riemann-Hilbert Problem 2 is discussed in Section 9.1. ### Lax pair equations for \(\mathbf{\Omega}(\lambda,z)\) Since the jump matrices depend on neither \(\lambda\) nor \(z\), the matrices \[\mathbf{\Lambda}^{(8)}(\lambda,z):=\frac{\partial\mathbf{\Omega}}{\partial \lambda}(\lambda,z)\mathbf{\Omega}(\lambda,z)^{-1}\quad\text{and}\quad\mathbf{ Z}(\lambda,z):=\frac{\partial\mathbf{\Omega}}{\partial z}(\lambda,z)\mathbf{ \Omega}(\lambda,z)^{-1} \tag{139}\] are both analytic functions of \(\lambda\) in the domain \(\mathbf{C}\setminus\{0\}\). We determine these analytic functions by computing sufficiently many terms in their asymptotic expansions as \(\lambda\to\infty\) and \(\lambda\to 0\) using (128)-(129). We will use the identities \[\frac{\partial\rho_{\infty}}{\partial\lambda}=-\mathrm{i}z\rho_{\infty}^{-1} \quad\text{and}\quad\frac{\partial\rho_{\infty}}{\partial z}=-\mathrm{i} \lambda\rho_{\infty}^{-1} \tag{140}\] and \[\frac{\partial\rho_{0}}{\partial\lambda}=-\mathrm{i}z\lambda^{-2}\rho_{0}^{-1 }\quad\text{and}\quad\frac{\partial\rho_{0}}{\partial z}=\mathrm{i}\lambda^{- 1}\rho_{0}^{-1}. 
\tag{141}\] Using (140) and (128) gives \[\mathbf{\Lambda}^{(8)}(\lambda,z) =\begin{bmatrix}0&\mathrm{i}z\\ 0&0\end{bmatrix}+\frac{1}{4\lambda}\begin{bmatrix}1-4\mathrm{i}z\Xi_{21}^{(8)}( z)&4\mathrm{i}z(\Xi_{11}^{(8)}(z)-\Xi_{22}^{(8)}(z))\\ 2&-1+4\mathrm{i}z\Xi_{21}^{(8)}(z)\end{bmatrix}+\mathcal{O}(\lambda^{-2}), \quad\lambda\to\infty\] \[\mathbf{Z}(\lambda,z) =\lambda\begin{bmatrix}0&\mathrm{i}\\ 0&0\end{bmatrix}+\frac{1}{4z}\begin{bmatrix}1-4\mathrm{i}z\Xi_{21}^{(8)}(z)&4 \mathrm{i}z(\Xi_{11}^{(8)}(z)-\Xi_{22}^{(8)}(z))\\ 2&-1+4\mathrm{i}z\Xi_{21}^{(8)}(z)\end{bmatrix}+\mathcal{O}(\lambda^{-1}), \quad\lambda\to\infty. \tag{142}\] Actually, we can also go to higher order and compute the coefficient of \(\lambda^{-2}\) in the matrix element \(\Lambda_{21}(\lambda,z)\): \[\Lambda_{21}^{(8)}(\lambda,z)=\frac{1}{2\lambda}+\frac{1}{2\lambda^{2}}\left( -\Xi_{21}^{(8)}(z)-2\mathrm{i}z\Xi_{21}^{(8)}(z)^{2}-\Xi_{11}^{(8)}(z)+\Xi_{2 2}^{(8)}(z))\right)+\mathcal{O}(\lambda^{-3}),\quad\lambda\to\infty. \tag{143}\] Likewise, using (141) and (129) gives that as \(\lambda\to 0\) \[\mathbf{\Lambda}^{(8)}(\lambda,z) =\mathbf{\Lambda}^{(8)}(z)\left(\frac{1}{\lambda^{2}}\begin{bmatrix} 0&-\mathrm{i}z\\ 0&0\end{bmatrix}-\frac{1}{4\lambda}\begin{bmatrix}1-4\mathrm{i}z\Pi_{21}(z)&4 \mathrm{i}z(\Pi_{11}(z)-\Pi_{22}(z))\\ 2&-1+4\mathrm{i}z\Pi_{21}(z)\end{bmatrix}\right)\mathbf{\Lambda}^{(8)}(z)^{- 1}+\mathcal{O}(1),\] \[\mathbf{Z}(\lambda,z) =\frac{1}{\lambda}\mathbf{\Lambda}^{(8)}(z)\begin{bmatrix}0& \mathrm{i}\\ 0&0\end{bmatrix}\mathbf{\Lambda}^{(8)}(z)^{-1}+\mathcal{O}(1). \tag{144}\] Applying Liouville's theorem yields the exact expressions \[\boldsymbol{\Lambda}^{(8)}(\lambda,z)=\begin{bmatrix}0&\mathrm{i}z\\ 0&0\end{bmatrix}+\frac{1}{4\lambda}\begin{bmatrix}1-4\mathrm{i}z\Xi_{21}^{(8)}( z)&4\mathrm{i}z(\Xi_{11}^{(8)}(z)-\Xi_{22}^{(8)}(z))\\ 2&-1+4\mathrm{i}z\Xi_{21}^{(8)}(z)\end{bmatrix}\\ +\frac{\mathrm{i}z}{\lambda^{2}}\begin{bmatrix}\Lambda_{11}^{(8)}( z)\Delta_{21}^{(8)}(z)&-\Delta_{11}^{(8)}(z)^{2}\\ \Delta_{21}^{(8)}(z)^{2}&-\Delta_{11}^{(8)}(z)\Delta_{21}^{(8)}(z)\end{bmatrix}, \tag{145}\] and \[\mathbf{Z}(\lambda,z)=\lambda\begin{bmatrix}0&\mathrm{i}\\ 0&0\end{bmatrix}+\frac{1}{4z}\begin{bmatrix}1-4\mathrm{i}z\Xi_{21}^{(8)}(z)&4 \mathrm{i}z(\Xi_{11}^{(8)}(z)-\Xi_{22}^{(8)}(z))\\ 2&-1+4\mathrm{i}z\Xi_{21}^{(8)}(z)\end{bmatrix}\\ -\frac{\mathrm{i}}{\lambda}\begin{bmatrix}\Lambda_{11}^{(8)}( z)\Delta_{21}^{(8)}(z)&-\Delta_{11}^{(8)}(z)^{2}\\ \Delta_{21}^{(8)}(z)^{2}&-\Delta_{11}^{(8)}(z)\Delta_{21}^{(8)}(z)\end{bmatrix}.\] Using the notation (131) and noting the structure of the coefficients of the different powers of \(\lambda\), it is convenient to reparametrize the coefficients as follows: \[\boldsymbol{\Lambda}^{(8)}(\lambda,z)=\begin{bmatrix}0&\mathrm{i}z\\ 0&0\end{bmatrix}+\frac{1}{4\lambda}\begin{bmatrix}V(z)&W(z)\\ 2&-V(z)\end{bmatrix}+\frac{1}{\lambda^{2}}\begin{bmatrix}X(z)&-2\mathrm{i}X(z )^{2}U(z)\\ -\mathrm{i}/\left(2U(z)\right)&-X(z)\end{bmatrix}\] and \[\mathbf{Z}(\lambda,z)=\lambda\begin{bmatrix}0&\mathrm{i}\\ 0&0\end{bmatrix}+\frac{1}{4z}\begin{bmatrix}V(z)&W(z)\\ 2&-V(z)\end{bmatrix}-\frac{1}{z\lambda}\begin{bmatrix}X(z)&-2\mathrm{i}X(z)^{2 }U(z)\\ -\mathrm{i}/\left(2U(z)\right)&-X(z)\end{bmatrix}.\] The quantities \(U(z)\), \(V(z)\), \(W(z)\), and \(X(z)\) are not independent; comparing the 21-element of the coefficient of \(\lambda^{-1}\) in the expansion of \(\boldsymbol{\Delta}^{(8)}(z)^{-1}\boldsymbol{\Lambda}^{(8)}(\lambda,z) \boldsymbol{\Delta}^{(8)}(z)\) computed using 
(142) and (144) gives the identity (125). At the same time from formula (143) we get identity (126). Since (139) holds for the same matrix function \(\boldsymbol{\Omega}(\lambda,z)\), the latter satisfies the equations of a compatible Lax system \[\frac{\partial\boldsymbol{\Omega}}{\partial\lambda}(\lambda,z)=\boldsymbol{ \Lambda}^{(8)}(\lambda,z)\boldsymbol{\Omega}(\lambda,z)\quad\text{and}\quad \frac{\partial\boldsymbol{\Omega}}{\partial z}(\lambda,z)=\mathbf{Z}(\lambda,z)\boldsymbol{\Omega}(\lambda,z). \tag{146}\] which coincides with the system (123), (124). ### Monodromy manifold Introducing notation for the connection matrix \[\mathbf{C}_{0\infty}=\begin{bmatrix}n_{1}&n_{2}\\ n_{3}&n_{4}\end{bmatrix},\quad\det(\mathbf{C}_{0\infty})=1,\] we have the cyclic relation around the unique nonsingular point of self-intersection of \(L_{\boldsymbol{\Omega}}\) \[\mathbf{S}_{1}^{\infty}\mathrm{i}\sigma_{2}=\mathbf{C}_{0\infty}\mathbf{S}_{0}^ {0}(-\mathrm{i}\sigma_{2})(\mathbf{C}_{0\infty})^{-1},\] which implies \[n_{1}=-n_{4},\quad t_{1}^{\infty}=t_{0}^{0},\quad n_{3}=n_{2}-n_{4}t_{1}^{ \infty}.\] Denoting \[y_{1}=n_{3},\quad y_{2}=n_{4},\quad y_{3}=t_{1}^{\infty},\] the condition \(\det(\mathbf{C}_{0\infty})=1\) implies that the coordinates \((y_{1},y_{2},y_{3})\) are related by the cubic equation (15). _Remark 5_.: If the solution \(\boldsymbol{\Omega}(\lambda,z)\) is multiplied by the scalar \(-1\) for \(|\lambda|<1\) and left unchanged for \(|\lambda|>1\), then the elements of the connection matrix \(\mathbf{C}_{0\infty}\) change sign while the Stokes multiplier \(t_{1}^{\infty}\) is invariant. Therefore this transformation changes \((y_{1},y_{2},y_{3})\) to \((-y_{1},-y_{2},y_{3})\), yielding a different point on the cubic (15). The matrix coefficient \(\boldsymbol{\Delta}^{(8)}(z)\) also changes sign, however \(\Delta_{21}^{(8)}(z)^{2}\) is invariant, so the solution \(U(z)\) of the Painleve-III\((D_{8})\) equation (3) is the same for both points. ## 6. Schlesinger transformation and proof of Proposition 2 Fix generic monodromy parameters \((e_{1},e_{2})\). In view of the parametrization of the Stokes parameters in (110) and the eigenvector matrices in (111), (112), this data determines from Riemann-Hilbert Problem 1 a matrix \(\boldsymbol{\Psi}(\lambda,x)\) which is meromorphic in \(x\) and satisfies asymptotic conditions (96) and (97), which we write in the form7 Footnote 7: The coefficients \(\boldsymbol{\Psi}_{j}^{\infty},\boldsymbol{\Psi}_{j}^{0}\) should not be confused with the fundamental solutions discussed in the previous sections. The reader can rest assured that this notation will only appear in this section. 
\[\boldsymbol{\Psi}(\lambda,x)\lambda_{\boldsymbol{\overline{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ }}}}}}}}}}}}}^{0}} \text{and}\quad\sigma_{-}=\begin{bmatrix}0&0\\ 0&1\end{bmatrix}.\] Following [5], assuming the \((1,1)\) entry of \(\boldsymbol{\Psi}_{0}^{0}(x)\), denoted \(\boldsymbol{\Psi}_{0,11}^{0}(x)\), is not identically zero, we consider the Schlesinger transformation \[\boldsymbol{\hat{\Psi}}(\lambda,x):=\left(\sigma_{+}\lambda_{\boldsymbol{ \overline{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ }}}}}}}}}}}}}}^{1}} 1\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ } \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ } \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ \boldsymbol{ 
}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\ _Analyticity:_\(\mathbf{\nabla}_{n}(\lambda,x)\) _is analytic in_ \(\mathbb{C}\setminus L^{(6)}\)_, where_ \(L^{(6)}=\{|\lambda|=1\}\cup{\rm i}\mathbb{R}\) _is the jump contour shown in Figure_ 3_._ _Jump condition:_\(\mathbf{\nabla}_{n}(\lambda,x)\) _has continuous boundary values on_ \(L^{(6)}\setminus\{0\}\) _which satisfy_ \[\mathbf{\nabla}_{n,+}(\lambda,x)=\mathbf{\nabla}_{n,-}(\lambda,x)\mathbf{\nabla}_{\mathbf{ \nabla}_{n}}(\lambda),\] _where_ \(\mathbf{\mathsf{J}}_{\mathbf{\nabla}_{n}}(\lambda)\) _is as shown in Figure_ 3 _but with the modification_ \[\mathbf{\mathsf{S}}_{2}^{0}e_{0}^{-2\sigma_{3}}\mapsto(-1)^{n}\mathbf{\mathsf{S}}_{2}^ {0}e_{0}^{-2\sigma_{3}}\quad\text{and}\quad\mathbf{\mathsf{S}}_{2}^{0}e_{\infty}^{ 2\sigma_{3}}\mapsto(-1)^{n}\mathbf{\mathsf{S}}_{2}^{0}e_{\infty}^{2\sigma_{3}}.\] _Normalization:_\(\mathbf{\nabla}_{n}(\lambda,x)\) _satisfies the asymptotic conditions_ \[\mathbf{\nabla}_{n}(\lambda,x)=\left(\mathbb{I}+\Xi_{n}^{(6)}(x)+\mathcal{O}( 
\lambda^{-2})\right){\rm e}^{{\rm i}x\lambda\sigma_{3}/2}\lambda_{\mathbf{\boxtimes }}^{(n-\Theta_{\infty})\sigma_{3}/2}\quad\text{as}\quad\lambda\to\infty, \tag{149}\] _and_ \[\mathbf{\nabla}_{n}(\lambda,x)=\left(\mathbf{\Delta}_{n}^{(6)}(x)+\mathcal{O}(\lambda) \right){\rm e}^{-{\rm i}x\lambda^{-1}\sigma_{3}/2}\lambda_{\mathbf{\boxtimes}}^{( \Theta_{0}+n)\sigma_{3}/2}\quad\text{as}\quad\lambda\to 0, \tag{150}\] _where_ \(\mathbf{\Delta}_{n}^{(6)}(x)\) _is a matrix determined from_ \(\mathbf{\nabla}_{n}(\lambda,x)\) _having unit determinant._ That \(\mathbf{\nabla}_{n}\) solves the above Riemann-Hilbert problem implies the existence of the limit \[\Xi_{n}^{(6)}(x):=\lim_{\lambda\to\infty}\lambda\left[\mathbf{\nabla}_{n}(\lambda,x){\rm e}^{-{\rm i}x\lambda\sigma_{3}/2}\lambda_{\mathbf{\boxtimes}}^{\Theta_{0 }\sigma_{3}/2}-\mathbb{I}\right]. \tag{151}\] It follows that the function \[u_{n}(x)=\frac{-{\rm i}\Xi_{n,12}^{(6)}(x)}{\Delta_{n,11}^{(6)}(x)\Delta_{n,1 2}^{(6)}(x)}, \tag{152}\] satisfies \(\mathrm{P}\mathrm{III}(D_{6})\) in the form \[u_{n}^{\prime\prime}=\frac{(u_{n}^{\prime})^{2}}{u_{n}}-\frac{u_{n}^{\prime}} {x}+\frac{4(n+\Theta_{0})u_{n}^{2}}{x}+\frac{4(1+n-\Theta_{\infty})}{x}+4u_{n} ^{3}-\frac{4}{u_{n}}.\] It was shown in [5, Lemma 2] that if for some \(n\in\mathbb{Z}\) the inverse monodromy problem is solvable for a given \(x\in D\), where \(D\) is a domain in \(\mathbb{C}\setminus\{0\}\), then \(\mathbf{\nabla}_{n}\) satisfies the Lax pair \[\frac{\partial\mathbf{\nabla}_{n}}{\partial\lambda}\left(\lambda,x\right) =\left(\frac{{\rm i}x}{2}\sigma_{3}+\frac{1}{2\lambda}\begin{bmatrix} -\Theta_{\infty}+n&2y\\ 2v&\Theta_{\infty}-n\end{bmatrix}+\frac{1}{2\lambda^{2}}\begin{bmatrix}{\rm i }x-2{\rm i}st&2{\rm i}s\\ -2{\rm i}t(st-x)&-{\rm i}x+2{\rm i}st\end{bmatrix}\right)\mathbf{\nabla}_{n}( \lambda,x),\] \[\frac{\partial\mathbf{\nabla}_{n}}{\partial x}\left(\lambda,x\right) =\left(\frac{{\rm i}\lambda}{2}\sigma_{3}+\frac{1}{x}\begin{bmatrix} 0&y\\ v&0\end{bmatrix}-\frac{1}{2\lambda x}\begin{bmatrix}{\rm i}x-2{\rm i}st&2{\rm i} s\\ -2{\rm i}t(st-x)&-{\rm i}x+2{\rm i}st\end{bmatrix}\right)\mathbf{\nabla}_{n}( \lambda,x),\] where potentials \(s,t,u,v,y\) all depend on \(x\) and \(n\). Furthermore, in this domain, the functions \(\Psi_{0,11}^{0}(x)\), \(\Psi_{0,22}^{0}(x)\) extracted from \(\mathbf{\nabla}_{n}(\lambda,x)\) are not identically zero8. Footnote 8: Lemma 2 in [5] was stated for parameters corresponding to rational solutions of Painlevé-III, but the proof is almost exactly the same in this case. One can check that if a solution to Riemann-Hilbert Problem 3 exists, it must be unique, and we attempt to identify this solution as a solution of Riemann-Hilbert Problem 1 with possibly different monodromy data. 
The diagonal elements of \(\mathbf{\mathsf{S}}_{2}^{0}e_{0}^{-2\sigma_{3}}\), \(\mathbf{\mathsf{S}}_{2}^{\infty}e_{\infty}^{2\sigma_{3}}\) alternate signs which implies the change \[e_{0}^{2}\mapsto(-1)^{n}e_{0}^{2},\quad e_{\infty}^{2}\mapsto(-1)^{n}e_{\infty }^{2}.\] Furthermore, in view of (110), we can write \[(-1)^{n}s_{2}^{\infty}e_{\infty}^{2}=(-1)^{n}(1-e_{1}^{2}e_{\infty}^{2})e_{ \infty}^{2}=(1-(-1)^{n}e_{1}^{2}(-1)^{n}e_{\infty}^{2}))(-1)^{n}e_{\infty}^{2},\] and \[(-1)^{n}s_{2}^{0}e_{0}^{-2}=(-1)^{n}(e_{1}^{2}e_{0}^{2}-1)e_{0}^{-2}=\left((-1)^ {n}e_{1}^{2}(-1)^{n}e_{0}^{2}-1\right)(-1)^{n}e_{0}^{-2}.\] Combining the above with the fact that \(\mathbf{C}_{0\infty}^{+}\) remain invariant under the iterated Schlesinger transformations implies the change in monodromy data \[e_{1}^{2}\mapsto(-1)^{n}e_{1}^{2}\quad\text{and}\quad e_{2}\mapsto e_{2}. \tag{153}\] Since \(e_{1},e_{2}\) are assumed to be nonvanishing, we may write them in the form (16) for some \(\mu,\eta\in\mathbb{C}\) with \(-1<\text{Re}(\mu),\text{Re}(\eta)\leq 1\). Since transformations \(e_{1}\mapsto-e_{1}\), \(e_{2}\mapsto-e_{2}\) preserve the monodromy data, we can assume \(-\frac{1}{2}<\text{Re}(\eta)\leq\frac{1}{2}\) and \(-\frac{1}{2}<\text{Re}(\mu)\leq\frac{1}{2}\). Equation (153) implies in turn that \(\eta\) does not depend on \(n\in\mathbb{Z}\), while \(\mu\) is replaced with \[\mu\mapsto\mu_{n}:=\left\{\begin{array}{ll}\mu,&n\in 2\mathbb{Z},\\ \mu-\frac{1}{2},&n+1\in 2\mathbb{Z},\end{array}\right. \tag{154}\] This proves Proposition 2. We end this section with two important remarks. _Remark 6_.: It was noted in the introduction that one could restrict \(0<\text{Re}(\mu_{n})\leq 1/2\), in which case, the above iterations interchange the roles of \(e_{1}^{2},e_{1}^{-2}\) and we have to perform the transformation \(\mu\to-\mu\), which corresponds to replacements \[\mathbf{E}^{\infty}\to\left(\sqrt{\frac{e_{\infty}^{2}-e_{1}^{2}}{e_{1}^{2}(e_ {1}^{2}e_{\infty}^{2}-1)}}\right)^{\sigma_{3}}\mathbf{E}^{\infty}\left(\sqrt{ \frac{e_{\infty}^{2}-e_{1}^{2}}{e_{1}^{2}(e_{1}^{2}e_{\infty}^{2}-1)}}\right)^ {-\sigma_{3}}\sigma_{1}\left(\frac{e_{1}^{2}e_{\infty}^{2}\left(1-e_{1}^{2}e_ {\infty}^{2}\right)}{\left(e_{1}^{4}-1\right)}\right)^{\sigma_{3}}, \tag{155}\] \[\mathbf{E}^{0}\to\left(\sqrt{\frac{e_{0}^{2}-e_{1}^{2}}{e_{1}^{2}(e_{1}^{2}e_ {0}^{2}-1)}}\right)^{\sigma_{3}}\mathbf{E}^{0}\left(\sqrt{\frac{e_{0}^{2}-e_{ 1}^{2}}{e_{1}^{2}(e_{1}^{2}e_{0}^{2}-1)}}\right)^{-\sigma_{3}}\sigma_{1}\left( \frac{e_{1}^{2}\left(e_{0}^{2}e_{1}^{2}-1\right)}{e_{0}^{2}\left(e_{1}^{4}-1 \right)}\right)^{\sigma_{3}}. \tag{156}\] This gauge transformation then allows us to identify the monodromy parameter pairs \[(e_{1},e_{2})\sim\left(\frac{1}{e_{1}},\frac{1}{e_{2}e_{0}^{2}e_{\infty}^{2}} \sqrt{\frac{(e_{1}^{2}-e_{0}^{2})(1-e_{0}^{2}e_{1}^{2})}{(e_{1}^{2}-e_{\infty} ^{2})(1-e_{1}^{2}e_{\infty}^{2})}}\right). 
\tag{157}\] Therefore we alternatively can write monodromy data for Schlesinger transformation as \[\mu_{n}=\left\{\begin{array}{ll}\mu,&n\in 2\mathbb{Z},\\ \frac{1}{2}-\mu,&n+1\in 2\mathbb{Z},\end{array}\right.,\quad e_{2,n}=\left\{ \begin{array}{ll}e_{2},&n\in 2\mathbb{Z},\\ \frac{1}{e_{2}e_{0}^{2}e_{\infty}^{2}}\sqrt{\frac{(e_{1}^{2}-e_{0}^{2})(1-e_{ 0}^{2}e_{1}^{2})}{(e_{1}^{2}-e_{\infty}^{2})(1-e_{1}^{2}e_{\infty}^{2})}},&n+1 \in 2\mathbb{Z}.\end{array}\right.\] Furthermore, one can check that \((x_{1},x_{2},x_{3})\) in (17)-(19) remain invariant under the map described in (157), whereas \((y_{1},y_{2},y_{3})\mapsto(\pm y_{1},\pm y_{2},y_{3})\) where the sign depends on the choice of the square root in (157) and (210) below. In both cases, the corresponding solution of (3) remains invariant, see Remark 5. _Remark 7_.: Moving forward, we will slightly abuse notation by suppressing the \(n\)-dependence in the parameters \[e_{\infty}=\mathrm{e}^{n\mathrm{i}(\Theta_{\infty}-n)/2},\quad e_{0}=\mathrm{e }^{n\mathrm{i}(\Theta_{0}+n)/2},\quad e_{1}=\mathrm{e}^{n\mathrm{i}n},\quad e _{2}=\mathrm{e}^{n\mathrm{i}n}. \tag{158}\] ## 7. Asymptotics for large \(n\) and small \(x\) and proof of Theorem 3 Let \((e_{1},e_{2})\) be generic monodromy parameters, see Definition 1. At this point, we can see more clearly the meaning of the genericity conditions formulated there: 1. \(e_{1}^{4}\neq 1\); this is to guarantee diagonalizability in (106), 2. \(e_{1}e_{2}\neq 0\); this is to guarantee the unit-determinant condition in (106) and (107), 3. \(e_{1}^{2}\neq e_{\infty}^{\pm 2}\) and \(e_{1}^{2}\neq e_{0}^{\pm 2}\); this, in particular, implies that the Stokes parameters (110) are nonvanishing. ### Opening the lenses First, we define a new unknown matrix by \(\mathbf{\Phi}_{n}(\lambda):=\mathbf{\nabla}_{n}(\lambda)\mathbf{L}\) where \(\mathbf{L}\) is the piece-wise constant matrix shown in the left-hand panel of Figure 6. It follows from (107), (108) that the resulting jump conditions satisfied by \(\mathbf{\Phi}_{n}(\lambda)\) are as shown in the right-hand panel of Figure 6. _Remark 8_.: In the general case \(|\mathsf{Arg}(x)|<\pi\), the lenses shown Figure 6 must be rotated in the manner shown in the left panel of Figure 7. The resulting jumps follow from the identities (114), (115) and are shown in the right panel of Figure 7. ### Parametrix for \(\mathbf{\Phi}_{n}(\lambda)\) near \(\lambda=\infty\) By definition, the parametrix \(\mathbf{\Phi}^{(\infty)}(\lambda)=\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) satisfies the following Riemann-Hilbert problem **Riemann-Hilbert Problem 4**.: _Fix generic monodromy parameters \((e_{1},e_{2})\) determining the Stokes and connection matrices, \(n\in\mathbb{Z}\), and \(x>0\). 
We seek a \(2\times 2\) matrix function \(\lambda\mapsto\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) satisfying:_ * _Analyticity:_ \(\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) _is analytic in_ \(\mathbf{C}\setminus\Gamma^{(\infty)}\)_, where_ \(\Gamma^{(\infty)}=\{|\lambda|=2\}\cup(\mathrm{i}\mathrm{i}\mathrm{R}\cap\{| \mathrm{Im}\lambda-1|>1\})\) _is the jump contour shown in Figure_ 8_._ * _Jump condition:_ \(\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) _has continuous boundary values on_ \(\Gamma^{(\infty)}\setminus\{0\}\) _from each component of_ \(\mathbf{C}\setminus\Gamma^{(\infty)}\)_, which satisfy_ \(\dot{\mathbf{\Phi}}_{n,+}^{(\infty)}(\lambda,x)=\ddot{\mathbf{\Phi}}_{n,-}^{(\infty)} (\lambda,x)\mathbf{J}_{\dot{\mathbf{\Phi}}_{n}^{(\infty)}}(\lambda)\)_, where_ \(\mathbf{J}_{\dot{\mathbf{\Phi}}_{n}^{(\infty)}}(\lambda)\) _is as shown in Figure_ 8 _and where the_ \(+\) _(resp.,_ \(-\)_) subscript denotes a boundary value taken from the left (resp., right) of an arc of_ \(\Gamma^{(\infty)}\)_._ * _Normalization:_ \(\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) _satisfies the asymptotic conditions_ (159) \[\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)=\left(\mathrm{i}\mathrm{I}+\frac{ \mathbf{A}_{n}(x)}{\lambda}+\mathcal{O}\left(\frac{1}{\lambda^{2}}\right) \right)\mathrm{e}^{\mathrm{i}x\lambda\sigma_{3}/2}\lambda_{\blacksquare}^{(n- \Theta_{\infty})\sigma_{3}/2}\quad\text{as}\quad\lambda\to\infty,\] _and_ (160) \[\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)=\left(\mathbf{B}_{n}(x)+\mathcal{O }(\lambda)\right)\lambda_{\blacksquare}^{n\sigma_{3}}\quad\text{as}\quad\lambda \to 0,\] _where_ \(\mathbf{A}_{n}(x)\) _has zero trace and_ \(\mathbf{B}_{n}(x)\) _has unit determinant._ Figure 6. Left panel: the definition of the matrix \(\mathbf{L}\); the circles are centered at the origin and have radii \(\frac{1}{2}\), \(1\), and \(2\). Right panel: the jump contour \(\Gamma\) and jump conditions for \(\mathbf{\Phi}_{n}(\lambda)\). It is easy to see that \(\Phi_{n}^{(\infty)}(\lambda,x)\) necessarily has unit determinant. Furthermore, note that the jump matrix being \(e_{1}^{-2\sigma_{3}}\) across the arc terminating at the origin implies \(e_{1}=\mathrm{e}^{\pi\mathrm{i}\mu_{n}}\), which is consistent with (158). Figure 7. Analogue of Figure 6 when \(\mathrm{Arg}(x)\neq 0\). The thick line represents the branch cut for the argument chosen as in Remark 3. #### 7.2.1. Dependence on \(\lambda\) It follows from assuming differentiability of the asymptotics in (159), (160) that \[\frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial\lambda} (\lambda,x)\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}\] \[=\left(\mathds{1}+\frac{\mathbf{A}_{n}(x)}{\lambda}+\mathcal{O} \left(\lambda^{-2}\right)\right)\left(\frac{\mathrm{i}x}{2}+\frac{n-\Theta_{ \infty}}{2\lambda}\right)\sigma_{3}\left(\mathds{1}+\frac{\mathbf{A}_{n}(x)}{ \lambda}+\mathcal{O}\left(\lambda^{-2}\right)\right)^{-1}+\mathcal{O}\left( \lambda^{-2}\right)\] \[=\frac{\mathrm{i}x}{2}\sigma_{3}+\left(\frac{\mathrm{i}x}{2}[ \mathbf{A}_{n}(x),\sigma_{3}]+\frac{n-\Theta_{\infty}}{2}\sigma_{3}\right) \frac{1}{\lambda}+\mathcal{O}\left(\lambda^{-2}\right)\quad\text{as}\quad \lambda\to\infty. 
\tag{161}\] and \[\frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial \lambda}(\lambda,x)\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} =\left(\mathbf{B}_{n}(x)+\mathcal{O}(\lambda)\right)\left(\frac {\mu_{n}}{\lambda}\sigma_{3}\right)\left(\mathbf{B}_{n}(x)+\mathcal{O}( \lambda)\right)^{-1}+\mathcal{O}(1)\] \[=\frac{\mu_{n}}{\lambda}\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n }(x)^{-1}+\mathcal{O}(1)\quad\text{as}\quad\lambda\to 0. \tag{162}\] Since the quantity on the left-hand side of (161) and (162) is otherwise an analytic function of \(\lambda\), it follows from Liouville's Theorem that \[\frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial \lambda}(\lambda,x)\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} =\frac{\mathrm{i}x}{2}\sigma_{3}+\frac{\mu_{n}}{\lambda}\mathbf{B }_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1}\implies\\ \frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial \lambda}(\lambda,x)=\left(\frac{\mathrm{i}x}{2}\sigma_{3}+\frac{\mu_{n}}{ \lambda}\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1}\right)\tilde{ \mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x). \tag{163}\] Noting that \(\mathrm{Tr}(\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1})=0\) and \(\det(\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1})=-1\), we may write \[\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1}=\begin{bmatrix}a_{n}(x)&b_{n }(x)\\ c_{n}(x)&-a_{n}(x)\end{bmatrix}\quad\text{subject to }a_{n}(x)^{2}+b_{n}(x)c_{n}(x)=1 \tag{164}\] and use this form in (163) to write a coupled scalar system of differential equations satisfied by the elements \(\phi_{1}(\lambda,x)\) and \(\phi_{2}(\lambda,x)\) of the first and second rows, respectively, of any column of \(\dot{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\): \[\frac{\partial\phi_{1}}{\partial\lambda}(\lambda,x) =\left(\frac{\mathrm{i}x}{2}+\frac{\mu_{n}a_{n}(x)}{\lambda} \right)\phi_{1}(\lambda,x)+\frac{\mu_{n}b_{n}(x)}{\lambda}\phi_{2}(\lambda,x), \tag{166}\] \[\frac{\partial\phi_{2}}{\partial\lambda}(\lambda,x) =\frac{\mu_{n}c_{n}(x)}{\lambda}\phi_{1}(\lambda,x)-\left(\frac{ \mathrm{i}x}{2}+\frac{\mu_{n}a_{n}(x)}{\lambda}\right)\phi_{2}(\lambda,x). \tag{165}\] Before beginning to solve this system, observe that equating the coefficients of \(\lambda^{-1}\) in (161) and (162) yields the identity \[\mu_{n}\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1}=\frac{\mathrm{i}x}{2 }[\mathbf{A}_{n}(x),\sigma_{3}]+\frac{n-\Theta_{\infty}}{2}\sigma_{3}. \tag{167}\] Since \([\mathbf{A}_{n}(x),\sigma_{3}]\) is off-diagonal, we arrive at \[\mu_{n}a_{n}(x)=\frac{n-\Theta_{\infty}}{2}. \tag{168}\] Since \(\mu_{n}\) and \(n\) are constants, this equation implies that \(a_{n}(x)\) is independent of \(x\), so we will simply write \(a_{n}\) going forward. Now, solving for \(\phi_{1}(\lambda,x)\) in (166) and eliminating it from (165) yields (assuming \(c_{n}(x)\neq 0\) and using \(b_{n}(x)c_{n}(x)=1-a_{n}^{2}\)) \[\lambda\frac{\partial^{2}\phi_{2}}{\partial\lambda^{2}}(\lambda,x)+\frac{ \partial\phi_{2}}{\partial\lambda}(\lambda,x)+\left[\frac{\mathrm{i}x}{2}+\frac{ x^{2}}{4}\lambda-\mathrm{i}x\mu_{n}a_{n}-\frac{\mu_{n}^{2}}{\lambda}\right]\phi_{2}( \lambda,x)=0.\] It is easy to see that the first-order derivative term is removed by the substitution \(\phi_{2}(\lambda,x)=\lambda^{-1/2}w(\lambda,x)\). 
Indeed \(w(\lambda,x)\) satisfies \[\frac{\partial^{2}w}{\partial\lambda^{2}}(\lambda,x)+\left[\frac{x^{2}}{4}+ \mathrm{i}x\left(\frac{1}{2}-\mu_{n}a_{n}\right)\frac{1}{\lambda}+\left(\frac{ 1}{4}-\mu_{n}^{2}\right)\frac{1}{\lambda^{2}}\right]w(\lambda,x)=0.\] Finally, the explicit \(x\)-dependence in the coefficients can be removed by setting \(Z:=\mathrm{i}x\lambda\) and writing \(w(\lambda,x)=W(Z)\). Note that the notation \(W(Z)\) here is not related to \(W(z)\) appearing in Section 5. In this case, \(W(Z)\) satisfies the ordinary differential equation \[W^{\prime\prime}(Z)+\left[-\frac{1}{4}+\left(\frac{1}{2}-\mu_{n}a_{n}\right) \frac{1}{Z}+\left(\frac{1}{4}-\mu_{n}^{2}\right)\frac{1}{Z^{2}}\right]W(Z)=0, \tag{169}\] which is Whittaker's equation (see [9, Chapter 13]) with parameter \[\kappa=\kappa_{n}:=\frac{1}{2}-\mu_{n}a_{n}=\frac{1+\Theta_{\infty}-n}{2}. \tag{170}\] Given \(\phi_{2}(\lambda,x)=\lambda^{-1/2}W(Z)\) for \(Z=\mathrm{i}x\lambda\), and a solution \(W(Z)\) of (169), it follows from (166) that the corresponding first-row entry is \[\phi_{1}(\lambda,x)=\frac{\mathrm{i}x\lambda^{1/2}}{\mu_{n}c_{n}(x)}\left(W^{ \prime}(Z)+\left(\frac{1}{2}-\frac{\kappa_{n}}{Z}\right)W(Z)\right). \tag{171}\] A fundamental pair of solutions of (169) is given by \(W(Z)=W_{\pm\kappa_{n},\mu_{n}}(\pm Z),\;\arg(\pm Z)\in(-\pi,\pi).\) If we take the particular solution \(\phi_{2}(\lambda,x)=\lambda^{-1/2}W_{\kappa_{n},\mu_{n}}(Z)\), then using the identity \[W^{\prime}_{\kappa,\mu_{n}}(Z)=\left(\frac{\kappa}{Z}-\frac{1}{2}\right)W_{ \kappa,\mu_{n}}(Z)+\left(\left(\frac{1}{2}-\kappa\right)^{2}-\mu_{n}^{2} \right)\frac{1}{Z}W_{\kappa-1,\mu_{n}}(Z)\] (see [9, Eqn. 13.15.23]) in (171) gives \[\phi_{2}(\lambda,x)=\lambda^{-1/2}W_{\kappa_{n},\mu_{n}}(Z)\implies\phi_{1}( \lambda,x)=\frac{\mathrm{i}x\lambda^{1/2}}{\mu_{n}c_{n}(x)}\left(\left(\frac{ 1}{2}-\kappa_{n}\right)^{2}-\mu_{n}^{2}\right)Z^{-1}W_{\kappa_{n}-1,\mu_{n}}(Z).\] Likewise, if we take the particular solution \(\phi_{2}(\lambda,x)=\lambda^{-1/2}W_{-\kappa_{n},\mu_{n}}(-Z)\), then using the identity \[W^{\prime}_{\kappa,\mu_{n}}(Z)=\left(\frac{1}{2}-\frac{\kappa}{Z}\right)W_{ \kappa,\mu_{n}}(Z)-\frac{1}{Z}W_{\kappa+1,\mu_{n}}(Z)\] (see [9, Eqn. 13.15.26]) in (171) yields \[\phi_{2}(\lambda,x)=\lambda^{-1/2}W_{-\kappa_{n},\mu_{n}}(-Z)\implies\phi_{1 }(\lambda,x)=-\frac{\mathrm{i}x\lambda^{1/2}}{\mu_{n}c_{n}(x)}Z^{-1}W_{1- \kappa_{n},\mu_{n}}(-Z).\] Taking linear combinations with coefficients depending generally on the parameter \(x\), the general solution matrix for the system (163) can be written in the form \[\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)=\widetilde{\mathbf{\Phi}}_{n} ^{(\infty)}(\lambda,x)\mathbf{K}(x),\quad\widetilde{\mathbf{\Phi}}_{n}^{( \infty)}(\lambda,x):=\mathbf{H}(\lambda,x)\mathbf{W}(\mathrm{i}\lambda x; \kappa_{n},\mu_{n}), \tag{172}\] where \(\widetilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) is a specific fundamental solution matrix of (163) constructed from \[\mathbf{H}(\lambda,x):=\lambda^{\sigma_{3}/2}\left[\begin{matrix}\mathrm{i}x&0 \\ \mu_{n}c_{n}(x)&0\\ 0&1\end{matrix}\right] \tag{173}\] and \[\mathbf{W}(Z;\kappa,\mu_{n}):=\begin{bmatrix}\alpha_{\kappa,\mu_{n}}Z^{-1}W_ {\kappa-1,\mu_{n}}(Z)&-Z^{-1}W_{1-\kappa,\mu_{n}}(-Z)\\ W_{\kappa,\mu_{n}}(Z)&W_{-\kappa,\mu_{n}}(-Z)\end{bmatrix},\quad\alpha_{ \kappa,\mu_{n}}:=\left(\frac{1}{2}-\kappa\right)^{2}-\mu_{n}^{2}, \tag{174}\] in which \(\kappa=\kappa_{n}\) and \(\mu_{n}\) are given by (170) and \(\mathbf{K}(x)\) is a matrix of free coefficients. 
#### 7.2.2. Dependence on \(x\) Going back to (159), (160) and now assuming that the asymptotics are differentiable with respect to \(x\), \[\frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial x}( \lambda,x)\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}\] \[\qquad\qquad\qquad\qquad=\left(\mathds{I}+\frac{\mathbf{A}_{n}(x )}{\lambda}+\mathcal{O}\left(\lambda^{-2}\right)\right)\left(\frac{\mathrm{i} \lambda}{2}\sigma_{3}\right)\left(\mathds{I}+\frac{\mathbf{A}_{n}(x)}{ \lambda}+\mathcal{O}\left(\lambda^{-2}\right)\right)^{-1}+\mathcal{O}\left( \lambda^{-1}\right)\] \[\qquad\qquad\qquad\qquad=\frac{\mathrm{i}\lambda}{2}\sigma_{3}+ \frac{\mathrm{i}}{2}[\mathbf{A}_{n}(x),\sigma_{3}]+\mathcal{O}\left(\lambda^ {-1}\right)\quad\text{as}\quad\lambda\to\infty,\] and \[\frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial x}(\lambda,x) \tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}=\mathbf{B}_{n}^{\prime} (x)\mathbf{B}_{n}(x)^{-1}+\mathcal{O}(\lambda)\quad\text{as}\quad\lambda\to 0.\] So, applying Liouville's Theorem yields \[\frac{\partial\tilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial x}(\lambda,x)= \left(\frac{\mathrm{i}\lambda}{2}\sigma_{3}+\frac{\mathrm{i}}{2}[\mathbf{A}_{ n}(x),\sigma_{3}]\right)\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x), \tag{175}\] and it follows from (167) that \[\frac{\mathrm{i}}{2}[\mathbf{A}_{n}(x),\sigma_{3}]=\frac{1}{x}\left(\mu_{n} \mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1}+\left(\kappa_{n}-\frac{1}{2 }\right)\sigma_{3}\right)=\frac{1}{x}\begin{bmatrix}0&\frac{\mu_{n}(1-a_{n}^ {2})}{c_{n}(x)}\\ \mu_{n}c_{n}(x)&0\end{bmatrix}, \tag{176}\] where \(\mu_{n},a_{n}\) are independent of \(\lambda,x\). To determine the \(x\)-dependence of \(c_{n}(x)\), we use (176) to assemble (163) (using also (167) and (170)) and (175) to give the Lax system (177) \[\frac{\partial\mathbf{\Phi}_{n}^{(\infty)}}{\partial\lambda}( \lambda,x)=\tilde{\mathbf{X}}(\lambda,x)\mathbf{\Phi}_{n}^{(\infty)}(\lambda, x),\quad\tilde{\mathbf{X}}(\lambda,x):=\frac{\mathrm{i}\lambda}{2}\sigma_{3}+ \frac{1}{x}\begin{bmatrix}0&\frac{\mu_{n}(1-a_{n}^{2})}{c_{n}(x)}\\ \mu_{n}c_{n}(x)&0\end{bmatrix}.\] (178) Since \(\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) is a simultaneous fundamental solution matrix for these equations, the Lax system is compatible. The compatibility condition reads \[\tilde{\mathbf{\Lambda}}_{x}(\lambda,x)-\tilde{\mathbf{X}}_{\lambda}(\lambda, x)+[\tilde{\mathbf{\Lambda}}(\lambda,x),\tilde{\mathbf{X}}(\lambda,x)]=\mathbf{0},\] which is equivalent to \[xc_{n}^{\prime}(x)=(1-2\kappa_{n})c_{n}(x)\implies c_{n}(x)=\gamma_{n}x^{1-2 \kappa_{n}}, \tag{179}\] for some constant \(\gamma_{n}\neq 0\). Thus, the coefficient \(c_{n}(x)\) is determined up to the choice of the constant \(\gamma_{n}\). Note also that the coefficient matrices \(\tilde{\mathbf{\Lambda}}(\lambda,x)\) and \(\tilde{\mathbf{X}}(\lambda,x)\) are obviously related by the simple identity \[\tilde{\mathbf{X}}(\lambda,x)-\frac{\lambda}{x}\tilde{\mathbf{\Lambda}}( \lambda,x)=\frac{1}{x}\left(\kappa_{n}-\frac{1}{2}\right)\sigma_{3}. \tag{180}\] Since the fundamental matrix \(\widetilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) defined by (172) satisfies (177), then so does \(\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)=\widetilde{\mathbf{\Phi}}_{n} ^{(\infty)}(\lambda,x)\mathbf{K}(x)\), and \(\mathbf{K}(x)\) must now be chosen so that (178) is satisfied. 
Substituting into (178), we obtain an ordinary differential equation on \(\mathbf{K}(x)\): \[\mathbf{K}^{\prime}(x)=\left(\widetilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}\mathbf{X}(\lambda,x)\widetilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda, x)-\widetilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}\frac{\partial \widetilde{\mathbf{\Phi}}_{n}^{(\infty)}}{\partial x}(\lambda,x)\right) \mathbf{K}(x). \tag{181}\] Now, from the form of \(\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\) written in (172), we have both \[\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}\frac{ \partial\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}}{\partial x}(\lambda,x) =\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} \frac{\partial\mathbf{H}}{\partial x}(\lambda,x)\mathbf{H}(\lambda,x)^{-1} \widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\] \[\qquad\qquad\qquad\qquad+\mathrm{i}\lambda\widetilde{\boldsymbol {\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1}\mathbf{H}(\lambda,x)\mathbf{W}^{\prime }(Z;\kappa_{n},\mu_{n})\quad\text{and}\] \[\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} \frac{\partial\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}}{\partial\lambda}( \lambda,x) =\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} \frac{\partial\mathbf{H}}{\partial\lambda}(\lambda,x)\mathbf{H}(\lambda,x)^{ -1}\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\] \[\qquad\qquad\qquad\qquad+\mathrm{i}x\widetilde{\boldsymbol{\Phi }}_{n}^{(\infty)}(\lambda,x)^{-1}\mathbf{H}(\lambda,x)\mathbf{W}^{\prime}(Z; \kappa_{n},\mu_{n}),\] so it follows that \[\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} \frac{\partial\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}}{\partial x}( \lambda,x)\] \[=\frac{\lambda}{x}\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}( \lambda,x)^{-1}\frac{\partial\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}}{ \partial\lambda}(\lambda,x)\] \[\qquad\qquad\qquad+\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}( \lambda,x)^{-1}\left[\frac{\partial\mathbf{H}}{\partial x}(\lambda,x) \mathbf{H}(\lambda,x)^{-1}-\frac{\lambda}{x}\frac{\partial\mathbf{H}}{ \partial\lambda}(\lambda,x)\mathbf{H}(\lambda,x)^{-1}\right]\widetilde{ \boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\] \[=\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} \left[\frac{\lambda}{x}\tilde{\boldsymbol{\Lambda}}(\lambda,x)+\frac{ \partial\mathbf{H}}{\partial x}(\lambda,x)\mathbf{H}(\lambda,x)^{-1}-\frac{ \lambda}{x}\frac{\partial\mathbf{H}}{\partial\lambda}(\lambda,x)\mathbf{H}( \lambda,x)^{-1}\right]\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x),\] where we also used (177). 
Using this in (181) along with the explicit definition (173) of \(\mathbf{H}(\lambda,x)\) and the identities (179) and (180) gives \[\mathbf{K}^{\prime}(x) \mathbf{K}(x)^{-1}\] \[=\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)^{-1} \left[\frac{1}{x}\left(\kappa_{n}-\frac{1}{2}\right)\sigma_{3}+\frac{1}{2x} \sigma_{3}+\frac{\mathrm{d}}{\mathrm{d}x}\log\left(\frac{c_{n}(x)}{x}\right) \begin{bmatrix}1&0\\ 0&0\end{bmatrix}\right]\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\] \[=\frac{\kappa_{n}}{x}\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)} (\lambda,x)^{-1}\left[\sigma_{3}-2\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\right]\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\] \[=-\frac{\kappa_{n}}{x}\mathds{1}.\] Therefore the \(x\)-dependence of the matrix \(\mathbf{K}(x)\) is explicitly given by \[\mathbf{K}(x)=x^{-\kappa_{n}}\mathbf{K},\] where \(\mathbf{K}\) is now independent of both \(\lambda\) and \(x\). However, as the domain of analyticity of \(\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\) in the \(\lambda\)-plane consists of three disjoint regions, we expect to have to specify a different matrix \(\mathbf{K}\) for each. Note also that the constant \(\gamma_{n}\) remains to be determined. 2.3. The parametrix \(\boldsymbol{\Phi}_{n}^{(\infty)}(\lambda,x)\) on the two regions with \(|\lambda|>2\) To fully specify the parametrix \(\boldsymbol{\Phi}_{n}^{(\infty)}(\lambda,x)\) for \(|\lambda|>2\), we concretely take the jump contours for \(|\lambda|>2\) to lie along the real axis in the \(Z\)-plane, corresponding to \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\), respectively. Thus, the part of the domain of analyticity of \(\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\) with \(|\lambda|>2\) has two components, corresponding to the upper and lower half \(Z\)-planes. To properly define \(\widetilde{\boldsymbol{\Phi}}_{n}^{(\infty)}(\lambda,x)\) in these two exterior domains, we firstly take the matrix factor \(\mathbf{H}(\lambda,x)\) defined in (173) in the precise form \[\mathbf{H}(\lambda,x)=\lambda_{\boldsymbol{\Omega}}^{\sigma_{3}/2}\begin{bmatrix} \frac{\mathrm{i}x^{2\kappa_{n}}}{\mu_{n}\gamma_{n}}&0\\ 0&1\end{bmatrix}=x^{\kappa_{n}}\lambda_{\boldsymbol{\Omega}}^{\sigma_{3}/2}x^ {\kappa_{n}\sigma_{3}}\mathbf{D}_{n},\quad\mathbf{D}_{n}:=\begin{bmatrix} \frac{\mathrm{i}}{\mu_{n}\gamma_{n}}&0\\ 0&1\end{bmatrix}. \tag{182}\] Then, we assume different constant matrices \(\mathbf{K}=\mathbf{K}_{n}^{\pm}\) in the two domains by writing the parametrix as \[\begin{split}\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)& =\tilde{\mathbf{\Phi}}_{n}^{(\infty)\pm}(\lambda,x)\\ &=x^{-\kappa_{n}}\mathbf{H}(\lambda,x)\mathbf{W}(\mathrm{i}x \lambda;\kappa_{n},\mu_{n})\mathbf{K}_{n}^{\pm}\\ &=\lambda_{\mathbf{B}}^{\sigma_{3}/2}x^{\kappa_{n}\sigma_{3}} \mathbf{D}_{n}\mathbf{W}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n})\mathbf{K}_{n}^ {\pm},\quad\text{for $|\lambda|>2$ and $\pm\mathrm{Im}(Z)>0$.}\end{split} \tag{183}\] We now express the matrices \(\mathbf{K}_{n}^{\pm}\) in terms of the remaining constants \(\mu_{n}\) and \(\gamma_{n}\) by enforcing the asymptotic condition (159) in each of the two sectors with \(|\lambda|>2\). According to [9, Eqn. 13.19.3], \[W_{\kappa,\mu_{n}}(Z)=\mathrm{e}^{-Z/2}Z^{\kappa}\left(1+\mathcal{O}(Z^{-1}) \right),\quad Z\to\infty,\quad|\arg(Z)|\leq\frac{3\pi}{2}-\delta\] holds for each \(\delta>0\). 
Hence also \[\mathbf{W}(Z;\kappa,\mu_{n})=\begin{bmatrix}\alpha_{\kappa,\mu_{n}}Z^{\kappa- 2}\left(1+\mathcal{O}(Z^{-1})\right)&(-Z)^{-\kappa}\left(1+\mathcal{O}(Z^{-1} )\right)\\ Z^{\kappa}\left(1+\mathcal{O}(Z^{-1})\right)&(-Z)^{-\kappa}\left(1+\mathcal{O} (Z^{-1})\right)\end{bmatrix}\mathrm{e}^{-Z\sigma_{3}/2},\quad Z\to\infty,\,| \arg(Z)|\leq\frac{3\pi}{2}-\delta.\] Under the condition given on \(\arg(Z)\), we have \[(-Z)^{-\kappa}=Z^{-\kappa}\begin{cases}\mathrm{e}^{\mathrm{i}\pi\kappa},& \mathrm{Im}(Z)>0\;\;\text{(i.e. $0<\arg(Z)<\pi$)},\\ \mathrm{e}^{-\mathrm{i}\pi\kappa},&\mathrm{Im}(Z)<0\;\;\text{(i.e. $-\pi<\arg(Z)<0$)}.\end{cases}\] To calculate \(Z^{\pm\kappa}\), we recall \(Z=\mathrm{i}x\lambda\) and use [5, Eqn. (49)]: \[-\frac{\pi}{2}-\arg(x)<\arg_{\blacksquare}(\lambda)<\frac{3\pi}{2}-\arg(x),\quad| \lambda|\to\infty.\] Next, \[\mathrm{Im}(Z)>0\implies 0<\arg(Z)<\pi\Leftrightarrow-\frac{\pi}{2}-\arg(x)< \arg_{\blacksquare}(\lambda)<\frac{\pi}{2}-\arg(x)\] \[\mathrm{Im}(Z)<0\implies-\pi<\arg(Z)<0\implies-\frac{3\pi}{2}-\arg(x)<\arg_{ \blacksquare}(\lambda)-2\pi<-\frac{\pi}{2}-\arg(x)\] and hence, for any \(x\in\mathbb{C}\setminus\{0\}\) such that \(|\arg(x)|<\pi\), \[Z^{\pm\kappa}=x^{\pm\kappa}\lambda_{\blacksquare}^{\pm\kappa}\begin{cases}\mathrm{e }^{\mathrm{i}\pi\kappa/2},&\mathrm{Im}(Z)>0,\\ \mathrm{e}^{\mp 3\mathrm{i}\pi\kappa/2},&\mathrm{Im}(Z)<0.\end{cases}\] Therefore, for \(\lambda\) large such that \(\mathrm{Im}(Z)>0\), \[\lambda_{\blacksquare}^{\sigma_{3}/2}x^{\kappa_{n}\sigma_{3}}\mathbf{D}_{n}\mathbf{ W}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n})=\mathbf{D}_{n}\begin{bmatrix}\mathcal{O}( \lambda^{-1})&\mathrm{e}^{\mathrm{i}\pi\kappa_{n}/2}\left(1+\mathcal{O}( \lambda^{-1})\right)\\ \mathrm{e}^{\mathrm{i}\pi\kappa_{n}/2}\left(1+\mathcal{O}(\lambda^{-1})\right)& \mathcal{O}(\lambda^{-1})\end{bmatrix}\lambda_{\blacksquare}^{(\kappa_{n}-\frac{1} {2})\sigma_{3}}\mathrm{e}^{-\mathrm{i}x\lambda\sigma_{3}/2},\] so choosing \(\mathbf{K}_{n}^{+}\) so that (183) is consistent with (159) in the sector \(\mathrm{Im}(Z)>0\) requires that \(\mathbf{K}_{n}^{+}\) is an off-diagonal matrix, namely \[\mathbf{K}_{n}^{+}:=\begin{bmatrix}0&\mathrm{e}^{-\mathrm{i}\pi\kappa_{n}/2}\\ -\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\kappa_{n}/2}\mu_{n}\gamma_{n}&0\end{bmatrix}.\] Similarly, for \(\lambda\) large such that \(\mathrm{Im}(Z)<0\), \[\lambda_{\blacksquare}^{\sigma_{3}/2}x^{\kappa_{n}\sigma_{3}}\mathbf{D}_{n}\mathbf{W} (\mathrm{i}x\lambda;\kappa_{n},\mu_{n})=\mathbf{D}_{n}\begin{bmatrix}\mathcal{O} (\lambda^{-1})&\mathrm{e}^{\mathrm{i}\pi\kappa_{n}/2}\left(1+\mathcal{O}( \lambda^{-1})\right)\\ \mathrm{e}^{-3\mathrm{i}\pi\kappa_{n}/2}\left(1+\mathcal{O}(\lambda^{-1}) \right)&\mathcal{O}(\lambda^{-1})\end{bmatrix}\lambda_{\blacksquare}^{(\kappa_{n}- \frac{1}{2})\sigma_{3}}\mathrm{e}^{-\mathrm{i}x\lambda\sigma_{3}/2},\] so consistency of (183) with (159) in the sector \(\mathrm{Im}(Z)<0\) requires \[\mathbf{K}_{n}^{-}:=\begin{bmatrix}0&\mathrm{e}^{3\mathrm{i}\pi\kappa_{n}/2}\\ -\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\kappa_{n}/2}\mu_{n}\gamma_{n}&0\end{bmatrix}.\] Some additional useful information can be gleaned by enforcing on \(\tilde{\mathbf{\Phi}}_{n}^{(\infty)}(\lambda,x)\) the jump conditions for \(|\lambda|>2\). The jump rays are illustrated in the \(Z\)-plane with their orientations in Figure 9. The Whittaker function \(W_{\kappa,\mu_{n}}(Z)\) can be viewed as an analytic function on the cut plane \(|\mathsf{Arg}(Z)|<\pi\), and it follows from the connection formula [9, Eqn. 
13.14.13] that the boundary values on the negative real axis are related by \[W_{\kappa,\mu_{n}}(-Z+\mathrm{i}0)=\mathrm{e}^{2\pi\mathrm{i}\kappa}W_{\kappa, \mu_{n}}(-Z-\mathrm{i}0)+\frac{2\pi\mathrm{i}\mathrm{e}^{\mathrm{i}\pi\kappa} }{\Gamma(\frac{1}{2}+\mu_{n}-\kappa)\Gamma(\frac{1}{2}-\mu_{n}-\kappa)}W_{- \kappa,\mu_{n}}(Z),\quad Z>0. \tag{184}\] Note that the denominators in the second term on the right hand side of (184) are finite due to condition (iii) in the definition of generic data; see the top of Section 7. Indeed, it follows from (158) and (170) that \[e_{1}^{\pm 2}=e_{\infty}^{2}\Leftrightarrow\frac{1}{2}-\kappa_{n}\pm\mu_{n} \in\mathbb{Z}.\] On \(Z\in\mathbb{R}_{-}\) the left (\(+\)) and right (\(-\)) boundary values correspond to the limit from \(\mathrm{Im}(Z)<0\) and \(\mathrm{Im}(Z)>0\) respectively. Therefore, the second column of \(\mathbf{W}(Z;\kappa,\mu_{n})\) is continuous across \(\mathbb{R}_{-}\), and from (184) (replacing \(Z\) with \(-Z\)), \[\mathbf{W}_{-}(Z;\kappa,\mu_{n})=\begin{bmatrix}\alpha_{\kappa,\mu _{n}}Z^{-1}W_{\kappa-1,\mu_{n}}(Z+\mathrm{i}0)&-Z^{-1}W_{1-\kappa,\mu_{n}}(-Z )\\ W_{\kappa,\mu_{n}}(Z+\mathrm{i}0)&W_{-\kappa,\mu_{n}}(-Z)\end{bmatrix}\] \[=\begin{bmatrix}\mathrm{e}^{2\pi\mathrm{i}(\kappa-1)}\alpha_{ \kappa,\mu_{n}}Z^{-1}W_{\kappa-1,\mu_{n}}(Z-\mathrm{i}0)+\frac{2\pi\mathrm{i} \mathrm{e}^{\mathrm{i}\pi(\kappa-1)}\alpha_{\kappa,\mu_{n}}Z^{-1}W_{1-\kappa, \mu_{n}}(-Z)}{\Gamma(\frac{3}{2}+\mu_{n}-\kappa)\Gamma(\frac{3}{2}-\mu_{n}- \kappa)}&-Z^{-1}W_{1-\kappa,\mu_{n}}(-Z)\\ \mathrm{e}^{2\pi\mathrm{i}\kappa}W_{\kappa,\mu_{n}}(Z-\mathrm{i}0)+\frac{2\pi \mathrm{i}\mathrm{e}^{\mathrm{i}\pi\kappa}W_{-\kappa,\mu_{n}}(-Z)}{\Gamma( \frac{1}{2}+\mu_{n}-\kappa)\Gamma(\frac{1}{2}-\mu_{n}-\kappa)}&W_{-\kappa,\mu _{n}}(-Z)\end{bmatrix}\] \[=\begin{bmatrix}\mathrm{e}^{2\pi\mathrm{i}\kappa}\alpha_{\kappa, \mu_{n}}Z^{-1}W_{\kappa-1,\mu_{n}}(Z-\mathrm{i}0)-\frac{2\pi\mathrm{i}\mathrm{e }^{\mathrm{i}\pi\kappa}Z^{-1}W_{1-\kappa,\mu_{n}}(-Z)}{\Gamma(\frac{1}{2}+\mu_{ n}-\kappa)\Gamma(\frac{1}{2}-\mu_{n}-\kappa)}&-Z^{-1}W_{1-\kappa,\mu_{n}}(-Z)\\ \mathrm{e}^{2\pi\mathrm{i}\kappa}W_{\kappa,\mu_{n}}(Z-\mathrm{i}0)+\frac{2\pi \mathrm{i}\mathrm{e}^{\mathrm{i}\pi\kappa}W_{-\kappa,\mu_{n}}(-Z)}{\Gamma( \frac{1}{2}+\mu_{n}-\kappa)\Gamma(\frac{1}{2}-\mu_{n}-\kappa)}&W_{-\kappa,\mu _{n}}(-Z)\end{bmatrix}\] \[=\mathbf{W}_{+}(Z;\kappa,\mu_{n})\begin{bmatrix}\mathrm{e}^{2\pi \mathrm{i}\kappa}&0\\ \frac{2\pi\mathrm{i}\mathrm{e}^{\mathrm{i}\pi\kappa}}{\Gamma(\frac{1}{2}+\mu_{ n}-\kappa)\Gamma(\frac{1}{2}-\mu_{n}-\kappa)}&1\end{bmatrix},\quad Z<0.\] Here, on the third line we used the definition (174) of \(\kappa_{\kappa,\mu_{n}}\) and the factorial identity \(\Gamma(\diamond+1)=\diamond\Gamma(\diamond)\). 
Since \(\mathbf{H}(\lambda,x)\) is analytic across the \(\mathrm{i}\mathbb{R}_{+}\), it follows that \(\mathbf{\Phi}^{(\infty)}(\lambda)=\mathbf{\Phi}^{(\infty)}_{n}(\lambda,x)\) satisfies the jump condition \[\tilde{\mathbf{\Phi}}^{(\infty)}_{+}(\lambda) =\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)\left[\mathbf{W}_{ -}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n})\mathbf{K}_{n}^{+}\right]^{-1}\mathbf{ W}_{+}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n})\mathbf{K}_{n}^{-}\] \[=\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)(\mathbf{K}_{n}^{ +})^{-1}\left[\begin{matrix}\mathrm{e}^{2\pi\mathrm{i}\pi\kappa_{n}}&0\\ 2\pi\mathrm{i}\mathrm{e}^{\mathrm{i}\pi\kappa_{n}}&1\end{matrix}\right]^{-1} \mathbf{K}_{n}^{-}\] \[=\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)\begin{bmatrix}1& \frac{2\pi\mathrm{e}^{\mathrm{i}\pi\kappa_{n}}}{\mu_{n}\gamma_{n}\Gamma( \frac{1}{2}+\mu_{n}-\kappa_{n})\Gamma(\frac{1}{2}-\mu_{n}-\kappa_{n})}\\ 0&1\end{bmatrix},\quad\lambda\in\mathrm{i}\mathbb{R}_{+}.\] Requiring that this matches with the corresponding jump condition in Figure 8 gives the condition \[\frac{2\pi\mathrm{e}^{\mathrm{i}\pi\kappa_{n}}}{\mu_{n}\gamma_{n}\Gamma(\frac{ 1}{2}+\mu_{n}-\kappa_{n})\Gamma(\frac{1}{2}-\mu_{n}-\kappa_{n})}=\frac{e_{ \infty}^{2}-e_{1}^{2}}{e_{1}^{2}e_{\infty}^{4}}.\] For \(Z\in\mathbb{R}_{+}\) the \(\pm\) boundary values correspond to the limit from \(\mathrm{Im}(Z)\gtrless 0\). Therefore now the first column of \(\mathbf{W}(Z;\kappa,\mu_{n})\) is continuous across real \(\mathbb{R}_{+}\), and from (184), \[\mathbf{W}_{-}(Z;\kappa,\mu_{n})=\begin{bmatrix}\kappa_{\kappa, \mu_{n}}Z^{-1}W_{\kappa-1,\mu_{n}}(Z)&-Z^{-1}W_{1-\kappa,\mu_{n}}(-Z+\mathrm{i }0)\\ W_{\kappa,\mu_{n}}(Z)&W_{-\kappa,\mu_{n}}(-Z+\mathrm{i}0)\end{bmatrix}\] \[=\begin{bmatrix}\kappa_{\kappa,\mu_{n}}Z^{-1}W_{\kappa-1,\mu_{n} }(Z)&-\mathrm{e}^{2\pi\mathrm{i}\pi(1-\kappa)}Z^{-1}W_{1-\kappa,\mu_{n}}(-Z- \mathrm{i}0)-\frac{2\pi\mathrm{i}\mathrm{e}^{\mathrm{i}\pi(1-\kappa)}Z^{-1}W_{ \kappa-1,\mu_{n}}(Z)}{\Gamma(-\frac{1}{2}+\mu_{n}+\kappa)\Gamma(-\frac{1}{2}- \mu_{n}+\kappa)}\\ W_{\kappa,\mu_{n}}(Z)&\mathrm{e}^{-2\pi\mathrm{i}\kappa}W_{-\kappa,\mu_{n}}(-Z- \mathrm{i}0)+\frac{2\pi\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\kappa}W_{\kappa, \mu_{n}}(Z)}{\Gamma(\frac{1}{2}+\mu_{n}+\kappa)\Gamma(\frac{1}{2}-\mu_{n}- \kappa)}\end{bmatrix}\] \[=\begin{bmatrix}\kappa_{\kappa,\mu_{n}}Z^{-1}W_{\kappa-1,\mu_{n} }(Z)&-\mathrm{e}^{-2\pi\mathrm{i}\kappa}Z^{-1}W_{1-\kappa,\mu_{n}}(-Z- \mathrm{i}0)+\frac{2\pi\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\kappa}\kappa_{ \kappa,\mu_{n}}Z^{-1}W_{\kappa-1,\mu_{n}}(Z)}{\Gamma(\frac{1}{2}+\mu_{n}+ \kappa)\Gamma(\frac{1}{2}-\mu_{n}+\kappa)}\\ W_{\kappa,\mu_{n}}(Z)&\mathrm{e}^{-2\pi\mathrm{i}\kappa}W_{-\kappa,\mu_{n}}(-Z- \mathrm{i}0)+\frac{2\pi\mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\kappa}W_{\kappa, \mu_{n}}(Z)}{\Gamma(\frac{1}{2}+\mu_{n}+\kappa)\Gamma(\frac{1}{2}-\mu_{n}+ \kappa)}\end{bmatrix}\] \[=\mathbf{W}_{+}(Z;\kappa,\mu_{n})\begin{bmatrix}1&\frac{2\pi \mathrm{i}\mathrm{e}^{-\mathrm{i}\pi\kappa}}{\Gamma(\frac{1}{2}+\mu_{n}+\kappa) \Gamma(\frac{1}{2}-\mu_{n}+\kappa)}\\ 0&\mathrm{e}^{-2\pi\mathrm{i}\kappa}\end{bmatrix},\quad Z>0.\] Again here, the finiteness of the denominators is guaranteed by condition (iii) at the top of Section 7. 
Since \(\mathbf{H}(\lambda,x)\) changes sign across \(\mathrm{i}\mathbb{R}_{-}\), we get that \(\mathbf{\Phi}^{(\infty)}(\lambda)=\tilde{\mathbf{\Phi}}^{(\infty)}_{n}(\lambda,x)\) satisfies the jump condition \[\tilde{\mathbf{\Phi}}^{(\infty)}_{+}(\lambda) =-\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)\left[\mathbf{W}_{ -}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n})\mathbf{K}_{n}^{-}\right]^{-1}\mathbf{W }_{+}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n})\mathbf{K}_{n}^{+}\] \[=\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)\left(\mathbf{K}_{n}^ {-}\right)^{-1}\begin{bmatrix}-1&-\frac{2\pi\mathrm{i}\mathrm{e}^{-\mathrm{i} \pi\kappa_{n}}}{\Gamma(\frac{1}{2}+\mu_{n}+\kappa_{n})\Gamma(\frac{1}{2}-\mu_{n }+\kappa_{n})}\\ 0&-\mathrm{e}^{-2\pi\mathrm{i}\kappa_{n}}\end{bmatrix}^{-1}\mathbf{K}_{n}^{+}\] \[=\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)\begin{bmatrix}- \mathrm{e}^{2\pi\mathrm{i}\pi\kappa_{n}}&0\\ \frac{2\pi\mathrm{e}^{-\mathrm{i}\pi\kappa_{n}}\mu_{n}\gamma_{n}}{\Gamma(\frac{1 }{2}+\mu_{n}+\kappa_{n})\Gamma(\frac{1}{2}-\mu_{n}+\kappa_{n})}&-\mathrm{e}^ {-2\pi\mathrm{i}\kappa_{n}}\end{bmatrix}\] \[=\tilde{\mathbf{\Phi}}^{(\infty)}_{-}(\lambda)\begin{bmatrix}e_{ \infty}^{2}&0\\ e_{\infty}^{2}s_{2}^{\infty}&e_{\infty}^{-2}\end{bmatrix}=\tilde{\mathbf{\Phi}} ^{(\infty)}_{-}(\lambda)\mathbf{S}_{2}^{\infty}e_{\infty}^{2\gamma_{3}},\quad \lambda\in L^{\infty}_{\blacksquare}.\] The last two equalities follow by a direct calculation using the definitions of \(s_{2}^{\infty},\kappa_{n}\), \(e_{1}\), and \(e_{\infty}\) in (110), (170), and (158), respectively, along with the expression \[\mu_{n}\gamma_{n}=\frac{2\pi\mathrm{e}^{\pi\mathrm{i}\kappa_{n}}}{\Gamma \left(\frac{1}{2}+\mu_{n}-\kappa_{n}\right)\Gamma\left(\frac{1}{2}-\mu_{n}- \kappa_{n}\right)}\cdot\frac{e_{1}^{2}e_{\infty}^{4}}{e_{\infty}^{2}-e_{1}^{2 }}, \tag{185}\] and the classical identity \[\Gamma\left(\frac{1}{2}-z\right)\Gamma\left(\frac{1}{2}+z\right)=\frac{\pi}{ \cos(\pi z)}. \tag{186}\] #### 7.2.4. The parametrix \(\tilde{\Phi}_{n}^{(\infty)}(\lambda,x)\) in the region \(|\lambda|<2\) We use the identity [9, Eqn. 13.14.33] to express the elements of \(\mathbf{W}(Z;\kappa,\mu)\) in terms of the alternative basis of solutions \(M_{-\kappa,\pm\mu}(-Z)\) of Whittaker's equation with parameters \((\kappa,\mu)\) that form a numerically satisfactory pair in a neighborhood of the origin and that are analytic for \(\mathsf{Arg}(-Z)\in(-\pi,\pi)\). Moreover, these functions are the Maclaurin series associated with the regular singular point at \(Z=0\), so they have the property that \[M_{\kappa,\mu}(-Z)(-Z)^{-\frac{1}{2}-\mu}=1+\mathcal{O}(Z)\quad\text{as} \quad Z\to 0, \tag{187}\] where the power function denotes the principal branch and where the error term represents an analytic function of \(Z\) vanishing at the origin. To deal with the first column of \(\mathbf{W}(Z;\kappa,\mu)\) we also use the corresponding identity \(M_{\kappa,\mu}(Z)=\mathrm{e}^{\pm\mathrm{i}\pi(\frac{1}{2}+\mu)}M_{-\kappa, \mu}(-Z)\) which holds for \(\pm\mathrm{Im}(Z)>0\) (see also [9, Eqn. 13.14.10]). 
Using the above identities, and under the condition \(2\mu\not\in\mathbb{Z}\) (which follows from the condition (i) at the beginning of Section 7 in our case), we can write the elements of \(\mathbf{W}(Z;\kappa,\mu)\) in the form \[W_{11}(Z;\kappa,\mu)=-Z^{-1}\frac{\Gamma(-2\mu)\Gamma(\frac{1}{2}-\mu+\kappa) }{\Gamma(\frac{1}{2}-\mu-\kappa)\Gamma(-\frac{1}{2}-\mu+\kappa)}\mathrm{e}^{ \pm\mathrm{i}\pi(\frac{1}{2}+\mu)}M_{1-\kappa,\mu}(-Z)\\ -Z^{-1}\frac{\Gamma(2\mu)\Gamma(\frac{1}{2}+\mu+\kappa)}{\Gamma( \frac{1}{2}+\mu-\kappa)\Gamma(-\frac{1}{2}+\mu+\kappa)}\mathrm{e}^{\pm \mathrm{i}\pi(\frac{1}{2}-\mu)}M_{1-\kappa,-\mu}(-Z),\quad\pm\mathrm{Im}(Z)>0,\] \[W_{12}(Z;\kappa,\mu)=-Z^{-1}\frac{\Gamma(-2\mu)}{\Gamma(-\frac{1}{2}-\mu+ \kappa)}M_{1-\kappa,\mu}(-Z)-Z^{-1}\frac{\Gamma(2\mu)}{\Gamma(-\frac{1}{2}+ \mu+\kappa)}M_{1-\kappa,-\mu}(-Z),\] \[W_{21}(Z;\kappa,\mu)=\frac{\Gamma(-2\mu)}{\Gamma(\frac{1}{2}-\mu-\kappa)} \mathrm{e}^{\pm\mathrm{i}\pi(\frac{1}{2}+\mu)}M_{-\kappa,\mu}(-Z)+\frac{\Gamma (2\mu)}{\Gamma(\frac{1}{2}+\mu-\kappa)}\mathrm{e}^{\pm\mathrm{i}\pi(\frac{1}{ 2}-\mu)}M_{-\kappa,-\mu}(-Z),\quad\pm\mathrm{Im}(Z)>0,\] and \[W_{22}(Z;\kappa,\mu)=\frac{\Gamma(-2\mu)}{\Gamma(\frac{1}{2}-\mu+\kappa)}M_{- \kappa,\mu}(-Z)+\frac{\Gamma(2\mu)}{\Gamma(\frac{1}{2}+\mu+\kappa)}M_{-\kappa,-\mu}(-Z).\] These expressions can be usefully combined into a matrix identity: \[\mathbf{W}(Z;\kappa,\mu)=\mathbf{M}(Z;\kappa,\mu)\mathbf{G}_{\kappa,\mu}^{\pm },\quad\pm\mathrm{Im}(Z)>0, \tag{188}\] where \[\mathbf{M}(Z;\kappa,\mu):=\begin{bmatrix}(\frac{1}{2}-\kappa+\mu)Z^{-1}M_{1- \kappa,\mu}(-Z)&(\frac{1}{2}-\kappa-\mu)Z^{-1}M_{1-\kappa,-\mu}(-Z)\\ M_{-\kappa,\mu}(-Z)&M_{-\kappa,-\mu}(-Z)\end{bmatrix},\quad\mathsf{Arg}(-Z) \in(-\pi,\pi), \tag{189}\] and \[\mathbf{G}_{\kappa,\mu}^{\pm}:=\begin{bmatrix}\frac{\Gamma(-2\mu)\mathrm{e}^{ \pm\mathrm{i}\pi(\frac{1}{2}+\mu)}}{\Gamma(\frac{1}{2}-\mu-\kappa)}&\frac{ \Gamma(-2\mu)}{\Gamma(\frac{1}{2}-\mu+\kappa)}\\ \frac{\Gamma(2\mu)\mathrm{e}^{\pm\mathrm{i}\pi(\frac{1}{2}-\mu)}}{\Gamma(\frac{ 1}{2}+\mu-\kappa)}&\frac{\Gamma(2\mu)}{\Gamma(\frac{1}{2}+\mu+\kappa)}\end{bmatrix}.\] To define the parametrix \(\tilde{\Phi}_{n}^{(\infty)}(\lambda,x)\) for \(|\lambda|<2\), we first introduce a constant matrix by \[\mathbf{J}_{n}:=\mathbf{G}_{\kappa_{n},\mu_{n}}^{+}\mathbf{K}_{n}^{+}\mathbf{S} _{1}^{\infty}\mathbf{E}^{\infty}=\mathbf{G}_{\kappa_{n},\mu_{n}}^{-}\mathbf{K}_ {n}^{-}\mathbf{E}^{\infty} \tag{190}\] The equality of these two expressions can be seen as follows. First, combining (183) and (188), and using the fact that the matrix \(\mathbf{M}(\mathrm{i}x\lambda;x,\mu_{n})\) is analytic in a neighborhood of \(\lambda=2\mathrm{i}\), the jump condition for \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) across the positive imaginary axis for \(|\lambda|>2\) shown in Figure 8 implies the identity \(\mathbf{G}_{\kappa_{n},\mu_{n}}^{+}\mathbf{K}_{n}^{+}\mathbf{S}_{1}^{\infty}= \mathbf{G}_{\kappa_{n},\mu_{n}}^{-}\mathbf{K}_{n}^{-}\), which yields the desired equality. \[\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x):=\lambda_{\blacksquare}^{\sigma_{3}/2}x^{ \kappa_{n}\sigma_{3}}\mathbf{D}_{n}\mathbf{M}(\mathrm{i}x\lambda;\kappa_{n}, \mu_{n})\mathbf{J}_{n},\quad\text{for }|\lambda|<2. \tag{191}\] It is straightforward to then check that, regardless of the choice of \(\mu_{n}\), the matrix \(\mathbf{J}_{n}\) defined by (190) is diagonal. 
Comparing (183) and (191) shows that the jump conditions for \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) across the arcs of the circle \(|\lambda|=2\) shown in Figure 8 are satisfied. Using (187) then proves that \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) satisfies the simple jump condition across the negative imaginary axis with \(|\lambda|<2\) shown in Figure 8 and that an expansion of the form shown in (160) holds. To check that the matrix \(\mathbf{J}_{n}\) is diagonal and arrive at its final form below, we use the identity (186) to get \[\begin{split}\Gamma\left(\frac{1}{2}+\kappa_{n}-\mu_{n}\right) \Gamma\left(\frac{1}{2}-\kappa_{n}+\mu_{n}\right)&=\frac{\pi}{ \cos\left(\pi(\kappa_{n}-\mu_{n})\right)}=\frac{2\pi\mathrm{i}e_{1}e_{\infty}} {e_{1}^{2}-e_{\infty}^{2}},\\ \Gamma\left(\frac{1}{2}+\kappa_{n}+\mu_{n}\right)\Gamma\left( \frac{1}{2}-\kappa_{n}-\mu_{n}\right)&=\frac{\pi}{\cos\left(\pi( \kappa_{n}+\mu_{n})\right)}=\frac{2\pi\mathrm{i}e_{1}e_{\infty}}{1-e_{1}^{2}e_ {\infty}^{2}}.\end{split} \tag{192}\] The result is that the diagonal matrix \(\mathbf{J}_{n}\) from (190) is given by \[\mathbf{J}_{n}=\begin{bmatrix}\frac{\mathrm{e}^{\mathrm{i}\pi/4}e_{1}e_{ \infty}^{2/2}\Gamma(-2\mu_{n})}{\Gamma\left(\frac{1}{2}-\kappa_{n}-\mu_{n} \right)}&0\\ 0&\frac{\mathrm{e}^{\mathrm{i}\pi/4}\left(e_{1}^{4}-1\right)e_{ \infty}^{3/2}\Gamma(2\mu_{n})}{e_{1}\left(e_{1}^{2}-e_{\infty}^{2}\right) \Gamma\left(\frac{1}{2}-\kappa_{n}+\mu_{n}\right)}\end{bmatrix}. \tag{193}\] ### Parametrix for \(\mathbf{\Phi}_{n}(\lambda)\) near \(\lambda=0\) By definition, the parametrix \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) satisfies the following Riemann-Hilbert problem. **Riemann-Hilbert Problem 5**.: _Fix generic monodromy parameters \((e_{1},e_{2})\) determining the Stokes and connection matrices, \(n\in\mathbb{Z}\), and \(x>0\). We seek a \(2\times 2\) matrix function \(\lambda\mapsto\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) satisfying:_ * _Analyticity:_ \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) _is analytic in_ \(\mathbb{C}\setminus\Gamma^{(0)}\)_, where_ \(\Gamma^{(0)}=\{|\lambda|=\frac{1}{2}\}\cup(\mathrm{i}\mathrm{i}\mathrm{R} \cap\{\mathrm{Im}\lambda<\frac{1}{2}\})\) _is the jump contour shown in Figure_ 10_._ * _Jump condition:_ \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) _has continuous boundary values on_ \(\Gamma^{(0)}\setminus\{0\}\) _from each component of_ \(\mathbb{C}\setminus\Gamma^{(0)}\)_, which satisfy_ \(\mathbf{\Phi}_{n,+}^{(0)}(\lambda,x)=\mathbf{\Phi}_{n,-}^{(0)}(\lambda,x) \mathbf{J}_{\mathbf{\Phi}_{n}^{(0)}}(\lambda)\)_, where_ \(\mathbf{J}_{\mathbf{\Phi}_{n}^{(0)}}(\lambda)\) _is as shown in Figure_ 10 _and where the_ \(+\) _(resp.,_ \(-\)_) subscript denotes a boundary value taken from the left (resp., right) of an arc of_ \(\Gamma^{(0)}\)_._ * _Normalization:_ \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) _satisfies the asymptotic conditions_ (194) \[\mathbf{\Phi}_{n}^{(0)}(\lambda,x)=\mathcal{O}(1)\lambda_{\blacksquare}^{\mu_{n} \sigma_{3}}\quad\text{as}\quad\lambda\to\infty,\] _where_ \(\mathcal{O}(1)\) _refers to a function analytic and bounded in a neighborhood of_ \(\lambda=\infty\) _and_ (195) We can write down the unique solution \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) explicitly in terms of the parametrix \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) obtained in Section 7.2, but taken with the index \(1-n\) instead of \(n\) and \(\Theta_{\infty}\) replaced by \(\Theta_{0}\). 
If we indicate the dependence of \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) and \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)\) on \(\Theta_{\infty}\) and \(\Theta_{0}\) respectively with the notation \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)=\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x,\Theta_{\infty})\) and \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x)=\mathbf{\Phi}_{n}^{(0)}(\lambda,x,\Theta_{0})\), then we have the following. **Proposition 5**.: _Fix \(\Theta_{\infty},\Theta_{0}\in\mathbb{C}\) and generic monodromy parameters \((e_{1},e_{2})\). Then_ \[\tilde{\mathbf{\Phi}}_{n}^{(0)}(\lambda,x,\Theta_{0})=\begin{cases}e_{0}^{3\sigma_{ 3}}\tilde{\mathbf{\Phi}}_{-n}^{(\infty)}(-\lambda^{-1},x,\Theta_{0})e_{0}^{-2 \sigma_{3}}&,\qquad\qquad|\lambda|<\frac{1}{2},\\ e_{0}^{3\sigma_{3}}\tilde{\mathbf{\Phi}}_{-n}^{(\infty)}(-\lambda^{-1},x,\Theta_{ 0})\begin{bmatrix}0&\beta_{n}\\ -\beta_{n}^{-1}&0\end{bmatrix},&\qquad|\lambda|>\frac{1}{2},\end{cases} \tag{196}\] _where_ \[\beta_{n}:=\frac{1-e_{1}^{4}}{e_{1}^{2}(1-e_{0}^{2}e_{1}^{2})} \tag{197}\] Proof.: The mapping \(\lambda\mapsto-\lambda^{-1}\) takes the contour \(\Gamma^{(0)}\) onto the contour \(\Gamma^{(\infty)}\) up to the reversal of orientation of certain arcs, and swaps the circles centered at the origin of radii \(\frac{1}{2}\) and \(2\). Therefore the domain of analyticity of \(\tilde{\mathbf{\Phi}}_{n}^{(0)}(\lambda,x,\Theta_{0})\) is as desired. Under the map \(n\mapsto-n,\Theta_{\infty}\mapsto\Theta_{0}\), the exponentials defined in (158) satisfy \(e_{0}^{2}\mapsto e_{\infty}^{2}\), whereas \(\mu_{n}=\mu_{-n}\) since this quantity depends only on the parity of \(n\). This implies that the Stokes matrices defined in (89)-(137) satisfy the corresponding identities \[\mathbf{S}_{1}^{\infty}\mapsto e_{0}^{-2\sigma_{3}}(\mathbf{S}_{1}^{0})^{-1}e _{0}^{2\sigma_{3}}\quad\text{and}\quad\mathbf{S}_{2}^{\infty}e_{\infty}^{2 \sigma_{3}}\mapsto e_{0}^{-2\sigma_{3}}(\mathbf{S}_{2}^{0}e_{0}^{-2\sigma_{3} })^{-1}e_{0}^{2\sigma_{3}}.\] Comparing Figures 8 and 10 then shows that the function defined by (196) satisfied the required jump conditions across the imaginary axis for \(|\lambda|<\frac{1}{2}\). Likewise, the jump condition on the negative imaginary axis for \(|\lambda|>\frac{1}{2}\) is easily verified due to the identity valid for any \(a\neq 0\): \[e_{1}^{-2\sigma_{3}}\begin{bmatrix}0&a\\ -a^{-1}&0\end{bmatrix}+\begin{bmatrix}0&a\\ -a^{-1}&0\end{bmatrix}e_{1}^{2\sigma_{3}}=\mathbf{0}.\] Finally, the fact that \(\tilde{\mathbf{\Phi}}_{n}^{(0)}(\lambda,x,\Theta_{0})\) defined by (196) satisfies the required jump conditions across the circle \(|\lambda|=\frac{1}{2}\) follows from the corresponding jump conditions for \(\tilde{\mathbf{\Phi}}_{1-n}^{(\infty)}(\lambda,x,\Theta_{0})\) for \(|\lambda|=2\) and the identities \[(\mathbf{E}^{\infty})^{-1}\mapsto\begin{bmatrix}0&\beta_{n}\\ -\beta_{n}^{-1}&0\end{bmatrix}(\mathbf{S}_{1}^{0}\mathbf{E}^{0})^{-1}e_{0}^{2 \sigma_{3}}\quad\text{and}\quad(\mathbf{S}_{1}^{\infty}\mathbf{E}^{\infty})^ {-1}\mapsto\begin{bmatrix}0&\beta_{n}\\ -\beta_{n}^{-1}&0\end{bmatrix}(\mathbf{E}^{0})^{-1}e_{0}^{2\sigma_{3}}\] among the matrices \(\mathbf{E}^{\infty},\mathbf{E}^{0}\) defined in (111)-(112), which hold for the value of \(\beta_{n}\) indicated in (197). It only remains to verify the asymptotics in (194), (195). 
However, these follow from the corresponding formulae in (159), (160) with the help of the identity \[\lambda_{\blacksquare}^{p}=\mathbf{e}^{\mathrm{i}\pi p}\zeta_{\blacksquare}^{-p},\quad \zeta:=-\lambda^{-1},\] which holds for all \(\lambda\) not on the negative imaginary axis. Since the matrix function \(\mathbf{\Phi}_{n}^{(0)}(\lambda,x,\Theta_{0})\) defined by (196)-(197) satisfies all the required Riemann-Hilbert conditions, and there is at most one solution of those conditions, as is easily confirmed by a Liouville argument, the proof is finished. ### An equivalent Riemann-Hilbert problem on the unit circle The parametrix for \(\mathbf{\Phi}_{n}(\lambda,x)\) is by definition the following matrix function: \[\mathbf{\Phi}_{n}(\lambda,x):=\begin{cases}\mathbf{\tilde{\Phi}}_{n}^{(\infty )}(\lambda,x,\Theta_{\infty}),&|\lambda|>1,\\ \mathbf{\tilde{\Phi}}_{n}^{(0)}(\lambda,x,\Theta_{0}),&|\lambda|<1.\end{cases}\] This matrix function satisfies exactly the same jump conditions in the domains \(|\lambda|>1\) and \(|\lambda|<1\) as does \(\mathbf{\Phi}_{n}(\lambda,x)\) itself, and it is also consistent with the asymptotics given in (149)-(150) (note that \(\mathbf{\Psi}_{n}(\lambda,x)=\mathbf{\Phi}_{n}(\lambda,x)\) for \(|\lambda|\) sufficiently large or small). The parametrix has unit determinant, so the matrix quotient \[\mathbf{Q}_{n}(\lambda,x):=\mathbf{\Phi}_{n}(\lambda,x)\mathbf{\tilde{\Phi}}_ {n}(\lambda,x)^{-1}\] is an analytic function of \(\lambda\) except possibly on the jump contour \(\Gamma\) shown in Figure 6 and on the unit circle, where there is a discontinuity in the definition of \(\mathbf{\tilde{\Phi}}_{n}(\lambda,x)\). However, since the jumps of \(\mathbf{\tilde{\Phi}}_{n}(\lambda,x)\) and \(\mathbf{\Phi}_{n}(\lambda,x)\) agree on \(\Gamma\), a Morera argument shows that \(\mathbf{Q}_{n}(\lambda,x)\) is actually analytic both for \(|\lambda|>1\) and for \(0<|\lambda|<1\). The asymptotic behavior of the factors in \(\mathbf{Q}_{n}(\lambda,x)\) as \(\lambda\to 0\) then shows that any singularity of \(\mathbf{Q}_{n}(\lambda,x)\) at the origin \(\lambda=0\) is removable, and the asymptotic behavior of the same factors in the limit \(\lambda\to\infty\) shows that \(\mathbf{Q}_{n}(\lambda,x)\to\mathbb{I}\) as \(\lambda\to\infty\). \(\mathbf{Q}_{n}(\lambda,x)\) is therefore characterized by its jump condition across the unit circle \(|\lambda|=1\). Taking counter-clockwise orientation for the circle, the jump condition for \(\mathbf{Q}_{n}(\lambda,x)\) reads \[\mathbf{Q}_{n,+}(\lambda,x)=\mathbf{Q}_{n,-}(\lambda,x)\mathbf{\tilde{\Phi}}_ {n}^{(\infty)}(\lambda,x,\Theta_{\infty})e_{2}^{\sigma_{3}}\mathbf{\tilde{ \Phi}}_{n}^{(0)}(\lambda,x,\Theta_{0})^{-1},\quad|\lambda|=1.\] Using Proposition 5, the jump matrix can be written as \[\mathbf{\tilde{\Phi}}_{n}^{(\infty)}(\lambda,x,\Theta_{\infty}) e_{2}^{\sigma_{3}}\mathbf{\tilde{\Phi}}_{n}^{(0)}(\lambda,x,\Theta_{0})^{-1}\\ =\mathbf{\tilde{\Phi}}_{n}^{(\infty)}(\lambda,x,\Theta_{\infty}) e_{2}^{\sigma_{3}}\begin{bmatrix}0&-\beta_{n}\\ \beta_{n}^{-1}&0\end{bmatrix}\mathbf{\tilde{\Phi}}_{-n}^{(\infty)}(-\lambda^{- 1},x,\Theta_{0})^{-1}e_{0}^{-3\sigma_{3}},\quad|\lambda|=1. \tag{198}\] We summarize by writing the Riemann-Hilbert problem for \(\mathbf{Q}_{n}(\lambda,x)\). **Riemann-Hilbert Problem 6**.: _Fix generic monodromy parameters \((e_{1},e_{2})\), \(n\in\mathbb{Z}\), and \(x\in\mathbb{C}\). 
Seek a \(2\times 2\) matrix function \(\lambda\mapsto\mathbf{Q}_{n}(\lambda,x)\) with the following properties:_ * _Analyticity:_ \(\mathbf{Q}_{n}(\lambda,x)\) _is an analytic function of_ \(\lambda\) _for_ \(|\lambda|\neq 1\)_._ * _Jump condition:_ \(\mathbf{Q}_{n}(\lambda,x)\) _takes analytic boundary values on the unit circle from the interior and exterior, denoted_ \(\mathbf{Q}_{n,+}(\lambda,x)\) _and_ \(\mathbf{Q}_{n,-}(\lambda,x)\) _for_ \(|\lambda|=1\) _respectively, and they are related by_ \[\mathbf{Q}_{n,+}(\lambda,x)=\mathbf{Q}_{n,-}(\lambda,x)\mathbf{\tilde{\Phi}}_ {n}^{(\infty)}(\lambda,x,\Theta_{\infty})e_{2}^{\sigma_{3}}\begin{bmatrix}0&- \beta_{n}\\ \beta_{n}^{-1}&0\end{bmatrix}\mathbf{\tilde{\Phi}}_{-n}^{(\infty)}(-\lambda^{- 1},x,\Theta_{0})^{-1}e_{0}^{-3\sigma_{3}}.\] _Normalization:_ \(\mathbf{Q}_{n}(\lambda,x)\to\mathbb{I}\) _as_ \(\lambda\to\infty\)_._ Henceforth, to avoid the notation becoming unwieldy, we understand that all quantities appearing with subscript \(n\) are evaluated at parameter \(\Theta_{\infty}\) while quantities appearing with subscript \(-n\) are evaluated at parameter \(\Theta_{0}\). ### The limit \(n\to\infty\) Having succeeded in removing the problematic jump conditions along rays emanating from \(0,\infty\) in the \(\lambda\) plane by defining \(\mathbf{Q}_{n}(\lambda,x)\), we would next like to consider the limiting behavior of this problem as \(n\to+\infty\) with \(x=z/n\) and \(z\) fixed. It is convenient to first renormalize \(\mathbf{Q}_{n}(\lambda,x)\), essentially by a transformation that diagonalizes the coefficient \(\mu_{n}\mathbf{B}_{n}(x)\sigma_{3}\mathbf{B}_{n}(x)^{-1}\) of \(\lambda^{-1}\) in the coefficient matrix of the Lax equation in (163). In other words, in the jump condition for \(\mathbf{Q}_{n}(\lambda,x)\) we prefer to replace \(\mathbf{\tilde{\Phi}}_{n}^{(\infty)}(\lambda,x)\) with a suitable left-diagonal multiple of \(\mathbf{B}_{n}(x)^{-1}\mathbf{\tilde{\Phi}}_{n}^{(\infty)}(\lambda,x)\). Observe that the coefficient \(\mathbf{B}_{n}(x)\) is determined up to right-multiplication by a diagonal matrix by (164), in which the second row of the matrix on the right-hand side is \((c_{n}(x),-a_{n})=(\gamma_{n}x^{1-2\kappa_{n}},\mu_{n}^{-1}(\kappa_{n}-\frac{1}{2} ))\), where we used (170) and (179). Indeed, the first column \(\mathbf{b}_{n}^{(1)}(x)\) satisfies \((\gamma_{n}x^{1-2\kappa_{n}},\mu_{n}^{-1}(\kappa_{n}-\frac{1}{2})-1)\mathbf{b} _{n}^{(1)}(x)=0\) while the second column \(\mathbf{b}_{n}^{(2)}(x)\) satisfies \((\gamma_{n}x^{1-2\kappa_{n}},\mu_{n}^{-1}(\kappa_{n}-\frac{1}{2})+1)\mathbf{b} _{n}^{(2)}(x)=0\). By selecting specific constant factors for each column, we obtain a matrix \(\mathbf{P}_{n}(x)\) differing from \(\mathbf{B}_{n}(x)\) by right-multiplication by a diagonal matrix, and given explicitly by \[\mathbf{P}_{n}(x):=\begin{bmatrix}\frac{1}{2}-\kappa_{n}+\mu_{n}&\frac{1}{2}- \kappa_{n}-\mu_{n}\\ \mu_{n}\gamma_{n}x^{1-2\kappa_{n}}&\mu_{n}\gamma_{n}x^{1-2\kappa_{n}}\end{bmatrix},\] in which the dependence on the index \(n\) enters via (170) and (185). Then to get the desired modification of the jump matrix we set \[\mathbf{R}_{n}(\lambda,x):=\begin{cases}\mathbf{P}_{n}(x)^{-1}\mathbf{Q}_{n}( \lambda,x)\mathbf{P}_{n}(x),&|\lambda|>1,\\ \mathbf{P}_{n}(x)^{-1}\mathbf{Q}_{n}(\lambda,x)\epsilon_{0}^{3\sigma_{3}} \mathbf{P}_{-n}(x)d_{n}(x),&|\lambda|<1,\end{cases}\] where \(d_{n}(x)\) is a scalar satisfying \[d_{n}(x)^{2}=\frac{\det(\mathbf{P}_{n}(x))}{\det(\mathbf{P}_{-n}(x))}. 
\tag{199}\] Then, \(\mathbf{R}_{n}(\lambda,x)\) solves the following Riemann-Hilbert problem **Riemann-Hilbert Problem 7**.: _Fix generic monodromy parameters \((e_{1},e_{2})\), \(n\in\mathbb{Z}\), and \(x\in\mathbb{C}\). Seek a \(2\times 2\) matrix function \(\lambda\mapsto\mathbf{R}_{n}(\lambda,x)\) with the following properties:_ * _Analyticity:_ \(\mathbf{R}_{n}(\lambda,x)\) _is an analytic function of_ \(\lambda\) _for_ \(|\lambda|\neq 1\)_._ * _Jump condition:_ \(\mathbf{R}_{n}(\lambda,x)\) _takes analytic boundary values on the unit circle from the interior and exterior, denoted_ \(\mathbf{R}_{n,+}(\lambda,x)\) _and_ \(\mathbf{R}_{n,-}(\lambda,x)\) _for_ \(|\lambda|=1\) _respectively, and they are related by_ (200) \[\mathbf{R}_{n,+}(\lambda,x)=\mathbf{R}_{n,-}(\lambda,x)\mathbf{P}_{n}(x)^{-1 }\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)e_{2}^{\sigma_{3}}\begin{bmatrix}0&- \beta_{n}\\ \beta_{n}^{-1}&0\end{bmatrix}\mathbf{\Phi}_{-n}^{(\infty)}(-\lambda^{-1},x)^{ -1}\mathbf{P}_{-n}(x)d_{n}(x)\] \[\textbf{Normalization:}\mathbf{R}_{n}(\lambda,x)\to\mathbb{I}\text{ as } \lambda\to\infty.\] The matrices \(\Xi_{n}^{(6)}(x)\) and \(\mathbf{\Delta}_{n}^{(6)}(x)\) defined in (151) and (150), respectively, can be expressed in terms of \(\mathbf{R}_{n}(\lambda,x)\) as follows: \[\begin{split}\Xi_{n}^{(6)}(x)&=\mathbf{A}_{n}(x)+ \mathbf{P}_{n}(x)\left[\lim_{\lambda\to\infty}\lambda\left(\mathbf{R}_{n}( \lambda,x)-\mathbb{I}\right)\right]\mathbf{P}_{n}(x)^{-1}-\frac{1}{2}\mathrm{i} x\sigma_{3},\\ \mathbf{\Delta}_{n}^{(6)}(x)&=\mathbf{P}_{n}(x)\mathbf{R}_{n}(0,x) \mathbf{P}_{-n}(x)^{-1}d_{n}(x)^{-1}e_{0}^{-3\sigma_{3}}.\end{split} \tag{201}\] Here, \(\mathbf{A}_{n}(x)\) is the matrix coefficient defined in (159). We now show that the jump matrix in (200) has explicit limits as \(n\to+\infty\) along even or odd subsequences, with the convergence being uniform for \(|\lambda|=1\) and bounded \(z\) where \(x=z/n\). To this end, we compute the asymptotic behavior of \(\mathbf{P}_{n}(n^{-1}z)^{-1}\mathbf{\Phi}_{n}^{(\infty)}(\lambda,n^{-1}z)\) assuming that \(|\lambda|=1\). The relevant formula for \(\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\) in this setting is (191). When \(|\lambda|=1\), \[\mathbf{P}_{n}(x)^{-1}\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)=\frac{1}{2\mu_{n }^{2}\gamma_{n}x^{1-2\kappa_{n}}}\begin{bmatrix}\mu_{n}\gamma_{n}x^{1-2\kappa _{n}}&-\frac{1}{2}+\mu_{n}+\kappa_{n}\\ -\mu_{n}\gamma_{n}x^{1-2\kappa_{n}}&\frac{1}{2}+\mu_{n}-\kappa_{n}\end{bmatrix} \mathbf{D}_{n}x^{\kappa_{n}\sigma_{3}}\lambda_{\blacksquare}^{\sigma_{3}/2}\mathbf{M} (\mathrm{i}x\lambda;\kappa_{n},\mu_{n})\mathbf{J}_{n}.\] Using (182), we see that \[\mathbf{P}_{n}(x)^{-1}\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)=\frac{1}{2\mu_{n }^{2}\gamma_{n}}x^{\kappa_{n}-\frac{1}{2}}\begin{bmatrix}\mathrm{i}&-\frac{1}{2 }+\mu_{n}+\kappa_{n}\\ -\mathrm{i}&\frac{1}{2}+\mu_{n}-\kappa_{n}\end{bmatrix}x^{\sigma_{3}/2}\lambda _{\blacksquare}^{\sigma_{3}/2}\mathbf{M}(\mathrm{i}x\lambda;\kappa_{n},\mu_{n}) \mathbf{J}_{n}.\] Now, for \(x>0\), the principal branch power \((-{\rm i}x\lambda)^{\alpha_{3}/2}\) has the same domain of analyticity as \(\lambda_{\blacksquare}^{\alpha_{3}/2}\), and these two analytic functions are related by the identity \(\lambda_{\blacksquare}^{\alpha_{3}/2}=x^{-\alpha_{3}/2}{\rm e}^{{\rm i}i\pi\alpha_{3} /4}(-{\rm i}x\lambda)^{\alpha_{3}/2}\). 
Therefore, \[\begin{split}\mathbf{P}_{n}(x)^{-1}\mathbf{\Phi}_{n}^{(\infty)}( \lambda,x)&=\frac{1}{2\mu_{n}^{2}\gamma_{n}}x^{\kappa_{n}-\frac{1} {2}}\begin{bmatrix}{\rm i}&-\frac{1}{2}+\mu_{n}+\kappa_{n}\\ -{\rm i}&\frac{1}{2}+\mu_{n}-\kappa_{n}\end{bmatrix}{\rm e}^{{\rm i}\pi\alpha_ {3}/4}(-{\rm i}x\lambda)^{\alpha_{3}/2}\mathbf{M}({\rm i}x\lambda;\kappa_{n}, \mu_{n})\mathbf{J}_{n}\\ &=\frac{{\rm e}^{-{\rm i}\pi/4}}{2\mu_{n}^{2}\gamma_{n}}x^{\kappa_{n }-\frac{1}{2}}\begin{bmatrix}-1&-\frac{1}{2}+\mu_{n}+\kappa_{n}\\ 1&\frac{1}{2}+\mu_{n}-\kappa_{n}\end{bmatrix}(-Z)^{\alpha_{3}/2}\mathbf{M}(Z; \kappa_{n},\mu_{n})\mathbf{J}_{n},\quad Z={\rm i}x\lambda.\end{split} \tag{202}\] Now using (189), we have \[(-Z)^{\alpha_{3}/2}\mathbf{M}(Z;\kappa_{n},\mu_{n})\\ =\begin{bmatrix}(-\frac{1}{2}-\mu_{n}+\kappa_{n})(-Z)^{-\frac{1} {2}}M_{1-\kappa_{n},\mu_{n}}(-Z)&(-\frac{1}{2}+\mu_{n}+\kappa_{n})(-Z)^{- \frac{1}{2}}M_{1-\kappa_{n},-\mu_{n}}(-Z)\\ (-Z)^{-\frac{1}{2}}M_{-\kappa_{n},\mu_{n}}(-Z)&(-Z)^{-\frac{1}{2}}M_{-\kappa_ {n},-\mu_{n}}(-Z)\end{bmatrix}.\] The diagonal elements in (202) can be simplified using the identity (see [9, Eqn. 13.15.3]) \[(\kappa-\mu-\tfrac{1}{2})M_{\kappa-\frac{1}{2},\mu+\frac{1}{2}}(\diamond)+(1+ 2\mu)\diamond^{\frac{1}{2}}M_{\kappa,\mu}(\diamond)-(\kappa+\mu+\tfrac{1}{2}) M_{\kappa+\frac{1}{2},\mu+\frac{1}{2}}(\diamond)=0\] replacing \(\kappa\to\frac{1}{2}-\kappa_{n}\) and \(\mu\to\mu_{n}-\frac{1}{2}\) for the \((1,1)\) entry and \(\mu\to-\frac{1}{2}-\mu_{n}\) for the \((2,2)\) entry, and the off-diagonal elements can be simplified using the identity (see [9, Eqn. 13.15.4]) \[2\mu M_{\kappa-\frac{1}{2},\mu-\frac{1}{2}}(\diamond)-2\mu M_{\kappa+\frac{1} {2},\mu-\frac{1}{2}}(\diamond)-\diamond^{\frac{1}{2}}M_{\kappa,\mu}(\diamond)=0\] replacing \(\kappa\to\frac{1}{2}-\kappa_{n}\) and \(\mu\to\frac{1}{2}-\mu_{n}\) for the \((1,2)\) entry and \(\mu\to\frac{1}{2}+\mu_{n}\) for the \((2,1)\) entry. The result is that \[\mathbf{P}_{n}(x)^{-1}\mathbf{\Phi}_{n}^{(\infty)}(\lambda,x)\\ =\frac{{\rm e}^{-{\rm i}\pi/4}}{2\mu_{n}^{2}\gamma_{n}}x^{\kappa_{ n}-\frac{1}{2}}\begin{bmatrix}2\mu_{n}M_{\frac{1}{2}-\kappa_{n},\mu_{n}-\frac{1}{2}}( -Z)&\left(\frac{\kappa_{n}+\mu_{n}-\frac{1}{2}}{1-2\mu_{n}}\right)M_{\frac{1}{2 }-\kappa_{n},\frac{1}{2}-\mu_{n}}(-Z)\\ \left(\frac{\frac{1}{2}-\kappa_{n}+\mu_{n}}{1+2\mu_{n}}\right)M_{\frac{1}{2}- \kappa_{n},\frac{1}{2}+\mu_{n}}(-Z)&2\mu_{n}M_{\frac{1}{2}-\kappa_{n},-\frac{1 }{2}-\mu_{n}}(-Z)\end{bmatrix}\mathbf{J}_{n},\\ Z={\rm i}x\lambda. \tag{203}\] We will need the following result for the large \(n\) limit of Whittaker functions appearing here, cf. [9, Eqn. 13.21.1] **Lemma 3**.: _Assume that \(\mu\) is fixed with \(2\mu\neq-1,-2,-3,\dots\), and let \(f(\zeta;\mu)\) denote the entire function_ \[f(\zeta;\mu):=\sum_{s=0}^{\infty}\frac{(-\zeta)^{s}}{\Gamma(1+2\mu+s)s!}. \tag{204}\] _Then the asymptotic formula_ \[M_{\kappa,\mu}\left(\frac{\zeta}{\kappa}\right)=\Gamma(1+2\mu)\left(\frac{ \zeta}{\kappa}\right)^{\mu+1/2}\left[f(\zeta;\mu)+\mathcal{O}(\kappa^{-2})\right]\] _holds uniformly in the limit \(\kappa\to\infty\) in any (possibly complex) direction under the assumption \(\zeta=\mathcal{O}(1)\)._ Proof.: We start from the formula [9, Eqn. 
13.14.6] which holds under the indicated condition on \(\mu\): \[M_{\kappa,\mu}\left(\frac{\zeta}{\kappa}\right)=\Gamma(1+2\mu)\left(\frac{ \zeta}{\kappa}\right)^{\mu+1/2}{\rm e}^{-\zeta/(2\kappa)}\sum_{s=0}^{\infty} \frac{(-\zeta)^{s}}{\Gamma(1+2\mu+s)s!}\prod_{j=0}^{s-1}\left(1-\frac{\mu+1/2+j }{\kappa}\right).\] Clearly \(\mathrm{e}^{-\zeta/(2\kappa)}=1-\zeta/(2\kappa)+\mathcal{O}(\kappa^{-2})\) as \(\kappa\to\infty\) for \(\zeta=\mathcal{O}(1)\), and the product in the summand has the expansion \[\prod_{j=0}^{s-1}\left(1-\frac{\mu+1/2+j}{\kappa}\right)=1-\frac{1}{\kappa} \left[\left(\mu+\frac{1}{2}\right)s+\frac{1}{2}s(s-1)\right]+\mathcal{O}( \kappa^{-2}s^{s}),\quad\kappa\to\infty \tag{205}\] uniformly for all indices \(s\). This follows from the Fredholm expansion formula \[\prod_{j=0}^{s-1}(1+r_{j})=1+\sum_{j=0}^{s-1}r_{j}+\sum_{k=2}^{s}\sum_{ \begin{subarray}{c}S\subset\mathbb{Z}_{s}\\ |S|=k\end{subarray}}\prod_{l\in S}r_{l}\] and the estimate \[\left|\sum_{\begin{subarray}{c}S\subset\mathbb{Z}_{s}\\ |S|=k\end{subarray}}\prod_{l\in S}r_{l}\right|\leq\binom{s}{k}R_{s}^{k},\quad R _{s}:=\max_{0\leq l\leq s-1}\{|r_{l}|\}.\] Indeed, with \(r_{j}=-\kappa^{-1}(\mu+\frac{1}{2}+j)\), we have \[\sum_{j=0}^{s-1}r_{j}=-\frac{1}{\kappa}\sum_{j=0}^{s-1}\left(\mu+\frac{1}{2}+ j\right)=-\frac{1}{\kappa}\left[\left(\mu+\frac{1}{2}\right)s+\frac{1}{2}s(s-1) \right],\] and \(R_{s}\leq|\kappa|^{-1}(|\mu+\frac{1}{2}|+(s-1))\). Therefore, \(R_{s}^{k}\leq|\kappa|^{-2}(|\mu+\frac{1}{2}|+(s-1))^{k}\) holds for all \(k\geq 2\) whenever \(|\kappa|\geq 1\). Consequently, \[\left|\prod_{j=0}^{s-1}\left(1-\frac{\mu+1/2+j}{\kappa}\right)-1 +\frac{1}{\kappa}\left[\left(\mu+\frac{1}{2}\right)s+\frac{1}{2}s(s-1)\right] \right| =\left|\sum_{k=2}^{s}\sum_{\begin{subarray}{c}S\subset\mathbb{Z}_ {s}\\ |S|=k\end{subarray}}\prod_{l\in S}r_{l}\right|\] \[\leq\frac{1}{|\kappa|^{2}}\sum_{k=2}^{s}\binom{s}{k}\left(\left| \mu+\frac{1}{2}\right|+(s-1)\right)^{k}1^{s-k}\] \[\leq\frac{1}{|\kappa|^{2}}\sum_{k=0}^{s}\binom{s}{k}\left(\left| \mu+\frac{1}{2}\right|+(s-1)\right)^{k}1^{s-k}\] \[=\frac{(|\mu+\frac{1}{2}|+s)^{s}}{|\kappa|^{2}},\] which proves (205). Since the series \[\sum_{s=0}^{\infty}\frac{(-\zeta)^{s}s^{s}}{\Gamma(1+2\mu_{n}+s)s!}\] converges uniformly for \(|\zeta|\) bounded, it follows that \[M_{\kappa,\mu}\left(\frac{\zeta}{\kappa}\right)=\Gamma(1+2\mu)\left(\frac{ \zeta}{\kappa}\right)^{\mu+1/2}\left[f(\zeta;\mu)-\frac{1}{\kappa}g(\zeta;\mu) +\mathcal{O}(\kappa^{-2})\right]\] holds as \(\kappa\to\infty\) in \(\mathbb{C}\) with \(\zeta=\mathcal{O}(1)\), where \[g(\zeta;\mu):=\frac{1}{2}\zeta f(\zeta;\mu)+\left(\mu+\frac{1}{2}\right)\zeta f ^{\prime}(\zeta;\mu)+\frac{1}{2}\zeta^{2}f^{\prime\prime}(\zeta;\mu).\] Now using the series (204) one checks that for all indicated values of \(\mu\), \(g(\zeta;\mu)\) vanishes identically, so the proof is complete. The series in (204) defines an entire function of \(\zeta\) related to Bessel functions (see [9, Chapter 10]) in the following way: \[f(\zeta,\mu)=\zeta^{-\mu}J_{2\mu}(2\sqrt{\zeta}). \tag{206}\] We apply Lemma 3 to (203) by taking \(\zeta=-(\frac{1}{2}-\kappa_{n})Z=-(\frac{1}{2}-\kappa_{n})\mathrm{i}x\lambda\). If \(x=z/n\) and \(z=\mathcal{O}(1)\), then using (170), we see that \(\zeta=-\frac{1}{2}\mathrm{i}z\lambda+\mathcal{O}(n^{-1})\) holds for \(|\lambda|=1\). 
So, (203) becomes \[\mathbf{P}_{n}(x)^{-1}\tilde{\mathbf{\Phi}}_{n}^{(\infty)}( \lambda,x)\\ =\frac{\mathrm{e}^{-\mathrm{i}\pi/4}}{2\mu_{n}^{2}\gamma_{n}}x^{ \kappa_{n}-\frac{1}{2}}\left(\frac{\rho_{\infty}}{2}\begin{bmatrix}\Gamma(2\mu _{n}+1)J_{2\mu_{n}-1}(\rho_{\infty})&-\Gamma(1-2\mu_{n})J_{1-2\mu_{n}}(\rho_ {\infty})\\ \Gamma(2\mu_{n}+1)J_{2\mu_{n}+1}(\rho_{\infty})&-\Gamma(1-2\mu_{n})J_{-1-2\mu_ {n}}(\rho_{\infty})\end{bmatrix}+\mathcal{O}(n^{-1})\right)\,\left(\frac{n}{2} \right)^{-\mu_{n}\sigma_{3}}\mathbf{J}_{n}\] with \[\rho_{\infty}=\rho_{\infty}(\lambda,z):=(-2\mathrm{i}z\lambda)^{1/2}\quad \text{(principal branch)} \tag{207}\] holds in the limit \(n\to\infty\) with \(x=n^{-1}z\) uniformly for \(z=\mathcal{O}(1)\) and \(|\lambda|=1\). Similarly, to study the parametrix near \(0\), it will be convenient to rewrite formula (206) in terms of modified Bessel functions: \[f(\zeta,\mu)=(-\zeta)^{-\mu}I_{2\mu}\left(2\sqrt{-\zeta}\right).\] Replacing \(n\) with \(-n\), \(\Theta_{\infty}\) with \(\Theta_{0}\), and \(\lambda\) with \(-\lambda^{-1}\) and recalling that \(\mu_{-n}=\mu_{n}\), gives in the same limit, \[\mathbf{P}_{-n}(x)^{-1}\tilde{\mathbf{\Phi}}_{-n}^{(\infty)}(- \lambda^{-1},x)\\ =\frac{\mathrm{e}^{-\mathrm{i}\pi/4}}{2\mu_{n}^{2}\gamma_{-n}}x^{ \kappa_{-n}-\frac{1}{2}}\left(\frac{\rho_{0}}{2}\begin{bmatrix}\Gamma(2\mu_{n }+1)I_{2\mu_{n}-1}(\rho_{0})&\Gamma(1-2\mu_{n})I_{1-2\mu_{n}}(\rho_{0})\\ -\Gamma(2\mu_{n}+1)I_{2\mu_{n}+1}(\rho_{0})&-\Gamma(1-2\mu_{n})I_{-2\mu_{n}-1} (\rho_{0})\end{bmatrix}+\mathcal{O}(n^{-1})\right)\,\left(\frac{n}{2}\right)^ {-\mu_{n}\sigma_{3}}\mathbf{J}_{-n}\] with \[\rho_{0}=\rho_{0}(\lambda,z):=(2\mathrm{i}z\lambda^{-1})^{1/2}\quad\text{( principal branch)}. \tag{208}\] The jump matrix in (200) therefore reads \[\mathbf{R}_{n,-}(\lambda,n^{-1}z)^{-1}\mathbf{R}_{n,+}(\lambda,n^ {-1}z)=\\ \frac{\rho_{\infty}}{2}\begin{bmatrix}\Gamma(2\mu_{n}+1)J_{2\mu_{ n}-1}(\rho_{\infty})+\mathcal{O}(n^{-1})&-\Gamma(1-2\mu_{n})I_{1-2\mu_{n}}( \rho_{\infty})+\mathcal{O}(n^{-1})\\ \Gamma(2\mu_{n}+1)J_{2\mu_{n}+1}(\rho_{\infty})+\mathcal{O}(n^{-1})&-\Gamma(1- 2\mu_{n})J_{-1-2\mu_{n}}(\rho_{\infty})+\mathcal{O}(n^{-1})\end{bmatrix}\\ \cdot\frac{\gamma_{-n}}{\gamma_{n}}x^{\kappa_{n}-\kappa_{-n}}d_{n}( x)\left(\frac{n}{2}\right)^{-\mu_{n}\sigma_{3}}\mathbf{J}_{n}e_{2}^{\sigma_{3}} \begin{bmatrix}0&-\beta_{n}\\ \beta_{n}^{-1}&0\end{bmatrix}\mathbf{J}_{-n}^{-1}\left(\frac{n}{2}\right)^{\mu _{n}\sigma_{3}}\\ \cdot\frac{2}{\rho_{0}}\begin{bmatrix}\Gamma(2\mu_{n}+1)I_{2\mu_{ n}-1}(\rho_{0})+\mathcal{O}(n^{-1})&\Gamma(1-2\mu_{n})I_{1-2\mu_{n}}(\rho_{0})+ \mathcal{O}(n^{-1})\\ -\Gamma(2\mu_{n}+1)I_{2\mu_{n}+1}(\rho_{0})+\mathcal{O}(n^{-1})&-\Gamma(1-2\mu_{ n})I_{-2\mu_{n}-1}(\rho_{0})+\mathcal{O}(n^{-1})\end{bmatrix}^{-1}. 
\tag{209}\] Expanding (199) for large \(n>0\) gives \[d_{n}(x)^{2}=\frac{\gamma_{n}}{\gamma_{-n}}x^{2\kappa_{-n}-2 \kappa_{n}}=\frac{e_{\infty}^{5}}{e_{0}^{5}}\cdot\frac{e_{0}^{2}-e_{1}^{2}}{e_{ \infty}^{2}-e_{1}^{2}}\cdot\frac{\Gamma\left(\frac{1}{2}-\kappa_{-n}-\mu_{n} \right)\Gamma\left(\frac{1}{2}-\kappa_{-n}+\mu_{n}\right)}{\Gamma\left(\frac{1}{ 2}-\kappa_{n}-\mu_{n}\right)\Gamma\left(\frac{1}{2}-\kappa_{n}+\mu_{n} \right)}x^{2\kappa_{-n}-2\kappa_{n}}\\ =\frac{e_{\infty}^{5}e_{1}^{2}}{e_{0}^{3}(1-e_{1}^{2}e_{0}^{2})(e_{ \infty}^{2}-e_{1}^{2})}\cdot\frac{4\pi^{2}}{\Gamma\left(\frac{1}{2}+\kappa_{-n} -\mu_{n}\right)\Gamma\left(\frac{1}{2}+\kappa_{-n}+\mu_{n}\right)\Gamma\left( \frac{1}{2}-\kappa_{n}+\mu_{n}\right)\Gamma\left(\frac{1}{2}-\kappa_{n}-\mu_{n }\right)}x^{2\kappa_{-n}-2\kappa_{n}}\\ =4\frac{e_{\infty}^{4}}{e_{0}^{4}}\frac{e_{0}e_{0}e_{1}^{2}}{(1-e_{1}^{2} e_{0}^{2})(e_{\infty}^{2}-e_{1}^{2})}x^{2\kappa_{-n}-2\kappa_{n}}n^{-2n-\Theta_{0}+ \Theta_{\infty}}\mathrm{e}^{2n}2^{2n+\Theta_{0}-\Theta_{\infty}-2}\left(1+ \mathcal{O}(n^{-1})\right),\quad n\to+\infty.\] We now properly define \(d_{n}(x)\) for large \(n\) by selecting a definite value for the square root of \[\sqrt{\frac{1-e_{1}^{2}e_{0}^{2}}{e_{\infty}^{2}-e_{1}^{2}}} \tag{210}\] after which \(d_{n}(x)\) has the asymptotic expansion \[d_{n}(x)=\frac{e_{1}e_{0}^{5/2}}{e_{0}^{3/2}}\frac{1}{(1-e_{1}^{2}e_{0}^{2})} \sqrt{\frac{1-e_{1}^{2}e_{0}^{2}}{e_{\infty}^{2}-e_{1}^{2}}}x^{\kappa_{-n}- \kappa_{n}}n^{-n+\frac{1}{2}(-\Theta_{0}+\Theta_{\infty})}\mathrm{e}^{n}2^{n+ \frac{1}{2}(\Theta_{0}-\Theta_{\infty})}\left(1+\mathcal{O}(n^{-1})\right), \quad n\to+\infty.\] Then, by definition, we have \[\frac{\gamma_{-n}}{\gamma_{n}}d_{n}(x)x^{\kappa_{n}-\kappa_{-n}}\] \[=\frac{e_{0}^{3/2}}{e_{1}e_{\infty}^{5/2}}(1-e_{1}^{2}e_{0}^{2}) \left(\sqrt{\frac{1-e_{1}^{2}e_{0}^{2}}{e_{\infty}^{2}-e_{1}^{2}}}\right)^{-1 }n^{n-\frac{1}{2}(-\Theta_{0}+\Theta_{\infty})}\mathrm{e}^{-n}2^{-n-\frac{1}{ 2}(\Theta_{0}-\Theta_{\infty})}\left(1+\mathcal{O}(n^{-1})\right),\;n\to+\infty.\] Furthermore, using identities (186), (192), and Stirling's formula yields \[-\left(\frac{2}{n}\right)^{2\mu_{n}}\beta_{n}\frac{J_{n,11}}{J_{- n,22}}=-\left(\frac{2}{n}\right)^{2\mu_{n}}\frac{e_{\infty}^{7/2}\left(e_{1}^{2}-e_{0 }^{2}\right)\Gamma(-2\mu_{n})\Gamma\left(\frac{1}{2}-\kappa_{-n}+\mu_{n} \right)}{e_{0}^{3/2}\left(e_{0}^{2}e_{1}^{2}-1\right)\Gamma(2\mu_{n})\Gamma \left(\frac{1}{2}-\kappa_{n}-\mu_{n}\right)}\\ =\frac{e_{\infty}^{7/2}}{e_{0}^{3/2}}\frac{\mathrm{i}e_{0}e_{1}}{ \left(1-e_{0}^{2}e_{1}^{2}\right)}\frac{\Gamma(-2\mu_{n})}{\Gamma(2\mu_{n})}n ^{-n+\frac{1}{2}(-\Theta_{0}+\Theta_{\infty})}\mathrm{e}^{n}2^{n+\frac{1}{2}( \Theta_{0}-\Theta_{\infty})}\left(1+\mathcal{O}(n^{-1})\right),\quad n\to+\infty,\] and similarly, \[\left(\frac{n}{2}\right)^{2\mu_{n}}\frac{J_{n,22}}{\beta_{n}J_{- n,11}}=\left(\frac{n}{2}\right)^{2\mu_{n}}\frac{e_{\infty}^{3/2}\left(e_{0}^{2}e_{1}^ {2}-1\right)\Gamma(2\mu_{n})\Gamma\left(\frac{1}{2}-\kappa_{-n}-\mu_{n} \right)}{e_{0}^{7/2}\left(e_{1}^{2}-e_{\infty}^{2}\right)\Gamma(-2\mu_{n}) \Gamma\left(\frac{1}{2}-\kappa_{n}+\mu_{n}\right)}\\ =\frac{e_{\infty}^{3/2}}{e_{0}^{7/2}}\frac{\mathrm{i}e_{0}e_{1}}{ \left(e_{\infty}^{2}-e_{1}^{2}\right)}\frac{\Gamma(2\mu_{n})}{\Gamma(-2\mu_{ n})}n^{-n+\frac{1}{2}(-\Theta_{0}+\Theta_{\infty})}\mathrm{e}^{n}2^{n+\frac{1}{2}( \Theta_{0}-\Theta_{\infty})}\left(1+\mathcal{O}(n^{-1})\right),\quad n\to+\infty.\] Therefore, the central factor on the right-hand side of (209) satisfies 
\[\frac{\gamma_{-n}}{\gamma_{n}}x^{\kappa_{n}-\kappa_{-n}}d_{n}(x) \left(\frac{n}{2}\right)^{-\mu_{n}\sigma_{3}}\mathbf{J}_{n}e_{2}^{\sigma_{3} }\begin{bmatrix}0&-\beta_{n}\\ \beta_{n}^{-1}&0\end{bmatrix}\mathbf{J}_{-n}^{-1}\left(\frac{n}{2}\right)^{\mu_ {n}\sigma_{3}}\\ =\begin{bmatrix}0&-\frac{e_{0}e_{2}e_{\infty}}{\mathrm{i}}\frac{ \Gamma(-2\mu_{n})}{\Gamma(2\mu_{n})}\left(\sqrt{\frac{1-e_{0}^{2}e_{1}^{2}}{e_ {\infty}^{2}-e_{1}^{2}}}\right)^{-1}\\ \frac{\mathrm{i}}{e_{0}e_{2}e_{\infty}}\frac{\Gamma(2\mu_{n})}{\Gamma(-2\mu_{ n})}\sqrt{\frac{1-e_{0}^{2}e_{1}^{2}}{e_{\infty}^{2}-e_{1}^{2}}}&0\end{bmatrix}+\mathcal{O}(n^{-1})\] The leading term is independent of \(n\pmod{2}\) and has unit determinant. This proves the following. **Proposition 6**.: _Define the constant matrix which depends only on the even/odd parity of \(n\) via \(\mu_{n}\), \(e_{1}\), \(e_{0}^{2}\), and \(e_{\infty}^{2}\):_ \[\mathbf{V}^{\mathrm{even/odd}}:=\begin{bmatrix}0&-\frac{e_{0}e_{2}e_{\infty}}{ \mathrm{i}}\left(\sqrt{\frac{1-e_{0}^{2}e_{1}^{2}}{e_{\infty}^{2}-e_{1}^{2}}} \right)^{-1}\\ \frac{\mathrm{i}}{e_{0}e_{2}e_{\infty}}\sqrt{\frac{1-e_{0}^{2}e_{1}^{2}}{e_{ \infty}^{2}-e_{1}^{2}}}&0\end{bmatrix} \tag{211}\] _Then the following asymptotics holds uniformly for \(|\lambda|=1\) and \(z\) bounded:_ \[\mathbf{R}_{n_{t},-}(\lambda,z/n)^{-1}\mathbf{R}_{n_{t},+}(\lambda,z /n) =\] \[\rho_{\infty}\begin{bmatrix}J_{2u_{n}-1}(\rho_{\infty})&-J_{1-2u_{n }}(\rho_{\infty})\\ J_{2u_{n}+1}(\rho_{\infty})&-J_{-1-2u_{n}}(\rho_{\infty})\end{bmatrix}\cdot \mathbf{V}^{\mathrm{even/odd}}\cdot\frac{1}{\rho_{0}}\begin{bmatrix}I_{2u_{n}- 1}(\rho_{0})&I_{1-2u_{n}}(\rho_{0})\\ -I_{2u_{n}+1}(\rho_{0})&-I_{-1-2u_{n}}(\rho_{0})\end{bmatrix}^{-1}+\mathcal{O} (n^{-1}),\] _as \(n\to\infty\) along even/odd subsequences._ Proposition 6 suggests defining the following limiting Riemann-Hilbert problem. **Riemann-Hilbert Problem 8** (Limiting problem, even/odd subsequences of \(n\)).: _Fix generic monodromy parameters \((e_{1},e_{2})\), and \(z\in\mathbb{C}\) with \(|\mathrm{Arg}(z)|<\pi\). Seek a \(2\times 2\) matrix function \(\lambda\mapsto\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\) with the following properties:_ * _Analyticity:_ \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\) _is an analytic function of_ \(\lambda\) _for_ \(|\lambda|\neq 1\)_._ * _Jump condition:_ \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\) _takes analytic boundary values on the unit circle from the interior and exterior, denoted_ \(\hat{\mathbf{R}}^{\mathrm{even/odd}}_{+}(\lambda,z)\) _and_ \(\hat{\mathbf{R}}^{\mathrm{even/odd}}_{-}(\lambda,z)\) _for_ \(|\lambda|=1\) _respectively, and they are related by_ (212) \[\hat{\mathbf{R}}^{\mathrm{even/odd}}_{+}(\lambda,z)=\hat{ \mathbf{R}}^{\mathrm{even/odd}}_{-}(\lambda,z)\\ \rho_{\infty}\begin{bmatrix}J_{2u_{n}-1}(\rho_{\infty})&-J_{1-2u_ {n}}(\rho_{\infty})\\ J_{2u_{n}+1}(\rho_{\infty})&-J_{1-2u_{n}}(\rho_{\infty})\end{bmatrix}\cdot \mathbf{V}^{\mathrm{even/odd}}\cdot\frac{1}{\rho_{0}}\begin{bmatrix}I_{2u_{n}- 1}(\rho_{0})&I_{1-2u_{n}}(\rho_{0})\\ -I_{2u_{n}+1}(\rho_{0})&-I_{-1-2u_{n}}(\rho_{0})\end{bmatrix}^{-1}.\] _Normalization:_ \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\to\mathbb{I}\) _as_ \(\lambda\to\infty\)_._ Note that the Bessel functions \(J_{\nu}(\rho_{\infty})\) and \(I_{\nu}(\rho_{0})\) appearing in the jump matrix in (212) are analytic on the unit circle \(|\lambda|=1\) except at the point \(\lambda=\lambda_{\mathrm{c}}:=-\mathrm{i}\mathrm{e}^{-\mathrm{i}\mathrm{Arg}(z)}\). 
However, from the identities \[J_{\nu}(\rho_{\infty})\big{|}_{\lambda=\lambda_{\mathrm{c}}\mathrm{e}^{- \mathrm{i}0}}=\mathrm{e}^{\mathrm{i}\pi\nu}\ J_{\nu}(\rho_{\infty})\big{|}_{ \lambda=\lambda_{\mathrm{c}}\mathrm{e}^{\mathrm{i}0}}\quad\text{and}\quad I_{ \nu}(\rho_{0})\big{|}_{\lambda=\lambda_{\mathrm{c}}\mathrm{e}^{\mathrm{i}0}}= \mathrm{e}^{-\mathrm{i}\pi\nu}\ I_{\nu}(\rho_{0})\big{|}_{\lambda=\lambda_{ \mathrm{c}}\mathrm{e}^{\mathrm{i}0}}\] and the fact that the indices \(\nu\) in each column of the Bessel matrix factors in (212) differ by \(2\), combined with the fact that \(\mathbf{V}^{\mathrm{even/odd}}\) is an off-diagonal matrix, one sees easily that \[\begin{bmatrix}J_{2u_{n}-1}(\rho_{\infty})&-J_{1-2u_{n}}(\rho_{\infty})\\ J_{2u_{n}+1}(\rho_{\infty})&-J_{-1-2u_{n}}(\rho_{\infty})\end{bmatrix}\cdot \mathbf{V}^{\mathrm{even/odd}}\cdot\begin{bmatrix}I_{2u_{n}-1}(\rho_{0})&I_{1- 2u_{n}}(\rho_{0})\\ -I_{2u_{n}+1}(\rho_{0})&-I_{-1-2u_{n}}(\rho_{0})\end{bmatrix}^{-1}\] is continuous at \(\lambda=\lambda_{\mathrm{c}}\) and hence is an analytic function of \(\lambda\) on the unit circle. The scalar factor \(\rho_{\infty}/\rho_{0}\) is also analytic for \(|\lambda|=1\), and therefore the jump matrix in (212) is an analytic function of \(\lambda\) when \(|\lambda|=1\). At this stage, the existence of a matrix function \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\) satisfying Riemann-Hilbert Problem 8 is not clear. However, it turns out that there exists a discrete set \(\Sigma^{\mathrm{even/odd}}\subset\mathbb{C}\) such that for \(z\in\mathbb{C}\setminus\Sigma^{\mathrm{even/odd}}\), such a matrix does exist and is in fact a meromorphic function of \(z\), see Section 9.1 below. **Lemma 4**.: _Let \((e_{1},e_{2})\) be generic monodromy parameters and take \(z\in\mathbb{C}\setminus\Sigma^{\mathrm{even/odd}}\). Then,_ \[\lim_{\begin{subarray}{c}n\to\infty\\ n\ \mathrm{even/odd}\end{subarray}}u_{n}(n^{-1}z;m)=-\frac{8\mu_{n}^{2}}{z}\left[ \hat{R}^{\mathrm{even/odd}}_{11}(0,z)+\hat{R}^{\mathrm{even/odd}}_{21}(0,z)- \hat{R}^{\mathrm{even/odd}}_{12}(0,z)+\hat{R}^{\mathrm{even/odd}}_{22}(0,z) \right]^{-2}. \tag{213}\] Proof.: Noting that \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\) necessarily has unit determinant, we form the matrix quotient \[\mathbf{E}_{n}(\lambda,z):=\mathbf{R}_{n}(\lambda,n^{-1}z)\hat{\mathbf{R}}^{ \mathrm{even/odd}}(\lambda,z)^{-1},\quad|\lambda|\neq 1.\] Clearly, \(\mathbf{E}_{n}(\lambda,z)\) is analytic as a function of \(\lambda\) in the domain of definition, and for each fixed \(n\) it tends to \(\mathbb{I}\) as \(\lambda\to\infty\) as this is true for both \(\mathbf{R}_{n}(\lambda,n^{-1}z)\) and \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\). Across the unit circle, the boundary values of \(\mathbf{E}_{n}(\lambda,z)\) are related by \[\mathbf{E}_{n_{\nu}+}(\lambda,z)=\mathbf{E}_{n_{\nu}-}(\lambda,z) \mathbf{\hat{R}}_{-}^{\mathrm{even/odd}}(\lambda,z)\\ \cdot\left[\mathbf{R}_{n_{\nu}-}(\lambda,n^{-1}z)^{-1}\mathbf{R}_{ n_{\nu}+}(\lambda,n^{-1}z)\right]\left[\mathbf{\hat{R}}_{-}^{\mathrm{even/odd}}( \lambda,z)^{-1}\mathbf{\hat{R}}_{+}^{\mathrm{even/odd}}(\lambda,z)\right]^{-1} \\ \cdot\mathbf{\hat{R}}_{-}^{\mathrm{even/odd}}(\lambda,z)^{-1}, \quad|\lambda|=1.\] Thus, the jump matrix for \(\mathbf{E}_{n}(\lambda,z)\) is the conjugation, by a unit-determinant matrix function of \(\lambda\) independent of \(n\), of the matrix ratio of the jump matrices for \(\mathbf{R}_{n}(\lambda,n^{-1}z)\) and for \(\mathbf{\hat{R}}^{\mathrm{even/odd}}(\lambda,z)\). 
But by Proposition 6, the latter ratio is \(\mathbb{I}+\mathcal{O}(n^{-1})\) uniformly on the unit circle as \(n\to\infty\) along even or odd subsequences. It follows that in this limit, \(\mathbf{E}_{n,+}(\lambda,z)=\mathbf{E}_{n_{\nu}-}(\lambda,z)(\mathbb{I}+ \mathcal{O}(n^{-1}))\) uniformly for \(|\lambda|=1\) as \(n\to\infty\). By standard small-norm theory, \(\mathbf{E}_{n}(\lambda,z)\) exists for large enough even or odd \(n\), and tends to the identity as \(n\to\infty\), in particular in the sense that \[\lim_{\lambda\to\infty}\lambda(\mathbf{E}_{n}(\lambda,z)-\mathbb{I})\to\mathbf{ 0}\quad\text{and}\quad\mathbf{E}_{n}(0,z)\to\mathbb{I}\] as \(n\to\infty\) along even/odd subsequences. By the definition of \(\mathbf{E}_{n}(\lambda,z)\) it follows that in the same limit \[\lim_{\lambda\to\infty}\lambda(\mathbf{R}_{n}(\lambda,n^{-1}z)-\mathbb{I})\to \lim_{\lambda\to\infty}\lambda(\mathbf{\hat{R}}^{\mathrm{even/odd}}(\lambda,z )-\mathbb{I})\quad\text{and}\quad\mathbf{R}_{n}(0,n^{-1}z)\to\mathbf{\hat{R}} ^{\mathrm{even/odd}}(0,z).\] Combining (152) with (201) then shows (213). \[\lim_{\begin{subarray}{c}n\to\infty\\ n\text{ even/odd}\end{subarray}}u_{n}(n^{-1}z;m)=-\frac{8\mu_{n}^{2}}{z}\left[ \hat{R}_{11}^{\mathrm{even/odd}}(0,z)+\hat{R}_{21}^{\mathrm{even/odd}}(0,z)- \hat{R}_{12}^{\mathrm{even/odd}}(0,z)-\hat{R}_{22}^{\mathrm{even/odd}}(0,z) \right]^{-2}.\] Partly, this works because the dominant term in \(\Xi_{n,12}^{(6)}(n^{-1}z;m)\) is \(A_{n,12}(n^{-1}z)\). ### Transformations of the limiting Riemann-Hilbert problem In this section we transform Riemann-Hilbert Problem 8 to match the form of Riemann-Hilbert Problem 2. To this end, using [9, Eqns. 10.4.4 and 10.4.6] to express the Bessel function \(J_{\nu}(\diamond)\) in terms of the Hankel functions \(H_{\nu}^{(1)}(\diamond),H_{\nu}^{(2)}(\diamond)\) and the relations [9, Eqns. 10.4.4 and 10.4.6], \[H_{-\nu}^{(1)}(\diamond) =\mathrm{e}^{\pi\mathrm{i}\nu}H_{\nu}^{(1)}(\diamond),\] \[H_{-\nu}^{(2)}(\diamond) =\mathrm{e}^{-\pi\mathrm{i}\nu}H_{\nu}^{(2)}(\diamond),\] we arrive at the identity: \[\rho_{\infty}\begin{bmatrix}J_{2\mu_{n}-1}(\rho_{\infty})&-J_{1-2\mu_{n}}( \rho_{\infty})\\ J_{2\mu_{n}+1}(\rho_{\infty})&-J_{-1-2\mu_{n}}(\rho_{\infty})\end{bmatrix}= \frac{\rho_{\infty}}{2}\begin{bmatrix}H_{2\mu_{n}-1}^{(1)}(\rho_{\infty})&H_{ 1-2\mu_{n}}^{(2)}(\rho_{\infty})\\ H_{2\mu_{n}+1}^{(1)}(\rho_{\infty})&H_{-1-2\mu_{n}}^{(2)}(\rho_{\infty})\end{bmatrix} \begin{bmatrix}1&\mathrm{e}^{2\pi\mathrm{i}\mu_{n}}\\ -\mathrm{e}^{2\pi\mathrm{i}\mu_{n}}&-1\end{bmatrix}. \tag{214}\] To obtain appropriate asymptotic formulae for the matrix on the right hand side of (214), we first apply the identity [9, Eqn. 10.6.1] \[H_{\nu-1}^{(k)}(\diamond)+H_{\nu+1}^{(k)}(\diamond)=\frac{2\nu}{\diamond}H_{ \nu}^{(k)}(\diamond),\quad k=1,2,\] which gives \[\rho_{\infty}^{-\sigma_{3}/2}\begin{bmatrix}1&0\\ 1&1\end{bmatrix}\frac{\rho_{\infty}}{2}\begin{bmatrix}H_{2\mu_{n}-1}^{(1)}( \rho_{\infty})&H_{1-2\mu_{n}}^{(2)}(\rho_{\infty})\\ H_{2\mu_{n}+1}^{(1)}(\rho_{\infty})&H_{-1-2\mu_{n}}^{(2)}(\rho_{\infty})\end{bmatrix}= \frac{\sqrt{\rho_{\infty}}}{2}\begin{bmatrix}H_{2\mu_{n}-1}^{(1)}(\rho_{\infty})&H_ {1-2\mu_{n}}^{(2)}(\rho_{\infty})\\ 4\mu_{n}H_{2\mu_{n}}^{(1)}(\rho_{\infty})&-4\mu_{n}H_{-2\mu_{n}}^{(2)}(\rho_{ \infty})\end{bmatrix}. \tag{215}\] The matrix on the right hand side is amenable to asymptotic analysis as \(\rho_{\infty}\to\infty\); using the asymptotics of Hankel functions [9, Eqns. 
10.17.5-6] and (215) yields \[\frac{\rho_{\infty}}{2}\begin{bmatrix}H^{(1)}_{2\mu_{n}-1}(\rho_{ \infty})&H^{(2)}_{1-2\mu_{n}}(\rho_{\infty})\\ H^{(1)}_{2\mu_{n}+1}(\rho_{\infty})&H^{(2)}_{-1-2\mu_{n}}(\rho_{\infty})\end{bmatrix} =\begin{bmatrix}1&\frac{1}{2}-\frac{\mu_{n}}{2}-\frac{3}{32\mu_{n}}\\ -1&\frac{1}{2}+\frac{\mu_{n}}{2}+\frac{3}{32\mu_{n}}\end{bmatrix}\] \[\cdot\left(\mathbb{I}+\frac{1}{128\rho_{\infty}^{2}}\begin{bmatrix} (16\mu_{n}^{2}-9)(16\mu_{n}^{2}-1)&\frac{(16\mu_{n}^{2}-13)(16\mu_{n}^{2}-9)( 16\mu_{n}^{2}-1)}{48\mu_{n}}\\ 64\mu_{n}(16\mu_{n}^{2}-1)&-(16\mu_{n}^{2}-9)(16\mu_{n}^{2}-1)\end{bmatrix}+ \mathcal{O}(\rho_{\infty}^{-4})\right)\rho_{\infty}^{\sigma_{3}/2}\] \[\cdot(2\sqrt{\mu_{n}})^{\mathbb{I}-\sigma_{3}}\frac{\mathrm{e}^{- \pi\mathrm{i}\mu_{n}}}{\sqrt{2\pi}}\mathrm{e}^{\pi\mathrm{i}\sigma_{3}/4} \begin{bmatrix}1&\mathrm{i}\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\mathrm{i}\rho_{\infty}\sigma_{3}}, \quad\mathrm{Arg}(\rho_{\infty})\in(-\pi,\pi). \tag{216}\] We turn to analogously treating the final factor of the jump of \(\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\); using [9, Eqn. 10.27.7] and the above relations, we have \[\rho_{0}\begin{bmatrix}I_{2\mu_{n}-1}(\rho_{0})&I_{1-2\mu_{n}}( \rho_{0})\\ -I_{2\mu_{n}+1}(\rho_{0})&-I_{-1-2\mu_{n}}(\rho_{0})\end{bmatrix}\\ =\frac{\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0}}{2}\begin{bmatrix}H^ {(1)}_{2\mu_{n}-1}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})&H^{(2)}_{1-2\mu_{n} }(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})\\ H^{(1)}_{1+2\mu_{n}}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})&H^{(2)}_{-1-2\mu_{n} }(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})\end{bmatrix}\begin{bmatrix}\mathrm{e }^{\pi\mathrm{i}\mu_{n}}&\mathrm{e}^{\pi\mathrm{i}\mu_{n}}\\ -\mathrm{e}^{3\pi\mathrm{i}\mu_{n}}&-\mathrm{e}^{-\pi\mathrm{i}\mu_{n}}\end{bmatrix}.\] This allows us to find the following large-\(\rho_{0}\) asymptotics: \[\frac{\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0}}{2}\begin{bmatrix}H^ {(1)}_{2\mu_{n}-1}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})&H^{(2)}_{1-2\mu_{n} }(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})\\ H^{(1)}_{2\mu_{n}+1}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})&H^{(2)}_{-1-2\mu_{n} }(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})\end{bmatrix}=\begin{bmatrix}1&\frac{ 1}{2}-\frac{\mu_{n}}{2}-\frac{3}{32\mu_{n}}\\ -1&\frac{1}{2}+\frac{\mu_{n}}{2}+\frac{3}{32\mu_{n}}\end{bmatrix}\\ \cdot\left(\mathbb{I}-\frac{1}{128\rho_{0}^{2}}\begin{bmatrix}(16 \mu_{n}^{2}-9)(16\mu_{n}^{2}-1)&\frac{(16\mu_{n}^{2}-13)(16\mu_{n}^{2}-9)(16\mu _{n}^{2}-1)}{48\mu_{n}}\\ 64\mu_{n}(16\mu_{n}^{2}-1)&-(16\mu_{n}^{2}-9)(16\mu_{n}^{2}-1)\end{bmatrix}+ \mathcal{O}(\rho_{0}^{-4})\right)\rho_{0}^{\sigma_{3}/2}\\ \cdot(2\sqrt{\mu_{n}})^{\mathbb{I}-\sigma_{3}}\frac{\mathrm{e}^{- \pi\mathrm{i}\mu_{n}}}{\sqrt{2\pi}}\begin{bmatrix}1&\mathrm{i}\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\rho_{0}\sigma_{3}},\quad\mathrm{Arg}( \rho_{0})\in\left(-\frac{\pi}{2},\frac{3\pi}{2}\right). \tag{217}\] For convenience, we introduce the notation \[\mathbf{H}_{n}(\diamond):=\sqrt{\frac{\pi}{4\mu_{n}}}\mathrm{e}^{\pi\mathrm{i} /4}\mathrm{e}^{\pi\mathrm{i}\mu_{n}}\cdot\frac{\diamond}{2}\begin{bmatrix}H^ {(1)}_{2\mu_{n}-1}(\diamond)&H^{(2)}_{1-2\mu_{n}}(\diamond)\\ H^{(1)}_{1+2\mu_{n}}(\diamond)&H^{(2)}_{-1-2\mu_{n}}(\diamond)\end{bmatrix}, \tag{218}\] with a fixed determination of the square root; this choice of prefactor guarantees that we have \(\det(\mathbf{H}_{n})=1\) identically. Using the identity [9, Eqn. 
10.11.4], we note that \(\mathbf{H}_{n}\) satisfies \[\mathbf{H}_{n}(\mathrm{e}^{\pi\mathrm{i}}\diamond)=\mathbf{H}_{n}(\diamond) \begin{bmatrix}0&-1\\ 1&2\cos(2\pi\mu_{n})\end{bmatrix}. \tag{219}\] We can now rewrite the jump condition (212) as \[\hat{\mathbf{R}}^{\mathrm{even/odd}}_{+}(\lambda,z)\\ =\hat{\mathbf{R}}^{\mathrm{even/odd}}_{-}(\lambda,z)\mathbf{H}_{n}( \rho_{\infty})\begin{bmatrix}1&\mathrm{e}^{2\pi\mathrm{i}\mu_{n}}\\ -\mathrm{e}^{2\pi\mathrm{i}\mu_{n}}&-1\end{bmatrix}\cdot\mathbf{V}^{\mathrm{ even/odd}}\cdot\begin{bmatrix}\mathrm{e}^{\pi\mathrm{i}\mu_{n}}&\mathrm{e}^{\pi\mathrm{i}\mu_{n}}\\ -\mathrm{e}^{3\pi\mathrm{i}\mu_{n}}&-\mathrm{e}^{-\pi\mathrm{i}\mu_{n}}\end{bmatrix}^ {-1}\mathbf{H}_{n}^{-1}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0}).\] Next, define \[\boldsymbol{\Omega}^{\mathrm{even/odd}}(\lambda,z):=\begin{bmatrix}1&\dfrac{1}{2}- \dfrac{\mu_{n}}{2}-\dfrac{3}{32\mu_{n}}\\ -1&\dfrac{1}{2}+\dfrac{\mu_{n}}{2}+\dfrac{3}{32\mu_{n}}\end{bmatrix}^{-1}(2 \sqrt{\mu_{n}})^{\sigma_{3}}\hat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z) \left\{\begin{array}{ll}\mathbf{H}_{n}(\rho_{\infty}),&|\lambda|>1,\\ \mathbf{H}_{n}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})&|\lambda|<1.\end{array}\right. \tag{220}\] Then, \(\boldsymbol{\Omega}^{\mathrm{even/odd}}\) satisfies \[\boldsymbol{\Omega}^{\mathrm{even/odd}}_{+}(\lambda,z)=\boldsymbol{\Omega}^{ \mathrm{even/odd}}_{-}(\lambda,z)\begin{bmatrix}1&\mathrm{e}^{2\pi\mathrm{i} \mu_{n}}\\ -\mathrm{e}^{2\pi\mathrm{i}\mu_{n}}&-1\end{bmatrix}\cdot\mathbf{V}^{\mathrm{ even/odd}}\cdot\begin{bmatrix}\mathrm{e}^{\pi\mathrm{i}\mu_{n}}&\mathrm{e}^{\pi \mathrm{i}\mu_{n}}\\ -\mathrm{e}^{3\pi\mathrm{i}\mu_{n}}&-\mathrm{e}^{-\pi\mathrm{i}\mu_{n}}\end{bmatrix}^ {-1},\quad|\lambda|=1, \tag{221}\] where the jump depends only on the parity of \(n\). Furthermore, since \(\rho_{\infty}\) and \(\rho_{0}\) change signs across the negative imaginary axis, we may use (219) to find \[\boldsymbol{\Omega}^{\mathrm{even/odd}}_{+}(\lambda,z)=\boldsymbol{\Omega}^{ \mathrm{even/odd}}_{-}(\lambda,z)\begin{bmatrix}0&-1\\ 1&2\cos(2\pi\mu_{n})\end{bmatrix},\] for \(\lambda\) on the negative imaginary axis with \(|\lambda|>1\), oriented towards the origin and \[\boldsymbol{\Omega}^{\mathrm{even/odd}}_{+}(\lambda,z)=\boldsymbol{\Omega}^{ \mathrm{even/odd}}_{-}(\lambda,z)\begin{bmatrix}0&-1\\ 1&2\cos(2\pi\mu_{n})\end{bmatrix}, \tag{222}\] for \(\lambda\) on the negative imaginary axis with \(|\lambda|<1\), oriented away from the origin. It follows from Riemann-Hilbert Problem 8, (220), and (216) that \(\boldsymbol{\Omega}^{\mathrm{even/odd}}\) has the following asymptotic behavior as \(\lambda\to\infty\): \[\boldsymbol{\Omega}^{\mathrm{even/odd}}(\lambda,z)=\left(\mathbb{I}+ \boldsymbol{\Xi}^{\mathrm{even/odd}}(z)\lambda^{-1}+\mathcal{O}(\lambda^{-2}) \right)\rho_{\infty}^{\sigma_{3}/2}\dfrac{1}{\sqrt{2}}\begin{bmatrix}\mathrm{i }&-1\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\mathrm{i}\rho_{\infty}\sigma_{3}}, \quad\lambda\to\infty, \tag{223}\] where the \(\mathcal{O}(\lambda^{-2})\) represents an asymptotic series that is differentiable term-by-term with respect to both \(\lambda\) and \(z\). Analogously we have \[\boldsymbol{\Omega}^{\mathrm{even/odd}}(\lambda,z)=\boldsymbol{\Delta}^{ \mathrm{even/odd}}(z)\left(\mathbb{I}+\boldsymbol{\Pi}^{\mathrm{even/odd}}(z )\lambda+\mathcal{O}(\lambda^{2})\right)\rho_{0}^{\sigma_{3}/2}\mathrm{e}^{- \pi\mathrm{i}\sigma_{3}/4}\dfrac{1}{\sqrt{2}}\begin{bmatrix}\mathrm{i}&-1\\ 1&-\mathrm{i}\end{bmatrix}\mathrm{e}^{\rho_{0}\sigma_{3}},\quad\lambda\to 0. 
\tag{224}\] where \(\mathcal{O}(\lambda^{2})\) represents an asymptotic series at the origin \(\lambda=0\) which is similarly term-by-term differentiable. Notice that we can now relate the limiting formula from Lemma 4 to \(\boldsymbol{\Omega}^{\mathrm{even/odd}}\) using definitions (220) and (218) to find that \[\hat{R}^{\mathrm{even/odd}}_{11}(0,z)+\hat{R}^{\mathrm{even/odd}}_{21}(0,z)-\hat{R}^{\mathrm{even/odd}}_{12}(0,z)-\hat{R}^{\mathrm{even/odd}}_{22}(0,z)\\ =\sqrt{\dfrac{\pi}{4\mu_{n}}}\mathrm{e}^{\pi\mathrm{i}/4}\mathrm{e}^{\pi\mathrm{i}\mu_{n}}\dfrac{\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0}}{2}\left[(H^{(2)}_{-1-2\mu_{n}}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})+H^{(2)}_{1-2\mu_{n}}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0}))\Omega^{\mathrm{even/odd}}_{21}(\lambda,z)\right.\\ \left.-(H^{(1)}_{2\mu_{n}-1}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0})+H^{(1)}_{2\mu_{n}+1}(\mathrm{e}^{-\pi\mathrm{i}/2}\rho_{0}))\Omega^{\mathrm{even/odd}}_{22}(\lambda,z)\right]_{\lambda=0}.\] Then, using (224) and (217) yields \[\lim_{\begin{subarray}{c}n\to\infty\\ n\ \mathrm{even/odd}\end{subarray}}u_{n}(n^{-1}z)=U^{\mathrm{even/odd}}(z):=-\dfrac{1}{2\pi\Delta^{\mathrm{even/odd}}_{21}(z)^{2}}. \tag{225}\] To extract the monodromy parameters of \(U(z)\) from \(\boldsymbol{\Omega}(\lambda,z)\), we notice that it solves Riemann-Hilbert Problem 2 with \[t_{1}^{\infty}=t_{0}^{0}=-2\cos(2\pi\mu_{n})=-\left(e_{1}^{2}+\dfrac{1}{e_{1}^{2}}\right), \tag{226}\] and \[\mathbf{C}_{0\infty}=\begin{bmatrix}1&\mathrm{e}^{2\pi{\rm i}\mu_{n}}\\ -\mathrm{e}^{2\pi{\rm i}\mu_{n}}&-1\end{bmatrix}\cdot\mathbf{V}^{\rm even/odd}\cdot\begin{bmatrix}\mathrm{e}^{\pi{\rm i}\mu_{n}}&\mathrm{e}^{\pi{\rm i}\mu_{n}}\\ -\mathrm{e}^{3\pi{\rm i}\mu_{n}}&-\mathrm{e}^{-\pi{\rm i}\mu_{n}}\end{bmatrix}^{-1}. \tag{227}\] Since \(\mu_{n}\), \(\mathbf{V}^{\rm even/odd}\) depend only on the parity of \(n\) (see (154), (211), respectively), and \(e_{1}=e_{1,n}=\mathrm{e}^{\pi{\rm i}\mu_{n}}\) (see Remark 7), it follows that (226), (227) depend only on the parity of \(n\). Recalling the formulae for \(y_{i}\) in Section 5.4, one immediately arrives at formulae (20)-(22).
## 8. Small \(x\) asymptotics and proof of Proposition 1

Inspired by [33], see also [23, Theorem 3.2], the goal of this section is to compute the asymptotics as \(x\to 0\) of the Backlund iterates \(u_{n}(x)\) for fixed \(n\) and, by evaluating at \(n=0\), to arrive at the asymptotic behavior of a generic solution of Painleve-III\((D_{8})\).

For \(x>0\), the functions \(\lambda^{p}\), \((-\lambda^{-1})^{p}\), and \((-\mathrm{i}x\lambda^{-1})^{p}\) (the latter two being principal branches) all have the same branch cut in the \(\lambda\)-plane.
One has the identity \[(-\mathrm{i}x\lambda)^{p}=\mathrm{e}^{-\mathrm{i}\pi p/2}x^{p}\lambda^{p},\] together with an analogous identity for \((-\mathrm{i}x\lambda^{-1})^{p}\). The jump \(\mathbf{V}_{\widetilde{\mathbf{Q}}}\) has a limit as \(x\to 0\), uniformly for \(|\varsigma|=1\) when \(|\mathrm{Re}\mu_{n}|<\frac{1}{2}\), and satisfies the estimate \[\mathbf{V}_{\widetilde{\mathbf{Q}}}(\varsigma,x)=\mathbb{I}+\mathcal{O}\left(x^{2-|4\mathrm{Re}\mu_{n}|}\right).\] By the standard theory of small-norm Riemann-Hilbert problems, we arrive at \[\widetilde{\mathbf{Q}}_{n}(\varsigma,x)=\mathbb{I}+\mathcal{O}\left(x^{2-|4\mathrm{Re}\mu_{n}|}\right)\quad\text{as}\quad x\to 0\] uniformly for \(\varsigma\) sufficiently small, and \[\lim_{\varsigma\to\infty}\varsigma\left(\widetilde{\mathbf{Q}}_{n}(\varsigma,x)-\mathbb{I}\right)=\lim_{\varsigma\to\infty}\varsigma\left(\widetilde{\mathbf{Q}}_{n}(\varsigma,0)-\mathbb{I}\right)+\mathcal{O}\left(x^{2-|4\mathrm{Re}\mu_{n}|}\right)=\mathcal{O}\left(x^{2-|4\mathrm{Re}\mu_{n}|}\right).\] We can now use the above estimate and expressions (228) and (229) to compute the asymptotic behavior of \(u_{n}(x)\) as \(x\to 0\). To this end, note that by (232) and the definition of \(\mathbf{D}_{n}\) and \(\varsigma\), \[\lim_{\lambda\to\infty}\lambda Q_{n,12}(\lambda,x)=-\frac{1}{\mu_{n}\gamma_{n}}x^{2\kappa_{n}-2}\lim_{\varsigma\to\infty}\varsigma\widetilde{Q}_{n,12}(\varsigma,x),\quad|\varsigma|>1,\] and so \[\lim_{\lambda\to\infty}\lambda Q_{n,12}(\lambda,x)=\mathcal{O}\left(x^{2\kappa_{n}-|4\mathrm{Re}\mu_{n}|}\right),\quad x\to 0. \tag{233}\] Likewise, (232) gives \[Q_{n,11}(0,x)Q_{n,12}(0,x)=-\mathrm{i}\frac{D_{n,11}^{2}}{4\mu_{n}^{2}D_{-n,11}D_{-n,22}}\\ \begin{cases}O_{n,21}^{2}(\kappa_{n}-\frac{1}{2}+\mu_{n})^{2}(\kappa_{-n}-\frac{1}{2}+\mu_{n})x^{2\kappa_{n}-1-4\mu_{n}}+\mathcal{O}(x^{2\kappa_{n}-|4\mathrm{Re}\mu_{n}|}),&\mathrm{Re}\mu_{n}>0,\\ O_{n,12}^{2}(\kappa_{n}-\frac{1}{2}-\mu_{n})^{2}(\kappa_{-n}-\frac{1}{2}-\mu_{n})x^{2\kappa_{n}-1+4\mu_{n}}+\mathcal{O}(x^{2\kappa_{n}-|4\mathrm{Re}\mu_{n}|}),&\mathrm{Re}\mu_{n}<0.\end{cases} \tag{234}\] Using (233), (234) in (228) yields \[u_{n}(x)=\frac{4\mathrm{i}\mu_{n}^{3}(1-a_{n}^{2})D_{-n,11}D_{-n,22}}{\gamma_{n}D_{n,11}^{2}O_{n,21}^{2}(\kappa_{n}-\frac{1}{2}+\mu_{n})^{2}(\kappa_{-n}-\frac{1}{2}+\mu_{n})}x^{4\mu_{n}-1}\left(1+\mathcal{O}(x^{\delta})\right)\quad\text{as}\quad x\to 0,\] when \(\mathrm{Re}\mu_{n}>0\) and \[u_{n}(x)=\frac{4\mathrm{i}\mu_{n}^{3}(1-a_{n}^{2})D_{-n,11}D_{-n,22}}{\gamma_{n}D_{n,11}^{2}O_{n,12}^{2}(\kappa_{n}-\frac{1}{2}-\mu_{n})^{2}(\kappa_{-n}-\frac{1}{2}-\mu_{n})}x^{-4\mu_{n}-1}\left(1+\mathcal{O}(x^{\delta})\right)\quad\text{as}\quad x\to 0,\] when \(\mathrm{Re}\mu_{n}<0\), where \(\delta=\min\left(1,2-|4\mathrm{Re}(\mu_{n})|\right)\) in both cases. Using (230), (197), (193), (185), (182), and (168) gives the expression9

Footnote 9: The case \(\mathrm{Re}(\mu_{n})=0\) can be treated similarly, and produces a leading term that is a combination of both leading terms, which we omit for brevity.
\[u_{n}(x)=-\frac{\Gamma(1-2\varepsilon_{n}\mu_{n})^{2}\Gamma\left(-\frac{n}{2}+\varepsilon_{n}\mu_{n}-\frac{\Theta_{0}}{2}\right)\Gamma\left(\frac{n}{2}+\varepsilon_{n}\mu_{n}-\frac{\Theta_{0}}{2}+1\right)}{\Gamma(2\varepsilon_{n}\mu_{n})^{2}\Gamma\left(-\frac{n}{2}-\varepsilon_{n}\mu_{n}-\frac{\Theta_{0}}{2}+1\right)\Gamma\left(\frac{n}{2}-\varepsilon_{n}\mu_{n}-\frac{\Theta_{0}}{2}+1\right)}x^{4\varepsilon_{n}\mu_{n}-1}\left(1+\mathcal{O}(x^{\delta})\right)\\ \cdot\left\{\begin{array}{ll}\frac{e_{0}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{0}^{2}-e_{1}^{2}\right)\left(e_{1}^{2}-e_{\infty}^{2}\right)}{\left(e_{0}^{2}e_{1}^{2}-1\right)^{2}},&\mathrm{Re}\mu_{n}>0,\\ \frac{\left(e_{0}^{2}e_{1}^{2}-1\right)^{2}}{e_{0}^{2}e_{2}^{2}e_{\infty}^{2}\left(e_{0}^{2}-e_{1}^{2}\right)\left(e_{1}^{2}-e_{\infty}^{2}\right)},&\mathrm{Re}\mu_{n}<0,\end{array}\right. \tag{235}\] where \(\varepsilon_{n}=\mathsf{sgn}(\mathrm{Re}\mu_{n})\). The concerned reader may note that the leading coefficient in (235) is finite due to the genericity conditions on \((e_{1},e_{2})\) (see the beginning of Section 7). Indeed, assumption (i) guarantees that \(2\mu_{n}\not\in\mathbb{Z}\), condition (ii) requires \(e_{1}e_{2}\neq 0\), and condition (iii) guarantees that \[\frac{n}{2}\pm\mu_{n}+\frac{\Theta_{0}}{2}\not\in\mathbb{Z}\quad\text{and}\quad\frac{n}{2}\pm\mu_{n}+\frac{\Theta_{\infty}}{2}\not\in\mathbb{Z}.\] Evaluating the above at \(n=0\) yields (24) and finishes the proof of Proposition 1.

One notable application of this is to the family of rational solutions of Painleve-III already discussed at the end of Section 4.2. This corresponds to the choice \(m=\Theta_{0}=\Theta_{\infty}-1\) and \(\mu_{0}=1/4\). It follows from (235) that \(u_{n}(x;m)\) has a well-defined value at \(x=0\) which is given by (42), (43) in the case where \(n\) is even or odd, respectively. We can verify that these values are consistent with (235) by noting that \(e_{1},e_{2},e_{0}^{2},e_{\infty}^{2}\) are invariant under an even increment \(n\mapsto n+2\), and so we have the general formulae \[\frac{u_{2k+2}(0)}{u_{2k}(0)}=\frac{(2k+2\mu_{2k+2}+\Theta_{0})(2k+2\mu_{2k+2}+1-\Theta_{\infty})}{(2+2k-2\mu_{2k}+\Theta_{0})(2+2k-2\mu_{2k}-\Theta_{\infty})},\] \[\frac{u_{2k+1}(0)}{u_{2k-1}(0)}=\frac{(2k-1+2\mu_{2k+1}+\Theta_{0})(2k+1+2\mu_{2k+1}-\Theta_{\infty})}{(2k+1-2\mu_{2k-1}+\Theta_{0})(2k+1-2\mu_{2k-1}-\Theta_{\infty})}.\] Plugging in the specialized values of the parameters and using the known values of \(u_{0}(0;m),u_{1}(0;m)\) yields the equality of the expression in (235) with the product formulae (42), (43).

## 9. Alternative Riemann-Hilbert problem for Painleve-III(\(D_{8}\))

### Fabry-type transformation and existence of \(\widehat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\)

The Lax pair (146) is unusual in that its coefficient matrices have non-diagonalizable leading terms at both of its singular points \(\lambda=0\) and \(\lambda=\infty\), i.e. the coefficients of \(\lambda^{0}\) and \(\lambda^{-2}\) in (145) are not diagonalizable.
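To see the obstruction concretely, consider a schematic example (ours, for illustration only; these are not the actual coefficient matrices in (145)): the nilpotent matrix \[\mathbf{N}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix},\qquad\det(\mathbf{N}-\nu\mathbb{I})=\nu^{2},\qquad\ker\mathbf{N}=\operatorname{span}\left\{\begin{pmatrix}1\\ 0\end{pmatrix}\right\},\] has the single eigenvalue \(\nu=0\) with only a one-dimensional eigenspace, hence admits no basis of eigenvectors and no diagonalization. A quadratic substitution such as \(\lambda=\xi^{2}\), as in the Fabry-type transformation introduced next, is the standard device for resolving the half-integer exponents that such a Jordan block encodes.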
To deduce the existence of the matrix functions \(\boldsymbol{\Omega}^{\mathrm{even/odd}}(\lambda,z)\) and \(\widehat{\mathbf{R}}^{\mathrm{even/odd}}(\lambda,z)\), we identify this Lax pair with ones appearing in the literature by considering the following Fabry-type transformation \[\mathbf{S}(\xi,z):=\frac{1}{\sqrt{2}}\begin{bmatrix}-\mathrm{i}&1\\ -1&\mathrm{i}\end{bmatrix}(2z)^{-\sigma_{3}/4}\xi^{-\sigma_{3}/2}\cdot\begin{cases} \boldsymbol{\Omega}(\xi^{2}\mathrm{e}^{\frac{\mathrm{i}\pi}{2}},z),&-\frac{ \pi}{2}<\mathsf{Arg}(\xi)<\frac{\pi}{2},\\ \boldsymbol{\Omega}(\xi^{2}\mathrm{e}^{-\frac{\mathrm{i}\pi}{2}},z)(-\mathrm{i }\sigma_{2}),&\frac{\pi}{2}<\mathsf{Arg}(\xi)<\pi,\\ \boldsymbol{\Omega}(\xi^{2}\mathrm{e}^{\frac{\mathrm{i}\pi}{2}},z)\mathrm{i} \sigma_{2},&-\pi<\mathsf{Arg}(\xi)<-\frac{\pi}{2},\end{cases} \tag{236}\] when \(|\xi|>1\), and \[\mathbf{S}(\xi,z):=\frac{1}{\sqrt{2}}\begin{bmatrix}-\mathrm{i}&1\\ -1&\mathrm{i}\end{bmatrix}(2z)^{-\sigma_{3}/4}\xi^{-\sigma_{3}/2}\cdot\begin{cases} \boldsymbol{\Omega}(\xi^{2}\mathrm{e}^{\frac{\mathrm{i}\pi}{2}},z),&-\frac{ \pi}{2}<\mathsf{Arg}(\xi)<\frac{\pi}{2},\\ \boldsymbol{\Omega}(\xi^{2}\mathrm{e}^{-\frac{\mathrm{i}\pi}{2}},z)\mathrm{i} \sigma_{2},&\frac{\pi}{2}<\mathsf{Arg}(\xi)<\pi,\\ \boldsymbol{\Omega}(\xi^{2}\mathrm{e}^{\frac{\mathrm{i}\pi}{2}},z)(-\mathrm{i} \sigma_{2}),&-\pi<\mathsf{Arg}(\xi)<-\frac{\pi}{2},\end{cases} \tag{237}\] when \(|\xi|<1\). Denoting \[\mathbf{K}:=\frac{1}{\sqrt{2}}\begin{bmatrix}\mathrm{i}&-1\\ 1&-\mathrm{i}\end{bmatrix} \tag{238}\] and using expansions (223), (224) (note the branch choices in (207), (208)) one can directly check that \[\mathbf{S}(\xi,z)=\mathbf{K}^{-1}\left(\mathbb{I}+\begin{bmatrix}0&0\\ (2z)^{1/2}\Xi_{21}^{(8)}(z)&0\end{bmatrix}\frac{1}{\mathrm{i}\xi}+\begin{bmatrix} \Xi_{11}^{(8)}(z)&0\\ 0&\Xi_{22}^{(8)}(z)\end{bmatrix}\frac{1}{\mathrm{i}\xi^{2}}+\mathcal{O}(\xi^{-3 })\right)\mathbf{Ke}^{\mathrm{i}(2z)^{1/2}\xi\sigma_{3}}\\ \text{as}\quad\xi\to\infty,\] and \[\mathbf{S}(\xi,z)=\mathbf{K}^{-1}\left(\begin{bmatrix}\Delta_{11}^{(8)}(z)&0\\ 0&0\end{bmatrix}\frac{1}{\xi}+\begin{bmatrix}0&\frac{\Delta_{12}^{(8)}(z)}{(2z)^ {1/2}}\\ (2z)^{1/2}\Delta_{21}^{(8)}(z)&0\end{bmatrix}+\begin{bmatrix}f(z)&0\\ 0&\Delta_{22}^{(8)}(z)\end{bmatrix}\xi+\mathcal{O}(\xi^{2})\right)\\ \cdot\mathrm{e}^{-\frac{\mathrm{i}\pi}{4}\sigma_{3}}\mathbf{K} \mathrm{e}^{(2z)^{1/2}\xi^{-1}\sigma_{3}}\quad\text{as}\quad\xi\to 0,\] where \(f(z):=\mathrm{i}\left((\Delta_{11}\Pi_{11})(z)+(\Delta_{12}\Pi_{21})(z)\right)\); this partly works due to the identity \[\mathbf{K}\sigma_{2}=-\sigma_{3}\mathbf{K}. \tag{239}\] Furthermore, one can directly verify that the jump relations (221)-(222) translate to the jumps shown in Figure 11, where \(\mathbf{C}_{0\infty}\) is as in (227). 
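The identity (239) is elementary to verify directly from the definition (238); the following is a quick numerical confirmation that we include as an illustrative aside (it is not part of the original argument):

```python
# Numerical check of identity (239), K*sigma_2 = -sigma_3*K, with K as in (238).
import numpy as np

K = (1 / np.sqrt(2)) * np.array([[1j, -1], [1, -1j]])
sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

print(np.allclose(K @ sigma2, -sigma3 @ K))  # True
print(np.isclose(np.linalg.det(K), 1.0))     # K also has unit determinant
```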
The jump matrices satisfy two cyclic relations about the nonsingular self-intersection points of the jump contour, namely \[\text{about }\xi=+\mathrm{i}\colon\quad\mathbf{C}_{0\infty}^{-1}\mathbf{S}_{1}^{\infty}(\mathrm{i}\sigma_{2})\mathbf{C}_{0\infty}(\mathrm{i}\sigma_{2})(\mathbf{S}_{0}^{0})^{-1}=\mathbb{I},\] \[\text{about }\xi=-\mathrm{i}\colon\left[(\mathrm{i}\sigma_{2})\mathbf{C}_{0\infty}(\mathrm{i}\sigma_{2})\right]^{-1}(\mathbf{S}_{0}^{0})^{-1}\mathbf{C}_{0\infty}\mathbf{S}_{1}^{\infty}=\mathbb{I}.\] Observe that the matrix \(\mathbf{S}(\xi,z)\) possesses the following useful symmetry: \[\mathbf{S}(\xi,z)=\sigma_{2}\begin{cases}-\mathbf{S}(-\xi,z)\sigma_{2},&|\xi|<1,\\ \mathbf{S}(-\xi,z)\sigma_{2},&|\xi|>1.\end{cases} \tag{240}\] This result also uses the identity (239). Using this symmetry, it can be checked that the Fabry transformation (236)-(237) is invertible with \[\mathbf{\Omega}(\lambda)=\rho_{\infty}^{\sigma_{3}/2}(\lambda,z)\mathbf{K}\mathbf{S}(\sqrt{-\mathrm{i}\lambda}), \tag{241}\] where all roots are principal branches. While the singular behavior of \(\mathbf{S}(\xi,z)\) at \(\xi=0\) is concerning, the fact that the leading coefficient is singular allows us to handle this problem by letting \[\widetilde{\mathbf{S}}(\xi,z)=\left(\mathbb{I}-\frac{1}{\xi}\mathbf{T}(z)\right)\mathbf{S}(\xi,z), \tag{242}\] where \[\mathbf{T}(z)=\mathbf{K}^{-1}\begin{bmatrix}0&\frac{\Delta_{11}^{(8)}(z)}{(2z)^{1/2}\Delta_{21}^{(8)}(z)}\\ 0&0\end{bmatrix}\mathbf{K}. \tag{243}\] Since the prefactor is analytic in \(\mathbb{C}\setminus\{0\}\), the jumps of \(\widetilde{\mathbf{S}}\) are unchanged. As for the behavior at \(\xi=\infty\) and \(\xi=0\), we have \[\widetilde{\mathbf{S}}(\xi,z)=\left(\mathbb{I}+\mathbf{K}^{-1}\begin{bmatrix}0&-\frac{\Delta_{11}^{(8)}(z)}{(2z)^{1/2}\Delta_{21}^{(8)}(z)}\\ -\mathrm{i}(2z)^{1/2}\Xi_{21}^{(8)}(z)&0\end{bmatrix}\mathbf{K}\frac{1}{\xi}+\mathcal{O}(\xi^{-2})\right)\mathrm{e}^{\mathrm{i}(2z)^{1/2}\xi\sigma_{3}}\quad\text{as}\quad\xi\to\infty,\] and \[\widetilde{\mathbf{S}}(\xi,z)=\left(\mathbf{K}^{-1}\begin{bmatrix}0&-\frac{1}{(2z)^{1/2}\Delta_{21}^{(8)}(z)}\\ (2z)^{1/2}\Delta_{21}^{(8)}(z)&0\end{bmatrix}\mathrm{e}^{-\frac{\mathrm{i}\pi}{4}\sigma_{3}}\mathbf{K}+\mathcal{O}(\xi)\right)\mathrm{e}^{(2z)^{1/2}\xi^{-1}\sigma_{3}}\quad\text{as}\quad\xi\to 0.\]

_Remark 9_.: Noting that \[\det\left(\mathbf{K}^{-1}\begin{bmatrix}0&-\frac{1}{(2z)^{1/2}\Delta_{21}^{(8)}(z)}\\ (2z)^{1/2}\Delta_{21}^{(8)}(z)&0\end{bmatrix}\mathrm{e}^{-\frac{\mathrm{i}\pi}{4}\sigma_{3}}\mathbf{K}\right)=1,\] one can carry out a computation similar to the one in Section 5.3 to arrive at a pair of differential equations analogous to (146), but with diagonalizable leading matrices at the two singular points \(\xi=0,\infty\); this system appears in [33, Chapter 2] and [15], for example. Since we do not make use of this Lax pair, we omit the calculation.

Using (239), it follows that \[\mathbb{I}-\frac{1}{\xi}\mathbf{T}(z)=\sigma_{2}\left(\mathbb{I}+\frac{1}{\xi}\mathbf{T}(z)\right)\sigma_{2},\] which implies that the matrix \(\widetilde{\mathbf{S}}(\xi,z)\) also satisfies the symmetry (240).
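Note also that \(\mathbf{T}(z)\) in (243) is a conjugated strictly triangular matrix and hence nilpotent, so the prefactor in (242) is explicitly invertible for \(\xi\neq 0\), with inverse \(\mathbb{I}+\xi^{-1}\mathbf{T}(z)\). A quick numerical sanity check (an illustration we add here; the scalar `c` below is an arbitrary placeholder for \(\Delta_{11}^{(8)}(z)/((2z)^{1/2}\Delta_{21}^{(8)}(z))\), which is not computed in this sketch):

```python
# Check that T from (243) is nilpotent, so (I - T/xi)^(-1) = I + T/xi exactly.
import numpy as np

K = (1 / np.sqrt(2)) * np.array([[1j, -1], [1, -1j]])
c = 0.7 - 0.3j  # placeholder value standing in for Delta_11/((2z)^{1/2} Delta_21)
T = np.linalg.inv(K) @ np.array([[0, c], [0, 0]]) @ K

xi = 1.5 + 0.2j
I = np.eye(2)
print(np.allclose(T @ T, 0))                        # True: T^2 = 0
print(np.allclose((I - T / xi) @ (I + T / xi), I))  # True: explicit inverse
```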
To simplify this symmetry, let \[\widehat{\mathbf{S}}(\xi,z):=\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4}\begin{cases}\widetilde{\mathbf{S}}(\xi,z)\mathrm{e}^{\pi\mathrm{i}\sigma_{3}/4},&|\xi|>1,\\ \widetilde{\mathbf{S}}(\xi,z)\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4},&|\xi|<1.\end{cases} \tag{244}\] Then, \(\widehat{\mathbf{S}}(\xi,z)\) solves the following Riemann-Hilbert problem.

**Riemann-Hilbert Problem 10**.: _Let \((y_{1},y_{2},y_{3})\in\mathbb{C}^{3}\) be the monodromy data corresponding to \(U(z)\) given in (20), (21), and (22), and fix \(z\in\mathbb{C}\). Seek a \(2\times 2\) matrix function \(\xi\mapsto\widehat{\mathbf{S}}(\xi,z)\) satisfying the following properties:_

* _Analyticity:_ \(\widehat{\mathbf{S}}(\xi,z)\) _is an analytic function of_ \(\xi\) _for_ \(|\xi|\neq 1\)_._
* _Jump condition:_ \(\widehat{\mathbf{S}}(\xi,z)\) _takes analytic boundary values on the unit circle from the interior and exterior, denoted_ \(\widehat{\mathbf{S}}_{+}(\xi,z)\) _and_ \(\widehat{\mathbf{S}}_{-}(\xi,z)\) _for_ \(|\xi|=1\) _respectively, and they are related by_ \[\widehat{\mathbf{S}}_{+}(\xi,z)=\widehat{\mathbf{S}}_{-}(\xi,z)\mathbf{J}_{\widehat{\mathbf{S}}}(\xi),\] _where_ \(\mathbf{J}_{\widehat{\mathbf{S}}}(\xi)\) _is shown in Figure 12 and_ \[\widehat{\mathbf{S}}_{1}^{\infty}:=\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4}\mathbf{S}_{1}^{\infty}\mathrm{e}^{\pi\mathrm{i}\sigma_{3}/4}=\mathrm{e}^{\pi\mathrm{i}\sigma_{3}/4}(\mathbf{S}_{1}^{\infty})^{-1}\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4},\qquad\widehat{\mathbf{S}}_{0}^{0}:=\mathrm{e}^{\pi\mathrm{i}\sigma_{3}/4}\mathbf{S}_{0}^{0}\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4}=\mathrm{e}^{-\pi\mathrm{i}\sigma_{3}/4}(\mathbf{S}_{0}^{0})^{-1}\mathrm{e}^{\pi\mathrm{i}\sigma_{3}/4}. \tag{245}\]
* _Normalization:_ \[\widehat{\mathbf{S}}(\xi,z)=\left(\mathbb{I}+\widehat{\boldsymbol{\Xi}}^{(8)}(z)\xi^{-1}+\mathcal{O}(\xi^{-2})\right)\mathrm{e}^{\mathrm{i}(2z)^{1/2}\xi\sigma_{3}}\quad\text{as}\quad\xi\to\infty, \tag{246}\] _and_ \[\widehat{\mathbf{S}}(\xi,z)=\widehat{\boldsymbol{\Delta}}^{(8)}(z)\left(\mathbb{I}+\widehat{\boldsymbol{\Pi}}(z)\xi+\mathcal{O}(\xi^{2})\right)\mathrm{e}^{(2z)^{1/2}\xi^{-1}\sigma_{3}}\quad\text{as}\quad\xi\to 0, \tag{247}\] _where \(\widehat{\boldsymbol{\Delta}}^{(8)}(z)\) may be written in terms of entries of \(\boldsymbol{\Delta}^{(8)}(z)\) and \(\mathbf{K}\)._

Now, the matrix \(\widehat{\mathbf{S}}(\xi,z)\) satisfies the symmetry \[\sigma_{1}\widehat{\mathbf{S}}(-\xi,z)\sigma_{1}=\widehat{\mathbf{S}}(\xi,z).\] Furthermore, it was shown in [33, Theorem 4] that the matrix \(\widehat{\mathbf{S}}(\xi,z)\) exists for all \(z\) outside of a discrete set \(\Sigma\) and is a meromorphic function of \(z\) in \(\mathbb{C}\setminus\Sigma\). Since the transformations used to arrive at \(\widehat{\mathbf{S}}(\xi,z)\) from \(\widehat{\mathbf{R}}(\lambda,z)\) and \(\boldsymbol{\Omega}(\lambda,z)\) are invertible, we deduce the existence of matrix functions satisfying Riemann-Hilbert Problems 8 and 2. It was shown in [38] that \(\Sigma\) coincides with the set of zeroes of the \(\tau\)-function associated to the Riemann-Hilbert problem. According to [25] the expression for the logarithmic derivative of the \(\tau\)-function associated to Riemann-Hilbert Problem 10 is given by \[\frac{\mathrm{d}}{\mathrm{d}z}\ln(\tau(z))=-\frac{1}{\sqrt{2z}}\left(\mathrm{Tr}\left(\widehat{\boldsymbol{\Pi}}(z)\sigma_{3}\right)+\mathrm{i}\mathrm{Tr}\left(\widehat{\boldsymbol{\Xi}}^{(8)}(z)\sigma_{3}\right)\right). \tag{248}\]
After a long symbolic computation using the transformations (236), (237), (242), and (244), we get the following result. The Lax pair (123)-(124) is gauge equivalent to \[\frac{\partial\widehat{\mathbf{S}}}{\partial\xi}(\xi,z)=\widehat{\boldsymbol{\Lambda}}^{(8)}(\xi,z)\widehat{\mathbf{S}}(\xi,z),\quad\frac{\partial\widehat{\mathbf{S}}}{\partial z}(\xi,z)=\widehat{\mathbf{Z}}(\xi,z)\widehat{\mathbf{S}}(\xi,z),\] where \[\widehat{\boldsymbol{\Lambda}}^{(8)}(\xi,z)=\mathrm{i}(2z)^{1/2}\sigma_{3}+\frac{1}{\xi}\frac{zU^{\prime}(z)}{2U(z)}\sigma_{1}+\frac{\mathrm{i}}{\xi^{2}}\left(\frac{z}{2}\right)^{1/2}\left(U(z)-\frac{1}{U(z)}\right)\sigma_{3}+\frac{1}{\xi^{2}}\left(\frac{z}{2}\right)^{1/2}\left(U(z)+\frac{1}{U(z)}\right)\sigma_{2},\] and \[\widehat{\mathbf{Z}}(\xi,z)=\frac{\mathrm{i}\xi}{(2z)^{1/2}}\sigma_{3}+\frac{U^{\prime}(z)}{4U(z)}\sigma_{1}-\frac{\mathrm{i}}{2\xi}\frac{1}{(2z)^{1/2}}\left(U(z)-\frac{1}{U(z)}\right)\sigma_{3}-\frac{1}{2\xi}\frac{1}{(2z)^{1/2}}\left(U(z)+\frac{1}{U(z)}\right)\sigma_{2}.\]

Figure 12. The jump contour and matrices for \(\widehat{\mathbf{S}}\), where the jump matrices are as in (245).

Similarly, the coefficients in (246)-(247) have the following expressions: \[\widehat{\boldsymbol{\Delta}}^{(8)}(z)=(\mathrm{e}^{-\mathrm{i}\pi/4}U(z)^{1/2})^{\sigma_{1}},\] \[\widehat{\boldsymbol{\Xi}}^{(8)}(z)=\mathrm{i}\left(\frac{z}{2}\right)^{1/2}\left(\frac{zU^{\prime}(z)^{2}}{8U(z)^{2}}-U(z)+\frac{1}{U(z)}\right)\sigma_{3}-\left(\frac{z}{2}\right)^{1/2}\frac{U^{\prime}(z)}{4U(z)}\sigma_{2},\] \[\widehat{\boldsymbol{\Pi}}(z)=-\left(\frac{z}{2}\right)^{1/2}\left(\frac{zU^{\prime}(z)^{2}}{8U(z)^{2}}-U(z)+\frac{1}{U(z)}\right)\sigma_{3}+\mathrm{i}\left(\frac{z}{2}\right)^{1/2}\frac{U^{\prime}(z)}{4U(z)}\sigma_{2}. \tag{249}\] In our computation we expressed \(W(z)\), \(X(z)\), and \(V(z)\) in terms of \(U(z)\) and \(U^{\prime}(z)\) using the identities (125)-(126) and the first equation in (127). Plugging (249) into (248) we get \[\frac{\mathrm{d}}{\mathrm{d}z}\ln(\tau(z))=\frac{zU^{\prime}(z)^{2}}{4U(z)^{2}}-2U(z)+\frac{2}{U(z)}.\] Differentiating once again, we have \[\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\ln(\tau(z))=-\frac{1}{4}\left(\frac{\mathrm{d}}{\mathrm{d}z}\ln(U(z))\right)^{2}. \tag{250}\] Now we see from (250) that the set \(\Sigma\) of zeroes of the \(\tau\)-function coincides precisely with the union of poles and zeroes of the function \(U(z)\).

### Relationship between \(U^{\mathrm{even}},U^{\mathrm{odd}}\)

To complete the proof of Theorem 3, we must show that \(U^{\mathrm{odd}}(z)=-1/U^{\mathrm{even}}(z)\). One can already observe that this should be the case by checking that the leading behavior predicted in Theorem 4 satisfies the involution, but we now present a proof on the level of Riemann-Hilbert problems. First, note that if one chooses the square root in (211) in such a way that \(\mathbf{V}^{\mathrm{even}}=\mathrm{i}\sigma_{3}\mathbf{V}^{\mathrm{odd}}\), it follows from (227) that10

Footnote 10: One can check that making the other choice of the square root yields the same connection matrix but with the opposite sign, and so it follows from Remark 5 that this choice is immaterial.

\[\mathbf{C}^{\mathrm{odd}}_{0\infty}=\sigma_{3}\mathbf{C}^{\mathrm{even}}_{0\infty}\sigma_{3}.
\tag{251}\] This, in particular, implies the symmetry \[\widetilde{\mathbf{S}}^{\mathrm{odd}}(\lambda,z)=\sigma_{3}\widetilde{ \mathbf{S}}^{\mathrm{even}}(\lambda,z)\sigma_{3}, \tag{252}\] and, in view of (241), we have \[\boldsymbol{\Omega}^{\mathrm{odd}}(\lambda,z)=\rho_{\infty}^{\sigma_{3}/2}( \lambda,z)\mathbf{K}\left(\mathds{I}+\frac{1}{\sqrt{-\mathrm{i}\lambda}} \mathbf{T}^{\mathrm{even}}(z)\right)\sigma_{3}\widetilde{\mathbf{S}}^{\mathrm{ even}}(\sqrt{-\mathrm{i}\lambda},z)\sigma_{3}. \tag{253}\] Recalling (207), (238), (243), and the identity \(2\mathrm{i}U(z)X(z)=\Delta_{11}^{(8)}(z)/\Delta_{21}^{(8)}(z)\), see (130), we have \(\boldsymbol{\Omega}^{\mathrm{odd}}(\lambda,z)=\mathbf{G}(\lambda,z) \boldsymbol{\Omega}^{\mathrm{even}}(\lambda,z)\) where \[\mathbf{G}(\lambda,z) :=\rho_{\infty}^{\sigma_{3}/2}(\lambda,z)\mathbf{K}\left( \mathds{I}+\frac{1}{\sqrt{-\mathrm{i}\lambda}}\mathbf{T}^{\mathrm{even}}(z) \right)\sigma_{3}\left(\mathds{I}+\frac{1}{\sqrt{-\mathrm{i}\lambda}} \mathbf{T}^{\mathrm{even}}(z)\right)\rho_{\infty}^{-\sigma_{3}/2}(\lambda,z)\] \[=\frac{1}{\rho_{\infty}(\lambda,z)}\begin{bmatrix}2U^{\mathrm{ even}}(z)X^{\mathrm{even}}(z)&-4\mathrm{i}(U^{\mathrm{even}}(z)X^{\mathrm{even}}(z))^{2} \\ -\mathrm{i}&-2U^{\mathrm{even}}(z)X^{\mathrm{even}}(z)\end{bmatrix}+\rho_{\infty}( \lambda,z)\begin{bmatrix}0&\mathrm{i}\\ 0&0\end{bmatrix}. \tag{254}\] To deduce the relationship between \(U^{\mathrm{even}},U^{\mathrm{odd}}\), we now recall that \(\boldsymbol{\Omega}^{\mathrm{even}/\mathrm{odd}}\) satisfy the Lax pair (146). Transforming \(\boldsymbol{\Omega}^{\mathrm{even}}\) as in the right hand side of (253) induces a gauge transformation of the \(\lambda\)-equation and we have that \(\boldsymbol{\Omega}^{\mathrm{odd}}\) satisfies two equations; the first is the one in (146) and the second is \[\frac{\partial\boldsymbol{\Omega}^{\mathrm{odd}}}{\partial\lambda}\left( \lambda,z\right)=\widetilde{\boldsymbol{\Lambda}}(\lambda,z)\boldsymbol{ \Omega}^{\mathrm{odd}}(\lambda,z),\] where \[\widetilde{\boldsymbol{\Lambda}}(\lambda,z)=\frac{\partial\mathbf{G}}{\partial \lambda}\left(\lambda,z\right)\mathbf{G}^{-1}(\lambda,z)+\mathbf{G}(\lambda,z )\boldsymbol{\Lambda}^{\mathrm{even}}(\lambda,z)\mathbf{G}^{-1}(\lambda,z).\] Using (254) and (125), we see that \[\widetilde{\boldsymbol{\Lambda}}(\lambda,z)=\begin{bmatrix}0&\mathrm{i}z\\ 0&0\end{bmatrix}+\frac{1}{4\lambda}\begin{bmatrix}2-V^{\mathrm{even}}(z)+8 \mathrm{i}U^{\mathrm{even}}(z)X^{\mathrm{even}}(z)&F^{\mathrm{even}}(z)\\ 2&-2+V^{\mathrm{even}}(z)-8\mathrm{i}U^{\mathrm{even}}(z)X^{\mathrm{even}}(z) \end{bmatrix}\\ -\frac{1}{\lambda^{2}}\begin{bmatrix}(U^{\mathrm{even}}(z))^{2}X^ {\mathrm{even}}(z)&2\mathrm{i}(U^{\mathrm{even}}(z))^{3}(X^{\mathrm{even}}(z)) ^{2}\\ \mathrm{i}U^{\mathrm{even}}(z)/2&-(U^{\mathrm{even}}(z))^{2}X^{\mathrm{even}}(z) \end{bmatrix}, \tag{255}\] where \[F^{\mathrm{even}}(z):=4\mathrm{i}U^{\mathrm{even}}(z)X^{\mathrm{even}}(z)\left( V^{\mathrm{even}}(z)+6U^{\mathrm{even}}(z)X^{\mathrm{even}}(z)-2\mathrm{i}U^{ \mathrm{even}}(z)\right)-\frac{4z}{U^{\mathrm{even}}(z)}.\] Since \(\det\boldsymbol{\Omega}^{\mathrm{odd}}=1\), it follows that \(\widetilde{\boldsymbol{\Lambda}}(\lambda,z)=\boldsymbol{\Lambda}^{\mathrm{odd }}(\lambda,z)\) and we arrive at identities relating all the potentials \(U^{\mathrm{even/odd}}(z)\), \(V^{\mathrm{even/odd}}(z)\), \(W^{\mathrm{even/odd}}(z)\), \(X^{\mathrm{even/odd}}(z)\); comparing the (2,1) entries of the coefficient of \(\lambda^{-2}\) yields the desired relation 
\[U^{\mathrm{even/odd}}(z)=-1/U^{\mathrm{odd/even}}(z).\]

### Solutions of Suleimanov

Considering the limit of even Backlund iterates when \(\mu=1/4\) yields a particularly symmetric solution of Painleve-III\((D_{8})\). In that case we have \[\mathbf{S}_{0}^{0}=\mathbf{S}_{1}^{\infty}=\mathbb{I},\quad\text{and}\quad\mathbf{C}_{0\infty}=(\mathrm{i}\sigma_{2})\mathbf{C}_{0\infty}(\mathrm{i}\sigma_{2})=\begin{bmatrix}-y_{2}&y_{1}\\ y_{1}&y_{2}\end{bmatrix},\] and consequently \(y_{3}=0\). This is, for example, the situation when considering rational solutions of Painleve-III as outlined in Section 4.5.

_Remark 10_.: In this setting and up to a rescaling of the \(z\) and \(\xi\) variables, Riemann-Hilbert Problem 10 is the same Riemann-Hilbert problem as in [12, Section 13.1], which corresponds to solutions of the sine-Gordon reduction of Painleve-III \[\frac{\mathrm{d}^{2}w}{\mathrm{d}t^{2}}+\frac{1}{t}\,\frac{\mathrm{d}w}{\mathrm{d}t}+\sin w(t)=0. \tag{256}\] This is partly due to the parameters \(y_{1},y_{2}\) satisfying the condition \(y_{1}^{2}+y_{2}^{2}+1=0\) (i.e. \(\det\mathbf{C}_{0\infty}=1\)), and is to be expected since equation (256) is equivalent to (3) and their solutions are related via the formula \[U(z)=\mathrm{i}\mathrm{e}^{-\mathrm{i}w\left(\pm\mathrm{e}^{3\pi\mathrm{i}/4}\sqrt{2z}\right)}. \tag{257}\] We can also mention that real-valued (for real \(z\)) solutions of (256) are singled out by condition (258) below. We use identity (257) with the minus sign to formulate Theorem 4. Since this connection will not be used further, we do not elaborate on it.

We end this discussion by noting yet another interesting connection, to certain highly symmetric solutions of \(\mathrm{PIII}(D_{8})\) which appear in the work of Suleimanov [39] on nonlinear optics and were later found in the context of the focusing nonlinear Schrodinger equation [1, 2]. More precisely, note that Riemann-Hilbert Problem 10 (and the Riemann-Hilbert problem satisfied by \(\widetilde{\mathbf{S}}(\xi,z)\)) agrees with [1, Riemann-Hilbert Problem 4] up to an appropriate rescaling of \(z,\tilde{\xi}\) in the special case when \(y_{1},y_{2}\) are chosen such that \[\sigma_{2}\begin{bmatrix}-y_{2}&y_{1}\\ y_{1}&y_{2}\end{bmatrix}\sigma_{2}=\overline{\begin{bmatrix}-y_{2}&y_{1}\\ y_{1}&y_{2}\end{bmatrix}}\quad\Leftrightarrow\quad y_{1}=-\overline{y_{1}}\quad\text{and}\quad y_{2}=-\overline{y_{2}}. \tag{258}\] This imposes conditions on \(e_{0},e_{2},e_{\infty}\), which can be written out explicitly in the case of the rational solutions of Painleve-III. Namely, in this case \[y_{1}=\frac{\mathrm{i}\mathrm{e}^{\mathrm{i}\pi m}}{\sqrt{1+\mathrm{e}^{2\pi\mathrm{i}m}}},\quad y_{2}=\frac{\mathrm{i}}{\sqrt{1+\mathrm{e}^{2\pi\mathrm{i}m}}},\quad y_{3}=0.\] These satisfy the symmetry conditions above exactly when \(m\in\mathrm{i}\mathbb{R}+\mathbb{Z}\).
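As a sanity check on the monodromy data just displayed, the constraint \(y_{1}^{2}+y_{2}^{2}+1=0\) (equivalently, \(\det\mathbf{C}_{0\infty}=1\)) can be verified numerically for an arbitrary value of \(m\); the snippet below is an illustration we add, not part of the original derivation:

```python
# Check y_1^2 + y_2^2 + 1 = 0 for the displayed monodromy data at a generic m.
import numpy as np

m = 0.37 + 0.11j  # arbitrary test value
y1 = 1j * np.exp(1j * np.pi * m) / np.sqrt(1 + np.exp(2j * np.pi * m))
y2 = 1j / np.sqrt(1 + np.exp(2j * np.pi * m))
print(np.isclose(y1**2 + y2**2 + 1, 0))  # True, independent of the branch of sqrt
```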
2303.03482
Recent Advances in Software Effort Estimation using Machine Learning
An increasing number of software companies have already realized the importance of storing project-related data as valuable sources of information for training prediction models. Such kind of modeling opens the door for the implementation of tailored strategies to increase the accuracy in effort estimation of whole teams of engineers. In this article we review the most recent machine learning approaches used to estimate software development efforts for both, non-agile and agile methodologies. We analyze the benefits of adopting an agile methodology in terms of effort estimation possibilities, such as the modeling of programming patterns and misestimation patterns by individual engineers. We conclude with an analysis of current and future trends, regarding software effort estimation through data-driven predictive models.
Victor Uc-Cetina
2023-03-06T20:25:16Z
http://arxiv.org/abs/2303.03482v1
# Recent Advances in Software Effort Estimation using Machine Learning

###### Abstract

An increasing number of software companies have already realized the importance of storing project-related data as valuable sources of information for training prediction models. Such kind of modeling opens the door for the implementation of tailored strategies to increase the accuracy in effort estimation of whole teams of engineers. In this article we review the most recent machine learning approaches used to estimate software development efforts for both, non-agile and agile methodologies. We analyze the benefits of adopting an agile methodology in terms of effort estimation possibilities, such as the modeling of programming patterns and misestimation patterns by individual engineers. We conclude with an analysis of current and future trends, regarding software effort estimation through data-driven predictive models.

_Keywords--_ effort estimation, agile software, stories estimation

A major goal of software project managers and software developers is to calculate accurate estimates of the effort required to complete a software development project. Effort prediction in software engineering projects is considered by some software development managers as a mission impossible kind of task. Some other managers think it is certainly possible to accomplish, at least for specific categories of software applications, if enough historical information is available and the right methodology is chosen [7]. The common scenario is that each software company ends up developing its own strategy over time, depending on the kind of applications they develop. In this article we review the most recently proposed machine learning methods for software effort prediction, and we categorize them into non-agile and agile methods.

It is well known that machine learning (ML) has become a very robust data-driven tool for solving a variety of problems, ranging from face recognition to chatbot development, from product recommendation to protein structure prediction. We have been noticing significant impact in different fields such as computer vision, natural language processing, e-commerce, bioinformatics, robotics, audio processing, etc. In general, the applications and fields benefiting from ML algorithms have been multiplying in the last decade. The same can be said about ML applications for software engineering problems. In a recent survey [20], five software engineering activities were clearly identified as the most investigated using deep ML, namely:

* Software design,
* Software implementation,
* Software testing and debugging,
* Software maintenance, and
* Software management.

Each of these activities includes a different number of more specific research topics. For instance, in software management, the two most common topics are:

* Software repository mining, and
* Effort cost prediction.

Predicting project effort is by itself a difficult task. Estimation or prediction of software development effort is not an exception. In order to estimate with the highest prediction ability we need to exploit past information; that is, we need to develop data-driven strategies. As Jorgensen argues [6], historical data relevant to the target project improves estimation accuracy. Moreover, based on recently published well-supported research results [5], he says that what we currently know about software effort and cost estimation is not enough to solve the estimation challenges in the software industry.
Nevertheless, it helps us to identify several actions that might improve estimation accuracy. In this article we are interested in the effort prediction problem for whole projects or large modules of software (non-agile prediction), and also in agile software effort prediction.

## 1 Non-Agile Effort Prediction

In one of the main works on effort prediction, carried out two decades ago, Kitchenham et al. [7] studied the effort and duration of 147 projects, including maintenance and development ones. This data set was obtained from a single outsourcing software company. They discovered that 63% of the estimates were calculated within 25% of the actual value, with an average absolute error of 0.26. In other words, this study supports the idea that estimation of software development effort is achievable with a reasonable margin of error, small enough to make it a useful practice in many cases. One of the earliest studies that used machine learning for building estimators of software development effort from historical data was presented by Srinivasan et al. [14]. In this work, regression tree and multilayer perceptron algorithms were compared against traditional parametric models for effort prediction. In both cases the results show that machine learning algorithms achieved competitive performance. One of the main advantages of the methods studied is their adaptability to new data sets. Jorgensen et al. [6] systematically reviewed 304 software cost estimation articles, classifying them according to research topic, estimation approach, research approach, study context, and use of data sets. Based on this systematic categorization of papers, the authors make some recommendations on changes they identified as being useful in estimation research. There are some data-driven models typically studied to solve the software effort prediction problem. Two categories that have been compared are linear regression models and neural networks, the latter being more complex models with many more parameters to be estimated. Lopez-Martin et al. [8] performed a comparison of these two categories of algorithms. Specifically, they compared multiple linear regression with multilayer perceptrons and radial basis function networks. Their conclusion is somewhat expected: neural networks showed higher accuracy than multiple linear regression. In a similar approach, Bisi et al. [4] tried a methodology based on evolutionary computation, combining swarm optimization for training, principal component analysis to reduce the dimension of the input data, and finally genetic algorithms to optimize the weights of a neural network, also reporting competitive performance. More recently, BaniMustafa et al. [1] evaluated 3 machine learning models using the COCOMO NASA data set, which includes 93 software projects. The algorithms used in this study are naive Bayes, logistic regression, and random forests. Although these are not by any means state-of-the-art algorithms, the approach is of general applicability and provides a baseline of methods to start with and later build upon for more robust effort prediction. The data set includes 24 attributes such as the category of the application (with 14 possibilities, including avionics, communications, science, and simulation), development mode (indicating one of the following options: embedded, organic, semidetached), a lines-of-code measure, and the actual effort in months. A minimal sketch of this kind of pipeline, on synthetic data, is given below.
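The following sketch trains a random-forest effort model on invented, COCOMO-like records. The feature names and generated data are hypothetical (this is not the NASA data set), and the random-forest setup is just one reasonable choice, not the exact configuration of the cited study:

```python
# Illustrative sketch: a random-forest effort model on synthetic, COCOMO-like
# records (application category, development mode, KLOC -> effort in months).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 200
projects = pd.DataFrame({
    "category": rng.choice(["avionics", "communications", "science", "simulation"], n),
    "dev_mode": rng.choice(["embedded", "organic", "semidetached"], n),
    "kloc": rng.uniform(1, 300, n),
})
# Synthetic "actual effort", loosely shaped like COCOMO's effort ~ a * KLOC^b.
projects["effort_months"] = 2.8 * projects["kloc"] ** 1.05 * rng.lognormal(0, 0.2, n)

X = projects.drop(columns="effort_months")
y = projects["effort_months"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(), ["category", "dev_mode"])],
        remainder="passthrough")),
    ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
])
model.fit(X_train, y_train)
print("R^2 on held-out projects:", model.score(X_test, y_test))
```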
The experimental results show a competitive performance of all 3 methods over the COCOMO model, with the random forest model being the best one. This is not surprising considering that random forest is a much more robust method compared to the two other tested algorithms. Effort prediction based on team size has also been approached by Rai et al. [11]. Recently, they proposed the use of the constructive cost model known as COCOMO [2] together with support vector regression to predict development effort.

## 2 Agile Effort Prediction

Effort estimation in agile projects focuses on estimating the effort required for completing user stories. In a recent article, Choetkiertikul et al. [3] proposed a prediction model for estimating story points based on the combination of long short-term memory and recurrent highway networks. Their empirical evaluation with 16 open source projects demonstrates that this approach consistently outperforms baselines such as random guessing, mean, and median methods, as well as six other more complex methods such as Doc2Vec and random forests. Also, Ochodek et al. [9] investigated several neural networks, testing them with 437 use cases from 27 software development projects. The main goal of this study was to develop an easy-to-train method that is comparable in performance to existing methodologies. The experimental results show that the best performer is a convolutional neural network used together with a word-embedding model. Phannachitta et al. [10] carried out a systematic comparison of software effort estimators on 13 standard benchmark datasets. Performance metrics and statistical test methods were used together with different machine learning algorithms. The experimental results show that the best performer is an analogy-based model adapted through a gradient boosting machine algorithm and a traditional adaptation method based on productivity adjustment. Sarro et al. [12] introduced the approach called learning from mistakes. The main idea is focused on automatically learning from past estimation errors made by human experts. These errors are used to improve the accuracy of the predictions by calculating possible future errors in estimates. This empirical investigation included 402 maintenance and development projects. The main finding of this study is that the type, severity and magnitude of errors are all predictable. The algorithms used in this work to predict errors are classification and regression trees, k-nearest neighbors, naive Bayes, and linear programming. Tawosi et al. [18] replicated and extended a study [13] on multi-objective software effort estimation in order to increase the confidence in those previous results. Later on, Tawosi et al. [17] carried out a replication study of the analysis previously presented by Ochodek et al. [9], this time using a larger dataset. However, the results they obtained do not support the hypothesis that deep neural networks can significantly outperform less sophisticated methods for predicting actual efforts in software development. Moreover, they concluded that semantic analysis of the texts describing the stories is not enough and is prone to introduce errors into the estimations. There are a couple of datasets usually employed for researching effort prediction in agile software development projects: the Choetkiertikul dataset [3] and the TAWOS dataset [16].
The Choetkiertikul dataset was created in 2019 as part of a study that investigated the use of a deep learning model for estimating story points. This dataset is stored as a CSV file and contains 23,313 issues from 16 open-source projects. The data can mainly be used for story-point-based estimation. On the other hand, the TAWOS dataset was created in 2022 from Jira repositories and includes 44 agile open-source software projects with a total of more than 500,000 issues. It is an easy-to-use dataset stored as a relational database that can be queried with MySQL Workbench for investigating different software engineering problems, e.g. agile software effort estimation. For example, Tawosi et al. [15] used that dataset to investigate how clustering could be used to estimate story points in agile software development projects.

## 3 Non-Agile vs. Agile

Effort estimation in non-agile projects is usually carried out at the level of the whole project or of whole modules, which increases the prediction error, simply due to the amount of lines of code that need to be considered. When the project is developed using an agile methodology, the prediction can be performed at the level of individual engineers, passing through the team level, up to the module and project level, as summarized in Figure 1.

Figure 1: Benefits of performing effort prediction under an agile software development methodology.

One way of achieving more precise effort estimation in agile development projects is by keeping track of story effort estimates and the actual effort needed by the engineers of software teams. We argue that when this information is stored, the following benefits are available (a minimal sketch of this kind of tracking is given after the feature list in the next section):

1. Calculation of the current misestimation rate of each engineer.
2. Possibility of giving data-driven feedback when new estimations are needed.
3. Identification of those engineers with above-average skills in your team or company, for specific tasks or technologies.

Moreover, we believe that:

1. As software engineers become more experienced in designing stories, their stories gradually become more compact and more easily attainable. We consider this a highly desirable design skill.
2. As a result of being able to design more compact stories, stories become more predictable in terms of needed effort, and therefore software engineers gradually reduce their effort misestimation.

Consequently, it is reasonable to expect that over the years, the percentage of misestimated stories by an engineer will decrease. In other words, there are patterns inherent to the process that an engineer follows to design stories and estimate their complexity, and those patterns can be learned using machine learning models.

## 4 Current and Future Trends

We can identify four main families of methods used in recent approaches for competitively predicting software development effort, all of them with applications in non-agile and agile software effort prediction (see Figure 2):

* Evolutionary algorithms,
* Decision tree algorithms,
* Shallow and deep neural networks, and
* Others.

Figure 2: Four main families of methods recently used for software effort prediction in non-agile and agile methodologies.

On the other hand, the predictions in all these approaches are calculated differently, depending on the historical information available. Some of the features used as inputs for the predictive models are:

* Attributes describing the complexity (e.g. lines of code),
* Attributes describing the type of software application (e.g. mobile app),
* Names of use cases,
* Features extracted using analogy-based methods,
* Time required to accomplish stories, and
* Complexity points of user stories.
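The last two feature types presuppose exactly the story-level tracking discussed in the previous section. The following is a minimal sketch of computing per-engineer misestimation statistics from such a log; the column names and toy records are invented for illustration:

```python
# Illustrative sketch: per-engineer misestimation statistics from a hypothetical
# log of completed stories (estimated vs. actual story points).
import pandas as pd

stories = pd.DataFrame({
    "engineer": ["ana", "ana", "ana", "ben", "ben"],
    "estimated_points": [3, 5, 2, 8, 3],
    "actual_points": [3, 8, 2, 5, 3],
})

stories["misestimated"] = stories["estimated_points"] != stories["actual_points"]
stories["relative_error"] = (
    (stories["actual_points"] - stories["estimated_points"]).abs()
    / stories["actual_points"]
)

report = stories.groupby("engineer").agg(
    misestimation_rate=("misestimated", "mean"),
    mean_relative_error=("relative_error", "mean"),
)
print(report)
```

Exactly this kind of per-engineer summary is what enables the data-driven feedback and skill identification listed as benefits above.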
In spite of the fact that deep neural networks are currently a dominating force in practically all domains where data-driven predictions are needed, we argue that in the particular case of effort prediction for agile software stories, even simple methods such as linear regression models can be very effective, and computationally more suitable to implement and maintain in software development management systems; a minimal sketch of this point closes the article. However, we expect to see more neural network models being applied to the effort prediction problem. For instance, transformer architectures [19], based on attention layers, have revolutionized the language processing domain. They are currently being investigated for different tasks in the computer vision domain, and we will not have to wait long before transformers start to be applied as predictive models in software engineering tasks, such as effort prediction.

In recent years, more and more companies have started using various types of project management software to keep track of their daily development tasks, and as a result of this practice, larger data sets are being collected. Those data sets become over time an important asset, allowing companies to generate more accurate prediction models in the future. From our point of view, in the coming years we will witness an increase in tracking and management software systems with a diversity of data-driven prediction features. The prediction of effort at different levels of complexity, ranging from individual software stories up to whole modules or projects, will definitely be one of these features.

## 5 Conclusion

Effort prediction of software development is attainable with some degree of accuracy, making it a viable practice in industry. Modern neural network models are capable of learning from historical data of software development projects and extracting critical patterns that allow them to predict effort in similar projects. However, when the project to be analyzed for effort prediction is particularly different from the projects in our available historical data, the accuracy of the prediction decreases significantly. Therefore, more research is still needed to handle this kind of extrapolation in the estimation of project effort. Finally, since an increasing number of software companies are adopting an agile software development methodology, effort estimation methods focusing on tracking individual engineers' development patterns will be developed.
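To close, here is the minimal sketch of the point made above that even a plain linear model can serve as a story-effort predictor; the features and data points are invented for illustration:

```python
# Illustrative sketch: linear regression from simple story features to actual
# effort in hours. The feature choice and the numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [estimated complexity points, number of subtasks, files touched]
X = np.array([[1, 1, 2], [2, 3, 4], [3, 2, 5], [5, 6, 9], [8, 7, 14], [5, 4, 8]])
y = np.array([3.0, 6.5, 8.0, 14.0, 25.0, 12.5])  # actual effort in hours

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted hours for a 3-point story:", model.predict([[3, 3, 6]])[0])
```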
2307.01762
The Quantum Advantage in Binary Teams and the Coordination Dilemma: Part I
We have shown that entanglement assisted stochastic strategies allow access to strategic measures beyond the classically correlated measures accessible through passive common randomness, and thus attain a quantum advantage in decentralised control. In this two-part series of articles, we investigate the decision theoretic origins of the quantum advantage within a broad superstructure of problem classes. Each class in our binary team superstructure corresponds to a parametric family of cost functions with a distinct algebraic structure. In this part, we identify the only problem classes that benefit from quantum strategies. We find that these cost structures admit a special decision-theoretic feature -- `the coordination dilemma'. Our analysis hence reveals some intuition towards the utility of non-local quantum correlations in decentralised control.
Shashank A. Deshpande, Ankur A. Kulkarni
2023-07-04T15:05:07Z
http://arxiv.org/abs/2307.01762v1
# The Quantum Advantage in Binary Teams and the Coordination Dilemma: Part I ###### Abstract We have shown that entanglement assisted stochastic strategies allow access to strategic measures beyond the classically correlated measures accessible through passive common randomness, and thus attain a quantum advantage in decentralised control. In this two-part series of articles, we investigate the decision theoretic origins of the quantum advantage within a broad superstructure of problem classes. Each class in our binary team superstructure corresponds to a parametric family of cost functions with a distinct algebraic structure. In this part, we identify the only problem classes that benefit from quantum strategies. We find that these cost structures admit a special decision-theoretic feature -- '_the coordination dilemma_'. Our analysis hence reveals some intuition towards the utility of non-local quantum correlations in decentralised control. ## I Introduction Strategies correlated through passive common randomness constitute a well-known space of classically implementable strategies in decentralised control. However, it is known, thanks to a counter-example by Ananthram and Borkar [1], that this class of strategies does not exhaust the space of occupation measures allowed by the information structure of the problem. We have recently shown that this 'limitation' of common randomness in decentralised control can be alleviated with the use of a quantum mechanical architecture [2] to generate randomness. Specifically, we considered a decentralised estimation problem and demonstrated a new class of entanglement assisted stochastic strategies that, while still respecting the information structure, produce a cost improvement over what is achievable through common randomness - a phenomenon we called the _quantum advantage_ in decentralised control. However, we also found, numerically, that the quantum advantage varies with our problem parameters and problem structure. It appears prima facie that the structure of the cost function, and the constants involved in it, critically determine the manifestation of the quantum advantage. Moreover, the quantum advantage appears and disappears as we vary the fidelity of the observation channels of the decision makers. In this two-part series of articles, our goal is to shed light on these occurrences from a decision-theoretic vantage point. Our aim is to delineate decision theoretic features of the problem that explain or characterize these observations. In this paper, we define a superstructure of binary static team problems, and situate the decentralised estimation problem from [2] within this superstructure. The problem from [2] demanded that the players coordinate on nonoverlapping subsets of actions for each value of the environmental uncertainty, as part of the estimation effort. Since the environmental uncertainty is observed only partially, such a cost function imposes a _coordination dilemma_ on the decision makers. It is apparent from [2] that richer forms of correlations afforded by quantum entanglement are instrumental in extracting a quantum advantage in the face of this dilemma. In this paper, we ask the converse question: is it the case that problems that do not have such a dilemma also do not benefit from quantum strategies? We find that the answer is _yes_ - problems where the coordination dilemma exists are the _only_ class admitting a quantum advantage within our problem class superstructure.
In all other classes, quantum strategies perform just as well as classical ones, _regardless_ of the parameter values. Thus, decision-theoretically, the coordination dilemma is in some sense fundamental to the appearance of a quantum advantage. This is the main contribution of the present paper. We find it to be a point of great caution that not every problem class admits a quantum advantage. In the second part, we investigate parametric values within this problem class that allow for the quantum advantage, thereby further constraining the instances where quantum strategies are useful. These parametric values capture the intensity of the coordination dilemma and the quality of information of the agents. More details about this are discussed in the second part. Our quantum strategies require that the decision makers share a pair of entangled particles; the creation of such particles and the protection of their state from decoherence are part of an ongoing technological effort. Some highly successful implementations exist, e.g., quantum key distribution [3] has been realised by several distribution networks like the DARPA, SECOQC, SwissQuantum and Tokyo QKD (see also [4] and [5]). These successes notwithstanding, quantum strategies enabled through entanglement are, as of today, an expensive and fragile resource. Our results give a sharp decision-theoretic boundary to ascertaining when such a resource is worth investing in. Though we work with the specific class of binary problems, there are hints in our calculations that our results can be used as ingredients in a larger structural investigation of more general problems. This more general analysis is part of our ongoing research. Non-classical correlations arising from entanglement and their implications for the nature of physical reality have been a subject of the greatest of scientific debates, starting with Einstein [6] and Bohr [7], later Bell [8], and more recently CHSH [9] and Aspect [10]. The Nobel prize in physics in 2022 was awarded for the experimental confirmation of the existence of these very correlations. For stochastic control, the relevant feature is the _geometry_ of nonclassical correlations (which reduce to occupation measures in this setting). The set of classically attainable distributions satisfies what are known as _Bell inequalities_; these can be expressed as faces of the polytope formed by classical distributions. A violation of the Bell inequalities by a physical experiment implies that these inequalities form a separating hyperplane between the distribution attained by the experiment and _all_ classically attainable distributions. A cost function can be loosely thought of as a 'normal' to this hyperplane, which, if aligned appropriately, exhibits a quantum advantage. However, this intuition is loose for two reasons: first, it is not only the cost function but also the probability distribution of the observations and the environmental uncertainty that appears in the cost, and second, the environmental uncertainty is not observed by the players. More importantly, a geometric picture such as this does not allow for much understanding of the underlying decision-theoretic dilemmas that are at play. Our finding relating the coordination dilemma to the quantum advantage is thus also a novel, decision-theoretic insight into the powers of quantum correlations.
Moreover, the finding is rather strong - in the absence of the coordination dilemma, the quantum advantage does not manifest for any values of the parameters, i.e., the parameters in the cost or the probability distribution. This article is organised as follows. In Section II we elaborate the coordination dilemma and develop the problem class superstructure that the rest of the article investigates. In Section III we briefly introduce different strategic classes and define the non-local advantages with respect to our superstructure. We also offer here a fresh control theoretic perspective on some well known attributes of correlations in quantum information theory. Section IV investigates some invariances that our class superstructure enjoys, which compress our elimination procedure in Section V to a few 'generating' problem classes. Ultimately, in Section V we exhaustively scan through our superstructure and isolate the problem classes that admit the quantum advantage. ## II The Problem Class Superstructure and the Coordination Dilemma ### _Notation_ We use \(\mathcal{P}(S)\) to denote the set of probability distributions on the set \(S\). Similarly, \(\mathcal{P}(S|T)\) denotes the set of conditional probability distributions on \(S\) given an element of \(T\). We denote by \(\mathcal{B}(\mathcal{H})\) the set of all complex bounded linear operators on a Hilbert space \(\mathcal{H}\). We employ the following notation for operations among Boolean variables \(a,b\in\{0,1\}\): \(a\cdot b\) denotes the logical AND, \(a\lor b\) denotes the logical OR, \(a\oplus b\) denotes the logical XOR, and \(\sim a\) denotes the negation of \(a\). We denote the conjugate transpose of an operator \(\rho\in\mathcal{B}(\mathcal{H})\) by \(\rho^{\dagger}\). We use \(\mathrm{Tr}(\rho)\) to denote its trace. We write \(\mathbf{I}\) for the identity matrix (operator); its ambient dimension will be clear from the context. Let \(\mathsf{R}\) denote the permutation matrix given by \[\mathsf{R}:=\begin{pmatrix}0&1\\ 1&0\end{pmatrix} \tag{1}\] ### _Team decision problems_ We consider decentralised decision problems with a static information structure and two agents (or decision makers or players) \(A\) and \(B\). The state of nature is described as a tuple \((\xi_{A},\xi_{B},\xi_{W})\) of correlated binary random variables with a known distribution \(\mathbb{P}\), where \(\xi_{A}\in\Xi_{A}\), \(\xi_{B}\in\Xi_{B}\), \(\xi_{W}\in\Xi_{W}\) and \(\Xi_{A}=\Xi_{B}=\Xi_{W}=\{0,1\}\). Players \(A\) and \(B\) observe \(\xi_{A}\) and \(\xi_{B}\) respectively and must choose actions \(u_{A}\in\mathcal{U}_{A}\) and \(u_{B}\in\mathcal{U}_{B}\), respectively, as a function of their observations. The action spaces of \(A\) and \(B\) are the sets \(\mathcal{U}_{A}:=\{u_{A}^{0},u_{A}^{1}\}\) and \(\mathcal{U}_{B}:=\{u_{B}^{0},u_{B}^{1}\}\) respectively; note that \(u_{A}\), \(u_{B}\) (without the superscript) denote generic elements of \(\mathcal{U}_{A}\), \(\mathcal{U}_{B}\), respectively. Occasionally we will need to order the elements of \(\mathcal{U}_{A}\) and \(\mathcal{U}_{B}\), in which case it will be convenient to think of \(\mathcal{U}_{A},\mathcal{U}_{B}\) as two dimensional vectors (say in \(\mathbb{R}^{2}\)) with distinct components each. Based on their actions \(u_{A}\in\mathcal{U}_{A}\), \(u_{B}\in\mathcal{U}_{B}\) and the value of \(\xi_{W}\), the decision makers incur a cost \(\ell(u_{A},u_{B},\xi_{W})\).
The goal of the players is to minimize \(\mathbb{E}[\ell(u_{A},u_{B},\xi_{W})]\), where the expectation is taken with respect to \(u_{A},u_{B}\), \(\xi_{A},\xi_{B}\) and \(\xi_{W}\), via a suitable choice of a policy. A policy is a conditional joint distribution of \(u_{A},u_{B}\) given \(\xi_{A},\xi_{B}\), \(Q(\cdot|\cdot)\in\mathcal{P}(\mathcal{U}|\Xi)\), where \(\mathcal{U}:=\mathcal{U}_{A}\times\mathcal{U}_{B}\) and \(\Xi:=\Xi_{A}\times\Xi_{B}\). In addition to belonging to \(\mathcal{P}(\mathcal{U}|\Xi)\), a policy must also satisfy some constraints, the nature of which forms a central topic in this paper; we defer this discussion to the next section. ### _Coordination dilemma_ The cost function \(\ell\) depends on \(\xi_{W}\), which is not observed by either player, although the players do observe \(\xi_{W}\) partially through \(\xi_{A},\xi_{B}\). The lack of knowledge of \(\xi_{W}\) is a source of a significant dilemma for the players. As an illustration, consider the following cost function \(\ell(u_{A},u_{B},\xi_{W})\) from [2]: \[\begin{array}{|c|c|c|}\hline\xi_{W}=0&u_{B}^{0}&u_{B}^{1}\\ \hline u_{A}^{0}&-1&0\\ \hline u_{A}^{1}&0&-1\\ \hline\end{array}\qquad\begin{array}{|c|c|c|}\hline\xi_{W}=1&u_{B}^{0}&u_{B}^{1}\\ \hline u_{A}^{0}&0&-3/4\\ \hline u_{A}^{1}&-3/4&0\\ \hline\end{array} \tag{2}\] When \(\xi_{W}=0\), it is beneficial for the players to concentrate the mass of \(Q(\cdot|\cdot)\) on the subset \(\{(u_{A}^{0},u_{B}^{0}),(u_{A}^{1},u_{B}^{1})\}\), whereas when \(\xi_{W}=1\), it is beneficial to do so on the complement \(\{(u_{A}^{1},u_{B}^{0}),(u_{A}^{0},u_{B}^{1})\}\). We refer to this situation as the _coordination dilemma_. The dilemma is about whether they should match the index of their actions, i.e. 'coordinate', or mismatch these indices, i.e. 'anti-coordinate'. In a hypothetical centralized setting where both players had access to both \(\xi_{A}\) and \(\xi_{B}\), players could agree on the 'best' estimate of \(\xi_{W}\) and choose to either coordinate or anti-coordinate. But the decentralized nature of the problem implies that players could have differing views about the value of \(\xi_{W}\), thereby leading to the above dilemma. It is intuitive that if players could correlate their actions in such a way that reflects an optimal midway compromise between coordination and anti-coordination, they could potentially achieve a better cost on average than either coordination or anti-coordination. Unfortunately, with classical strategies, the possibilities for creating such correlation are limited. In fact, even with the optimal classical strategies, players cannot do better than they could _without_ randomization. However, remarkably, we showed in [2] that players can do better through _quantum randomization_ obtained by correlating their actions through _entanglement_. For the above problem, we demonstrated a physically realizable quantum strategy that strictly outperforms all classical strategies, thereby showing the existence of a quantum advantage in decentralized control. ### _Problem classes_ A decision problem in our setting is specified by the prior distribution \(\mathbb{P}(\xi_{A},\xi_{B},\xi_{W})\) on the states of nature, the action spaces \(\mathcal{U}_{A}\), \(\mathcal{U}_{B}\) and the cost function \(\ell\). We assume that \(\ell\) satisfies \[\ell(u_{A},u_{B},\xi_{W}=0)\in\{0,-1\},\;\ell(u_{A},u_{B},\xi_{W}=1)\in\{0,-\chi\}, \tag{3}\] where \(\chi>0\) is a parameter.
Such 'binary' costs capture settings where a fixed cost is incurred based on whether an underlying event (such as the successful transmission of a packet) occurs or does not, given the background state \(\xi_{W}\). The parameter \(\chi\) captures the degree to which the costs differ depending on \(\xi_{W}\). With the specification in (3), we now construct a superstructure of subclasses. **Definition II.1**: _Let \(M,N\in\{0,-1\}^{2\times 2}\) be matrices with each entry in \(\{0,-1\}\). A problem class \(\mathcal{C}(M,N)\) specified by the tuple \((M,N)\) is the set_ \[\mathcal{C}(M,N):=\{D\,|\,D=(M,N,\mathbb{P},\mathcal{U}_{A},\mathcal{U}_{B},\chi)\;\;\text{where}\;\;\mathbb{P}\in\mathcal{P}(\Xi),\,|\mathcal{U}_{A}|=|\mathcal{U}_{B}|=2\;\;\text{and}\;\;\chi\in[0,\infty)\} \tag{4}\] _Denote the set of all problem classes in our superstructure by \(\mathscr{C}\)._ An element \(D=(M,N,\mathbb{P},\mathcal{U}_{A},\mathcal{U}_{B},\chi)\in\mathcal{C}(M,N)\) is called a problem instance. The cost function \(\ell:\mathcal{U}_{A}\times\mathcal{U}_{B}\times\Xi_{W}\to\{0,-1,-\chi\}\) of this instance is given by \[\ell(u_{A}^{i},u_{B}^{j},0)=[M]_{i+1\;j+1},\quad\ell(u_{A}^{i},u_{B}^{j},1)=\chi[N]_{i+1\;j+1} \tag{5}\] **Definition II.2** (CAC class): _The problem class \(\mathcal{C}(M,N)\) with \(M,N\) as_ \[M=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\quad\text{and}\quad N=\begin{pmatrix}0&-1\\ -1&0\end{pmatrix}, \tag{6}\] _is referred to as the coordinate-anti-coordinate class, or in short, the CAC class. \((M,N)\) satisfying (6) are said to be in CAC form._ Notice that the cost described in equation (2) belongs to the CAC class, with \((M,N)\) as in (6) and \(\chi=\frac{3}{4}\). **Definition II.3** (\(\frac{1}{2}\)-CAC class): _The problem class \(\mathcal{C}(M,N)\) with \(M,N\) as_ \[M=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix}\quad\text{and}\quad N=\begin{pmatrix}0&-1\\ -1&0\end{pmatrix}, \tag{7}\] _is referred to as the half coordinate-anti-coordinate class, or in short, the \(\frac{1}{2}\)-CAC class. \((M,N)\) satisfying (7) are said to be in \(\frac{1}{2}\)-CAC form._ Each of the four entries in the matrices \(M\) and \(N\) is allowed to take binary values in \(\{0,-1\}\) for instances in our superstructure. We thus have \(2^{4}\times 2^{4}=256\) possible problem classes in our superstructure. ### _Decentralised estimation: a motivating example_ We motivate our investigation through a concrete example of a decentralised estimation problem we introduced in [2]. Suppose that the agents \(A\) and \(B\) collaborate to produce an estimate \(f:\mathcal{U}_{A}\times\mathcal{U}_{B}\to\Xi_{W}\) of \(\xi_{W}\), given their local information. The choice of such an \(f\) determines how their actions collate to produce the desired estimate, and thereby shapes the cost function. With \(\chi(0):=1\) and \(\chi(1):=\chi\), suppose that the cost is given by \[\ell(u_{A},u_{B},\xi_{W})=-\chi(\xi_{W})\delta(\xi_{W},f(u_{A},u_{B})). \tag{8}\] We now find that the choice of \(f\), dictated by some estimation 'mechanism', assigns this estimation problem to a problem class within our superstructure. For instance, if \(f(u_{A}^{i},u_{B}^{j})=i\oplus j\), we find that the problem is an instance of the CAC class with \(M\) and \(N\) as given by (6). On the other hand, if \(f(u_{A}^{i},u_{B}^{j})=i\,\vee j\), then the problem belongs to another class \(\mathcal{C}(M,N)\) with \[M=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix}\quad\text{and}\quad N=\begin{pmatrix}0&-1\\ -1&-1\end{pmatrix}.
\tag{9}\] In our previous article [2], we found that instances of the estimation problem with \(f(u_{A}^{i},u_{B}^{j})=i\oplus j\) admit a quantum advantage. In particular, we find that quantum strategies enable the two agents to effectively collaborate during game-play in ways that are out of reach when such collaborations are restricted to the classical realm of passive common randomness. It is therefore of interest to examine which aggregations \(f\) induce a cost structure that admits such a quantum advantage. Our investigation through this two-part series answers this query in a reasonably detailed manner. In this particular context of estimation, our analysis reveals that the cost structure (9) induced by the aggregate \(f(u_{A}^{i},u_{B}^{j})=i\,\vee j\) does not admit a quantum advantage, while the cost structure (6) induced by \(f(u_{A}^{i},u_{B}^{j})=i\oplus j\) does. ## III Decision Strategies and Non-Local Advantages ### _Decision strategies_ We study decision problems in the above superstructure in the space of stochastic policies that specify a probability distribution on \(\mathcal{U}\) given the information of both players; in the classical Markov decision processes setting, these reduce to what are known as occupation measures [11] and have been employed in other information structures as well [12]. We refer the reader to our earlier work [2] for more details. Under this framework, any decision strategy is described as a joint conditional probability distribution \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\) that is required to satisfy a certain set of constraints. Based on these constraints we have the classes \(\mathcal{L}\), \(\mathcal{Q}\) and \(\mathcal{NS}\) defined below. In each case the expected cost of a problem instance \(D\) under a strategy \(Q\) is given by \[J(Q;D)=\sum_{\xi_{A},\xi_{B},\xi_{W}}\sum_{u_{A},u_{B}}\mathbb{P}(\xi_{A},\xi_{B},\xi_{W})\times\ell(u_{A},u_{B},\xi_{W})Q(u_{A},u_{B}|\xi_{A},\xi_{B}). \tag{10}\] Any distribution \(Q\), by virtue of belonging to \(\mathcal{P}(\mathcal{U}|\Xi)\), regardless of the strategic class under consideration, satisfies the positivity and normalisation constraints for probability distributions, \[Q(u|\xi)\geq 0\ \forall u,\xi,\quad\sum_{u}Q(u|\xi)=1\ \forall\ \xi. \tag{11}\] We investigate across three different strategic classes, each described by further restrictions on \(Q\). #### III-A1 Local distributions The set of _local distributions_ (\(\mathcal{D}\)) is the set of distributions \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\) that correspond to locally randomized strategies. In a locally randomized strategy (also called a _behavioural strategy_ in game theory), decision maker \(i\) assigns a conditional probability distribution \(Q_{i}\in\mathcal{P}(\mathcal{U}_{i}|\Xi_{i})\) on his actions given his information. Then \(Q\) assumes the form \[Q(u_{A},u_{B}|\xi_{A},\xi_{B})=Q_{A}(u_{A}|\xi_{A})Q_{B}(u_{B}|\xi_{B}) \tag{12}\] An important subset of local distributions is the set of deterministic strategies, \(\Pi\), which is the set of all strategies \(Q\in\mathcal{D}\) expressible as \[Q(u|\xi)=\delta(u_{A},\gamma_{A}(\xi_{A}))\delta(u_{B},\gamma_{B}(\xi_{B}))\] where \(\gamma_{i}:\Xi_{i}\to\mathcal{U}_{i}\) for each \(i=A,B\). #### III-A2 Local polytope (\(\mathcal{L}\)) The local polytope \(\mathcal{L}\) is the set of all classical strategies implementable through an arbitrary amount of passive common randomness.
It is the set of all \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\) expressible as \[Q=\sum_{\omega\in\Omega}\Phi(\omega)\prod_{i}Q_{i}(u_{i}|\xi_{i})\] for a finite set \(\Omega\), and distributions \(\Phi\in\mathcal{P}(\Omega)\) and \(Q_{i}\in\mathcal{P}(\mathcal{U}_{i}|\Xi_{i})\) for each \(i\in\{A,B\}\). By definition, one can note that \(\mathcal{L}=\mathrm{conv}(\mathcal{D})\). In fact, \(\mathcal{L}=\mathrm{conv}(\Pi)\). We refer the reader to [2, 13] for more details. #### III-A3 Quantum elliptope \(\mathcal{Q}\) The _quantum elliptope_, denoted \(\mathcal{Q}\), is the set of distributions \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\) generated by quantum strategies. We mathematically specify a quantum strategy as a tuple \(Q=(\mathcal{H}_{A},\mathcal{H}_{B},\rho_{AB},\{P^{A}_{u_{A}}(\xi_{A})\},\{P^{B}_{u_{B}}(\xi_{B})\})\) where **(i)** \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) are finite dimensional Hilbert spaces. **(ii)** \(\rho_{AB}\in\mathcal{B}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\) is a density matrix, i.e. \(\rho_{AB}\succeq 0\) and \(\mathrm{Tr}(\rho_{AB})=1\). **(iii)** For each \(i\in\{A,B\}\), \(P^{i}_{u_{i}}(\xi_{i})\in\mathcal{B}(\mathcal{H}_{i})\) are projection operators that obey \(P^{i}_{u_{i}}(\xi_{i})^{2}=P^{i}_{u_{i}}(\xi_{i})\) and \(\sum_{u_{i}}P^{i}_{u_{i}}(\xi_{i})=\mathbf{I}_{i}\) for each \(\xi_{i}\in\Xi_{i}\), where \(\mathbf{I}_{i}\) is the identity operator on \(\mathcal{H}_{i}\). The described quantum strategy \(Q\) renders the following occupation measures: \[Q(u_{A},u_{B}|\xi_{A},\xi_{B})=\mathrm{Tr}\left(P^{(A)}_{u_{A}}(\xi_{A})\otimes P^{(B)}_{u_{B}}(\xi_{B})\rho_{AB}\right). \tag{13}\] Thus \(\mathcal{Q}\) is the set of all distributions \(Q\) that satisfy (13) for some choice of the tuple \((\mathcal{H}_{A},\mathcal{H}_{B},\rho_{AB},\{P^{(A)}_{u_{A}}(\xi_{A})\}_{u_{A},\xi_{A}}\), \(\{P^{(B)}_{u_{B}}(\xi_{B})\}_{u_{B},\xi_{B}})\). A detailed discussion on the physical implementation of quantum strategies during gameplay can be found in [2]. In any case, the expected cost of a problem \(D\) under the strategy \(Q\) is given by (from (13) and (10)) \[J(Q;D)=\sum_{\xi_{A},\xi_{B},\xi_{W}}\mathbb{P}(\xi_{A},\xi_{B},\xi_{W})\sum_{u_{A},u_{B}}\ell(u_{A},u_{B},\xi_{W})\,\mathrm{Tr}\left(\rho_{AB}\,P^{(A)}_{u_{A}}(\xi_{A})\otimes P^{(B)}_{u_{B}}(\xi_{B})\right) \tag{14}\] #### III-A4 No-signalling polytope The set of _no-signalling distributions_, denoted \(\mathcal{NS}\), is the set of distributions \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\) that satisfy the following _no-signalling_ constraints, which prohibit communication between the two agents [14, 13], as demanded by the static nature of the information structure. We have, \(\forall u_{A},\xi_{A},\xi_{B},\xi^{\prime}_{B}\), \[\sum_{u_{B}}Q(u_{A},u_{B}|\xi_{A},\xi_{B})=\sum_{u^{\prime}_{B}}Q(u_{A},u^{\prime}_{B}|\xi_{A},\xi^{\prime}_{B}); \tag{15}\] and \(\forall u_{B},\xi_{A},\xi^{\prime}_{A},\xi_{B}\), \[\sum_{u_{A}}Q(u_{A},u_{B}|\xi_{A},\xi_{B})=\sum_{u^{\prime}_{A}}Q(u^{\prime}_{A},u_{B}|\xi^{\prime}_{A},\xi_{B}); \tag{16}\] This asserts that the choice of conditional distribution of one agent given his information does not affect the outcome distribution of the other agent, and thus the joint distribution respects the prohibition of _faster than light communication_. Since this set of distributions is characterised by a finite number of linear equalities and inequalities, \(\mathcal{NS}\) is a polytope. #### III-A5 Centralised polytope We call the whole set \(\mathcal{P}(\mathcal{U}|\Xi)\) of conditional distributions the _centralised polytope_.
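To make these objects concrete, the following is a minimal numerical sketch (our illustration, not part of the paper or its supplementary material): it builds the occupation measure (13) for an arbitrary, hypothetical choice of entangled state and projective measurements, and checks that the resulting \(Q\) satisfies the no-signalling constraints (15)-(16).

```python
import numpy as np

def proj(theta):
    """Rank-one projector onto cos(theta)|0> + sin(theta)|1>."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # a maximally entangled state
rho = np.outer(phi, phi)                           # density matrix rho_AB

angles_A = {0: 0.0, 1: np.pi / 4}        # hypothetical measurement angle per xi_A
angles_B = {0: np.pi / 8, 1: -np.pi / 8} # ... and per xi_B

def Q(uA, uB, xiA, xiB):
    PA = proj(angles_A[xiA]); PA = PA if uA == 0 else np.eye(2) - PA
    PB = proj(angles_B[xiB]); PB = PB if uB == 0 else np.eye(2) - PB
    return float(np.trace(np.kron(PA, PB) @ rho))  # eq. (13)

# A's marginal must not depend on xi_B, and symmetrically for B.
for uA in (0, 1):
    for xiA in (0, 1):
        mB0 = sum(Q(uA, uB, xiA, 0) for uB in (0, 1))
        mB1 = sum(Q(uA, uB, xiA, 1) for uB in (0, 1))
        assert abs(mB0 - mB1) < 1e-12              # eq. (15)
for uB in (0, 1):
    for xiB in (0, 1):
        mA0 = sum(Q(uA, uB, 0, xiB) for uA in (0, 1))
        mA1 = sum(Q(uA, uB, 1, xiB) for uA in (0, 1))
        assert abs(mA0 - mA1) < 1e-12              # eq. (16)
print("Q defined by (13) satisfies the no-signalling constraints")
```

Since local projective measurements on a shared state can never signal, the checks pass for any choice of angles; this is exactly why \(\mathcal{Q}\subset\mathcal{NS}\) below.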
### _Advantages_ The following proposition shows that the quantum elliptope includes all local distributions. **Proposition III.1**: _For every \(\pi_{\gamma}\in\Pi\) specified by some deterministic strategy \(\gamma\), there exists a \(Q\in\mathcal{Q}\) such that \(\pi_{\gamma}\equiv Q\). Thus,_ \[\mathcal{L}\subset\mathcal{Q}\subset\mathcal{NS}\subset\mathcal{P}(\mathcal{U}|\Xi). \tag{17}\] _Choose arbitrary \(\mathcal{H}_{A}\), \(\mathcal{H}_{B}\) and a density matrix \(\rho_{AB}\in\mathcal{B}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\). Take \(P^{(i)}_{u_{i}}(\xi_{i})=\delta_{u_{i}\gamma_{i}(\xi_{i})}\mathbf{I}\) so that_ \[Q(u_{A},u_{B}|\xi_{A},\xi_{B})=\mathrm{Tr}(P^{(A)}_{u_{A}}(\xi_{A})\otimes P^{(B)}_{u_{B}}(\xi_{B})\rho_{AB})=\delta_{u_{A}\,\gamma_{A}(\xi_{A})}\,\delta_{u_{B}\,\gamma_{B}(\xi_{B})}=\pi_{\gamma}(u_{A},u_{B}|\xi_{A},\xi_{B}), \tag{18}\] _for all \(u_{A},u_{B},\xi_{A},\xi_{B}\). The convexity of \(\mathcal{Q}\) [2] completes the proof._ For \(S=\mathcal{L},\Pi,\mathcal{Q},\mathcal{NS}\) and \(\mathcal{P}(\mathcal{U}|\Xi)\), define \[J^{*}_{S}(D):=\inf_{Q\in S}J(Q;D) \tag{19}\] and denote the centralised optimum \(J^{*}_{\mathcal{P}(\mathcal{U}|\Xi)}\) by \(J^{**}(D)\). Note that since \(\mathcal{L}=\mathrm{conv}(\Pi)\) and \(J(Q;D)\) is linear in \(Q\), \(J^{*}_{\Pi}(D)=J^{*}_{\mathcal{L}}(D)\). Then from (17) the following relationship holds between the respective infima: \[J^{*}_{\Pi}(D)=J^{*}_{\mathcal{L}}(D)\geq J^{*}_{\mathcal{Q}}(D)\geq J^{*}_{\mathcal{NS}}(D)\geq J^{**}(D). \tag{20}\] **Definition III.1**: _We say that the problem class \(\mathcal{C}(M,N)\)_ 1. _admits a quantum advantage if_ \(\exists D\in\mathcal{C}(M,N):J^{*}_{\mathcal{L}}(D)>J^{*}_{\mathcal{Q}}(D)\)_._ 2. _admits a no-signalling advantage if_ \(\exists D\in\mathcal{C}(M,N):J^{*}_{\mathcal{L}}(D)>J^{*}_{\mathcal{NS}}(D)\)_._ 3. _admits a centralisation advantage if_ \(\exists D\in\mathcal{C}(M,N):J^{*}_{\mathcal{L}}(D)>J^{**}(D)\)_._ For any instance \(D\), we have from (20) that if \(J^{*}_{\mathcal{L}}(D)=J^{*}_{\mathcal{NS}}(D)\) then \(J^{*}_{\mathcal{L}}(D)=J^{*}_{\mathcal{Q}}(D)=J^{*}_{\mathcal{NS}}(D)\). Hence if a problem class \(\mathcal{C}(M,N)\) does not admit a no-signalling advantage then it does not admit a quantum advantage. Similarly, \(J^{\star}_{\mathcal{L}}(D)=J^{\star\star}(D)\) forces \(J^{\star}_{\mathcal{L}}(D)=J^{\star}_{\mathcal{Q}}(D)=J^{\star}_{\mathcal{NS}}(D)=J^{\star\star}(D)\), and thus if a problem class \(\mathcal{C}(M,N)\) does not admit a centralisation advantage then it does not admit a no-signalling or a quantum advantage. ### _No-signalling polytope and Bell inequalities_ At some level our work is directly in correspondence with the state of the art in the geometry of quantum correlations. The description of the quantum elliptope remains largely abstract to date, though it is known to be convex and non-polytopic [15, 16]. A convergent hierarchy of semi-definite programs is known that characterizes the set [17]. More is known about the no-signalling polytope \(\mathcal{NS}\) that contains the quantum elliptope [18]. In our case, since \(|\mathcal{U}_{i}|=|\Xi_{i}|=2\) for each \(i\), \(\mathcal{NS}\) has 24 vertices, 16 of which are local and correspond to the vertices of \(\mathcal{L}\), which constitute the set of deterministic strategies, \(\Pi\). The following proposition concisely enumerates these vertices. We refer the reader to [19] for further discussion. **Proposition III.2**: 1.
_The set_ \(\Pi\) _of deterministic strategies is given by_ \[\pi^{\alpha\gamma\beta\delta}(u^{i}_{A},u^{j}_{B}|\xi_{A},\xi_{B})=\begin{cases}1&i=\alpha\xi_{A}\oplus\beta,\;j=\gamma\xi_{B}\oplus\delta\\ 0&\text{otherwise,}\end{cases}\] (21) _where_ \(\alpha,\beta,\gamma,\delta\in\{0,1\}\)_._ 2. _The 8 non-local vertices of_ \(\mathcal{NS}\) _are given by_ \[Q^{\alpha\beta\delta}(u^{i}_{A},u^{j}_{B}|\xi_{A},\xi_{B})=\begin{cases}1/2&i\oplus j=\xi_{A}\cdot\xi_{B}\oplus\alpha\xi_{A}\oplus\beta\xi_{B}\oplus\delta\\ 0&\text{otherwise,}\end{cases}\] (22) _where_ \(\alpha,\beta,\delta\in\{0,1\}\)_._ In quantum information theory, the non-local nature of quantum correlations is principally captured by their violation of a sum of experimentally testable correlations, known as the _Bell inequalities_ [8]. Geometrically, non-locality implies that the first inclusion in (17) is strict; thus Bell inequalities linearly separate \(\mathcal{L}\) from some point in \(\mathcal{Q}\). One of the faces of the local polytope which is not a face of the no-signalling polytope is described by the popular CHSH inequality [9, 20], which is a generalization of Bell's original inequality. We illustrate this geometry of the CHSH inequality in Figure 1. We direct the reader to [20] for a deeper peek into the relative geometry of \(\mathcal{L}\), \(\mathcal{Q}\) and \(\mathcal{NS}\). ## IV Equivalences in the Class Superstructure Recall that our superstructure comprises 256 different problem classes, each corresponding to a binary matrix tuple \((M,N)\). In this section we establish a set of equivalences between the existence of a quantum advantage across problem classes. The proofs of these propositions are relegated to Appendix A. To begin, Proposition IV.1 asserts such an equivalence between a problem class \(\mathcal{C}(M^{\top},N^{\top})\) and \(\mathcal{C}(M,N)\), for an arbitrary tuple \((M,N)\). This is intuitive since transposition of \(M,N\) in fact corresponds to exchanging the two agents in the problem instance. **Proposition IV.1**: _(Transposition equivalence (exchange of agents))_ _The following are equivalent:_ 1. \(\mathcal{C}(M,N)\) _admits a quantum advantage._ 2. \(\mathcal{C}(M^{\top},N^{\top})\) _admits a quantum advantage._ Recall the matrix \(\mathsf{R}\) from (1). We show in Proposition IV.2 that the existence of a quantum advantage in \(\mathcal{C}(\mathsf{R}M,\mathsf{R}N)\), and likewise in \(\mathcal{C}(M\mathsf{R},N\mathsf{R})\), is equivalent to that in \(\mathcal{C}(M,N)\). The tuples \((\mathsf{R}M,\mathsf{R}N)\) and \((M\mathsf{R},N\mathsf{R})\) correspond to an exchange of rows and columns in \((M,N)\), respectively. An exchange of rows is tantamount to relabelling the actions of player \(A\) as \((u^{0}_{A},u^{1}_{A})\mapsto(u^{1}_{A},u^{0}_{A})\), and an exchange of columns corresponds to a similar relabelling for player \(B\). **Proposition IV.2**: _(Permutation equivalence (relabelling of actions)) Let \(\mathsf{R}\) be as defined in (1). Then the following are equivalent:_ 1. \(\mathcal{C}(\mathsf{R}M,\mathsf{R}N)\) _admits a quantum advantage._ 2. \(\mathcal{C}(M\mathsf{R},N\mathsf{R})\) _admits a quantum advantage._ 3. \(\mathcal{C}(M,N)\) _admits a quantum advantage._ Finally we have Proposition IV.3 asserting the equivalence of \(\mathcal{C}(N,M)\) and \(\mathcal{C}(M,N)\). This corresponds to relabelling the values of \(\xi_{W}\).
**Proposition IV.3**: _(Exchange equivalence (relabelling of \(\xi_{W}\))) The following are equivalent:_ 1. \(\mathcal{C}(M,N)\) _admits a quantum advantage._ 2. \(\mathcal{C}(N,M)\) _admits a quantum advantage._ ## V Main result Let \(V:=(M,N)\) and define actions \(\mathsf{T},\mathsf{R},\mathsf{R}^{\prime},\mathsf{E}\) so that \(\mathsf{T}V:=(M^{\top},N^{\top})\), \(\mathsf{R}V:=(\mathsf{R}M,\mathsf{R}N)\), \(\mathsf{R}^{\prime}V:=(M\mathsf{R},N\mathsf{R})\) and \(\mathsf{E}V:=(N,M)\). Fig. 1: Geometry of \(\mathcal{NS}\), \(\mathcal{L}\) and \(\mathcal{Q}\) and violation of a CHSH (Bell) inequality. Let \[\Omega:=\{\mathbf{I},\mathsf{T},\mathsf{R},\mathsf{R}^{\prime},\mathsf{E}\}, \tag{23}\] be the set of group actions, with \(\mathbf{I}\) being the identity, and denote by \((V;\Omega)\) the matrix-pairs generated by an arbitrary sequence of group actions on \(V\) (technically, the _orbit_ of \(V\) under \(\Omega\)). Then, using Propositions IV.1-IV.3, if \(\mathcal{C}(V)\) does not admit a quantum advantage, then \(\mathcal{C}(X)\) does not admit a quantum advantage for any \(X\in(V;\Omega)\). We henceforth use the notation \[\mathcal{C}((V;\Omega)):=\{\mathcal{C}(X):X\in(V;\Omega)\},\] and refer to this as the _orbit_ of the class \(V\). Following is the main theorem of this paper. **Theorem V.1**: _Consider the problem class superstructure defined in Definition II.1. A problem class \(\mathcal{C}(A,B)\) in this superstructure admits a quantum advantage if and only if \((A,B)\in((M,N);\Omega)\) where \((M,N)\) are either in the CAC form or in the \(\frac{1}{2}\)-CAC form._ Thus a problem class admits a quantum advantage if and only if it lies in the orbit of the CAC class or the \(\frac{1}{2}\)-CAC class. We now proceed to prove this claim. In our earlier paper [2], we showed that the CAC class admits a quantum advantage. In Section VI we show that the \(\frac{1}{2}\)-CAC class admits a quantum advantage. In the sections below we systematically eliminate all classes not in the orbit of the CAC and \(\frac{1}{2}\)-CAC classes to show Theorem V.1. **Definition V.1**: _We call a problem class \(\mathcal{C}(M,N)\in\mathscr{C}\) an \(m\)-\(n\) class if the number of non-zero entries in \(M\) is \(m\in\{0,1,2,3,4\}\) and the number of non-zero entries in \(N\) is \(n\in\{0,1,2,3,4\}\), and call \(V=(M,N)\) an \(m\)-\(n\) tuple. Let \(\mathcal{C}_{mn}\subset\mathscr{C}\) denote the set of all \(m\)-\(n\) problem classes._ Notice that \(|\mathcal{C}_{mn}|={}^{4}C_{m}\,^{4}C_{n}\) and \(\{\mathcal{C}_{mn}\}_{m,n}\) defines a partition on \(\mathscr{C}\). For \((M,N)\) in the CAC form, \(\mathcal{C}((M,N);\Omega)\subset\mathcal{C}_{22}\). Similarly, for \((M,N)\) in the \(\frac{1}{2}\)-CAC form, \(\mathcal{C}((M,N);\Omega)\subset\mathcal{C}_{12}\cup\mathcal{C}_{21}\). To proceed with our systematic elimination, we first eliminate \(\mathcal{C}_{mn}\) for all pairs \((m,n)\) with \(m+n\geq 5\) or \(\min(m,n)=0\) through a pigeonhole-principle-based argument; this is done in Section V-A. In the subsequent sections, we eliminate the remaining classes, in particular those in \(\mathcal{C}_{22}\), \(\mathcal{C}_{12}\) and \(\mathcal{C}_{21}\) that do not belong in the orbit of the CAC or the \(\frac{1}{2}\)-CAC class.
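The orbit computations behind this elimination are mechanical. The following sketch (our illustration, independent of the authors' supplementary code) applies the actions in \(\Omega\) by closure and reports the orbit sizes of the CAC and \(\frac{1}{2}\)-CAC pairs.

```python
import numpy as np

R = np.array([[0, 1], [1, 0]])       # the permutation matrix of eq. (1)

def actions(V):
    M, N = V
    yield (M.T, N.T)      # T : exchange of agents
    yield (R @ M, R @ N)  # R : relabel A's actions (swap rows)
    yield (M @ R, N @ R)  # R': relabel B's actions (swap columns)
    yield (N, M)          # E : relabel xi_W

def orbit(V):
    """Orbit (V; Omega) computed by closure under the four actions."""
    key = lambda W: (tuple(map(tuple, W[0])), tuple(map(tuple, W[1])))
    seen, frontier = {key(V)}, [V]
    while frontier:
        for X in actions(frontier.pop()):
            if key(X) not in seen:
                seen.add(key(X)); frontier.append(X)
    return seen

M_cac = np.array([[-1, 0], [0, -1]]); N_cac = np.array([[0, -1], [-1, 0]])
M_half = np.array([[-1, 0], [0, 0]])
print("CAC orbit size:    ", len(orbit((M_cac, N_cac))))   # pair of eq. (6)
print("1/2-CAC orbit size:", len(orbit((M_half, N_cac))))  # pair of eq. (7)
```

Since every action in \(\Omega\) is an involution, the closure terminates quickly; the printed orbit sizes bound how many classes each representative pair can eliminate at once.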
### _Problem classes with no centralisation advantage_ Since \(\ell\) takes only binary values, if there exists a pair of actions \(u^{*}_{A},u^{*}_{B}\) such that \(\ell(u^{*}_{A},u^{*}_{B},\xi_{W})\) is non-zero for both values of \(\xi_{W}\), then the strategy \[\bar{Q}(u_{A},u_{B}|\xi_{A},\xi_{B})\equiv\delta_{u_{A}=u^{*}_{A},u_{B}=u^{*}_{B}} \tag{24}\] which lies in \(\Pi\), is optimal over \(\mathcal{P}(\mathcal{U}|\Xi)\). In other words, the problem admits no centralisation advantage. The following definition and the lemma that succeeds it formalise this line of argument. **Definition V.2**: _We call a pair \(V=(M,N)\) overlapping if \(\exists i,j\) such that \([M]_{ij}=[N]_{ij}=-1\). Denote the set of all classes \(\mathcal{C}(V)\) where \(V\) is overlapping by \(\mathcal{C}^{o}\),_ \[\mathcal{C}^{o}:=\{\mathcal{C}(M,N)|\exists i,j:[M]_{ij}=[N]_{ij}=-1\}. \tag{25}\] **Lemma V.2**: _If \(\mathcal{C}(M,N)\in\mathcal{C}^{o}\), then \(\mathcal{C}(M,N)\) does not admit a centralisation advantage, and hence does not admit a quantum advantage._ Let \(\mathcal{C}(M,N)\in\mathcal{C}^{o}\) and let \(D\in\mathcal{C}(M,N)\). By (25), there exist \(u^{*}_{A}\in\mathcal{U}_{A},u^{*}_{B}\in\mathcal{U}_{B}\) such that \[\ell(u^{*}_{A},u^{*}_{B},\xi_{W})\leq\ell(u_{A},u_{B},\xi_{W})\quad\forall u_{A},u_{B},\xi_{W}. \tag{26}\] Thus for any \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\), \[J(Q;D)\geq\sum_{\xi_{A},\xi_{B}}\mathbb{P}(\xi_{A},\xi_{B})\sum_{\xi_{W}}\mathbb{P}(\xi_{W}|\xi_{A},\xi_{B})\min_{u_{A},u_{B}}\ell(u_{A},u_{B},\xi_{W})=J(\bar{Q};D),\] where \(\bar{Q}\) is as defined in (24). Since \(\bar{Q}\in\Pi\), we get \(J^{**}(D)=J^{*}_{\mathcal{L}}(D)\) and the lemma is established. Although (25) gives a tractable definition of \(\mathcal{C}^{o}\), it is not straightforward to exhaustively enumerate the subclasses in \(\mathcal{C}^{o}\). Hence we will use Lemma V.2 as an enabling lemma to eliminate some subclasses \(\mathcal{C}_{mn}\subset\mathscr{C}\). Following are two results that accomplish this. **Corollary V.3**: _Let either \(m=0\) or \(n=0\) and let \(\mathcal{C}(M,N)\in\mathcal{C}_{mn}\). Then \(\mathcal{C}(M,N)\) does not admit a quantum advantage._ If one of the matrices \(M\) and \(N\) is null, then there exist \(u^{*}_{A},u^{*}_{B}\) such that (26) holds. The rest follows as in Lemma V.2. **Corollary V.4**: _Let \(\mathcal{C}(M,N)\) be such that \(m+n\geq 5\). Then \(\mathcal{C}(M,N)\) does not admit a quantum advantage._ If \(m+n\geq 5\), then by the pigeonhole principle, \(\mathcal{C}(M,N)\in\mathcal{C}^{o}\). ### _Elimination of other problem classes_ Corollaries V.3 and V.4 help eliminate the possibility of a quantum advantage for all \(m\)-\(n\) classes where \(m+n\geq 5\) or \(\min(m,n)=0\). Thus, out of the \(256\) classes in \(\mathscr{C}\), we have eliminated \(\sum_{m+n\geq 5}{}^{4}C_{m}\,{}^{4}C_{n}+\sum_{\min(m,n)=0}{}^{4}C_{m}\,{}^{4}C_{n}=93+31=124\) classes. We now scan through the remaining elements in \(\mathscr{C}\), namely, \(\mathcal{C}_{11},\mathcal{C}_{12},\mathcal{C}_{21},\mathcal{C}_{22},\mathcal{C}_{13},\mathcal{C}_{31}\). For \(i\in\{1,2\}\), let \(-i\) denote the element in \(\{1,2\}\setminus\{i\}\). We call \(V=(M,N)\) and the class \(\mathcal{C}(V)\) achiral if \(V\) is non-overlapping and \(\exists i,j\) such that \([M]_{ij}=[N]_{-i\,-j}=-1\). We call \(V\) and the class \(\mathcal{C}(V)\) chiral if \(V\) is non-overlapping and not achiral.
**Lemma V.5**: _1) \(V\) is overlapping if and only if \(V^{\prime}\) is overlapping for all \(V^{\prime}\in(V;\Omega)\). 2) \(V\) is achiral if and only if \(V^{\prime}\) is achiral for all \(V^{\prime}\in(V;\Omega)\)._ (1) It is easy to see by inspection that all actions in \(\Omega\) map an overlapping pair \((M,N)\) to another overlapping pair. Moreover, since \(M,N\) are \(2\times 2\) matrices, all actions \(\mathsf{R},\mathsf{R}^{\prime},\mathsf{T},\mathsf{E}\) are involutions, i.e., when applied twice, they are equivalent to \(\mathbf{I}\). In other words, if \(V^{\prime}\in(V;\Omega)\), then by a suitable application of actions, one can map \(V^{\prime}\) back to \(V\), whereby if \(V^{\prime}\) is overlapping, then so must be \(V\). (2) This part follows in a similar manner as (1). Suppose that \(V\) is achiral. Then, owing to part (1), every \(V^{\prime}\) in the orbit \((V;\Omega)\) is non-overlapping. We will show that the action \(\mathsf{R}\) preserves achirality of \(V\); the other actions can be handled similarly. Let \(i,j\) be such that \([M]_{ij}=[N]_{-i\,-j}=-1\). Then \([\mathsf{R}M]_{-i\,j}=[M]_{ij}=-1\) and \([\mathsf{R}N]_{i\,-j}=[N]_{-i\,-j}=-1\), implying that \(\mathsf{R}V\) is achiral. Thus, the orbit \((V;\Omega)\) is achiral. Again, using that the actions in \(\Omega\) are involutions, we get that if any \(V^{\prime}\in(V;\Omega)\) is achiral, then so is \(V\). Our elimination procedure for \(\mathcal{C}_{mn}\) can be described as follows. We define \(\mathcal{C}^{o}_{mn}=\mathcal{C}_{mn}\cap\mathcal{C}^{o}\) as the collection of all overlapping \(m\)-\(n\) classes. Observe that \[|\mathcal{C}^{o}_{mn}|={}^{4}C_{m}\times\Big(\sum_{k=1}^{n}{}^{m}C_{k}\,{}^{4-m}C_{n-k}\Big), \tag{27}\] since we have \({}^{4}C_{m}\) choices for the '\(-1\)'s in \(M\), following which we have \({}^{m}C_{k}\) choices for \(k\) overlapping '\(-1\)'s in \(N\) and \({}^{4-m}C_{n-k}\) choices for the remaining \((n-k)\) non-overlapping '\(-1\)'s. We then explicitly specify a chiral \(V_{c}=(M_{c},N_{c})\) and an achiral \(V_{a}=(M_{a},N_{a})\) and define \(\mathcal{C}^{c}_{mn}:=\mathcal{C}((V_{c};\Omega))\cap\mathcal{C}_{mn}\), \(\mathcal{C}^{a}_{mn}:=\mathcal{C}((V_{a};\Omega))\cap\mathcal{C}_{mn}\). Following Lemma V.5, such a specification ensures that all classes in \(\mathcal{C}^{c}_{mn}\) are chiral and all those in \(\mathcal{C}^{a}_{mn}\) are achiral, so that \(\mathcal{C}^{o}_{mn},\mathcal{C}^{a}_{mn},\mathcal{C}^{c}_{mn}\) are mutually disjoint. We then establish that our choice of \(V_{c}\), \(V_{a}\) ensures that \(\mathcal{C}^{o}_{mn},\mathcal{C}^{a}_{mn}\) and \(\mathcal{C}^{c}_{mn}\) exhaust \(\mathcal{C}_{mn}\), whereby these constitute a partition of \(\mathcal{C}_{mn}\). We then examine \(\mathcal{C}(V_{c})\) and \(\mathcal{C}(V_{a})\) and eliminate those that do not admit a quantum advantage. #### V-B1 Elimination of the 1-1 problem classes \(\mathcal{C}_{11}\) Consider \(\mathcal{C}_{11}\) and notice that \(|\mathcal{C}_{11}|={}^{4}C_{1}\,{}^{4}C_{1}=16\). Define \(\mathcal{C}^{o}_{11}=\mathcal{C}_{11}\cap\mathcal{C}^{o}\). Define the following achiral pair \(V_{a}=(M_{a},N_{a})\), \[M_{a}:=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix},\quad N_{a}:=\begin{pmatrix}0&0\\ 0&-1\end{pmatrix}, \tag{28}\] and let \(\mathcal{C}^{a}_{11}:=\mathcal{C}((V_{a};\Omega))\cap\mathcal{C}_{11}\). Observe that \[\mathcal{C}^{a}_{11}=\{\mathcal{C}(V_{a}),\mathcal{C}(\mathsf{R}V_{a}),\mathcal{C}(V_{a}\mathsf{R}),\mathcal{C}(\mathsf{R}V_{a}\mathsf{R})\}, \tag{29}\] so that \(|\mathcal{C}^{a}_{11}|=4\).
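Before specifying the chiral pair, here is a small brute-force cross-check of this partition (our illustration): it classifies all 16 pairs in \(\mathcal{C}_{11}\) as overlapping, achiral or chiral, recovering \(|\mathcal{C}^{o}_{11}|=4\), \(|\mathcal{C}^{a}_{11}|=4\), and leaving 8 chiral pairs, consistent with the count shown next.

```python
from itertools import product

cells = list(product(range(2), repeat=2))   # positions in a 2x2 matrix
counts = {"overlapping": 0, "achiral": 0, "chiral": 0}
for pM, pN in product(cells, cells):        # positions of the single -1's
    if pM == pN:                            # Definition V.2
        counts["overlapping"] += 1
    elif pN == (1 - pM[0], 1 - pM[1]):      # [M]_ij = [N]_(-i)(-j) = -1
        counts["achiral"] += 1
    else:
        counts["chiral"] += 1
print(counts)   # {'overlapping': 4, 'achiral': 4, 'chiral': 8}
```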
Now take the chiral pair \(V_{c}=(M_{c},N_{c})\), \[M_{c}:=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix},\quad N_{c}:=\begin{pmatrix}0&-1\\ 0&0\end{pmatrix}, \tag{30}\] and let \(\mathcal{C}^{c}_{11}:=\mathcal{C}((V_{c};\Omega))\cap\mathcal{C}_{11}\). It is easy to verify that \[\mathcal{C}^{c}_{11}=\{\mathcal{C}(V_{c}),\mathcal{C}(\mathsf{R}V_{c}),\mathcal{C}(V_{c}\mathsf{R}),\mathcal{C}(\mathsf{R}V_{c}\mathsf{R}),\mathcal{C}(\mathsf{T}V_{c}),\mathcal{C}(\mathsf{R}\mathsf{T}V_{c}),\mathcal{C}(\mathsf{T}V_{c}\mathsf{R}),\mathcal{C}(\mathsf{R}\mathsf{T}V_{c}\mathsf{R})\}, \tag{31}\] whereby \(|\mathcal{C}^{c}_{11}|=8=|\mathcal{C}_{11}|-|\mathcal{C}^{o}_{11}|-|\mathcal{C}^{a}_{11}|\). Thus \(\mathcal{C}^{c}_{11},\mathcal{C}^{o}_{11},\mathcal{C}^{a}_{11}\) is a partition of \(\mathcal{C}_{11}\). The following proposition eliminates \(\mathcal{C}_{11}\) by elimination of each of the elements in this partition. **Proposition V.6**: _None of \(\mathcal{C}^{o}_{11}\), \(\mathcal{C}^{a}_{11}\) and \(\mathcal{C}^{c}_{11}\) admits a quantum advantage._ \(\mathcal{C}^{o}_{11}\) does not admit a quantum advantage since \(\mathcal{C}^{o}_{11}\subset\mathcal{C}^{o}\). We now show the same for \(\mathcal{C}^{a}_{11}\). For an instance \(D\in\mathcal{C}(M_{a},N_{a})\), for \((M_{a},N_{a})\) as defined in (28), note that \(\ell(u^{0}_{A},u^{0}_{B},0)=-1\), \(\ell(u^{1}_{A},u^{1}_{B},1)=-\chi\) and \(\ell(\cdot)\equiv 0\) otherwise. Now consider deterministic policies \(\hat{\gamma},\overline{\gamma}\): \[\hat{\gamma}_{A}(\xi_{A})\equiv u^{0}_{A},\;\hat{\gamma}_{B}(\xi_{B})\equiv u^{0}_{B}\quad\text{and}\quad\overline{\gamma}_{A}(\xi_{A})\equiv u^{1}_{A},\;\overline{\gamma}_{B}(\xi_{B})\equiv u^{1}_{B}.\] It is easy to evaluate \[J(\pi_{\hat{\gamma}};D)=-\mathbb{P}(\xi_{W}=0),\quad J(\pi_{\overline{\gamma}};D)=-\chi\mathbb{P}(\xi_{W}=1).\] Now consider, for a no-signalling vertex \(Q^{\alpha\beta\delta}\in\mathcal{NS}\) (recall (22)), \[J(Q^{\alpha\beta\delta};D)=-\sum_{\xi_{A},\xi_{B}}\big(\mathbb{P}(\xi_{A},\xi_{B},0)Q^{\alpha\beta\delta}(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B})+\chi\mathbb{P}(\xi_{A},\xi_{B},1)Q^{\alpha\beta\delta}(u^{1}_{A},u^{1}_{B}|\xi_{A},\xi_{B})\big)\] \[=-\frac{1}{2}\sum_{\xi_{A},\xi_{B}}\big(\mathbb{P}(\xi_{A},\xi_{B},0)\sim\!(\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)+\chi\mathbb{P}(\xi_{A},\xi_{B},1)\sim\!(\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)\big)\] \[\geq\frac{1}{2}\left(J(\pi_{\hat{\gamma}};D)+J(\pi_{\overline{\gamma}};D)\right),\] where in the last step we have used that the Boolean terms multiplying the probabilities are at most one. Since the right-hand side is independent of the no-signalling vertex, the cost of every no-signalling vertex is bounded below by the cost of a deterministic policy: \(J(Q^{\alpha\beta\delta};D)\geq\min(J(\pi_{\hat{\gamma}};D),J(\pi_{\overline{\gamma}};D))\). Since the instance \(D\) was arbitrary, this establishes that \(\mathcal{C}^{a}_{11}\) does not admit a no-signalling, and hence a quantum, advantage. We follow a similar line of argument for \(\mathcal{C}^{c}_{11}\). For an instance \(D\in\mathcal{C}(M_{c},N_{c})\), we have \(\ell(u^{0}_{A},u^{0}_{B},0)=-1\), \(\ell(u^{0}_{A},u^{1}_{B},1)=-\chi\) and \(\ell(\cdot)\equiv 0\) otherwise.
Now consider deterministic policies \(\hat{\gamma}\): \(\hat{\gamma}_{A}(\xi_{A})\equiv u^{0}_{A},\hat{\gamma}_{B}(\xi_{B})\equiv u^{0}_{B}\) and \(\overline{\gamma}\): \(\overline{\gamma}_{A}(\xi_{A})\equiv u^{0}_{A},\overline{\gamma}_{B}(\xi_{B})\equiv u^{1}_{B}\). It is straightforward to evaluate \[J(\pi_{\hat{\gamma}};D)=-\mathbb{P}(\xi_{W}=0),\quad J(\pi_{\overline{\gamma}};D)=-\chi\mathbb{P}(\xi_{W}=1).\] Now for any no-signalling vertex \(Q^{\alpha\beta\delta}\in\mathcal{NS}\), we again have \[J(Q^{\alpha\beta\delta};D)\geq\frac{1}{2}\left(J(\pi_{\hat{\gamma}};D)+J(\pi_{\overline{\gamma}};D)\right),\] whereby \(J^{*}_{\mathcal{NS}}(D)=J^{*}_{\mathcal{L}}(D)\), so that \(\mathcal{C}^{c}_{11}\) does not admit a quantum advantage. This establishes the proposition. #### V-B2 Elimination of the 1-3 and 3-1 problem classes \(\mathcal{C}_{13}\) and \(\mathcal{C}_{31}\) Now consider the set of 1-3 classes \(\mathcal{C}_{13}\). We argue that it does not admit a quantum advantage. Consider an instance \(D\in\mathcal{C}(V_{a})\), where \(V_{a}=(M_{a},N_{a})\) is the (achiral) pair given in (9), and let \(x,y,z,w,a,b\) be binary variables. For any no-signalling vertex \(Q^{\alpha\beta\delta}\), we claim that \[J(Q^{\alpha\beta\delta};D)=\frac{1}{2}(J(\pi^{xyzw};D)+J(\pi^{11ab};D)) \tag{34}\] where \(\pi^{xyzw}\) and \(\pi^{11ab}\), as defined in (21), are vertices of the local polytope specified by the Boolean variables \(x,y,z,w,a,b\in\{0,1\}\), which in turn are given in terms of \(\alpha,\beta\) and \(\delta\) as \[x=(\sim\beta\cdot\sim\delta)\vee(\beta\cdot\alpha\cdot\delta) \tag{35}\] \[y=\sim\alpha\cdot\sim\delta\cdot\beta \tag{36}\] \[z=\delta\vee(\sim\delta\cdot\alpha\cdot\beta) \tag{37}\] \[w=(\alpha\cdot(\beta\oplus\delta))\vee(\sim\alpha\cdot\delta) \tag{38}\] \[a=\sim\beta\vee(\beta\cdot\sim\alpha\cdot\sim\delta) \tag{39}\] \[b=(\sim\alpha\cdot(\beta\vee\delta))\vee(\alpha\cdot(\sim\beta\oplus\delta)). \tag{40}\] To establish this claim, notice that for \(D\in\mathcal{C}(V_{a})\) and any \(Q\in\mathcal{P}(\mathcal{U}|\Xi)\), \[J(Q;D)=-\sum_{\xi_{A},\xi_{B}}\left(\mathbb{P}(\xi_{A},\xi_{B},0)Q(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B})+\chi\mathbb{P}(\xi_{A},\xi_{B},1)(1-Q(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B}))\right). \tag{41}\] To establish (34), we show that for all \(\alpha,\beta,\delta,\xi_{A},\xi_{B}\in\{0,1\}\), and with \(x,y,z,w,a,b\) as specified by (35)-(40), \[Q^{\alpha\beta\delta}(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B})=\frac{1}{2}(\pi^{xyzw}(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B})+\pi^{11ab}(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B})), \tag{42}\] so that (34) now follows from (42). The validity of (42) can be checked through straightforward computation; due to the large number of variables involved, we relegate this to a Python notebook [21] provided in the supplementary material. This establishes our claim (34). We have thus established that for each no-signalling vertex \(Q^{\alpha\beta\delta}\), there exists a policy \(\pi\in\mathcal{L}\) such that \(J(Q^{\alpha\beta\delta};D)\geq J(\pi;D)\), thereby establishing that \(J^{\star}_{\mathcal{NS}}(D)\geq J^{\star}_{\mathcal{L}}(D)\). Since \(D\) is arbitrary, there is no quantum advantage in \(\mathcal{C}_{13}\), and from Proposition IV.3, none in \(\mathcal{C}_{31}\). ### \(\mathcal{C}_{12}\), \(\mathcal{C}_{21}\) and the 1/2-CAC problem class We now come to the \(1\)-\(2\) classes \(\mathcal{C}_{12}\); we will quickly address \(\mathcal{C}_{21}\) at the end of this subsection.
Define \(\mathcal{C}^{o}_{12}=\mathcal{C}_{12}\cap\mathcal{C}^{o}\), the achiral pair \(V_{a}=(M_{a},N_{a})\), \[M_{a}:=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix},\quad N_{a}:=\begin{pmatrix}0&-1\\ 0&-1\end{pmatrix}, \tag{43}\] and the chiral pair \(V_{c}=(M_{c},N_{c})\), \[M_{c}:=\begin{pmatrix}-1&0\\ 0&0\end{pmatrix},\quad N_{c}:=\begin{pmatrix}0&-1\\ -1&0\end{pmatrix}. \tag{44}\] Note that the chiral pair is in \(\frac{1}{2}\)-CAC form. Let \(\mathcal{C}^{a}_{12}:=\mathcal{C}((V_{a};\Omega))\cap\mathcal{C}_{12}\) and \(\mathcal{C}^{c}_{12}:=\mathcal{C}((V_{c};\Omega))\cap\mathcal{C}_{12}\). Note that \[\mathcal{C}^{a}_{12}=\{\mathcal{C}(V_{a}),\mathcal{C}(\mathsf{R}V_{a}),\mathcal{C}(V_{a}\mathsf{R}),\mathcal{C}(\mathsf{R}V_{a}\mathsf{R}),\mathcal{C}(\mathsf{T}V_{a}),\mathcal{C}(\mathsf{R}\mathsf{T}V_{a}),\mathcal{C}(\mathsf{T}V_{a}\mathsf{R}),\mathcal{C}(\mathsf{R}\mathsf{T}V_{a}\mathsf{R})\}, \tag{45}\] \[\mathcal{C}^{c}_{12}=\{\mathcal{C}(V_{c}),\mathcal{C}(\mathsf{R}V_{c}),\mathcal{C}(V_{c}\mathsf{R}),\mathcal{C}(\mathsf{R}V_{c}\mathsf{R})\}. \tag{46}\] Further, notice that \(|\mathcal{C}_{12}|={}^{4}C_{1}\,{}^{4}C_{2}=24\) and \(|\mathcal{C}^{o}_{12}|+|\mathcal{C}^{a}_{12}|+|\mathcal{C}^{c}_{12}|=|\mathcal{C}_{12}|=24\), so \(\mathcal{C}^{o}_{12}\), \(\mathcal{C}^{a}_{12}\) and \(\mathcal{C}^{c}_{12}\) partition the set \(\mathcal{C}_{12}\). We eliminate all 1-2 classes not in the orbit of the \(\frac{1}{2}\)-CAC class (i.e., not in the orbit of the chiral pair \((M_{c},N_{c})\)) in the following proposition. **Proposition V.8**: _1) \(\mathcal{C}^{o}_{12}\) does not admit a quantum advantage._ _2) \(\mathcal{C}^{a}_{12}\) does not admit a quantum advantage._ 1) Immediate from \(\mathcal{C}^{o}_{12}\subset\mathcal{C}^{o}\). 2) For an instance \(D\in\mathcal{C}(V_{a})\), consider two deterministic policies \(\hat{\gamma}\) and \(\overline{\gamma}\), and the corresponding costs: \[\hat{\gamma}_{A}(\xi_{A})\equiv u^{0}_{A},\;\hat{\gamma}_{B}(\xi_{B})\equiv u^{0}_{B};\quad J(\pi_{\hat{\gamma}};D)=-\mathbb{P}(\xi_{W}=0),\] \[\overline{\gamma}_{A}(\xi_{A})\equiv u^{1}_{A},\;\overline{\gamma}_{B}(\xi_{B})\equiv u^{1}_{B};\quad J(\pi_{\overline{\gamma}};D)=-\chi\mathbb{P}(\xi_{W}=1). \tag{47}\] Now, for a no-signalling vertex \(Q^{\alpha\beta\delta}\in\mathcal{NS}\), recall (22) to express \[J(Q^{\alpha\beta\delta};D)=-\sum_{\xi_{A},\xi_{B}}\Big(\mathbb{P}(\xi_{A},\xi_{B},0)Q^{\alpha\beta\delta}(u^{0}_{A},u^{0}_{B}|\xi_{A},\xi_{B})+\chi\mathbb{P}(\xi_{A},\xi_{B},1)\sum_{u_{A}}Q^{\alpha\beta\delta}(u_{A},u^{1}_{B}|\xi_{A},\xi_{B})\Big)\] \[=-\frac{1}{2}\sum_{\xi_{A},\xi_{B}}\Big(\mathbb{P}(\xi_{A},\xi_{B},0)\sim\!(\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)+\chi\mathbb{P}(\xi_{A},\xi_{B},1)\big((\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)+\sim\!(\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)\big)\Big)\] \[\geq\frac{1}{2}(J(\pi_{\hat{\gamma}};D)+J(\pi_{\overline{\gamma}};D)).\] In the last inequality we have again used that the Boolean terms multiplying the probabilities are at most one. Thus, the cost of every no-signalling policy is bounded below by the cost of a deterministic policy in \(\mathcal{L}\). Arguing as in Proposition V.6, we see that there is no no-signalling advantage, and hence no quantum advantage, within \(\mathcal{C}^{a}_{12}\). This establishes the proposition. Now notice that \(\mathcal{C}_{21}=\mathsf{E}\mathcal{C}_{12}\).
Thus define \(\mathcal{C}^{o}_{21}=\mathsf{E}\mathcal{C}^{o}_{12}\), \(\mathcal{C}^{a}_{21}=\mathsf{E}\mathcal{C}^{a}_{12}\) and \(\mathcal{C}^{c}_{21}=\mathsf{E}\mathcal{C}^{c}_{12}\), and the 2-1 classes partition into \(\mathcal{C}^{o}_{21}\), \(\mathcal{C}^{a}_{21}\) and \(\mathcal{C}^{c}_{21}\). Here \(\mathcal{C}^{c}_{21}\) lies within the orbit of the \(\frac{1}{2}\)-CAC class, and the elimination of the other two, \(\mathcal{C}^{o}_{21}\) and \(\mathcal{C}^{a}_{21}\), follows from Proposition V.8 together with Proposition IV.3. This subsection thus eliminates all 1-2 and 2-1 classes that do not lie in the orbit of the \(\frac{1}{2}\)-CAC class. ### \(\mathcal{C}_{22}\) _and the CAC problem class_ Ultimately, we come to the 2-2 classes \(\mathcal{C}_{22}\). Let \(V_{c}\) be the CAC pair given by (6), let \(V_{a}=(M_{a},N_{a})\) be the achiral pair with \[M_{a}:=\begin{pmatrix}-1&0\\ -1&0\end{pmatrix},\quad N_{a}:=\begin{pmatrix}0&-1\\ 0&-1\end{pmatrix},\] and define \(\mathcal{C}^{o}_{22}:=\mathcal{C}_{22}\cap\mathcal{C}^{o}\), \(\mathcal{C}^{a}_{22}:=\mathcal{C}((V_{a};\Omega))\cap\mathcal{C}_{22}\) and \(\mathcal{C}^{c}_{22}:=\mathcal{C}((V_{c};\Omega))\cap\mathcal{C}_{22}\). It is easy to check by inspection, and using (27), that \(|\mathcal{C}^{o}_{22}|+|\mathcal{C}^{a}_{22}|+|\mathcal{C}^{c}_{22}|=|\mathcal{C}_{22}|\), so that \(\mathcal{C}^{o}_{22}\), \(\mathcal{C}^{a}_{22}\) and \(\mathcal{C}^{c}_{22}\) partition \(\mathcal{C}_{22}\). Consequently, the sets \(\mathcal{C}^{o}_{22}\) and \(\mathcal{C}^{a}_{22}\) capture all 2-2 classes which are outside the orbit of CAC, and we eliminate these sets in the following proposition. **Proposition V.9**: _1) \(\mathcal{C}^{o}_{22}\) does not admit a quantum advantage. 2) \(\mathcal{C}^{a}_{22}\) does not admit a quantum advantage._ 1) Immediate since \(\mathcal{C}^{o}_{22}\subset\mathcal{C}^{o}\). 2) Let \(D\) be an instance in \(\mathcal{C}^{a}_{22}\) and let \(\hat{\gamma}\) and \(\overline{\gamma}\) be as defined in (47). Consider any no-signalling vertex \(Q^{\alpha\beta\delta}\in\mathcal{NS}\) and notice \[J(Q^{\alpha\beta\delta};D)=-\sum_{\xi_{A},\xi_{B}}\Big(\mathbb{P}(\xi_{A},\xi_{B},0)\sum_{u_{A}}Q^{\alpha\beta\delta}(u_{A},u^{0}_{B}|\xi_{A},\xi_{B})+\chi\mathbb{P}(\xi_{A},\xi_{B},1)\sum_{u_{A}}Q^{\alpha\beta\delta}(u_{A},u^{1}_{B}|\xi_{A},\xi_{B})\Big).\] Using (22), \[J(Q^{\alpha\beta\delta};D)=-\frac{1}{2}\sum_{\xi_{A},\xi_{B}}\big(\mathbb{P}(\xi_{A},\xi_{B},0)+\chi\mathbb{P}(\xi_{A},\xi_{B},1)\big)\times\big((\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)+\sim\!(\xi_{A}\cdot\xi_{B}\oplus\alpha\cdot\xi_{A}\oplus\beta\cdot\xi_{B}\oplus\delta)\big)=\frac{1}{2}(J(\pi_{\hat{\gamma}};D)+J(\pi_{\overline{\gamma}};D)).\] Arguing as in Proposition V.6, \(J^{*}_{\mathcal{NS}}(D)=J^{*}_{\mathcal{L}}(D)\), and the no-signalling and quantum advantages are absent in \(\mathcal{C}^{a}_{22}\). This establishes the proposition. ## VI Proof of Theorem V.1: Quantum Advantage in \(\frac{1}{2}\)-CAC We now have all but one ingredient to prove Theorem V.1. We have shown that all classes not in the orbit of the CAC and \(\frac{1}{2}\)-CAC classes do not admit a quantum advantage. That CAC admits a quantum advantage was shown in [2]. We now show this for \(\frac{1}{2}\)-CAC. Consider a problem instance \(D\) in the \(\frac{1}{2}\)-CAC class with the specification \(D=(M_{c},N_{c},\mathbb{P},\mathcal{U}_{A},\mathcal{U}_{B},\chi)\) where \(M_{c},N_{c}\) are as in (44), and \(\mathbb{P}\in\mathcal{P}(\Xi)\) is such that \[\mathbb{P}(\xi_{A},\xi_{B},\xi_{W})=\begin{cases}0.2&\xi=(0,0,1),(0,1,1),(1,0,1)\\ 0.4&\xi=(1,1,0)\\ 0&\text{otherwise}\end{cases} \tag{52}\] and \(\chi=2\). It is straightforward to sift through all 16 deterministic strategies in \(\Pi\). We state an optimal policy and the optimal local cost here, \(\gamma^{*}_{A}\equiv u^{0}_{A}\), \(\gamma^{*}_{B}\equiv u^{1}_{B}\) and \(J^{*}_{\mathcal{L}}(D)=-6/5\), and justify this statement in Lemma I.1 in the appendix.
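The sifting through \(\Pi\) can also be done mechanically. The following brute-force sketch (our illustration) evaluates all 16 deterministic strategy pairs for the instance (52) with \(\chi=2\) and recovers \(J^{*}_{\mathcal{L}}(D)=-6/5\).

```python
import numpy as np
from itertools import product

M = np.array([[-1, 0], [0, 0]])    # M_c of eq. (44)
N = np.array([[0, -1], [-1, 0]])   # N_c of eq. (44)
chi = 2.0
P = {(0, 0, 1): 0.2, (0, 1, 1): 0.2, (1, 0, 1): 0.2, (1, 1, 0): 0.4}  # eq. (52)

def cost(i, j, xiW):               # eq. (5)
    return M[i, j] if xiW == 0 else chi * N[i, j]

J_det = [sum(p * cost(gA[xA], gB[xB], xW) for (xA, xB, xW), p in P.items())
         for gA in product(range(2), repeat=2)   # gA[xi_A] = A's action index
         for gB in product(range(2), repeat=2)]
print(min(J_det))  # approx -1.2 = -6/5, attained e.g. by gamma_A = u^0, gamma_B = u^1
```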
We now specify a quantum strategy \(Q\) that achieves a lower cost. We consider two dimensional Hilbert spaces \(\mathcal{H}_{A}\), \(\mathcal{H}_{B}\) and the four dimensional \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). Let \(\{|z^{+}_{i}\rangle,|z^{-}_{i}\rangle\}\) be an orthonormal basis of \(\mathcal{H}_{i}\) for \(i\in\{A,B\}\). We work with a Euclidean representation \(|z^{+}_{i}\rangle\equiv(1,0)^{\top};|z^{-}_{i}\rangle\equiv(0,1)^{\top}\). \(\mathcal{H}\) is then spanned by \(\{|z^{+}_{A}\rangle\otimes|z^{+}_{B}\rangle,|z^{+}_{A}\rangle\otimes|z^{-}_{B}\rangle,|z^{-}_{A}\rangle\otimes|z^{+}_{B}\rangle,|z^{-}_{A}\rangle\otimes|z^{-}_{B}\rangle\}\), enumerated in that order. Let \(\rho_{AB}\) be the following density operator on \(\mathcal{H}\), \[\rho_{AB}=\begin{pmatrix}1/4&0&0&\sqrt{3}/4\\ 0&0&0&0\\ 0&0&0&0\\ \sqrt{3}/4&0&0&3/4\end{pmatrix}.\] It is clear that \(\rho_{AB}\) satisfies \(\rho_{AB}^{\dagger}=\rho_{AB}\), \(\mathrm{Tr}(\rho_{AB})=1\) and \(\rho_{AB}\succeq 0\). Next, we specify the projection operators \(P_{u_{A}}^{(A)}(\xi_{A})\in\mathcal{B}(\mathcal{H}_{A}),P_{u_{B}}^{(B)}(\xi_{B})\in\mathcal{B}(\mathcal{H}_{B})\): \[P_{u_{A}^{0}}^{(A)}(0)=\begin{pmatrix}1&0\\ 0&0\end{pmatrix};\quad P_{u_{A}^{1}}^{(A)}(0)=\mathbf{I}-P_{u_{A}^{0}}^{(A)}(0),\] \[P_{u_{A}^{0}}^{(A)}(1)=\frac{1}{2}\begin{pmatrix}1&e^{-i\frac{\pi}{3}}\\ e^{i\frac{\pi}{3}}&1\end{pmatrix};\quad P_{u_{A}^{1}}^{(A)}(1)=\mathbf{I}-P_{u_{A}^{0}}^{(A)}(1),\] \[P_{u_{B}^{0}}^{(B)}(0)=\frac{1}{4}\begin{pmatrix}2-\sqrt{3}&e^{-i\frac{\pi}{3}}\\ e^{i\frac{\pi}{3}}&2+\sqrt{3}\end{pmatrix};\quad P_{u_{B}^{1}}^{(B)}(0)=\mathbf{I}-P_{u_{B}^{0}}^{(B)}(0),\] \[P_{u_{B}^{0}}^{(B)}(1)=\frac{1}{4}\begin{pmatrix}2-\sqrt{3}&e^{i\frac{\pi}{3}}\\ e^{-i\frac{\pi}{3}}&2+\sqrt{3}\end{pmatrix};\quad P_{u_{B}^{1}}^{(B)}(1)=\mathbf{I}-P_{u_{B}^{0}}^{(B)}(1).\] \(P_{u_{A}^{0}}^{(A)}(0)\) is trivially a projector. To verify that the rest are indeed projectors, denote \(P(\lambda,a,b,\theta):=\frac{1}{\lambda}\begin{pmatrix}a&e^{-i\theta}\\ e^{i\theta}&b\end{pmatrix}\), and notice that \[P(\lambda,a,b,\theta)^{2}=\frac{1}{\lambda^{2}}\begin{pmatrix}1+a^{2}&(a+b)e^{-i\theta}\\ (a+b)e^{i\theta}&1+b^{2}\end{pmatrix}. \tag{53}\] Thus \(P(\lambda,a,b,\theta)\) is a projector if \(1+a^{2}=\lambda a\), \(1+b^{2}=\lambda b\) and \(a+b=\lambda\). Taking \((\lambda,a,b)=(2,1,1)\) and \(\theta=\pi/3\), (53) implies \(P_{u_{A}^{0}}^{(A)}(1)\) is a projector. Similarly, \((\lambda,a,b)=(4,2-\sqrt{3},2+\sqrt{3})\) shows \(P_{u_{B}^{0}}^{(B)}(0)\) and \(P_{u_{B}^{0}}^{(B)}(1)\) are projectors with \(\theta=\pi/3\) and \(\theta=-\pi/3\), respectively. We have the cost of an instance \(D=(M_{c},N_{c},\mathbb{P},\mathcal{U}_{A},\mathcal{U}_{B},2)\) in \(\frac{1}{2}\)-CAC with \(\mathbb{P}\) specified in (52), under policy \(Q\), expressed as \[J(Q;D)=-\sum_{\xi_{A},\xi_{B}}\mathbb{P}(\xi_{A},\xi_{B},0)Q(u_{A}^{0},u_{B}^{0}|\xi_{A},\xi_{B})+2\mathbb{P}(\xi_{A},\xi_{B},1)(Q(u_{A}^{0},u_{B}^{1}|\xi_{A},\xi_{B})+Q(u_{A}^{1},u_{B}^{0}|\xi_{A},\xi_{B}))\] \[=-0.4\big(Q(u_{A}^{0},u_{B}^{0}|1,1)+Q(u_{A}^{0},u_{B}^{1}|0,0)+Q(u_{A}^{1},u_{B}^{0}|0,0)+Q(u_{A}^{0},u_{B}^{1}|0,1)+Q(u_{A}^{1},u_{B}^{0}|0,1)+Q(u_{A}^{0},u_{B}^{1}|1,0)+Q(u_{A}^{1},u_{B}^{0}|1,0)\big),\] where each term is evaluated via (13) with the above \(\rho_{AB}\) and projectors. This strategy thus attains the cost \[J(Q;D)=\frac{-7-3\sqrt{3}}{10}\approx-1.22<J^{*}_{\mathcal{L}}(D)=-\frac{6}{5}, \tag{54}\] and thus finishes our demonstration of the quantum advantage in the \(\frac{1}{2}\)-CAC problem class.
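As a numerical sanity check on the operators just specified (our illustration; the off-diagonal phases are the Hermitian completions written above), the following verifies that \(\rho_{AB}\) is a valid rank-one density matrix and that the parameter choices in (53) indeed yield projectors.

```python
import numpy as np

s3 = np.sqrt(3)
rho = np.array([[1/4,  0, 0, s3/4],
                [0,    0, 0, 0   ],
                [0,    0, 0, 0   ],
                [s3/4, 0, 0, 3/4 ]])
assert np.allclose(rho, rho.conj().T) and np.isclose(np.trace(rho), 1.0)
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)   # positive semidefinite

def P(lam, a, b, theta):          # the family of eq. (53)
    return np.array([[a, np.exp(-1j * theta)],
                     [np.exp(1j * theta), b]]) / lam

for lam, a, b in [(2, 1, 1), (4, 2 - s3, 2 + s3)]:
    for theta in (np.pi / 3, -np.pi / 3):
        p = P(lam, a, b, theta)
        assert np.allclose(p @ p, p)              # idempotent
        assert np.allclose(p, p.conj().T)         # Hermitian
print("rho_AB is a density matrix and all stated projectors are valid")
```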
## VII Conclusion

An exhaustive scan of the introduced superstructure has thus revealed a restriction of the quantum advantage to the CAC and \(\frac{1}{2}\)-CAC classes, which are precisely the ones that admit the coordination dilemma. In addition, these classes do admit the quantum advantage, as our numerical demonstration through [2] and Section VI has revealed. The coordination dilemma is thus central to the advantage offered by the entire set of non-locally correlated strategies that respect the absence of communication in the problem. Quantum strategies are indeed a physically implementable subset of this class. While our line of analysis has been restricted to a specialised superstructure of binary teams, it hints that the coordination dilemma will remain an intuitive description of the parametric subspaces that admit the quantum advantage in more general problems. In the subsequent article of this two-part series, we look within the CAC and \(\frac{1}{2}\)-CAC classes, and identify subspaces within them that subsume the quantum advantage. Our results there characterise the favourable extent of the coordination dilemma for the quantum advantage to manifest.
2306.16137
Localization and the landscape function for regular Sturm-Liouville operators
We consider the localization in the eigenfunctions of regular Sturm-Liouville operators. After deriving non-asymptotic and asymptotic lower and upper bounds on the localization coefficient of the eigenfunctions, we characterize the landscape function in terms of the first eigenfunction. Several numerical experiments are provided to illustrate the obtained theoretical results.
Mirza Karamehmedović, Faouzi Triki
2023-06-28T12:07:22Z
http://arxiv.org/abs/2306.16137v1
# Localization and the landscape function for regular Sturm-Liouville operators

###### Abstract.

We consider the localization in the eigenfunctions of regular Sturm-Liouville operators. After deriving non-asymptotic and asymptotic lower and upper bounds on the localization coefficient of the eigenfunctions, we characterize the landscape function in terms of the first eigenfunction. Several numerical experiments are provided to illustrate the obtained theoretical results.

## 1. Introduction and main results

Let \(L>0\), assume \(p,w\in C^{2}([0,L])\) are positive-valued functions satisfying \(p^{-1},w^{-1}\in L^{\infty}(]0,L[)\), and let \(q\in C([0,L])\) be nonnegative-valued. Define the unbounded operator \(T\) by \[Tu=-\frac{1}{w}(pu^{\prime})^{\prime}+\frac{q}{w}u,\] with domain \[D(T)=\left\{u\in L^{2}(]0,L[,w(x)dx):\;Tu\in L^{2}(]0,L[,w(x)dx),\;u(0)=u(L)=0\right\},\] and recall that \(T\) is self-adjoint in \(L^{2}(]0,L[,w(x)dx)\) with a compact resolvent. We here investigate the "localization" in the solution \(\phi_{\lambda}\in D(T)\) of the regular Sturm-Liouville problem \[T\phi_{\lambda}=\lambda\phi_{\lambda}, \tag{1}\] for positive \(\lambda\). In particular, writing \(\|\cdot\|_{t}\) for \(\|\cdot\|_{L^{t}(]0,L[)}\), we find _non-asymptotic as well as asymptotic_ lower and upper bounds for the 'existence surface' [2, 3], also called the 'localization coefficient', \[\alpha(\phi_{\lambda})=\|\phi_{\lambda}\|_{2}^{4}/\|\phi_{\lambda}\|_{4}^{4}.\] The quantity \(\alpha(\phi_{\lambda})\) is independent of any normalization of \(\phi_{\lambda}\) by a scalar factor, and it is a standard measure of the localization of \(\phi_{\lambda}\), with low \(\alpha(\phi_{\lambda})\) indicating high localization. 'High localization' means that the amplitude of the solution function is relatively high over a small connected sub-interval \(I\subset]0,L[\), and relatively low in \(]0,L[\setminus I\). Figure 1 helps illustrate the concept of localization. Here, we let \(L=1\), \(q\equiv 0\), \(w\equiv 1\), and \[p(x)=\tanh(40x/L-10)+1.1,\quad x\in[0,L], \tag{2}\] making the operator \(T\) in (1) the Dirichlet Laplacian on \([0,1]\) with a non-trivial metric, \(Tu=-(pu^{\prime})^{\prime}\). Note that the most localized eigenfunctions correspond to relatively small eigenvalues, and that the localization coefficient seems to approach a constant with increasing eigenvalues. Our results, valid well beyond this single example case, predict both these empirical observations on localization. We first derive lower and upper bounds for \(\alpha(\phi_{\lambda})\) in the non-asymptotic regime, specifically showing that \(\alpha(\phi_{\lambda})\) can attain relatively low values only at relatively low frequencies (small \(\lambda\)). Then, to complete the picture, we prove the lower and upper bounds for \(\alpha(\phi_{\lambda})\) in the asymptotic regime as \(\lambda\to\infty\). The treatment of the Sturm-Liouville problem (1) when the coefficients are smooth usually starts with the Liouville transformation to the eigenvalue problem for the Schrodinger operator [1]. We work with this transformation in Section 2, but to state our second and third main results we already here define some of the involved quantities. Thus let \[y(x)=\int_{0}^{x}\sqrt{w(s)/p(s)}ds,\quad x\in[0,L],\] and \[B=\int_{s=0}^{L}\sqrt{w(s)/p(s)}ds.\] The function \(y:]0,L[\to]0,B[\) is strictly increasing, and has an inverse denoted by \(x(y)\). Let \[f(y)=(w(x(y))p(x(y)))^{1/4},\quad y\in[0,B],\] Figure 1.
Localization of eigenfunctions of the Dirichlet Laplacian on \([0,1]\) with metric \(p\). and \[Q(y)=f^{\prime\prime}(y)/f(y)+q(x(y))/w(x(y)),\quad y\in[0,B]. \tag{3}\] Write also \[a(B,\lambda)=\frac{B\|Q\|_{\infty}}{2\sqrt{\lambda}}, \tag{4}\] and \[b(B,\lambda)=\left(\frac{B^{3}}{12}+\frac{5B}{32\lambda}+\frac{5}{32\lambda^{3 /2}}\right)^{1/4}\|Q\|_{4}/\sqrt{\lambda}. \tag{5}\] Finally, for any real \(\lambda\), let \[\Phi_{\lambda}(y)=\sin(\sqrt{\lambda}y),\quad y\in[0,B].\] Our first main result gives non-asymptotic bounds on \(\alpha(\phi_{\lambda})\). Let \[\beta(p,w)=\|w\|_{\infty}^{-2}\|p^{-1/2}w^{-3/2}\|_{\infty}^{-1}\quad\text{ and}\quad\gamma(p,w)=\|w^{-1}\|_{\infty}^{2}\|p^{1/2}w^{3/2}\|_{\infty}.\] **Theorem 1**.: _If_ \[a(B,\lambda)<1 \tag{6}\] _and_ \[b(B,\lambda)<1 \tag{7}\] _then_ \[\beta(p,w)\left(\frac{1-b(B,\lambda)}{1+a(B,\lambda)}\right)^{4}\leq\frac{ \alpha(\phi_{\lambda})}{\alpha(\Phi_{\lambda})}\leq\gamma(p,w)\left(\frac{1+b (B,\lambda)}{1-a(B,\lambda)}\right)^{4}.\] Figure 2 illustrates the bounds on the localization coefficient from Theorem 1. Let \(\mathrm{BV}([0,B])\) and \(\mathrm{AC}([0,B])\) be respectively the space of bounded variation functions, and the space of absolutely continuous functions. Our final main result concerns the asymptotic behavior of \(\alpha(\phi_{\lambda})\): **Theorem 2**.: _As \(\lambda\to\infty\), we have_ \[\beta(p,w)\frac{2B}{3}+O(\lambda^{-1/2})\leq\alpha(\phi_{\lambda})\leq\gamma( p,w)\frac{2B}{3}+O(\lambda^{-1/2})\] _when \(Q\in C([0,B])\),_ \[\beta(p,w)\frac{\frac{B^{2}}{4}-B(\frac{1}{4}+\|Q\|_{1}B)\lambda ^{-1/2}}{\frac{3B}{8}+(\frac{9}{32}+2\|Q\|_{1}B)\lambda^{-1/2}} +O(\lambda^{-1})\leq\alpha(\phi_{\lambda})\] \[\leq\gamma(p,w)\frac{\frac{B^{2}}{4}+B(\frac{1}{4}+\|Q\|_{1}B) \lambda^{-1/2}}{\frac{3B}{8}-(\frac{9}{32}+2\|Q\|_{1}B)\lambda^{-1/2}}+O( \lambda^{-1})\] _when \(Q\in\mathrm{BV}([0,B])\), and_ \[\beta(p,w)\frac{2B}{3}+O(\lambda^{-3/2})\leq\alpha(\phi_{\lambda})\leq\gamma (p,w)\frac{2B}{3}+O(\lambda^{-3/2})\] _when \(Q\in C^{4}([0,B])\cap\mathrm{AC}([0,B])\) and \(Q^{\prime}\in\mathrm{BV}([0,B])\)._ **Remark 1**.: _A straightforward calculation shows that the localization coefficient of the function \(\Phi_{\lambda}\) is for any positive \(\lambda\) given by_ \[\alpha(\Phi_{\lambda})=\frac{B^{2}/4+\cos(\sqrt{\lambda}B)^{2}/4\lambda-B\cos( \sqrt{\lambda}B)\sin(\sqrt{\lambda}B)/2\sqrt{\lambda}-\cos(\sqrt{\lambda}B)^{ 4}/4\lambda}{3B/8+\cos(\sqrt{\lambda}B)^{3}\sin(\sqrt{\lambda}B)/4\sqrt{ \lambda}-5\cos(\sqrt{\lambda}B)\sin(\sqrt{\lambda}B)/8\sqrt{\lambda}}. \tag{8}\] _The eigenvalues and eigenfunctions of the Dirichlet Laplacian \(-d^{2}/dy^{2}\) on \((0,B)\) are given by \(\lambda_{n}=n^{2}\pi^{2}/B^{2}\) and \(\Phi_{\lambda_{n}}(y)=\sin(n\pi y/B)\), respectively, with \(n\in\boldsymbol{N}_{0}:=\boldsymbol{N}\setminus\{0\}=\{1,2,\dots\}\). In particular, \(\alpha(\Phi_{\lambda_{n}})=2B/3\) for \(n\in\boldsymbol{N}_{0}\), that is, all eigenfunctions of the Dirichlet Laplacian on \((0,B)\) have the same localization coefficient. 
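These statements are quick to confirm numerically. The following Python sketch (our illustration, not part of the original text; the grid size and quadrature rule are arbitrary choices) compares the closed form (8) with direct quadrature of \(\|\Phi_{\lambda}\|_{2}^{4}/\|\Phi_{\lambda}\|_{4}^{4}\), and shows that the two coincide, with the common value \(2B/3\) at the Dirichlet eigenvalues.

```python
# Numerical check of formula (8) for alpha(Phi_lambda),
# Phi_lambda(y) = sin(sqrt(lambda) y), on (0, B).
import numpy as np

B = 1.0
y = np.linspace(0.0, B, 200001)

def integral(f):                        # composite trapezoid rule
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * (y[1] - y[0])

def alpha_quad(lam):                    # direct quadrature
    Phi = np.sin(np.sqrt(lam) * y)
    return integral(Phi**2) ** 2 / integral(Phi**4)

def alpha_closed(lam):                  # formula (8)
    r = np.sqrt(lam)
    c, s = np.cos(r * B), np.sin(r * B)
    num = B**2 / 4 + c**2 / (4 * lam) - B * c * s / (2 * r) - c**4 / (4 * lam)
    den = 3 * B / 8 + c**3 * s / (4 * r) - 5 * c * s / (8 * r)
    return num / den

for lam in [2.0, 17.5, (5 * np.pi / B) ** 2]:     # the last is lambda_5
    print(lam, alpha_quad(lam), alpha_closed(lam))
# At the eigenvalues lambda_n = (n*pi/B)^2 both values equal 2B/3 = 0.6667.
```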
We furthermore readily see that_ \[\lim_{\lambda\to\infty}\alpha(\Phi_{\lambda})=\frac{2B}{3}, \tag{9}\] _and more precisely that, for large \(\lambda\),_ \[\alpha(\Phi_{\lambda}) =\frac{B^{2}/4+O(\lambda^{-1/2})}{3B/8+O(\lambda^{-1/2})}=\frac{2 B}{3}\frac{1}{1+O(\lambda^{-1/2})}+O(\lambda^{-1/2}) \tag{10}\] \[=\frac{2B}{3}+O(\lambda^{-1/2}).\] _Figure 3 shows \(\alpha(\Phi_{\lambda})\) as function of \(\lambda\) for the choice \(B=1\)._ **Remark 2**.: _If \(p\equiv 1\) and \(w\equiv 1\) then \(T\) is the Schrodinger operator \(-d^{2}/dx^{2}+q(x)\). For this case, in the high-frequency limit (\(\lambda\to\infty\)) the localization coefficient of \(\phi_{\lambda}\) approaches that of \(\Phi_{\lambda}\) (so it approaches the value \(2B/3\)), that is, **the presence of the potential \(q\) becomes insignificant**._ Figure 2. Lower and upper bounds on \(\alpha(\phi_{\lambda})\) from Theorem 1 for the eigenvalue problem (1) after Liouville transformation, with \(\beta(p,w)=1\), \(\gamma(p,w)=1\), \(B=1\), \(\|Q\|_{\infty}=1\), and \(\|Q\|_{4}=1\). The constant value is the asymptotic \(2B/3\). For the chosen parameter values, the assumptions (6)–(7) are satisfied for \(\lambda\gtrapprox 0.74\). **Remark 3**.: _It is readily seen that the lower bound on \(\alpha(\phi_{\lambda})/\alpha(\Phi_{\lambda})\) in Theorem 1 is a monotonically increasing function of \(\lambda\), for any fixed positive \(B\). In view of this, and of the behavior of \(\alpha(\Phi_{\lambda})\) discussed above, we conclude that if a solution of (1) is to exhibit high localization (relatively small value of \(\alpha(\phi_{\lambda})\)) then \(\lambda\) must be relatively small, that is, **localization is a low-frequency phenomenon**._ We next focus on the localization in the eigenfunctions associated to low frequencies (small \(\lambda\)). In [4] the authors have given a simple but efficient way to predict the behavior of first eigenfunctions. Precisely they used the _landscape function_\(\ell\in D(T)\), solving \[T\ell=1,\quad\ell\in D(T), \tag{11}\] to identify the regions where the solution of (1) localizes. This can be observed through the following pointwise key inequality [8] \[\phi(x)\leq\lambda\ell(x)\|\phi\|_{\infty},\quad x\in\Omega.\] Indeed \(\lambda\|\ell\|_{\infty}\geq 1\) and \(\phi\) can then localize only in the region \(\{x\in\Omega;\lambda\ell(x)\geq 1\}\). Our first main result in this part of the paper thus characterizes the landscape function in terms of the first eigenfunction of the operator \(T\). **Proposition 1**.: _Assume that \(w\equiv 1\), and let \(k\in\boldsymbol{N}_{0}\). If_ \[T^{k}\widetilde{\ell}_{k}=1,\quad\widetilde{\ell}_{k}\in D(T^{k}),\quad\ell_ {k}=\widetilde{\ell}_{k}/\|\widetilde{\ell}_{k}\|_{2},\] _as well as_ \[T\phi_{1}=\lambda_{1}\phi_{1},\quad\phi_{1}\in D(T),\quad\|\phi_{1}\|_{2}=1, \quad\phi_{1}>0,\quad\lambda_{1}<\lambda_{j}\;\;\mathrm{for}\;\;j=2,3,\ldots,\] _where \(\lambda_{j},j\in\textbf{N}_{0}\) is the non-decreasing sequence of eigenvalues of \(T\). Then_ \[\|\ell_{k}-\phi_{1}\|_{\infty}\leq 2\lambda_{1}^{1/2}L\|P_{1}1\|_{2}^{-1}\left( \frac{\lambda_{1}}{\lambda_{2}}\right)^{k-1/2}, \tag{12}\] _where \(P_{1}\) is the spectral projection onto the eigenspace associated with \(\lambda_{1}\)._ **Remark 4**.: _The asymptotic result in (12) shows that the convergence is exponentially fast if the fundamental gap \(\lambda_{2}-\lambda_{1}\) is large enough. 
When \(p=1\), and \(q\) is a weakly convex potential it is known that [7]_ \[\lambda_{2}-\lambda_{1}\geq\frac{3\pi^{2}}{L^{2}}.\] _The inequality conjectured by Yau [9] is still an open problem in higher dimensions. It turns out that this spectral gap also determines the rate at which positive solutions of the heat equation tend to their projections onto the first eigenspace._ **Proposition 2**.: _Assume that \(w\equiv 1\), and let \(\lambda_{j}\), \(j\in\textbf{N}_{0}\), be the non-decreasing sequence of eigenvalues of \(T\). Let \(P_{j}\) be the spectral projection onto the eigenspace associated with \(\lambda_{j}\). Let \(k\), \(n_{0}\in\textbf{N}_{0}\), and \(t\in]\lambda_{n_{0}+1}^{-1},\lambda_{n_{0}}^{-1}[\). If_ \[(tT)^{k}\ell_{k,t}=1,\quad\ell_{k,t}\in D(T^{k}),\] _then_ \[\|\ell_{k,t}-\sum_{j=1}^{n_{0}}\frac{1}{(t\lambda_{j})^{k}}P_{j}1\|_{\infty} \leq\frac{L}{t^{1/2}}\frac{1}{(t\lambda_{n_{0}+1})^{k-1/2}}. \tag{13}\] **Remark 5**.: _The value of \(t\) fix the number of the eigenfunctions covered by the generalized landscape function \(\ell_{t,k}\). Notice that \(t\in]\lambda_{n_{0}+1}^{-1},\lambda_{n_{0}}^{-1}[\) is equivalent to_ \[\frac{1}{t\lambda_{n_{0}+1}}<1<\frac{1}{t\lambda_{n_{0}}},\] _which implies that the contribution of the eigenfunctions \(P_{j}1,\;j>n_{0}\) in the localization of \(\ell_{k,t}\) is exponentially small for large \(k\) while the contribution of \(P_{j}1\) for \(1\leq j\leq n_{0}\) can be exponentially large if in addition \(P_{j}1\) is not zero. These observations are confirmed in Section 6 by several numerical tests. Finally the results of Propositions 1 and 2 are still valid for \(w\) non-constant and sufficiently smooth (\(\|\cdot\|_{2}\) should be substituted by \(\|\cdot\|_{L^{2}(]0,L[;wdx)}\)))._ Theorem 1 is proved in Section 2 using a Volterra integral equation representation of solutions of (1) given by Fulton [5], while Theorem 2 is proved in Section 3 via the asymptotic expansions of \(\phi_{\lambda}\), as \(\lambda\to\infty\), given in Fulton and Pruess [6]. Finally, we prove Propositions 1 and 2 in respectively Sections 4 and 5 using the power method. ## 2. Proof of Theorem 1 (non-asymptotic bounds on \(\alpha(\phi_{\lambda})\)) Using the Liouville transformation \[y(x)=\int_{0}^{x}\sqrt{w(s)/p(s)}ds\;\;\text{for}\;x\in[0,L];\quad B=y(L);\] \[f(y)=(w(x(y))p(x(y)))^{1/4},\quad y\in[0,B]; \tag{14}\] \[v_{\lambda}(y)=\phi_{\lambda}(x(y))f(y),\quad y\in[0,B];\] and \[Q(y)=f^{\prime\prime}(y)/f(y)+q(x(y))/w(x(y)),\quad y\in[0,B],\] we recast the problem (1) in the Liouville normal form [6, pp. 303-304] \[\left\{\begin{array}{rcl}-v_{\lambda}^{\prime\prime}+Q(y)v_{\lambda}&=& \lambda v_{\lambda},\quad y\in(0,B),\\ v_{\lambda}(0)=v_{\lambda}(B)&=&0.\end{array}\right. \tag{15}\] It follows from our assumptions on \(p\), \(w\) and \(q\) that \(Q\in C([0,B])\), hence \(Q\in L^{t}([0,B])\) for \(t\in[1,\infty]\). Now \[\int_{x=0}^{L}\phi_{\lambda}(x)^{2}dx=\int_{y=0}^{B}\frac{v_{\lambda}(y)^{2}}{ w(x(y))}dy\in\left[\|w\|_{\infty}^{-1},\|w^{-1}\|_{\infty}\right]\times\int_{y=0}^ {B}v_{\lambda}(y)^{2}dy\] and \[\int_{x=0}^{L}\phi_{\lambda}(x)^{4}dx =\int_{y=0}^{B}\frac{v_{\lambda}(y)^{4}}{p(x(y))^{1/2}w(x(y))^{3 /2}}dy\] \[\in\left[\|p^{1/2}w^{3/2}\|_{\infty}^{-1},\|p^{-1/2}w^{-3/2}\|_{ \infty}\right]\times\int_{y=0}^{B}v_{\lambda}(y)^{4}dy,\] so it remains to examine \(\|v_{\lambda}\|_{2}^{2}\) and \(\|v_{\lambda}\|_{4}^{4}\). To this end we recall from Fulton and Pruess [6, p. 
308] that a solution of the ODE in (15), normalized such that \(v_{\lambda}(0)=0\) and \(v_{\lambda}^{\prime}(0)=(w(0)p(0))^{-1/4}\neq 0\), satisfies the associated Volterra integral equation \[\left(\operatorname{Id}-\frac{1}{\sqrt{\lambda}}K_{Q}\right)v_{\lambda}(y)= \frac{v_{\lambda}^{\prime}(0)}{\sqrt{\lambda}}\Phi_{\lambda}(y),\quad y\in[0, B], \tag{16}\] where for any \(u\in C^{2}([0,B])\) we have \[K_{Q}u(y)=\int_{z=0}^{y}Q(z)\sin(\sqrt{\lambda}(y-z))u(z)dz,\quad y\in]0,B[.\] Now write \(\|K_{Q}\|_{t}\) for the operator norm of \(K_{Q}\) as a mapping from \(L^{t}(]0,B[)\) to \(L^{t}(]0,B[)\). **Lemma 1**.: _For every positive \(\lambda\) we have_ \[\|K_{Q}\|_{2}^{2}\leq\frac{B^{2}}{4}\|Q\|_{\infty}^{2}\] _and_ \[\|K_{Q}\|_{4}^{4}\leq\left(\frac{B^{3}}{12}+\frac{5B}{32\lambda}+\frac{5}{32 \lambda^{3/2}}\right)\|Q\|_{4}^{4}.\] Proof.: The estimates follow readily from applying Holder's inequality. We have \[\|K_{Q}u\|_{2}^{2} \leq\|Q\|_{\infty}^{2}\|u\|_{2}^{2}\int_{y=0}^{B}\|\sin(\sqrt{ \lambda}(y-\cdot))\|_{L^{2}(]0,y]}^{2}dy\] \[=\|Q\|_{\infty}^{2}\|u\|_{2}^{2}\int_{y=0}^{B}\left(\frac{y}{2}- \frac{\cos\sqrt{\lambda}y\sin\sqrt{\lambda}y}{2\sqrt{\lambda}}\right)\] \[=\frac{\|Q\|_{\infty}^{2}}{4}\|u\|_{2}^{2}\left(B^{2}-\frac{\sin^ {2}\sqrt{\lambda}B}{\lambda}\right)\] \[\leq\frac{\|Q\|_{\infty}^{2}B^{2}}{4}\|u\|_{2}^{2},\quad u\in L^ {2}(]0,B[),\] as well as \[\|K_{Q}u\|_{4}^{4} \leq\|Q\|_{4}^{4}\|u\|_{4}^{4}\int_{y=0}^{B}\|\sin(\sqrt{\lambda}(y- \cdot))\|_{L^{2}(]0,y[)}^{4}dy\] \[=\|Q\|_{4}^{4}\|u\|_{4}^{4}\left(\frac{B^{3}}{12}+\frac{B\cos( \sqrt{\lambda}B)^{2}}{4\lambda}-\frac{\sin\sqrt{\lambda}B\cos(\sqrt{\lambda}B )^{3}}{16\lambda^{3/2}}\right.\] \[\left.-\frac{3B}{32\lambda}-\frac{3\cos\sqrt{\lambda}B\sin\sqrt{ \lambda}B}{32\lambda^{3/2}}\right)\] \[\leq\|Q\|_{4}^{4}\left(\frac{B^{3}}{12}+\frac{5B}{32\lambda}+ \frac{5}{32\lambda^{3/2}}\right)\|u\|_{4}^{4},\quad u\in L^{4}([0,B]),\] We have from (16) and from Lemma 1 that, for all positive \(\lambda\), \[\|v_{\lambda}\|_{2} \geq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{2 }-\lambda^{-1/2}\|K_{Q}\|_{2}\|v_{\lambda}\|_{2}\] \[\geq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{2 }-\lambda^{-1/2}B\|Q\|_{\infty}\|v_{\lambda}\|_{2}/2,\] \[\|v_{\lambda}\|_{2} \leq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{2 }+\lambda^{-1/2}\|K_{Q}\|_{2}\|v_{\lambda}\|_{2}\] \[\leq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{2 }+\lambda^{-1/2}B\|Q\|_{\infty}\|v_{\lambda}\|_{2}/2,\] \[\|v_{\lambda}\|_{4} \geq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{4 }-\lambda^{-1/2}\|K_{Q}\|_{4}\|v_{\lambda}\|_{4}\] \[\geq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{4 }-\lambda^{-1/2}\left(\frac{B^{3}}{12}+\frac{5B}{32\lambda}+\frac{5}{32 \lambda^{3/2}}\right)^{1/4}\|Q\|_{4}\|v_{\lambda}\|_{4},\] and \[\|v_{\lambda}\|_{4} \leq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{4 }+\lambda^{-1/2}\|K_{Q}\|_{4}\|v_{\lambda}\|_{4}\] \[\leq\lambda^{-1/2}|v_{\lambda}^{\prime}(0)|\|\Phi_{\lambda}\|_{4 }+\lambda^{-1/2}\left(\frac{B^{3}}{12}+\frac{5B}{32\lambda}+\frac{5}{32\lambda^ {3/2}}\right)^{1/4}\|Q\|_{4}\|v_{\lambda}\|_{4}.\] Specifically, for \(B\|Q\|_{\infty}/2<\sqrt{\lambda}\) (assumption (6)) and \[\left(\frac{B^{3}}{12}+\frac{5B}{32\lambda}+\frac{5}{32\lambda^{3/2}}\right)^ {1/4}\|Q\|_{4}<\sqrt{\lambda}\] (assumption (7)), we have \[\left(\frac{1-b(B,\lambda)}{1+a(B,\lambda)}\right)^{4}\leq\frac{\alpha(v_{ 
\lambda})}{\alpha(\Phi_{\lambda})}\leq\left(\frac{1+b(B,\lambda)}{1-a(B, \lambda)}\right)^{4}.\] ## 3. Proof of Theorem 2 (asymptotic bounds on \(\alpha(\phi_{\lambda})\)) The first part of Theorem 2 follows from the fact that \[\frac{1\mp b(B,\lambda)}{1\pm a(B,\lambda)}=1+O(\lambda^{-1/2}),\quad\lambda \rightarrow\infty,\] together with (10) and the estimates in Theorem 1. Next, if \(Q\in\mathrm{BV}([0,B])\) then we can use the asymptotic expansion of \(v_{\lambda}\) from (15) given by Eq. (3.3)\({}_{2N}\) of Fulton and Pruess [6] with \(N=1\), and get \[v_{\lambda}(y)/v_{\lambda}^{\prime}(0)=\lambda^{-1/2}\sin(\lambda^{1/2}y)-\frac {1}{2\lambda}\int_{0}^{y}Q(s)ds\cdot\cos(\lambda^{1/2}y)+O(\lambda^{-3/2}), \quad y\in[0,B],\] where the remainder \(O(\lambda^{-3/2})\) is uniform in \(y\in[0,B]\). This, in turn, implies \[\frac{\|v_{\lambda}\|_{2}^{4}}{v_{\lambda}^{\prime}(0)^{4}} =\lambda^{-2}\frac{B^{2}}{4}\] \[-\lambda^{-5/2}B\left(\frac{1}{4}\sin(2B\lambda^{1/2})+\int_{y=0 }^{B}\int_{s=0}^{y}Q(s)ds\sin(\lambda^{1/2}y)\cos(\lambda^{1/2}y)dy\right)+O( \lambda^{-3})\] and \[\frac{\|v_{\lambda}\|_{4}^{4}}{v_{\lambda}^{\prime}(0)^{4}} =\lambda^{-2}\frac{3B}{8}\] \[+\lambda^{-5/2}\Bigg{(}\frac{\sin(4B\lambda^{1/2})-8\sin(2B \lambda^{1/2})}{32}\] \[\quad-2\int_{y=0}^{B}\int_{s=0}^{y}Q(s)ds\sin^{3}(\lambda^{1/2}y) \cos(\lambda^{1/2}y)dy\Bigg{)}+O(\lambda^{-3})\] as \(\lambda\to\infty\). Thus \(c_{-}\leq\lambda^{2}\|v_{\lambda}\|_{2}^{4}/v_{\lambda}^{\prime}(0)^{4}\leq c _{+}\) with \[c_{\pm}=\frac{B^{2}}{4}\pm\lambda^{-1/2}B\left(\frac{1}{4}+B\|Q\|_{1}\right)+O (\lambda^{-1}),\quad\lambda\to\infty,\] and \(d_{-}\leq\lambda^{2}\|v_{\lambda}\|_{4}^{4}/v_{\lambda}^{\prime}(0)^{4}\leq d_ {+}\) with \[d_{\pm}=\frac{3B}{8}\pm\lambda^{-1/2}\left(\frac{9}{32}+2B\|Q\|_{1}\right)+O( \lambda^{-1}),\quad\lambda\to\infty.\] Finally, if \(Q\in\mathrm{AC}([0,B])\) and \(Q^{\prime}\in\mathrm{BV}([0,B])\) then we can use the asymptotic expansion of \(v_{\lambda}\) given by [6, Eq. (3.3)\({}_{2N+1}\)] with \(N=1\), to get \[v_{\lambda}(y)/v_{\lambda}^{\prime}(0) =\lambda^{-1/2}\sin(\lambda^{1/2}y)-\lambda^{-1}\frac{1}{2}\cos( \lambda^{1/2}y)\int_{0}^{y}Q(s)ds \tag{17}\] \[+\lambda^{-3/2}\frac{1}{4}\sin(\lambda^{1/2}y)\left(\int_{0}^{y}Q (s)\int_{0}^{s}Q(\tau)d\tau ds+Q(0)+Q(y)\right)+O(\lambda^{-2}),\] where the remainder \(O(\lambda^{-2})\) is uniform in \(y\). This, in turn, implies \[\frac{\|v_{\lambda}\|_{2}^{4}}{v_{\lambda}^{\prime}(0)^{4}} =\lambda^{-2}\frac{B^{2}}{4}\] \[-\lambda^{-5/2}B\left(\frac{\sin(2B\lambda^{1/2})}{4}+\int_{0}^{B }\sin(\lambda^{1/2}y)\cos(\lambda^{1/2}y)\int_{0}^{y}Q(s)dsdy\right)\] \[+\lambda^{-3}B\Bigg{(}\frac{1}{4}\int_{0}^{B}\cos^{2}(\lambda^{1/2 }y)\left(\int_{0}^{y}Q(s)ds\right)^{2}dy\] \[+\frac{1}{2}\int_{0}^{B}\sin^{2}(\lambda^{1/2}y)\left(\int_{0}^{y }Q(s)\int_{0}^{s}Q(\tau)d\tau ds+Q(0)+Q(y)\right)dy\Bigg{)}+O(\lambda^{-7/2})\] and \[\frac{\|v_{\lambda}\|_{4}^{4}}{v_{\lambda}^{\prime}(0)^{4}} =\lambda^{-2}\frac{3B}{8}\] \[+\lambda^{-5/2}\left(\frac{\sin(4B\lambda^{1/2})-8\sin(2B\lambda^{1 /2})}{32}-2\int_{y=0}^{B}\sin^{3}(\lambda^{1/2}y)\cos(\lambda^{1/2}y)\int_{0}^ {y}Q(s)dsdy\right)\] \[+\lambda^{-3}\Bigg{(}\frac{3}{2}\int_{0}^{B}\sin^{2}(\lambda^{1/2 }y)\cos^{2}(\lambda^{1/2}y))\left(\int_{0}^{y}Q(s)ds\right)^{2}dy\] \[+\int_{0}^{B}\sin^{4}(\lambda^{1/2}y)\left(\int_{0}^{y}Q(s)\int_{ 0}^{s}Q(\tau)d\tau ds+Q(0)+Q(y)\right)dy\Bigg{)}+O(\lambda^{-7/2}).\] Now each eigenvalue \(\lambda\) is a zero of \(\lambda\mapsto v_{\lambda}(B)\)[6, p. 
319, Case 4], and in light of (17) we therefore have \[\sin(\lambda^{1/2}B)=\lambda^{-1/2}\frac{1}{2}\int_{0}^{B}Q(s)ds\cdot\cos( \lambda^{1/2}B)+O(\lambda^{-3/2}),\] \[\sin^{2}(\lambda^{1/2}B)=\lambda^{-1}\frac{1}{4}\left(\int_{0}^{B}Q(s)ds \right)^{2}\cos^{2}(\lambda^{1/2}B)+O(\lambda^{-2}),\quad\cos^{2}(\lambda^{1/2 }B)=1+O(\lambda^{-1}),\] \[\sin(2\lambda^{1/2}B)=\lambda^{-1/2}\int_{0}^{B}Q(s)ds\cdot\cos^{2}(\lambda^{1 /2}B)+O(\lambda^{-3/2}),\] and \[\sin(4\lambda^{1/2}B)=\lambda^{-1/2}2\int_{0}^{B}Q(s)ds\cdot\cos(2\lambda^{1/2 }B)\cos^{2}(\lambda^{1/2}B)+O(\lambda^{-3/2}).\] Also, using integration by parts, we find for any \(\phi,\psi\in C^{4}([0,B])\) with \(\phi(0)=0\) that \[\int_{0}^{B}\sin(\lambda^{1/2}y)\cos(\lambda^{1/2}y)\phi(y)dy =\lambda^{-1/2}\left(\frac{\sin^{2}(\lambda^{1/2}B)}{2}-\frac{1}{ 4}\right)\phi(B)+O(\lambda^{-1})\] \[=-\lambda^{-1/2}\frac{1}{4}\phi(B)+O(\lambda^{-1}),\] \[\int_{0}^{B}\cos^{2}(\lambda^{1/2}y)\phi(y)dy =\frac{1}{2}\int_{0}^{B}\phi(y)dy+\lambda^{-1/2}\frac{\sin(2 \lambda^{1/2}B)}{4}\phi(B)+O(\lambda^{-1})\] \[=\frac{1}{2}\int_{0}^{B}\phi(y)dy+O(\lambda^{-1}),\] \[\int_{0}^{B}\sin^{2}(\lambda^{1/2}y)\psi(y)dy =\frac{1}{2}\int_{0}^{B}\psi(y)dy-\lambda^{-1/2}\frac{\sin(2 \lambda^{1/2}B)}{4}\psi(B)+O(\lambda^{-1})\] \[=\frac{1}{2}\int_{0}^{B}\psi(y)dy+O(\lambda^{-1}),\] \[\int_{0}^{B}\sin^{3}(\lambda^{1/2}y)\cos(\lambda^{1/2}y)\phi(y)dy =\lambda^{-1/2}\left(\frac{\sin^{4}(\lambda^{1/2}B)}{4}-\frac{3} {32}\right)\phi(B)+O(\lambda^{-1})\] \[=-\lambda^{-1/2}\frac{3}{32}\phi(B)+O(\lambda^{-1}),\] \[\int_{0}^{B}\sin^{2}(\lambda^{1/2}y)\cos^{2}(\lambda^{1/2}y)\phi(y)dy =\frac{1}{8}\int_{0}^{B}\phi(y)dy\] \[+\lambda^{-1/2}\Bigg{(}\frac{\sin(2\lambda^{1/2}B)-4\sin(\lambda^{ 1/2}B)\cos^{3}(\lambda^{1/2}B)}{16}\phi(B)\] \[-\frac{\cos^{4}(\lambda^{1/2}B)}{2}\phi^{\prime}(B)+\frac{3}{16} \phi^{\prime}(B)+\frac{5}{16}\phi^{\prime}(0)\Bigg{)}+O(\lambda^{-1})\] \[=\frac{1}{8}\int_{0}^{B}\phi(y)dy+\lambda^{-1/2}\frac{5}{16}\left( \phi^{\prime}(0)-\phi^{\prime}(B)\right)+O(\lambda^{-1}),\] and \[\int_{0}^{B}\sin^{4}(\lambda^{1/2}y)\psi(y)dy =\frac{3}{8}\int_{0}^{B}\psi(y)dy\] \[-\lambda^{-1/2}\left(\frac{\sin^{3}(\lambda^{1/2}B)\cos(\lambda^ {1/2}B)}{4}+\frac{3\sin(2\lambda^{1/2}B)}{16}\right)\psi(B)+O(\lambda^{-1})\] \[=\frac{3}{8}\int_{0}^{B}\psi(y)dy+O(\lambda^{-1}).\] Using these expansions, we get \[\lambda^{2}\frac{\|v_{\lambda}\|_{2}^{4}}{v_{\lambda}^{\prime}(0) ^{4}} =\frac{B^{2}}{4}+\lambda^{-1}\frac{B}{4}\Bigg{[}\frac{1}{2}\int_ {0}^{B}\left(\int_{0}^{y}Q(s)ds\right)^{2}dy\] \[+\int_{0}^{B}\left(\int_{0}^{y}Q(s)\int_{0}^{s}Q(\tau)d\tau ds+Q( 0)+Q(y)\right)dy\Bigg{]}+O(\lambda^{-3/2})\] and \[\lambda^{2}\frac{\|v_{\lambda}\|_{4}^{4}}{v_{\lambda}^{\prime}(0) ^{4}} =\frac{3B}{8}\] \[+\lambda^{-1}\frac{3}{8}\Bigg{[}\frac{1}{2}\int_{0}^{B}\left( \int_{0}^{y}Q(s)ds\right)^{2}dy\] \[+\int_{0}^{B}\left(\int_{0}^{y}Q(s)\int_{0}^{s}Q(\tau)d\tau ds+Q( 0)+Q(y)\right)dy\Bigg{]}+O(\lambda^{-3/2}).\] Note that the factors multiplying \(\lambda^{0}\) and \(\lambda^{-1}\) in \(\|v_{\lambda}\|_{2}^{4}\) are proportional to those in \(\|v_{\lambda}\|_{4}^{4}\), with the proportionality constant \(2B/3\). We thus have \[\alpha(v_{\lambda})=\frac{B^{2}/4+\lambda^{-1}s+O(\lambda^{-3/2})}{(3/2B)(B^{2 }/4+\lambda^{-1}s+O(\lambda^{-3/2}))}=\frac{2B}{3}+O(\lambda^{-3/2}),\] where \[s=\frac{B}{4}\Bigg{[}\frac{1}{2}\int_{0}^{B}\left(\int_{0}^{y}Q(s)ds\right)^{ 2}dy+\int_{0}^{B}\left(\int_{0}^{y}Q(s)\int_{0}^{s}Q(\tau)d\tau ds+Q(0)+Q(y) \right)dy\Bigg{]}.\] ## 4. 
Proof of Proposition 1 By construction \(T\) is self-adjoint and diagonalizable. Hence it can be written in the following form \[T=\sum_{j=1}^{\infty}\lambda_{j}P_{j}, \tag{18}\] where \((\lambda_{j})_{j\in\textbf{{N}}_{0}}\) is the strictly increasing sequence of eigenvalues of \(T\), and \(P_{j}\) are the orthogonal projections onto the eigenspaces associated to \(\lambda_{j},\;j\in\textbf{{N}}_{0}\). Since \(\ell_{k}\in D(T)\), it has the following expansion \[\ell_{k}=\tilde{\ell}_{k}/\|\tilde{\ell}_{k}\|_{2},\quad\tilde{\ell}_{k}=\sum _{j=1}^{\infty}\lambda_{j}^{-k}P_{j}1.\] Straightforward computations give \[\|\lambda_{1}^{k}\tilde{\ell}_{k}-P_{1}1\|_{2}\leq L^{1/2}\left(\frac{\lambda _{1}}{\lambda_{2}}\right)^{k},\quad\text{and}\;\|P_{1}1\|_{2}\leq\|\lambda_{1 }^{k}\tilde{\ell}_{k}\|_{2}. \tag{19}\] We then deduce \[\|\lambda_{1}^{k}\tilde{\ell}_{k}\|_{2}-\|P_{1}1\|_{2}\leq L^{1/2}\left(\frac{ \lambda_{1}}{\lambda_{2}}\right)^{k}. \tag{20}\] Similarly, since \(T\tilde{\ell}_{k}\in L^{2}(]0,L[)\), we have \[\|\lambda_{1}^{k}T^{1/2}\tilde{\ell}_{k}-T^{1/2}P_{1}1\|_{2}\leq(\lambda_{1}L )^{1/2}\left(\frac{\lambda_{1}}{\lambda_{2}}\right)^{k-1/2}. \tag{21}\] Recall that \(\phi_{1}>0\), which implies \(\|P_{1}1\|_{2}>0\) and \(\phi_{1}=P_{1}1/\|P_{1}1\|_{2}\). Now combining inequalities (19), (20) and (21), we get \[\|T^{1/2}\ell_{k}-T^{1/2}\phi_{1}\|_{2}\leq\|\lambda_{1}^{k}T^{1/ 2}\tilde{\ell}_{k}-T^{1/2}P_{1}1\|_{2}\|P_{1}1\|_{2}^{-1}+\lambda_{1}^{1/2} \left(\|\lambda_{1}^{k}\tilde{\ell}_{k}\|_{2}-\|P_{1}1\|_{2}\right)\|P_{1}1\|_ {2}^{-1} \tag{22}\] \[\leq 2(\lambda_{1}L)^{1/2}\left(\frac{\lambda_{1}}{\lambda_{2}} \right)^{k-1/2}\|P_{1}1\|_{2}^{-1}.\] On the other hand, we have \[|\varphi(x)|\leq\int_{0}^{x}|\varphi^{\prime}(s)|ds\leq L^{1/2}\|\varphi^{ \prime}\|_{2},\quad\forall x\in]0,L[,\] for all \(\varphi\in C_{0}^{\infty}(]0,L[)\). Since \(C_{0}^{\infty}(]0,L[)\) is dense in \(D(T^{1/2})\), we get \[\|\varphi\|_{\infty}\leq L^{1/2}\|T^{1/2}\varphi\|_{2},\quad\forall\varphi\in D (T^{1/2}). \tag{23}\] Combining inequalities (23) and (22), we obtain the desired result. ## 5. Proof of Proposition 2 Using the spectral expansion (18), we get \[\ell_{k,t}=\sum_{j=1}^{\infty}\frac{1}{(t\lambda_{j})^{k}}P_{j}1.\] Hence \[T^{1/2}\left(\ell_{k,t}-\sum_{j=1}^{n_{0}}\frac{1}{(t\lambda_{j})^{k}}P_{j}1 \right)=\sum_{j=n_{0}+1}^{\infty}\frac{1}{(t\lambda_{j})^{k}}T^{1/2}P_{j}1.\] Therefore \[\left\|T^{1/2}\left(\ell_{k,t}-\sum_{j=1}^{n_{0}}\frac{1}{(t\lambda_{j})^{k}}P_ {j}1\right)\right\|_{2}\leq\frac{L^{1/2}}{t^{1/2}}\frac{1}{(t\lambda_{n_{0}+1} )^{k-1/2}}. \tag{24}\] Applying again the Sobolev inequality (23), we recover the final estimate. ## 6. The landscape function: numerical tests We start by illustrating the consequences of Proposition 1. For Figure 4 we use \(L=1\) and \[p(x)=\tanh(40x/L-10)+1.1,\quad q(x)=0,\quad x\in[0,L],\] as in (2) in Section 1), while Figure 5 shows the graphs of \[p(x)=\tanh(40x/L-20)+1.1,\quad q(x)=2+\sin(2\pi x),\quad x\in[0,L],\] used, with \(L=1\), for the results of Figure 6. Finally, Figure 7 shows the functions \[p(x)=\tanh(40x/L-10)+1.1,\quad q(x)=2+\sin(2\pi x),\quad x\in[0,L],\] used, with \(L=5\), for the results of Figure 8. Next, in Figure 9 we illustrate the validity of the upper bound on \(\|\ell_{k,t}-\sum_{j=1}^{n_{0}}(t\lambda_{j})^{-k}P_{j}1\|_{\infty}\), as given in Proposition 2. 
Since the constants \((t\lambda_{j})^{-1}\), \(j=1,\ldots,n_{0}\), are greater than \(1\), the numerical error present in the above \(L^{\infty}\)-norm can grow exponentially with \(k\). To avoid this numerical instability, in Figure 9 we plot the equivalent quantity \(\|\sum_{j=n_{0}+1}^{\infty}(t\lambda_{j})^{-k}P_{j}1\|_{\infty}\), with the series truncated at \(j=20\).

Figure 4. Left: the functions \(\ell_{k}\) approach the first eigenvector \(\phi_{1}\) pointwise as \(k\) increases. Right: actual value vs. upper bound on \(\|\phi_{1}-\ell_{k}\|_{\infty}\), see Proposition 1.

Figure 5. The functions \(p(x)\) and \(q(x)\) used for the results of Figure 6.

Figure 6. Left: the functions \(\ell_{k}\) approach the first eigenvector \(\phi_{1}\) pointwise as \(k\) increases. Right: actual value vs. upper bound on \(\|\phi_{1}-\ell_{k}\|_{\infty}\), see Proposition 1.

Figure 7. The functions \(p(x)\) and \(q(x)\) used for the results of Figure 8.

## Acknowledgments

M. K. was supported by The Villum Foundation (grant no. 25893).
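The experiments of this section are straightforward to reproduce. The sketch below is our own minimal reconstruction, not the authors' code; the grid size and the second-order finite-difference discretization of \(T\) are assumptions. It computes \(\phi_{1}\) for the coefficients used in Figure 4 and forms the normalized iterates \(\ell_{k}=\tilde{\ell}_{k}/\|\tilde{\ell}_{k}\|_{2}\), with \(T^{k}\tilde{\ell}_{k}=1\), by repeated linear solves, illustrating the contraction rate \((\lambda_{1}/\lambda_{2})^{k}\) of Proposition 1.

```python
# Finite-difference illustration of Proposition 1: the normalized iterates
# l_k, with T^k l~_k = 1, approach the first eigenfunction phi_1.
import numpy as np

L, n = 1.0, 1000
h = L / n
x = np.linspace(0.0, L, n + 1)
xm = 0.5 * (x[:-1] + x[1:])                  # midpoints, for p_{i +/- 1/2}
p = np.tanh(40 * xm / L - 10) + 1.1          # the metric of Figure 4
q = np.zeros(n - 1)                          # q = 0, as in Figure 4

# Dirichlet discretization of T u = -(p u')' + q u on the interior nodes.
main = (p[:-1] + p[1:]) / h**2 + q
off = -p[1:-1] / h**2
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals, evecs = np.linalg.eigh(T)
phi1 = evecs[:, 0] / np.sqrt(h * np.sum(evecs[:, 0] ** 2))
if phi1.sum() < 0:                           # fix the sign so that phi_1 > 0
    phi1 = -phi1
print("contraction ratio lambda_1/lambda_2 =", evals[0] / evals[1])

rhs = np.ones(n - 1)
for k in range(1, 6):
    rhs = np.linalg.solve(T, rhs)            # now rhs = l~_k on the grid
    lk = rhs / np.sqrt(h * np.sum(rhs**2))
    print(k, np.max(np.abs(lk - phi1)))      # sup-norm error, cf. (12)
```

Replacing \(p\) and \(q\) as in Figures 5-8 reproduces the corresponding experiments.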
2308.07596
Twisting theory, relative Rota-Baxter type operators and $L_\infty$-algebras on Lie conformal algebras
Based on Nijenhuis-Richardson bracket and bidegree on the cohomology complex for a Lie conformal algebra, we develop a twisting theory of Lie conformal algebras. By using derived bracket constructions, we construct $L_\infty$-algebras from (quasi-)twilled Lie conformal algebras. And we show that the result of the twisting by a $\mathbb{C}[\partial]$-module homomorphism on a (quasi-)twilled Lie conformal algebra is also a (quasi-)twilled Lie conformal algebra if and only if the $\mathbb{C}[\partial]$-module homomorphism is a Maurer-Cartan element of the $L_\infty$-algebra. In particular, we show that relative Rota-Baxter type operators on Lie conformal algebras are Maurer-Cartan elements. Besides, we propose a new algebraic structure, called NS-Lie conformal algebras, that is closely related to twisted relative Rota-Baxter operators and Nijenhuis operators on Lie conformal algebras. As an application of twisting theory, we give the cohomology of twisted relative Rota-Baxter operators and study their deformations.
Lamei Yuan, Jiefeng Liu
2023-08-15T06:54:19Z
http://arxiv.org/abs/2308.07596v1
Twisting theory, relative Rota-Baxter type operators and \(L_{\infty}\)-algebras on Lie conformal algebras ###### Abstract. Based on Nijenhuis-Richardson bracket and bidegree on the cohomology complex for a Lie conformal algebra, we develop a twisting theory of Lie conformal algebras. By using derived bracket constructions, we construct \(L_{\infty}\)-algebras from (quasi-)twilled Lie conformal algebras. And we show that the result of the twisting by a \(\mathbb{C}[\partial]\)-module homomorphism on a (quasi-)twilled Lie conformal algebra is also a (quasi-)twilled Lie conformal algebra if and only if the \(\mathbb{C}[\partial]\)-module homomorphism is a Maurer-Cartan element of the \(L_{\infty}\)-algebra. In particular, we show that relative Rota-Baxter type operators on Lie conformal algebras are Maurer-Cartan elements. Besides, we propose a new algebraic structure, called NS-Lie conformal algebras, that is closely related to twisted relative Rota-Baxter operators and Nijenhuis operators on Lie conformal algebras. As an application of twisting theory, we give the cohomology of twisted relative Rota-Baxter operators and study their deformations. Key words and phrases:Lie conformal algebra, twisting, \(L_{\infty}\)-algebra, twisted relative Rota-Baxter operator, cohomology 2010 Mathematics Subject Classification: 16D70, 17A30, 17B55 \({}^{*}\) the corresponding author ###### Contents * 1 Introduction * 2 Preliminaries * 3 Nijenhuis-Richardson bracket, quasi-twilled Lie conformal algebras and \(L_{\infty}\)-algebras * 3.1 Nijenhuis-Richardson bracket for Lie conformal algebras and bidegrees * 3.2 Quasi-twilled Lie conformal algebras and \(L_{\infty}\)-algebras * 4 Twisting on Lie conformal algebras and relative Rota-Baxter type operators * 4.1 The case of twilled Lie conformal algebras * 4.2 The case of quasi-twilled Lie conformal algebras * 5 NS-Lie conformal algebras * 6 Cohomology and deformations of twisted relative Rota-Baxter operators * 6.1 Cohomology of twisted relative Rota-Baxter operators * 6.2 Infinitesimal deformations of twisted relative Rota-Baxter operators ## 1. Introduction The notion of a Lie conformal algebra was introduced by Kac in [28] as an algebraic language to encode the properties of operator product expansions in conformal field theory, and, at the same time, of local Poisson brackets in the theory of integrable evolution equations [6]. The general structure theory, cohomology theory and representation theory for conformal algebras have been established and widely developed in the literatures (see, for example, [5, 7, 12, 15]). Rota-Baxter operators on associative algebras were initially introduced by Baxter [3] in the study of the fluctuation theory in probability, and then popularized by Rota [40], Atkinson [1] and Cartier [9] during the process of finding their interrelations with combinatorics. Rota-Baxter algebras have broad connections with mathematical physics, including the application in Connes-Kreimer's algebraic approach to the renormalization in perturbative quantum field theory [11], noncommutative symmetric functions and Hopf algebras [20, 49], splitting of operads [4, 39], quantum analogue of Poisson geometry [41] and double Poisson algebras [2, 22, 23]. We refer the reader to [18] for more details about Rota-Baxter algebras. In the Lie algebra context, a Rota-Baxter operator was introduced independently in the 1980s as the operator form of the classical Yang-Baxter equation, named after the physicists C.-N. Yang and R. 
Baxter [48, 8], whereas the classical Yang-Baxter equation plays important roles in mathematics and mathematical physics such as integrable systems and quantum groups [10, 44]. In order to gain better understanding of the interrelation between the classical Yang-Baxter equation and the related integrable systems, the more general notion of a relative Rota-Baxter operator (also called \(\mathcal{O}\)-operator) on Lie algebras was introduced by Kupershmidt [32]. Recently, Rota-Baxter operators are defined in the categories of Lie groups [19] and cocommutative Hopf algebras [21]. Besides, cohomology, deformations, extensions and homotopy theory of Rota-Baxter operators were well studied in [27, 35, 45]. Motivated by the study of conformal analogue of Lie bialgebras, Liberati developed the theory of Lie conformal bialgebras in [37]. The notion of conformal classical Yang-Baxter equation was introduced to construct coboundary Lie conformal bialgebras. In order to study the operator forms of solutions of the conformal classical Yang-Baxter equation, the authors in [24] introduced the notion of relative Rota-Baxter operators on Lie conformal algebras. See [24, 38, 50] for further details of relative Rota-Baxter operators on Lie and associative conformal algebras. The twisting theory was introduced by Drinfeld in [16] motivated by the study of quasi-Lie bialgebras and quasi-Hopf algebras. As a useful tool in the study of bialgebras, the twisting theory was applied to Poisson geometry and associative algebras by Kosmann-Schwarzbach [31], Roytenberg [43] and Uchino [42]. In this paper, we develop the twisting theory of Lie conformal algebras and use this theory to study relative Rota-Baxter type operators. By the derived bracket constructions of Lie conformal algebras, we construct an \(L_{\infty}\)-algebra associated to a quasi-twilled Lie conformal algebra and show that Rota-Baxter type operators are Maurer-Cartan elements. We introduce the notion of NS-Lie conformal algebras, which is the underlying algebraic structure of the twisted relative Rota-Baxter operators and Nijenhuis operators on a Lie conformal algebra. As an application of twisting theory of Lie conformal algebras, we give the cohomology of twisted relative Rota-Baxter operators and study their deformations. In Section 2, we recall the definitions of Lie conformal algebras and their modules. We also review the cohomology theory of Lie conformal algebras and gather some facts. In Section 3, we first recall Nijenhuis-Richardson bracket \([-,-]_{\mathrm{NR}}\) on the cohomology complex \(C^{*}(\mathcal{A},\mathcal{A})\) for a Lie conformal algebra \(\mathcal{A}\) and thus obtain a graded Lie algebra \((C^{*}(\mathcal{A},\mathcal{A}),[-,-]_{\mathrm{NR}})\). Next, assuming \(\mathcal{A}=\mathcal{A}_{1}\oplus\mathcal{A}_{2}\) has a decomposition into a direct sum of two \(\mathbb{C}[\partial]\)-modules \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), we propose a bidegree on \(C^{*}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus\mathcal{A}_{ 2})\). We will see that the Lie conformal algebra structure \(\Pi\) of \(\mathcal{A}\) is decomposed into the unique four substructures \[\Pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2}. \tag{1.1}\] When \(\dot{\phi}_{2}=0\), \(\mathcal{A}_{2}\) is a subalgebra of \(\mathcal{A}\). We call such a triple \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) a quasi-twilled Lie conformal algebra. 
When \(\hat{\phi}_{1}=\hat{\phi}_{2}=0\), namely, both \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are subalgebras of \(\mathcal{A}\), the triple \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) is called a twilled Lie conformal algebra and denoted by \(\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\). By the derived bracket construction in [30], we show that the quasi-twilled Lie conformal algebra structure on \(\mathcal{A}_{1}\oplus\mathcal{A}_{2}\) induces an \(L_{\infty}\)-algebra structure on \(C^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) (see Theorem 3.14).

In Section 4, we define a twisting operation associated to a \(\mathbb{C}[\partial]\)-module homomorphism \(H:\mathcal{A}_{2}\to\mathcal{A}_{1}\) as \(\Pi^{H}:=e^{X_{\hat{H}}}(\Pi)\), where \(\hat{H}\) is the lift of \(H\). The twisting operation \(\Pi^{H}\) is also decomposed into the unique four substructures corresponding to those of \(\Pi\) in (1.1). We will present explicit formulas of the transformation rules (see Theorem 4.4). In the twilled case, we show that \((\mathcal{A}_{1}\bowtie\mathcal{A}_{2},\Pi^{H})\) is again a twilled Lie conformal algebra if and only if \(H\) is a Maurer-Cartan element of the differential graded Lie algebra \((C^{*}(\mathcal{A}_{2},\mathcal{A}_{1}),d_{\hat{\mu}_{2}},[-,-]_{\hat{\mu}_{1}})\) (see Proposition 4.5), namely,

\[d_{\hat{\mu}_{2}}\hat{H}+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}=0.\]

Furthermore, we show that \(T\) is a relative Rota-Baxter operator if and only if \(\hat{T}\) is a Maurer-Cartan element of a graded Lie algebra (see Proposition 4.8). As an application, we obtain that \(r=\sum_{i}a_{i}\otimes b_{i}\in\mathcal{A}\otimes\mathcal{A}\) is a skew-symmetric solution of the conformal classical Yang-Baxter equation if and only if \(r_{0}^{\sharp}=r_{\lambda}^{\sharp}|_{\lambda=0}\) is a Maurer-Cartan element of the graded Lie algebra \((C^{*}(\mathcal{A}^{*c},\mathcal{A}),[\cdot,\cdot]_{\hat{\mu}})\) (see Proposition 4.13). In the quasi-twilled case, we introduce the notion of a twisted relative Rota-Baxter operator, which is a generalization of a Rota-Baxter operator and is characterized by a 2-cocycle. We show that the basic properties of a relative Rota-Baxter operator are also satisfied by the twisted one. Moreover, we introduce Nijenhuis operators and Reynolds operators to illustrate examples of twisted relative Rota-Baxter operators.

In Section 5, we construct a new algebraic structure, called NS-Lie conformal algebras. We show that NS-Lie conformal algebras connect closely with Lie conformal algebras, conformal NS-algebras, twisted relative Rota-Baxter operators and Nijenhuis operators. The corresponding results are stated in Theorems 5.4-5.7 and Proposition 5.11. In Section 6, we introduce the cohomology of twisted relative Rota-Baxter operators and study their infinitesimal deformations from a cohomological point of view.

Throughout this paper, all vector spaces, linear maps and tensor products are over the complex field \(\mathbb{C}\).

## 2. Preliminaries

In this section, we recall the notions of Lie conformal algebras, conformal modules and cohomology. Also, we gather some known results for later use. The material can be found in the literature [7, 12, 15, 24, 28].
A **conformal algebra** \(\mathcal{A}\) is a \(\mathbb{C}[\partial]\)-module endowed with a \(\mathbb{C}\)-linear map \(\mathcal{A}\otimes\mathcal{A}\to\mathcal{A}[\lambda],\ a\otimes b\mapsto a_{\lambda}b\), satisfying conformal sesquilinearity: \[(\partial a)_{\lambda}b=-\lambda a_{\lambda}b,\ a_{\lambda}\partial b=(\partial+\lambda)a_{\lambda}b,\ \forall\ a,b\in\mathcal{A}. \tag{2.1}\] If, in addition, it satisfies associativity: \[(a_{\lambda}b)_{\lambda+\mu}c=a_{\lambda}(b_{\mu}c),\ \forall\ a,b,c\in\mathcal{A}, \tag{2.2}\] then \(\mathcal{A}\) is called an **associative conformal algebra**. A conformal algebra \(\mathcal{A}\) is called **finite** if it has finite rank as a \(\mathbb{C}[\partial]\)-module. By conformal sesquilinearity, the following equalities hold \((a,b,c\in\mathcal{A})\): \[(a_{-\lambda-\partial}b)_{\lambda+\mu}c=(a_{\mu}b)_{\lambda+\mu}c,\ \ a_{\mu}(b_{-\lambda-\partial}c)=a_{\mu}(b_{-\lambda-\mu-\partial}c). \tag{2.3}\] Changing the variables in (2.3) gives \[(a_{-\mu-\partial}b)_{-\lambda-\partial}c=(a_{-\lambda-\mu-\partial}b)_{-\lambda-\partial}c,\ \ a_{-\lambda-\mu-\partial}(b_{-\lambda-\partial}c)=a_{-\lambda-\mu-\partial}(b_{\mu}c). \tag{2.4}\]

**Definition 2.1**.: A **Lie conformal algebra** \(\mathcal{A}\) is a \(\mathbb{C}[\partial]\)-module endowed with a \(\mathbb{C}\)-linear map \(\mathcal{A}\otimes\mathcal{A}\rightarrow\mathcal{A}[\lambda],\ a\otimes b\mapsto[a_{\lambda}b]\), called the \(\lambda\)-bracket, and satisfying the following axioms for all \(a,b,c\in\mathcal{A}\):

Conformal sesquilinearity: \([\partial a_{\lambda}b]=-\lambda[a_{\lambda}b],\ \ [a_{\lambda}\partial b]=(\partial+\lambda)[a_{\lambda}b]\),

Skew-symmetry: \([a_{\lambda}b]=-[b_{-\lambda-\partial}a]\),

Jacobi identity: \([a_{\lambda}[b_{\mu}c]]=[[a_{\lambda}b]_{\lambda+\mu}c]+[b_{\mu}[a_{\lambda}c]]\).

Let \(M\) and \(N\) be \(\mathbb{C}[\partial]\)-modules. A **conformal linear map** from \(M\) to \(N\) is a \(\mathbb{C}\)-linear map \(f_{\lambda}:M\to N[\lambda]\) satisfying \(f_{\lambda}(\partial u)=(\partial+\lambda)f_{\lambda}u\), for \(u\in M\). The vector space of all such maps, denoted by \(\operatorname{Chom}(M,N)\), is a \(\mathbb{C}[\partial]\)-module via: \[(\partial f)_{\lambda}=-\lambda f_{\lambda},\ \text{for}\ \ f_{\lambda}\in\operatorname{Chom}(M,N).\] Define the **conformal dual** of a \(\mathbb{C}[\partial]\)-module \(M\) as \(M^{*c}=\operatorname{Chom}(M,\mathbb{C})\), where \(\mathbb{C}\) is viewed as the trivial \(\mathbb{C}[\partial]\)-module, that is, \(M^{*c}=\{a:M\rightarrow\mathbb{C}[\lambda]\mid a\text{ is }\mathbb{C}\text{-linear and }a_{\lambda}(\partial b)=\lambda a_{\lambda}b\}\). If \(M\) is a finitely generated \(\mathbb{C}[\partial]\)-module, then \(\operatorname{Cend}(M):=\operatorname{Chom}(M,M)\) is an associative conformal algebra via: \[(f_{\lambda}g)_{\mu}v=f_{\lambda}(g_{\mu-\lambda}v),\ \text{for}\ v\in M,\ f,g\in\operatorname{Cend}(M).\] Hence \(\operatorname{Cend}(M)\) becomes a Lie conformal algebra, denoted by \(\operatorname{gc}(M)\) and called the **general Lie conformal algebra** on \(M\), with respect to the following \(\lambda\)-bracket: \[[f_{\lambda}g]_{\mu}v=f_{\lambda}(g_{\mu-\lambda}v)-g_{\mu-\lambda}(f_{\lambda}v),\ \text{for}\ v\in M,\ f,g\in\operatorname{Cend}(M).\] Hereafter all \(\mathbb{C}[\partial]\)-modules are assumed to be finitely generated.
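A concrete instance of Definition 2.1 is the Virasoro conformal algebra \(\mathcal{A}=\mathbb{C}[\partial]L\) with \([L_{\lambda}L]=(\partial+2\lambda)L\), a standard example from [28] that is not otherwise discussed here. The sympy sketch below (our illustration) checks skew-symmetry and the Jacobi identity for it, encoding a bracket \([f(\partial)L\,_{\lambda}\,g(\partial)L]=h(\partial,\lambda)L\) by the commutative polynomial \(h\); this encoding is legitimate because all substitutions act on polynomial coefficients.

```python
# Symbolic check that the Virasoro lambda-bracket [L_lam L] = (d + 2*lam) L
# satisfies the skew-symmetry and Jacobi axioms of Definition 2.1.
import sympy as sp

d, lam, mu = sp.symbols('d lam mu')

def br(f, g, x):
    # [f(d)L _x g(d)L] = f(-x) * g(d + x) * (d + 2x) L, by sesquilinearity
    f, g = sp.sympify(f), sp.sympify(g)
    return sp.expand(f.subs(d, -x) * g.subs(d, d + x) * (d + 2 * x))

# Skew-symmetry: [L_lam L] = -[L_{-lam-d} L]
assert sp.expand(br(1, 1, lam) + br(1, 1, mu).subs(mu, -lam - d)) == 0

# Jacobi: [L_lam [L_mu L]] = [[L_lam L]_{lam+mu} L] + [L_mu [L_lam L]]
left = br(1, br(1, 1, mu), lam)              # [L_lam [L_mu L]]
h = br(1, 1, lam)                            # coefficient of [L_lam L]
t1 = sp.expand(h.subs(d, -(lam + mu)) * (d + 2 * (lam + mu)))
t2 = br(1, br(1, 1, lam), mu)                # [L_mu [L_lam L]]
assert sp.expand(left - t1 - t2) == 0
print("Virasoro bracket passes skew-symmetry and Jacobi")
```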
**Definition 2.2**.: A \(\mathbb{C}[\partial]\)-module \(M\) is called a **module** of a Lie conformal algebra \(\mathcal{A}\) if there is a \(\mathbb{C}\)-linear map \(\rho:\mathcal{A}\rightarrow\operatorname{Cend}(M)\), such that \[\rho(a)_{\lambda}\rho(b)_{\mu}-\rho(b)_{\mu}\rho(a)_{\lambda}=[\rho(a)_{ \lambda}\rho(b)]_{\lambda+\mu}=\rho([a_{\lambda}b])_{\lambda+\mu},\ \rho(\partial(a))_{\lambda}=-\lambda\rho(a)_{\lambda},\ \forall\ a,b\in\mathcal{A}. \tag{2.5}\] That is, \(\rho\) is a homomorphism of Lie conformal algebras from \(\mathcal{A}\) to \(\operatorname{gc}(M)\). For convenience, we will denote a module \(M\) of the Lie conformal algebra \(\mathcal{A}\) by \((M;\rho)\). It is straightforward to check that the following identities hold for all \(a,b\in\mathcal{A}\): \[\rho([a_{\lambda}b])_{-\partial-\mu}= \rho(a)_{\lambda}\rho(b)_{-\partial-\mu}-\rho(b)_{-\partial-\lambda -\mu}\rho(a)_{\lambda}, \tag{2.7}\] \[\rho([a_{\mu}b])_{-\partial-\lambda}= \rho(a)_{\mu}\rho(b)_{-\partial-\lambda}-\rho(b)_{-\partial-\lambda -\mu}\rho(a)_{-\partial-\lambda}. \tag{2.6}\] It is not hard to check that \((M^{*c};\rho^{*})\) is a module of \(\mathcal{A}\), where \(\rho^{*}:\mathcal{A}\rightarrow\operatorname{gc}(M^{*c})\) is defined by \[(\rho^{*}(a)_{\lambda}\varphi)_{\mu}v=-\varphi_{\mu-\lambda}(\rho(a)_{\lambda} v),\ \text{for}\ a\in\mathcal{A},\ \varphi\in M^{*c},\ v\in M.\] Define \(\operatorname{ad}:\mathcal{A}\rightarrow\operatorname{gc}(\mathcal{A})\) by \(\operatorname{ad}(a)_{\lambda}b=[a_{\lambda}b]\) for \(a,b\in\mathcal{A}\). Then \((\mathcal{A};\operatorname{ad})\) is a module of \(\mathcal{A}\), called the **adjoint module**. Hence, \((\mathcal{A}^{*c};\operatorname{ad}^{*})\) is also a module of \(\mathcal{A}\), called the **coadjoint module**. **Proposition 2.3**.: ([15]) _Given a Lie conformal algebra \(\mathcal{A}\) and an \(\mathcal{A}\)-module \((M;\rho)\), the vector space \(\mathcal{A}\oplus M\) is a \(\mathbb{C}[\partial]\)-module via \(\partial(a,m)=(\partial^{\mathcal{A}}a,\partial^{M}m)\) and then carries a Lie conformal algebra structure given by_ \[[(a,m)_{\lambda}(b,n)]=([a_{\lambda}b],\rho(a)_{\lambda}n-\rho(b)_{-\partial- \lambda}m),\text{ for }a,b\in\mathcal{A},\;m,n\in M.\] _It is called the_ **semi-direct product** _of \(\mathcal{A}\) and \(M\), and denoted by \(\mathcal{A}\ltimes_{\rho}M\)._ Let us recall the cohomology complex for a Lie conformal algebra \(\mathcal{A}\) with coefficients in a module \((M;\rho)\) (see [15] for details). Set \(C^{0}(\mathcal{A},M)=M/\partial^{M}M\). 
For \(k\geq 1\), denote by \(C^{k}(\mathcal{A},M)\) the space of \(\mathbb{C}\)-linear map \(f:\mathcal{A}^{\otimes k}\to\mathbb{C}[\lambda_{1},\cdots,\lambda_{k-1}] \otimes M\) satisfying conformal sesquilinearity: \[f_{\lambda_{1},\cdots,\lambda_{k-1}}(a_{1},\cdots,\partial a_{i}, \cdots,a_{k})=-\lambda_{i}f_{\lambda_{1},\cdots,\lambda_{k-1}}(a_{1},\cdots,a _{k}),\quad 1\leq i\leq k-1, \tag{2.9}\] \[f_{\lambda_{1},\cdots,\lambda_{k-1}}(a_{1},\cdots,a_{k-1}, \partial a_{k})=-\lambda_{k}^{\dagger}f_{\lambda_{1},\cdots,\lambda_{k-1}}(a_{ 1},\cdots,a_{k}), \tag{2.8}\] where \(\lambda_{k}^{\dagger}=-\sum_{j=1}^{k-1}\lambda_{j}-\partial^{M}\), and \(f\) is skew-symmetric with respect to simultaneous permutations of the \(a_{i}\)'s and the \(\lambda_{i}\)'s in the sense that for every permutation \(\sigma\) of the indices \(\{1,\cdots,k\}\), that is, \[f_{\lambda_{1},\cdots,\lambda_{k-1}}(a_{1},\cdots,a_{k-1},a_{k})=(-1)^{\sigma }f_{\lambda_{\sigma(1)},\cdots,\lambda_{\sigma(k-1)}}(a_{\sigma(1)},\cdots,a _{\sigma(k-1)},a_{\sigma(k)})|_{\lambda_{k}\mapsto\lambda_{k}^{\dagger}}, \tag{2.10}\] where the notation in the RHS means that \(\lambda_{k}\) is replaced by \(\lambda_{k}^{\dagger}=-\sum_{j=1}^{k-1}\lambda_{j}-\partial^{M}\), if it occurs. For \(\bar{m}\in C^{0}(\mathcal{A},M)=M/\partial^{M}M\), let \(\mathbf{d}\;\bar{m}\in C^{1}(\mathcal{A},M)\) be the following \(\mathbb{C}[\partial]\)-module homomorphism: \[(\mathbf{d}\;\bar{m})(a)=\rho(a)_{-\partial^{M}}m. \tag{2.11}\] This is well defined since, if \(\partial^{M}m\in\partial^{M}M\), the RHS is zero due to conformal sesquilinearity. For \(f\in C^{k}(\mathcal{A},M)\), with \(k\geq 1\), define \(\mathbf{d}f\in C^{k+1}(\mathcal{A},M)\) by \[\begin{split}(\mathbf{d}f)_{\lambda_{1},\cdots,\lambda_{k}}(a_{ 1},a_{2},\cdots,a_{k+1})&=\sum_{i=1}^{k}(-1)^{i+1}\rho(a_{i})_{ \lambda_{i}}f_{\lambda_{1},\cdots,\hat{\lambda}_{i},\cdots,\lambda_{k}}(a_{1},\cdots,\hat{a}_{i},\cdots,a_{k+1})\\ &+\sum_{i=1}^{k}(-1)^{i}f_{\lambda_{1},\cdots,\hat{\lambda}_{i}, \cdots,\lambda_{k}}(a_{1},\cdots,\hat{a}_{i},\cdots,a_{k},\{a_{i\lambda_{i}}a _{k+1}\})\\ &+\sum_{i,j=1,i<j}^{k}(-1)^{k+i+j+1}f_{\lambda_{1},\cdots,\hat{ \lambda}_{i},\cdots,\lambda_{j},\cdots,\lambda_{k},\lambda_{i+1}^{\dagger}}(a_ {1},\cdots,a_{k},[a_{i\lambda_{i}}a_{j}])\\ &+(-1)^{k}\rho(a_{k+1})_{\lambda_{k+1}^{\dagger}}f_{\lambda_{1}, \cdots,\lambda_{k-1}}(a_{1},\cdots,a_{k}),\end{split} \tag{2.12}\] where \(a_{1},\cdots,a_{k+1}\in\mathcal{A}\), \(\hat{a}_{i}\) means that the entry \(a_{i}\) is omitted, and \(\lambda_{k+1}^{\dagger}=-\sum_{j=1}^{k}\lambda_{j}-\partial^{M}\) with \(\partial^{M}\) acting from the left. **Theorem 2.4**.: ([15]) _For \(f\in C^{k}(\mathcal{A},M)\), we have \(\mathbf{d}f\in C^{k+1}(\mathcal{A},M)\) and \(\mathbf{d}^{2}f=0\). This makes \((C^{*}(\mathcal{A},M),\mathbf{d})\) into a cohomology complex._ ## 3. Nijenhuis-Richardson bracket, quasi-twilled Lie conformal algebras and \(L_{\infty}\)-algebras In this section, we first recall the Nijenhuis-Richardson bracket for Lie conformal algebras and give the notion of bidegree on Lie conformal algebra cohomology complex. Then we introduce the notions of twilled Lie conformal algebras and quasi-twilled Lie conformal algebras and show that they induce a differential graded Lie algebra and an \(L_{\infty}\)-algebra, respectively. 
### Nijenhuis-Richardson bracket for Lie conformal algebras and bidegrees A permutation \(\sigma\in\mathbb{S}_{n}\) is called an \((i,n-i)\)**-unshuffle** if \(\sigma(1)<\cdots<\sigma(i)\) and \(\sigma(i+1)<\cdots<\sigma(n)\). If \(i=0\) or \(i=n\), we assume \(\sigma=\mathrm{Id}\). The set of all \((i,n-i)\)-unshuffles is denoted by \(\mathbb{S}_{(i,n-i)}\). Let \(\mathcal{A}\) be a \(\mathbb{C}[\partial]\)-module. Set \(C^{*}(\mathcal{A},\mathcal{A})=\oplus_{k\geq 1}C^{k}(\mathcal{A},\mathcal{A})\), where \(C^{k}(\mathcal{A},\mathcal{A})\) is the space of \(\mathbb{C}\)-linear maps from \(\mathcal{A}^{\otimes k}\) to \(\mathbb{C}[\lambda_{1},\cdots,\lambda_{k-1}]\otimes\mathcal{A}\) satisfying (2.8)-(2.10). For \(f\in C^{m}(\mathcal{A},\mathcal{A})\) and \(g\in C^{n}(\mathcal{A},\mathcal{A})\), define the Nijenhuis-Richardson (NR) bracket on \(C^{*}(\mathcal{A},\mathcal{A})\) by \[[f,g]_{\mathrm{NR}}=f\diamond g-(-1)^{(m-1)(n-1)}g\diamond f, \tag{3.1}\] where \(f\diamond g\in C^{m+n-1}(\mathcal{A},\mathcal{A})\) is defined by \((a_{1},a_{2},\cdots,a_{m+n-1}\in\mathcal{A})\) \[(f\diamond g)_{\lambda_{1},\cdots,\lambda_{m+n-2}}(a_{1},a_{2}, \cdots,a_{m+n-1})\] \[= \sum_{\sigma\in\mathbb{S}_{(n,m-1)}}(-1)^{\sigma}f_{\lambda_{ \sigma(1)}+\cdots+\lambda_{\sigma(n)},\lambda_{\sigma(n+1)},\cdots,\lambda_{ \sigma(m+n-2)}}(g_{\lambda_{\sigma(1)},\cdots,\lambda_{\sigma(n-1)}}(a_{\sigma (1)},\cdots,a_{\sigma(n)}),\] \[a_{\sigma(n+1)},\cdots,a_{\sigma(m+n-1)})|_{\lambda_{m+n-1}\mapsto \lambda_{m+n-1}^{\dagger}},\] where \(\lambda_{m+n-1}^{\dagger}=-\sum_{i=1}^{m+n-2}\lambda_{i}-\partial\). Furthermore, we have **Lemma 3.1**.: ([14, 47])_\((C^{*}(\mathcal{A},\mathcal{A}),[-,-]_{\mathrm{NR}})\) is a graded Lie algebra. Moreover, a 2-cochain \(\pi\in C^{2}(\mathcal{A},\mathcal{A})\) defines a Lie conformal algebra structure on \(\mathcal{A}\) by_ \[[a_{\lambda}b]:=\pi_{\lambda}(a,b),\quad\forall\ a,b\in\mathcal{A},\] _if and only if \([\pi,\pi]_{\mathrm{NR}}=0\)._ Let \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) be \(\mathbb{C}[\partial]\)-modules. In the following, the elements in \(\mathcal{A}_{1}\) are usually denoted by \(a,b,a_{1},a_{2},\cdots\) and the elements in \(\mathcal{A}_{2}\) by \(u,v,v_{1},v_{2},\cdots\). Let \(f:\mathcal{A}_{1}^{\otimes k}\otimes\mathcal{A}_{2}^{\otimes l}\mapsto\mathbb{C }[\lambda_{1},\cdots,\lambda_{k+l-1}]\otimes\mathcal{A}_{1}\) be a linear map satisfying (2.8), (2.9) and the following condition: \[f_{\lambda_{1},\cdots,\lambda_{k+l-1}}(a_{1},\cdots,a_{k},v_{k+1 },\cdots,v_{k+l})\] \[= (-1)^{\sigma}(-1)^{\tau}f_{\lambda_{\sigma(1)},\cdots,\lambda_{ \sigma(k)},\lambda_{\tau(k+l)},\cdots,\lambda_{\tau(k+l-1)}}(a_{\sigma(1)}, \cdots,a_{\sigma(k)},v_{\tau(k+1)},\cdots,v_{\tau(k+l)})|_{\lambda_{k+l}\mapsto \lambda_{k+l}^{\dagger}}, \tag{3.2}\] for every permutation \(\sigma\) of the indices \(\{1,\cdots,k\}\) and permutation \(\tau\) of the indices \(\{k+1,\cdots,k+l\}\). 
We can define a linear map \(\hat{f}\in C^{k+l}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus \mathcal{A}_{2})\) by \[\hat{f}_{\lambda_{1},\cdots,\lambda_{k+l-1}}((a_{1},v_{1}), \cdots,(a_{k+l},v_{k+l}))\] \[= \left.\big{(}\sum_{\tau\in\mathbb{S}_{(k,l)}}(-1)^{\tau}f_{\lambda _{\tau(1)},\cdots,\lambda_{\tau(k+l-1)}}(a_{\tau(1)},\cdots,a_{\tau(k)},v_{\tau( k+1)},\cdots,v_{\tau(k+l)}),0\big{)}\right|_{\lambda_{k+l}\mapsto\lambda_{k+l}^{ \dagger}}.\] Similarly, for a linear map \(f:\mathcal{A}_{1}^{\otimes k}\otimes\mathcal{A}_{2}^{\otimes l}\mapsto\mathbb{C }[\lambda_{1},\cdots,\lambda_{k+l-1}]\otimes\mathcal{A}_{2}\) satisfying (2.8), (2.9) and (3.2), we obtain a linear map \(\hat{f}\in C^{k+l}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus \mathcal{A}_{2})\) by \[\hat{f}_{\lambda_{1},\cdots,\lambda_{k+l-1}}((a_{1},v_{1}), \cdots,(a_{k+l},v_{k+l}))\] \[= \left.\big{(}0,\sum_{\tau\in\mathbb{S}_{(k,l)}}(-1)^{\tau}f_{ \lambda_{\tau(1)},\cdots,\lambda_{\tau(k+l-1)}}(a_{\tau(1)},\cdots,a_{\tau(k)},v_{\tau(k+1)},\cdots,v_{\tau(k+l)})\big{)}\right|_{\lambda_{k+l}\mapsto \lambda_{k+l}^{\dagger}}.\] The linear map \(\hat{f}\) is called a **lift** of \(f\). For example, the lifts of linear maps \(\alpha:\mathcal{A}_{1}\otimes\mathcal{A}_{1}\rightarrow\mathcal{A}_{1}[\lambda]\) and \(\beta:\mathcal{A}_{1}\otimes\mathcal{A}_{2}\rightarrow\mathcal{A}_{2}[\lambda]\) are respectively given by \[\hat{\alpha}_{\lambda}((a_{1},v_{1}),(a_{2},v_{2}))= (\alpha_{\lambda}(a_{1},a_{2}),0), \tag{3.4}\] \[\hat{\beta}_{\lambda}((a_{1},v_{1}),(a_{2},v_{2}))= (0,\beta_{\lambda}(a_{1},v_{2})-\beta_{-\partial-\lambda}(a_{2},v_{1} )). \tag{3.3}\] **Definition 3.2**.: _Let \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) be \(\mathbb{C}[\partial]\)-modules. A cochain \(f\in C^{k+l+1}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus \mathcal{A}_{2})\) has a_ **bidegree**_\(k|l\)_, if the following conditions hold:_ 1. _If_ \(X\) _is an element in_ \(\mathcal{A}_{1}^{\otimes k+1}\otimes\mathcal{A}_{2}^{\otimes l}\)_, then_ \(f(X)\in\mathbb{C}[\lambda_{1},\cdots,\lambda_{k+l}]\otimes\mathcal{A}_{1}\)_;_ 2. _If_ \(X\) _is an element in_ \(\mathcal{A}_{1}^{\otimes k}\otimes\mathcal{A}_{2}^{\otimes l+1}\)_, then_ \(f(X)\in\mathbb{C}[\lambda_{1},\cdots,\lambda_{k+l}]\otimes\mathcal{A}_{2}\)_;_ 3. _All the other cases,_ \(f(X)=0\)_._ We call \(f\in C^{n}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus \mathcal{A}_{2})\) a **homogeneous cochain** if \(f\) has a bidegree and denote its bidegree by \(\|f\|\). Obviously, \(\hat{\alpha}\) and \(\hat{\beta}\) given by (3.3) and (3.4) are elements in \(C^{2}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus\mathcal{A}_ {2})\) with \(\|\hat{\alpha}\|=\|\hat{\beta}\|=1|0\). Naturally we obtain a homogeneous linear map of bidegree \(1|0\), namely, \(\hat{\mu}:=\hat{\alpha}+\hat{\beta}\). Observe that \(\hat{\mu}\) is a multiplication of the semi-direct product type: \[\hat{\mu}_{\lambda}((a_{1},v_{1}),(a_{2},v_{2}))=(\alpha_{\lambda}(a_{1},a_{2} ),\beta_{\lambda}(a_{1},v_{2})-\beta_{-\lambda-\partial}(a_{2},v_{1})),\ a_{1},a_{2} \in\mathcal{A}_{1},\ v_{1},v_{2}\in\mathcal{A}_{2}.\] Even though \(\hat{\mu}\) is not a lift (there is no \(\mu\)), we still use the symbol for our convenience below. The following lemma shows that the NR bracket on \(C^{*}(\mathcal{A}_{1}\oplus\mathcal{A}_{2},\mathcal{A}_{1}\oplus\mathcal{A}_ {2})\) is compatible with the bigrading. 
**Lemma 3.3**.: _If \(\|f\|=k_{f}|l_{f}\) and \(\|g\|=k_{g}|l_{g}\), then \([f,g]_{\mathrm{NR}}\) has the bidegree \(k_{f}+k_{g}|l_{f}+l_{g}\)._

It is straightforward to check that

**Lemma 3.4**.: _If \(\|f\|=-1|l\) (resp. \(l|-1\)) and \(\|g\|=-1|k\) (resp. \(k|-1\)), then \([f,g]_{\rm NR}=0\)._

**Proposition 3.5**.: _Let \((\mathcal{A},\pi)\) be a Lie conformal algebra, \(M\) a \(\mathbb{C}[\partial]\)-module and \(\rho:\mathcal{A}\to{\rm Cend}(M)\) a linear map satisfying \(\rho(\partial(a))_{\lambda}=-\lambda\rho(a)_{\lambda}\). Then \((M;\rho)\) is a module of \(\mathcal{A}\) if and only if_

\[[\hat{\pi}+\hat{\rho},\hat{\pi}+\hat{\rho}]_{\rm NR}=0.\]

Proof.: It follows by a direct calculation.

Let \((\mathcal{A},\pi)\) be a Lie conformal algebra and \((M;\rho)\) a module over \(\mathcal{A}\). By the definitions of lift and bidegree, \(\hat{\pi}+\hat{\rho}\in C^{1|0}(\mathcal{A}\oplus M,\mathcal{A}\oplus M)\), and the subspace \(C^{k}(\mathcal{A},M)\) is identified with \(C^{k|-1}(\mathcal{A}\oplus M,\mathcal{A}\oplus M)\). Set \(C^{*}(\mathcal{A},M)=\oplus_{k=1}^{+\infty}C^{k}(\mathcal{A},M)\). Define the coboundary operator \(\mathbf{d}_{\pi+\rho}:C^{k}(\mathcal{A},M)\to C^{k+1}(\mathcal{A},M)\) by

\[\mathbf{d}_{\pi+\rho}f:=(-1)^{k-1}[\hat{\pi}+\hat{\rho},\hat{f}]_{\rm NR},\quad\forall\ f\in C^{k}(\mathcal{A},M). \tag{3.5}\]

By Lemma 3.3, \(\mathbf{d}_{\pi+\rho}f\in C^{k+1}(\mathcal{A},M)\). By Proposition 3.5 and the graded Jacobi identity, we have \(\mathbf{d}_{\pi+\rho}\circ\mathbf{d}_{\pi+\rho}=0\). Thus we obtain a well-defined cochain complex \((C^{*}(\mathcal{A},M),\mathbf{d}_{\pi+\rho})\). By a direct calculation, we have

**Proposition 3.6**.: _Let \((\mathcal{A},\pi)\) be a Lie conformal algebra and \((M;\rho)\) a module over \(\mathcal{A}\). Then for all \(f\in C^{k}(\mathcal{A},M)\) and \(a_{1},a_{2},\cdots,a_{k+1}\in\mathcal{A}\), we have_

\[(\mathbf{d}_{\pi+\rho}f)_{\lambda_{1},\cdots,\lambda_{k}}(a_{1},\ldots,a_{k+1})\]
\[=\sum_{i=1}^{k+1}(-1)^{i+1}\rho(a_{i})_{\lambda_{i}}f_{\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\lambda_{k+1}}(a_{1},\ldots,\hat{a_{i}},\ldots,a_{k+1})|_{\lambda_{k+1}\mapsto\lambda_{k+1}^{\dagger}}\]
\[+\sum_{1\leq i<j\leq k+1}(-1)^{i+j}f_{\lambda_{i}+\lambda_{j},\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\hat{\lambda_{j}},\cdots,\lambda_{k+1}}(\pi_{\lambda_{i}}(a_{i},a_{j}),\ldots,\hat{a_{i}},\ldots,\hat{a_{j}},\ldots,a_{k+1})|_{\lambda_{k+1}\mapsto\lambda_{k+1}^{\dagger}}.\]

_Thus the coboundary operator \(\mathbf{d}_{\pi+\rho}:C^{k}(\mathcal{A},M)\to C^{k+1}(\mathcal{A},M)\) defined by (3.5) is exactly the Lie conformal algebra coboundary operator for \((\mathcal{A},\pi)\) with coefficients in \((M;\rho)\) for \(k\geq 1\)._

### Quasi-twilled Lie conformal algebras and \(L_{\infty}\)-algebras

Let \((\mathcal{A},[\cdot_{\lambda}\cdot])\) be a Lie conformal algebra with a decomposition into the direct sum of two \(\mathbb{C}[\partial]\)-modules \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), that is, \(\mathcal{A}=\mathcal{A}_{1}\oplus\mathcal{A}_{2}\).

**Lemma 3.7**.: _Let \(\pi\in C^{2}(\mathcal{A},\mathcal{A})\) be a \(2\)-cochain._
_Then \(\pi\) can be uniquely decomposed into four homogeneous linear maps_

\[\pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2},\]

_where the bidegrees of \(\phi_{1}\), \(\mu_{1}\), \(\mu_{2}\) and \(\phi_{2}\) are \(2|-1,\ 1|0,\ 0|1\) and \(-1|2\), respectively._

Proof.: By the definition of bidegree, the space \(C^{2}(\mathcal{A},\mathcal{A})\) can be decomposed into four subspaces

\[C^{2}(\mathcal{A},\mathcal{A})=C^{2|-1}(\mathcal{A},\mathcal{A})+C^{1|0}(\mathcal{A},\mathcal{A})+C^{0|1}(\mathcal{A},\mathcal{A})+C^{-1|2}(\mathcal{A},\mathcal{A}).\]

Thus \(\pi\) is uniquely decomposed into homogeneous linear maps of bidegrees \(2|-1,\ 1|0,\ 0|1\) and \(-1|2\).

Let \(\mathrm{p}_{1}:\mathcal{A}\to\mathcal{A}_{1}\) and \(\mathrm{p}_{2}:\mathcal{A}\to\mathcal{A}_{2}\) be the natural projections. For \(a,b\in\mathcal{A}_{1}\), \(u,v\in\mathcal{A}_{2}\), define

\[[a_{\lambda}b]_{1}=\mathrm{p}_{1}([a_{\lambda}b]),\quad\rho_{2}(v)_{\lambda}a=\mathrm{p}_{1}([v_{\lambda}a]),\quad\phi_{2\lambda}(u,v)=\mathrm{p}_{1}([u_{\lambda}v]),\]
\[[u_{\lambda}v]_{2}=\mathrm{p}_{2}([u_{\lambda}v]),\quad\rho_{1}(a)_{\lambda}v=\mathrm{p}_{2}([a_{\lambda}v]),\quad\phi_{1\lambda}(a,b)=\mathrm{p}_{2}([a_{\lambda}b]).\]

Then the \(\lambda\)-bracket of \(\mathcal{A}\) can be uniquely written as

\[[(a,u)_{\lambda}(b,v)]=([a_{\lambda}b]_{1}+\rho_{2}(u)_{\lambda}b-\rho_{2}(v)_{-\lambda-\partial}a+\phi_{2\lambda}(u,v),[u_{\lambda}v]_{2}+\rho_{1}(a)_{\lambda}v-\rho_{1}(b)_{-\lambda-\partial}u+\phi_{1\lambda}(a,b)).\]

Now we denote the Lie conformal algebra structure on \(\mathcal{A}\) by \(\Pi\), i.e.

\[\Pi_{\lambda}((a,u),(b,v)):=[(a,u)_{\lambda}(b,v)].\]

Set \(\Pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2}\) as in Lemma 3.7. Then

\[\hat{\phi}_{1\lambda}((a,u),(b,v)) =(0,\phi_{1\lambda}(a,b)), \tag{3.6}\]
\[\hat{\mu}_{1\lambda}((a,u),(b,v)) =([a_{\lambda}b]_{1},\rho_{1}(a)_{\lambda}v-\rho_{1}(b)_{-\lambda-\partial}u), \tag{3.7}\]
\[\hat{\mu}_{2\lambda}((a,u),(b,v)) =(\rho_{2}(u)_{\lambda}b-\rho_{2}(v)_{-\lambda-\partial}a,[u_{\lambda}v]_{2}), \tag{3.8}\]
\[\hat{\phi}_{2\lambda}((a,u),(b,v)) =(\phi_{2\lambda}(u,v),0). \tag{3.9}\]

**Lemma 3.8**.: _The Maurer-Cartan equation \([\Pi,\Pi]_{\mathrm{NR}}=0\) is equivalent to the following conditions:_

\[\left\{\begin{array}{rcl}[\hat{\mu}_{1},\hat{\phi}_{1}]_{\mathrm{NR}}&=&0,\\ \frac{1}{2}[\hat{\mu}_{1},\hat{\mu}_{1}]_{\mathrm{NR}}+[\hat{\mu}_{2},\hat{\phi}_{1}]_{\mathrm{NR}}&=&0,\\ [\hat{\mu}_{1},\hat{\mu}_{2}]_{\mathrm{NR}}+[\hat{\phi}_{1},\hat{\phi}_{2}]_{\mathrm{NR}}&=&0,\\ \frac{1}{2}[\hat{\mu}_{2},\hat{\mu}_{2}]_{\mathrm{NR}}+[\hat{\mu}_{1},\hat{\phi}_{2}]_{\mathrm{NR}}&=&0,\\ [\hat{\mu}_{2},\hat{\phi}_{2}]_{\mathrm{NR}}&=&0.\end{array}\right. \tag{3.10}\]
Proof.: By Lemma 3.4, we have

\[[\Pi,\Pi]_{\mathrm{NR}}= [\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2},\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2}]_{\mathrm{NR}}\]
\[= [\hat{\phi}_{1},\hat{\mu}_{1}]_{\mathrm{NR}}+[\hat{\phi}_{1},\hat{\mu}_{2}]_{\mathrm{NR}}+[\hat{\phi}_{1},\hat{\phi}_{2}]_{\mathrm{NR}}+[\hat{\mu}_{1},\hat{\phi}_{1}]_{\mathrm{NR}}+[\hat{\mu}_{1},\hat{\mu}_{1}]_{\mathrm{NR}}\]
\[+[\hat{\mu}_{1},\hat{\mu}_{2}]_{\mathrm{NR}}+[\hat{\mu}_{1},\hat{\phi}_{2}]_{\mathrm{NR}}+[\hat{\mu}_{2},\hat{\phi}_{1}]_{\mathrm{NR}}+[\hat{\mu}_{2},\hat{\mu}_{1}]_{\mathrm{NR}}\]
\[+[\hat{\mu}_{2},\hat{\mu}_{2}]_{\mathrm{NR}}+[\hat{\mu}_{2},\hat{\phi}_{2}]_{\mathrm{NR}}+[\hat{\phi}_{2},\hat{\phi}_{1}]_{\mathrm{NR}}+[\hat{\phi}_{2},\hat{\mu}_{1}]_{\mathrm{NR}}+[\hat{\phi}_{2},\hat{\mu}_{2}]_{\mathrm{NR}}\]
\[= (2[\hat{\mu}_{1},\hat{\phi}_{1}]_{\mathrm{NR}})+([\hat{\mu}_{1},\hat{\mu}_{1}]_{\mathrm{NR}}+2[\hat{\mu}_{2},\hat{\phi}_{1}]_{\mathrm{NR}})+(2[\hat{\mu}_{1},\hat{\mu}_{2}]_{\mathrm{NR}}+2[\hat{\phi}_{1},\hat{\phi}_{2}]_{\mathrm{NR}})\]
\[+([\hat{\mu}_{2},\hat{\mu}_{2}]_{\mathrm{NR}}+2[\hat{\mu}_{1},\hat{\phi}_{2}]_{\mathrm{NR}})+(2[\hat{\mu}_{2},\hat{\phi}_{2}]_{\mathrm{NR}}).\]

By Lemma 3.3 and the definition of bidegree, \([\Pi,\Pi]_{\mathrm{NR}}=0\) if and only if (3.10) holds.

**Definition 3.9**.: Let \(\mathcal{A}\) be a Lie conformal algebra with a decomposition into the direct sum of two \(\mathbb{C}[\partial]\)-modules \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) and the Lie conformal algebra structure \(\Pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2}\).

1. The triple \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) is called a **twilled Lie conformal algebra** if \(\phi_{1}=\phi_{2}=0\), or equivalently, \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are subalgebras of \(\mathcal{A}\). In this case, we denote \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) by \(\mathcal{A}=\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\).
2. The triple \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) is called a **quasi-twilled Lie conformal algebra** if \(\phi_{2}=0\), i.e., \(\mathcal{A}_{2}\) is a subalgebra of \(\mathcal{A}\).

By Lemma 3.8, we have

**Lemma 3.10**.: _The triple \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) is a twilled Lie conformal algebra if and only if the following conditions hold:_

\[\frac{1}{2}[\hat{\mu}_{1},\hat{\mu}_{1}]_{\rm NR} = 0, \tag{3.11}\]
\[[\hat{\mu}_{1},\hat{\mu}_{2}]_{\rm NR} = 0, \tag{3.12}\]
\[\frac{1}{2}[\hat{\mu}_{2},\hat{\mu}_{2}]_{\rm NR} = 0. \tag{3.13}\]

By (3.11), \(\hat{\mu}_{1}\) is a Lie conformal algebra structure on \(\mathcal{A}=\mathcal{A}_{1}\oplus\mathcal{A}_{2}\). By (3.7), \((\mathcal{A}_{1},[\cdot_{\lambda}\cdot]_{1})\) is a Lie conformal algebra and \((\mathcal{A}_{2};\rho_{1})\) is a module over \((\mathcal{A}_{1},[\cdot_{\lambda}\cdot]_{1})\). Similarly, \((\mathcal{A}_{2},[\cdot_{\lambda}\cdot]_{2})\) is a Lie conformal algebra and \((\mathcal{A}_{1};\rho_{2})\) is a module over \((\mathcal{A}_{2},[\cdot_{\lambda}\cdot]_{2})\). Hence the \(\lambda\)-bracket on the twilled Lie conformal algebra \(\mathcal{A}=\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\) is given by \((a,b\in\mathcal{A}_{1},\ u,v\in\mathcal{A}_{2})\):

\[[(a,u)_{\lambda}(b,v)]=([a_{\lambda}b]_{1}+\rho_{2}(u)_{\lambda}b-\rho_{2}(v)_{-\lambda-\partial}a,[u_{\lambda}v]_{2}+\rho_{1}(a)_{\lambda}v-\rho_{1}(b)_{-\lambda-\partial}u). \tag{3.14}\]
**Lemma 3.11**.: _The triple \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) forms a quasi-twilled Lie conformal algebra if and only if the following conditions hold:_

\[[\hat{\mu}_{1},\hat{\phi}_{1}]_{\rm NR} = 0, \tag{3.15}\]
\[\frac{1}{2}[\hat{\mu}_{1},\hat{\mu}_{1}]_{\rm NR}+[\hat{\mu}_{2},\hat{\phi}_{1}]_{\rm NR} = 0, \tag{3.16}\]
\[[\hat{\mu}_{1},\hat{\mu}_{2}]_{\rm NR} = 0, \tag{3.17}\]
\[\frac{1}{2}[\hat{\mu}_{2},\hat{\mu}_{2}]_{\rm NR} = 0. \tag{3.18}\]

The \(\lambda\)-bracket on the quasi-twilled Lie conformal algebra \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) is given by

\[[(a,u)_{\lambda}(b,v)]=([a_{\lambda}b]_{1}+\rho_{2}(u)_{\lambda}b-\rho_{2}(v)_{-\lambda-\partial}a,[u_{\lambda}v]_{2}+\rho_{1}(a)_{\lambda}v-\rho_{1}(b)_{-\lambda-\partial}u+\phi_{1\lambda}(a,b)). \tag{3.19}\]

Since \([\hat{\mu}_{1},\hat{\mu}_{1}]_{\rm NR}\) is not zero in general, \((\mathcal{A}_{1},[\cdot_{\lambda}\cdot]_{1})\) is not a Lie conformal algebra. By (3.18), \((\mathcal{A}_{2},[\cdot_{\lambda}\cdot]_{2})\) is a Lie conformal algebra and \((\mathcal{A}_{1};\rho_{2})\) is a module over \((\mathcal{A}_{2},[\cdot_{\lambda}\cdot]_{2})\).

**Corollary 3.12**.: _Let \((\mathcal{A},[\cdot_{\lambda}\cdot])\) be a Lie conformal algebra and \((M;\rho)\) a module over \(\mathcal{A}\). For any \(2\)-cocycle \(\phi\) in \(C^{2}(\mathcal{A},M)\), the \(\mathbb{C}[\partial]\)-module \(\mathcal{A}\oplus M\) carries a quasi-twilled Lie conformal algebra structure via_

\[[(a,m)_{\lambda}(b,n)]^{\phi}=([a_{\lambda}b],\rho(a)_{\lambda}n-\rho(b)_{-\lambda-\partial}m+\phi_{\lambda}(a,b)), \tag{3.20}\]

_where \(a,b\in\mathcal{A}\) and \(m,n\in M.\) We call it the \(\phi\)_**-twisted semi-direct product of \(\mathcal{A}\) and \(M\)**_, denoted by \(\mathcal{A}\ltimes_{\phi}M\)._

**Definition 3.13**.: ([33, 34]) An \(L_{\infty}\)**-algebra** is a graded vector space \(\mathfrak{g}=\bigoplus_{i\in\mathbb{N}}\mathfrak{g}^{i}\) equipped with a collection of multilinear maps \(l_{k}:\otimes^{k}\mathfrak{g}\to\mathfrak{g}\) of degree \(2-k\), satisfying

1. Skew-symmetry: \(l_{k}(v_{\sigma(1)},\cdots,v_{\sigma(k)})=\chi(\sigma)l_{k}(v_{1},\cdots,v_{k})\), for \(k\geq 1\) and \(\sigma\in\mathbb{S}_{k}\). Here \(\chi(\sigma)\) is the Koszul sign.
2. Higher Jacobi identity: for \(n\geq 1\),
\[\sum_{i+j=n+1}(-1)^{j}\sum_{\sigma\in\mathbb{S}_{(i,n-i)}}\chi(\sigma)l_{j}(l_{i}(v_{\sigma(1)},\cdots,v_{\sigma(i)}),v_{\sigma(i+1)},\cdots,v_{\sigma(n)})=0,\]
where \(v_{1},\cdots,v_{n}\) are homogeneous elements in \(\mathfrak{g}\).

Let \((\mathfrak{g}=\bigoplus_{i\in\mathbb{N}}\mathfrak{g}^{i},\{l_{k}\}_{k=1}^{\infty})\) be an \(L_{\infty}\)-algebra. An element \(\alpha\in\mathfrak{g}^{1}\) is called a **Maurer-Cartan element** if it satisfies the following **Maurer-Cartan equation**:

\[\sum_{k=1}^{+\infty}\frac{1}{k!}l_{k}(\alpha,\cdots,\alpha)=0.\]

The following theorem says that a quasi-twilled Lie conformal algebra induces an \(L_{\infty}\)-algebra.

**Theorem 3.14**.: _Let \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2},\Pi)\) be a quasi-twilled Lie conformal algebra with \(\Pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}\)._
_Define \(d_{\hat{\mu}_{2}}:C^{m}(\mathcal{A}_{2},\mathcal{A}_{1})\to C^{m+1}(\mathcal{A}_{2},\mathcal{A}_{1}),\ [\cdot,\cdot]_{\hat{\mu}_{1}}:C^{m}(\mathcal{A}_{2},\mathcal{A}_{1})\times C^{n}(\mathcal{A}_{2},\mathcal{A}_{1})\to C^{m+n}(\mathcal{A}_{2},\mathcal{A}_{1})\) and \([\cdot,\cdot,\cdot]_{\hat{\phi}_{1}}:C^{m}(\mathcal{A}_{2},\mathcal{A}_{1})\times C^{n}(\mathcal{A}_{2},\mathcal{A}_{1})\times C^{k}(\mathcal{A}_{2},\mathcal{A}_{1})\to C^{m+n+k-1}(\mathcal{A}_{2},\mathcal{A}_{1})\) as follows:_

\[d_{\hat{\mu}_{2}}(f_{1})= [\hat{\mu}_{2},\hat{f}_{1}]_{\mathrm{NR}}, \tag{3.21}\]
\[[f_{1},f_{2}]_{\hat{\mu}_{1}}= (-1)^{m-1}[[\hat{\mu}_{1},\hat{f}_{1}]_{\mathrm{NR}},\hat{f}_{2}]_{\mathrm{NR}}, \tag{3.22}\]
\[[f_{1},f_{2},f_{3}]_{\hat{\phi}_{1}}= (-1)^{n-1}[[[\hat{\phi}_{1},\hat{f}_{1}]_{\mathrm{NR}},\hat{f}_{2}]_{\mathrm{NR}},\hat{f}_{3}]_{\mathrm{NR}}, \tag{3.23}\]

_for \(f_{1}\in C^{m}(\mathcal{A}_{2},\mathcal{A}_{1}),\ f_{2}\in C^{n}(\mathcal{A}_{2},\mathcal{A}_{1}),\ f_{3}\in C^{k}(\mathcal{A}_{2},\mathcal{A}_{1}).\) Then \((C^{*}(\mathcal{A}_{2},\mathcal{A}_{1}),d_{\hat{\mu}_{2}},[\cdot,\cdot]_{\hat{\mu}_{1}},[\cdot,\cdot,\cdot]_{\hat{\phi}_{1}})\) is an \(L_{\infty}\)-algebra._

Proof.: Set \(d_{0}:=[\hat{\mu}_{2},\cdot]_{\mathrm{NR}}\). By the graded Jacobi identity of \([\cdot,\cdot]_{\mathrm{NR}}\), \((C^{*}(\mathcal{A},\mathcal{A}),[\cdot,\cdot]_{\mathrm{NR}},d_{0})\) is a differential graded Lie algebra. By Lemma 3.3, the brackets on \(sC^{*}(\mathcal{A},\mathcal{A})\) given by

\[d_{\hat{\mu}_{2}}(sf_{1})= s[\hat{\mu}_{2},f_{1}]_{\mathrm{NR}},\]
\[[sf_{1},sf_{2}]_{\hat{\mu}_{1}}= (-1)^{|f_{1}|}s[[\hat{\mu}_{1},f_{1}]_{\mathrm{NR}},f_{2}]_{\mathrm{NR}},\]
\[[sf_{1},sf_{2},sf_{3}]_{\hat{\phi}_{1}}= (-1)^{|f_{2}|}s[[[\hat{\phi}_{1},f_{1}]_{\mathrm{NR}},f_{2}]_{\mathrm{NR}},f_{3}]_{\mathrm{NR}},\]
\[l_{i}= 0,\ i\geq 4,\]

are closed on \(sC^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\), where \(f_{1},\ f_{2},\ f_{3}\in C^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) and \(s:C^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\to sC^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) is the suspension operator, assigning to \(C^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) the graded vector space \(sC^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) with \(|sC^{i}(\mathcal{A}_{2},\mathcal{A}_{1})|:=i-1\). Thus \(C^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) is an abelian subalgebra of the graded Lie algebra \((C^{*}(\mathcal{A},\mathcal{A}),[\cdot,\cdot]_{\mathrm{NR}})\). The rest follows from [46, Corollary 3.5].

**Corollary 3.15**.: _Let \(\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\) be a twilled Lie conformal algebra with the Lie conformal algebra structure \(\Pi=\hat{\mu}_{1}+\hat{\mu}_{2}\). Then \((C^{*}(\mathcal{A}_{2},\mathcal{A}_{1}),d_{\hat{\mu}_{2}},[\cdot,\cdot]_{\hat{\mu}_{1}})\) is a differential graded Lie algebra, where \(d_{\hat{\mu}_{2}}\) and \([\cdot,\cdot]_{\hat{\mu}_{1}}\) are given by (3.21) and (3.22), respectively._

## 4. Twisting on Lie conformal algebras and relative Rota-Baxter type operators

In this section, we study twisting theory of Lie conformal algebras and characterize relative Rota-Baxter type operators as Maurer-Cartan elements of a suitable \(L_{\infty}\)-algebra. Let \((\mathcal{A},\Pi)\) be a Lie conformal algebra with a decomposition into the direct sum of two \(\mathbb{C}[\partial]\)-modules \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) and \(\Pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2}\).
Let \(\hat{H}\) be the lift of a \(\mathbb{C}[\partial]\)-module homomorphism \(H:\mathcal{A}_{2}\to\mathcal{A}_{1}\). Notice that the bidegree of \(\hat{H}\) is \(-1|1\). For the operator \(X_{\hat{H}}:=[\cdot,\hat{H}]_{\text{NR}}\), we define

\[e^{X_{\hat{H}}}(\cdot):=\text{Id}+X_{\hat{H}}+\frac{1}{2!}X_{\hat{H}}^{2}+\frac{1}{3!}X_{\hat{H}}^{3}+\cdots,\]

where \(X_{\hat{H}}^{2}:=[[\cdot,\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}}\) and \(X_{\hat{H}}^{n}\) for \(n\geq 3\) is defined similarly. As \(\hat{H}\circ\hat{H}=0\) (indeed, \(\hat{H}\) takes values in \(\mathcal{A}_{1}\oplus 0\), on which it vanishes), the operator \(e^{X_{\hat{H}}}\) is well-defined.

**Definition 4.1**.: The transformation \(\Pi^{H}:=e^{X_{\hat{H}}}(\Pi)\) is called a **twisting** of \(\Pi\) by \(H\).

The following lemma is useful.

**Lemma 4.2**.:

1. \(\Pi^{H}=\Pi+[\Pi,\hat{H}]_{\text{NR}}+\frac{1}{2}[[\Pi,\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}}+\frac{1}{6}[[[\Pi,\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}}\)_;_
2. \(\Pi^{H}_{\lambda}=e^{-\hat{H}}\circ\Pi_{\lambda}\circ(e^{\hat{H}}\otimes e^{\hat{H}})\)_._

Proof.: (i) Since \(\|\hat{H}\|=-1|1\), \(X_{\hat{H}}^{i}(\Pi)=0\) for \(i\geq 4\) by Lemmas 3.3 and 3.4. This proves (i).

(ii) For \(a_{1},a_{2}\in\mathcal{A}_{1},v_{1},v_{2}\in\mathcal{A}_{2}\), we compute separately

\[([\Pi,\hat{H}]_{\text{NR}})_{\lambda}((a_{1},v_{1}),(a_{2},v_{2}))\]
\[= \Pi_{\lambda}((H(v_{1}),0),(a_{2},v_{2}))+\Pi_{\lambda}((a_{1},v_{1}),(H(v_{2}),0))-\hat{H}(\Pi_{\lambda}((a_{1},v_{1}),(a_{2},v_{2})));\]
\[([[\Pi,\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}})_{\lambda}((a_{1},v_{1}),(a_{2},v_{2}))\]
\[= 2\Pi_{\lambda}((H(v_{1}),0),(H(v_{2}),0))-2\hat{H}\Pi_{\lambda}((H(v_{1}),0),(a_{2},v_{2}))-2\hat{H}\Pi_{\lambda}((a_{1},v_{1}),(H(v_{2}),0));\]
\[([[[\Pi,\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}},\hat{H}]_{\text{NR}})_{\lambda}((a_{1},v_{1}),(a_{2},v_{2}))=-6\hat{H}\Pi_{\lambda}((H(v_{1}),0),(H(v_{2}),0)).\]

By assertion (i), we have

\[\Pi^{H}_{\lambda}= \Pi_{\lambda}-\hat{H}\circ\Pi_{\lambda}+\Pi_{\lambda}\circ(\hat{H}\otimes\text{Id})+\Pi_{\lambda}\circ(\text{Id}\otimes\hat{H})-\hat{H}\circ\Pi_{\lambda}\circ(\text{Id}\otimes\hat{H})\]
\[-\hat{H}\circ\Pi_{\lambda}\circ(\hat{H}\otimes\text{Id})+\Pi_{\lambda}\circ(\hat{H}\otimes\hat{H})-\hat{H}\circ\Pi_{\lambda}\circ(\hat{H}\otimes\hat{H}).\]

On the other hand, by \(\hat{H}\circ\hat{H}=0\), we have

\[e^{-\hat{H}}\circ\Pi_{\lambda}\circ(e^{\hat{H}}\otimes e^{\hat{H}})= (\text{Id}-\hat{H})\circ\Pi_{\lambda}\circ((\text{Id}+\hat{H})\otimes(\text{Id}+\hat{H}))\]
\[= \Pi_{\lambda}+\Pi_{\lambda}\circ(\text{Id}\otimes\hat{H})+\Pi_{\lambda}\circ(\hat{H}\otimes\text{Id})+\Pi_{\lambda}\circ(\hat{H}\otimes\hat{H})-\hat{H}\circ\Pi_{\lambda}\]
\[-\hat{H}\circ\Pi_{\lambda}\circ(\text{Id}\otimes\hat{H})-\hat{H}\circ\Pi_{\lambda}\circ(\hat{H}\otimes\text{Id})-\hat{H}\circ\Pi_{\lambda}\circ(\hat{H}\otimes\hat{H}).\]

This proves assertion (ii).

**Proposition 4.3**.: _The twisting \(\Pi^{H}\) of \(\Pi\) is a Lie conformal algebra structure on \(\mathcal{A}\), namely, \([\Pi^{H},\Pi^{H}]_{\text{NR}}=0\)._
_Moreover, \(e^{\hat{H}}:(\mathcal{A},\Pi^{H})\to(\mathcal{A},\Pi)\) is a Lie conformal algebra isomorphism._

Proof.: By Lemma 4.2, we have

\[([\Pi^{H},\Pi^{H}]_{\text{NR}})_{\lambda_{1},\lambda_{2}}=2(\Pi^{H}\diamond\Pi^{H})_{\lambda_{1},\lambda_{2}}= 2e^{-\hat{H}}\circ(\Pi\diamond\Pi)_{\lambda_{1},\lambda_{2}}\circ(e^{\hat{H}}\otimes e^{\hat{H}}\otimes e^{\hat{H}})\]
\[= e^{-\hat{H}}\circ([\Pi,\Pi]_{\text{NR}})_{\lambda_{1},\lambda_{2}}\circ(e^{\hat{H}}\otimes e^{\hat{H}}\otimes e^{\hat{H}})=0.\]

The second claim follows directly.

Since \(\Pi^{H}\) is a \(2\)-cochain, it also decomposes uniquely into four homogeneous components with respect to the bidegrees. The relations between \(\Pi^{H}\) and \(\Pi\) are given by the following theorem.

**Theorem 4.4**.: _Assume that \(\Pi=\hat{\phi}_{1}+\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{2}\). Then \(\Pi^{H}=\hat{\phi}_{1}^{H}+\hat{\mu}_{1}^{H}+\hat{\mu}_{2}^{H}+\hat{\phi}_{2}^{H}\), where_

\[\hat{\phi}_{1}^{H}= \hat{\phi}_{1}, \tag{4.1}\]
\[\hat{\mu}_{1}^{H}= \hat{\mu}_{1}+[\hat{\phi}_{1},\hat{H}]_{\rm NR}, \tag{4.2}\]
\[\hat{\mu}_{2}^{H}= \hat{\mu}_{2}+d_{\hat{\mu}_{1}}\hat{H}+\frac{1}{2}[[\hat{\phi}_{1},\hat{H}]_{\rm NR},\hat{H}]_{\rm NR}, \tag{4.3}\]
\[\hat{\phi}_{2}^{H}= \hat{\phi}_{2}+d_{\hat{\mu}_{2}}\hat{H}+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}+\frac{1}{6}[[[\hat{\phi}_{1},\hat{H}]_{\rm NR},\hat{H}]_{\rm NR},\hat{H}]_{\rm NR}, \tag{4.4}\]

_where \(d_{\hat{\mu}_{i}}:=[\hat{\mu}_{i},-]_{\rm NR}\) \((i=1,2)\) and \([\hat{H},\hat{H}]_{\hat{\mu}_{1}}:=[[\hat{\mu}_{1},\hat{H}]_{\rm NR},\hat{H}]_{\rm NR}\)._

Proof.: By the decomposition of bidegree and Lemmas 3.3 and 3.4, the theorem follows.

### The case of twilled Lie conformal algebras

Let \((\mathcal{A}_{1}\bowtie\mathcal{A}_{2},\Pi)\) be a twilled Lie conformal algebra with \(\Pi=\hat{\mu}_{1}+\hat{\mu}_{2}\). The twisted structure \(\Pi^{H}=\hat{\mu}_{1}^{H}+\hat{\mu}_{2}^{H}+\hat{\phi}_{2}^{H}\) by \(H:\mathcal{A}_{2}\to\mathcal{A}_{1}\) is given by

\[\hat{\mu}_{1}^{H} = \hat{\mu}_{1},\]
\[\hat{\mu}_{2}^{H} = \hat{\mu}_{2}+d_{\hat{\mu}_{1}}\hat{H},\]
\[\hat{\phi}_{2}^{H} = d_{\hat{\mu}_{2}}\hat{H}+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}.\]

We have shown in Corollary 3.15 that there is a differential graded Lie algebra structure associated to \((\mathcal{A}_{1}\bowtie\mathcal{A}_{2},\Pi)\). Now we show that a Maurer-Cartan element of this differential graded Lie algebra gives a new twilled Lie conformal algebra by the twisting transformation.

**Proposition 4.5**.: _Let \((\mathcal{A}_{1}\bowtie\mathcal{A}_{2},\Pi)\) be a twilled Lie conformal algebra and \(H:\mathcal{A}_{2}\longrightarrow\mathcal{A}_{1}\) a \(\mathbb{C}[\partial]\)-module homomorphism._
_Then \((\mathcal{A}_{1}\bowtie\mathcal{A}_{2},\Pi^{H})\) is a twilled Lie conformal algebra if and only if \(H\) is a Maurer-Cartan element of the differential graded Lie algebra \((C^{*}(\mathcal{A}_{2},\mathcal{A}_{1}),d_{\hat{\mu}_{2}},[-,-]_{\hat{\mu}_{1}})\) given in Corollary 3.15, i.e._

\[d_{\hat{\mu}_{2}}\hat{H}+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}=0.\]

_This is equivalent to_

\[[H(u)_{\lambda}H(v)]_{1}+\rho_{2}(u)_{\lambda}H(v)-\rho_{2}(v)_{-\lambda-\partial}H(u)\]
\[\qquad\qquad\qquad=H(\rho_{1}(H(u))_{\lambda}v-\rho_{1}(H(v))_{-\lambda-\partial}u)+H([u_{\lambda}v]_{2}),\ \forall u,v\in\mathcal{A}_{2}.\]

Proof.: A direct calculation gives

\[(d_{\hat{\mu}_{2}}\hat{H})_{\lambda}(u,v)= \rho_{2}(u)_{\lambda}H(v)-\rho_{2}(v)_{-\lambda-\partial}H(u)-H([u_{\lambda}v]_{2}),\]
\[([\hat{H},\hat{H}]_{\hat{\mu}_{1}})_{\lambda}(u,v)= 2H(\rho_{1}(H(u))_{\lambda}v-\rho_{1}(H(v))_{-\lambda-\partial}u)-2[H(u)_{\lambda}H(v)]_{1}.\]

Thus

\[(d_{\hat{\mu}_{2}}\hat{H})_{\lambda}(u,v)+\frac{1}{2}([\hat{H},\hat{H}]_{\hat{\mu}_{1}})_{\lambda}(u,v)= \rho_{2}(u)_{\lambda}H(v)-\rho_{2}(v)_{-\lambda-\partial}H(u)-H([u_{\lambda}v]_{2})\]
\[+H(\rho_{1}(H(u))_{\lambda}v-\rho_{1}(H(v))_{-\lambda-\partial}u)-[H(u)_{\lambda}H(v)]_{1},\]

and the vanishing of the right-hand side is exactly the stated identity.

**Corollary 4.6**.: _Let \((\mathcal{A}_{1}\bowtie\mathcal{A}_{2},\Pi)\) be a twilled Lie conformal algebra and \(H\) a Maurer-Cartan element of the associated differential graded Lie algebra. Then we have a Lie conformal algebra structure on \(\mathcal{A}_{2}\) given by_

\[[u_{\lambda}v]_{H}:=\rho_{1}(H(u))_{\lambda}v-\rho_{1}(H(v))_{-\lambda-\partial}u+[u_{\lambda}v]_{2},\ \text{for}\ u,v\in\mathcal{A}_{2}. \tag{4.5}\]

Proof.: By Lemma 3.10, \(\hat{\mu}_{2}^{H}\) is a Lie conformal algebra structure on \(\mathcal{A}\). Furthermore, \(\hat{\mu}_{2}^{H}\) restricted to \(\mathcal{A}_{2}\) is a Lie conformal algebra structure and the \(\lambda\)-bracket on \(\mathcal{A}_{2}\) is exactly (4.5).

In the case of \(\hat{\mu}_{2}=0\), \(\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\) is exactly a semi-direct product Lie conformal algebra \(\mathcal{A}_{1}\ltimes\mathcal{A}_{2}\). To study this special case, we need the notion of a relative Rota-Baxter operator on a module over a Lie conformal algebra.

**Definition 4.7**.: ([25]) Let \((M;\rho)\) be a module over a Lie conformal algebra \(\mathcal{A}\). A \(\mathbb{C}[\partial]\)-module homomorphism \(T:M\to\mathcal{A}\) is called a **relative Rota-Baxter operator** (or an \(O\)**-operator**) on \(M\) over \(\mathcal{A}\) if it satisfies

\[[T(m)_{\lambda}T(n)]=T\big{(}\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m\big{)},\text{ for }m,n\in M. \tag{4.6}\]

Let us denote the semi-direct product Lie conformal algebra structure on \(\mathcal{A}\ltimes_{\rho}M\) by \(\hat{\mu}\).

**Proposition 4.8**.: _A \(\mathbb{C}[\partial]\)-module homomorphism \(T:M\to\mathcal{A}\) is a relative Rota-Baxter operator if and only if \(\hat{T}\) is a solution of the Maurer-Cartan equation in the graded Lie algebra \((C^{*}(M,\mathcal{A}),[-,-]_{\hat{\mu}})\) given in Corollary 3.15._

Proof.: It follows from \(\frac{1}{2}([\hat{T},\hat{T}]_{\hat{\mu}})_{\lambda}(m,n)=[T(m)_{\lambda}T(n)]-T\big{(}\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m\big{)}.\)

**Corollary 4.9**.: ([25]) _Let \(T\) be a relative Rota-Baxter operator on the module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\)._
_Then \((M,[\cdot_{\lambda}\cdot]^{T})\) is a Lie conformal algebra, where the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]^{T}\) is given by_

\[[m_{\lambda}n]^{T}=\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m,\text{ for }m,n\in M, \tag{4.7}\]

_and \(T\) is a Lie conformal algebra homomorphism from \((M,[\cdot_{\lambda}\cdot]^{T})\) to \(\mathcal{A}\)._

**Corollary 4.10**.: _Let \(T\) be a relative Rota-Baxter operator on the module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\). Then \((\mathcal{A}\oplus M,\Pi^{T}=\hat{\mu}_{1}^{T}+\hat{\mu}_{2}^{T})\) is a twilled Lie conformal algebra, where \(\hat{\mu}_{1}^{T}=\hat{\mu}\), \(\hat{\mu}_{2}^{T}=[\hat{\mu},\hat{T}]_{\mathrm{NR}}\)._

Given a relative Rota-Baxter operator \(T\) on the module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\), we have a twilled Lie conformal algebra \(\mathcal{A}\bowtie M_{T}\) by twisting \(\mathcal{A}\ltimes_{\rho}M\) by \(T\), where \(M_{T}=(M,[\cdot_{\lambda}\cdot]^{T})\) is the Lie conformal algebra defined by (4.7). Define \(\rho^{T}:M\to\mathrm{Cend}(\mathcal{A})\) by

\[\rho^{T}(m)_{\lambda}a:=(\hat{\mu}_{2}^{T})_{\lambda}((0,m),(a,0))=[T(m)_{\lambda}a]+T(\rho(a)_{-\lambda-\partial}m),\text{ for }a\in\mathcal{A},\ m\in M.\]

By \([\hat{\mu}_{2}^{T},\hat{\mu}_{2}^{T}]_{\mathrm{NR}}=0\) and Proposition 3.5, \((\mathcal{A};\rho^{T})\) is a module of \(M_{T}\). Thus, by (3.14), we have the following:

**Proposition 4.11**.: _The Lie conformal algebra structure on \(\mathcal{A}\bowtie M_{T}\) is explicitly given by (\(a,b\in\mathcal{A}\), \(m,n\in\ M\))_

\[[(a,m)_{\lambda}(b,n)]_{\bowtie}=([a_{\lambda}b]+\rho^{T}(m)_{\lambda}b-\rho^{T}(n)_{-\lambda-\partial}a,[m_{\lambda}n]^{T}+\rho(a)_{\lambda}n-\rho(b)_{-\lambda-\partial}m). \tag{4.8}\]

The notion of a conformal classical Yang-Baxter equation was introduced in [37] in the study of Lie conformal bialgebras. Let \(\mathcal{A}\) be a Lie conformal algebra and \(r=\sum_{i}a_{i}\otimes b_{i}\in\mathcal{A}\otimes\mathcal{A}\). Set \(\partial^{\otimes^{3}}=\partial\otimes 1\otimes 1+1\otimes\partial\otimes 1+1\otimes 1\otimes\partial\). The equation

\[\begin{split}[\![r,r]\!]=&\sum_{i,j}\big{(}[a_{i_{\mu}}a_{j}]\otimes b_{i}\otimes b_{j}\big{|}_{\mu=1\otimes\partial\otimes 1}-a_{i}\otimes[a_{j_{\mu}}b_{i}]\otimes b_{j}\big{|}_{\mu=1\otimes 1\otimes\partial}-a_{i}\otimes a_{j}\otimes[b_{j_{\mu}}b_{i}]\big{|}_{\mu=1\otimes\partial\otimes 1}\big{)}\\ \equiv&\ 0\ (\mathrm{mod}\ \partial^{\otimes^{3}})\end{split} \tag{4.9}\]

is called the **conformal classical Yang-Baxter equation** in \(\mathcal{A}\). For any \(r=\sum_{i}a_{i}\otimes b_{i}\in\mathcal{A}\otimes\mathcal{A}\), we associate a conformal linear map \(r^{\sharp}\in\mathrm{Chom}(\mathcal{A}^{*c},\mathcal{A})\) as follows:

\[r^{\sharp}_{\lambda}(\alpha)=\sum_{i}\alpha_{-\lambda-\partial}(a_{i})b_{i},\quad\text{for $\alpha\in\mathcal{A}^{*c}$}.\]

Set \(r^{21}=\sum_{i}b_{i}\otimes a_{i}\). We say \(r\) is **skew-symmetric** if \(r=-r^{21}\). Relative Rota-Baxter operators are closely related to the conformal classical Yang-Baxter equation, as the following lemma shows.

**Lemma 4.12**.: ([25]) _Let \(\mathcal{A}\) be a finite Lie conformal algebra. Then \(r\) is a skew-symmetric solution of the conformal classical Yang-Baxter equation if and only if \(r^{\sharp}_{0}=r^{\sharp}_{\lambda}|_{\lambda=0}\) is a relative Rota-Baxter operator on \((\mathcal{A}^{*c};\mathrm{ad}^{*})\) over \(\mathcal{A}\)._

This, together with Proposition 4.8, gives the following result.
**Proposition 4.13**.: _Let \(\mathcal{A}\) be a finite Lie conformal algebra. Then \(r\) is a skew-symmetric solution of the conformal classical Yang-Baxter equation if and only if \(r^{\sharp}_{0}=r^{\sharp}_{\lambda}|_{\lambda=0}\) is a Maurer-Cartan element of the graded Lie algebra \((C^{*}(\mathcal{A}^{*c},\mathcal{A}),[\cdot,\cdot]_{\hat{\mu}})\) given in Corollary 3.15, where \(\mu\) is the Lie conformal algebra structure on \(\mathcal{A}\ltimes_{\mathrm{ad}^{*}}\mathcal{A}^{*c}\)._

By Corollaries 4.9 and 4.10 and Proposition 4.8, we have

**Proposition 4.14**.: _Let \(\mathcal{A}\) be a finite Lie conformal algebra. If \(r\) is a skew-symmetric solution of the conformal classical Yang-Baxter equation, then \(\mathcal{A}^{*c}_{r^{\sharp}}:=(\mathcal{A}^{*c},[\cdot_{\lambda}\cdot]^{r^{\sharp}})\) is a Lie conformal algebra, where the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]^{r^{\sharp}}\) is given by_

\[[\alpha_{\lambda}\beta]^{r^{\sharp}}=\mathrm{ad}^{*}(r^{\sharp}_{0}(\alpha))_{\lambda}\beta-\mathrm{ad}^{*}(r^{\sharp}_{0}(\beta))_{-\lambda-\partial}\alpha,\quad\forall\ \alpha,\beta\in\mathcal{A}^{*c}.\]

_Furthermore, the map \(\rho^{r^{\sharp}}:\mathcal{A}^{*c}\to\mathrm{gc}(\mathcal{A})\) defined by_

\[\rho^{r^{\sharp}}(\alpha)_{\lambda}a=[r^{\sharp}_{0}(\alpha)_{\lambda}a]+r^{\sharp}_{0}(\mathrm{ad}^{*}(a)_{-\lambda-\partial}\alpha),\quad\forall\ \alpha\in\mathcal{A}^{*c},\ a\in\mathcal{A}\]

_makes \(\mathcal{A}\) into a module of \(\mathcal{A}^{*c}_{r^{\sharp}}\), and there is a Lie conformal algebra structure on \(\mathcal{A}\bowtie\mathcal{A}^{*c}_{r^{\sharp}}\) given by \((a,b\in\mathcal{A},\alpha,\beta\in\mathcal{A}^{*c})\)_

\[[(a,\alpha)_{\lambda}(b,\beta)]_{\bowtie}=([a_{\lambda}b]+\rho^{r^{\sharp}}(\alpha)_{\lambda}b-\rho^{r^{\sharp}}(\beta)_{-\lambda-\partial}a,[\alpha_{\lambda}\beta]^{r^{\sharp}}+\mathrm{ad}^{*}(a)_{\lambda}\beta-\mathrm{ad}^{*}(b)_{-\lambda-\partial}\alpha).\]

**Remark 4.15**.: _Given a skew-symmetric solution \(r\) of the conformal classical Yang-Baxter equation on \(\mathcal{A}\), the author in [37] showed that there exists a Lie conformal algebra structure on \(\mathcal{A}\oplus\mathcal{A}^{*c}\) corresponding to the coboundary Lie conformal bialgebra induced by \(r\). Using our twisting theory, we also obtain this Lie conformal algebra structure on \(\mathcal{A}\oplus\mathcal{A}^{*c}\) and give its concrete expression directly._

### The case of quasi-twilled Lie conformal algebras

Let \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) be a quasi-twilled Lie conformal algebra with the structure \(\Pi=\hat{\mu}_{1}+\hat{\mu}_{2}+\hat{\phi}_{1}\). The twisted Lie conformal algebra structure \(\Pi^{H}\) by \(H:\mathcal{A}_{2}\to\mathcal{A}_{1}\) takes the form:

\[\hat{\phi}_{1}^{H} = \hat{\phi}_{1},\]
\[\hat{\mu}_{1}^{H} = \hat{\mu}_{1}+[\hat{\phi}_{1},\hat{H}]_{\mathrm{NR}},\]
\[\hat{\mu}_{2}^{H} = \hat{\mu}_{2}+d_{\hat{\mu}_{1}}\hat{H}+\frac{1}{2}[[\hat{\phi}_{1},\hat{H}]_{\mathrm{NR}},\hat{H}]_{\mathrm{NR}},\]
\[\hat{\phi}_{2}^{H} = d_{\hat{\mu}_{2}}\hat{H}+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}+\frac{1}{6}[[[\hat{\phi}_{1},\hat{H}]_{\mathrm{NR}},\hat{H}]_{\mathrm{NR}},\hat{H}]_{\mathrm{NR}}.\]

Recall that \(C^{*}(\mathcal{A}_{2},\mathcal{A}_{1})\) has an \(L_{\infty}\)-algebra structure \((d_{\hat{\mu}_{2}},[-,-]_{\hat{\mu}_{1}},[-,-,-]_{\hat{\phi}_{1}})\) (see Theorem 3.14).
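In particular, for a \(1\)-cochain \(H\in C^{1}(\mathcal{A}_{2},\mathcal{A}_{1})\), the Maurer-Cartan equation of this \(L_{\infty}\)-algebra reads

\[d_{\hat{\mu}_{2}}\hat{H}+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}+\frac{1}{6}[\hat{H},\hat{H},\hat{H}]_{\hat{\phi}_{1}}=0,\]

which, comparing with the above expression for \(\hat{\phi}_{2}^{H}\), is exactly the condition \(\hat{\phi}_{2}^{H}=0\), i.e., the condition that the twisted structure has no component of bidegree \(-1|2\).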
Moreover, we have

**Proposition 4.16**.: _The twisted structure \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2},\Pi^{H})\) is again a quasi-twilled Lie conformal algebra if and only if \(H\) is a Maurer-Cartan element of the above \(L_{\infty}\)-algebra, i.e._

\[d_{\hat{\mu}_{2}}(\hat{H})+\frac{1}{2}[\hat{H},\hat{H}]_{\hat{\mu}_{1}}+\frac{1}{6}[\hat{H},\hat{H},\hat{H}]_{\hat{\phi}_{1}}=0.\]

_This is equivalent to_

\[[H(u)_{\lambda}H(v)]_{1}+\rho_{2}(u)_{\lambda}H(v)-\rho_{2}(v)_{-\lambda-\partial}H(u)\]
\[\qquad\qquad=H(\rho_{1}(H(u))_{\lambda}v-\rho_{1}(H(v))_{-\lambda-\partial}u)+H([u_{\lambda}v]_{2})+H(\phi_{1\lambda}(H(u),H(v))),\text{ for }u,v\in\mathcal{A}_{2}.\]

Proof.: It follows by a direct calculation.

**Corollary 4.17**.: _Let \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) be a quasi-twilled Lie conformal algebra and \(H\) a solution of the Maurer-Cartan equation in the associated \(L_{\infty}\)-algebra. Then_

\[[u_{\lambda}v]^{H}:=\rho_{1}(H(u))_{\lambda}v-\rho_{1}(H(v))_{-\lambda-\partial}u+[u_{\lambda}v]_{2}+\phi_{1\lambda}(H(u),H(v)),\text{ for }u,v\in\mathcal{A}_{2} \tag{4.10}\]

_defines a Lie conformal algebra structure on \(\mathcal{A}_{2}\)._

Proof.: By Lemma 3.11, \(\hat{\mu}_{2}^{H}\) restricted to \(\mathcal{A}_{2}\) is a Lie conformal algebra structure on \(\mathcal{A}_{2}\), and this restriction is exactly (4.10).

In the case of \(\hat{\mu}_{2}=0\), the quasi-twilled Lie conformal algebra \((\mathcal{A},\mathcal{A}_{1},\mathcal{A}_{2})\) is a \(\phi\)-twisted semi-direct product Lie conformal algebra \(\mathcal{A}_{1}\ltimes_{\phi}\mathcal{A}_{2}\). In the following, we propose the notion of a twisted relative Rota-Baxter operator on a Lie conformal algebra, which can be used to twist a quasi-twilled Lie conformal algebra.

**Definition 4.18**.: Let \((\mathcal{A},[\cdot_{\lambda}\cdot])\) be a Lie conformal algebra, \((M;\rho)\) an \(\mathcal{A}\)-module and \(\phi\) a 2-cocycle in \(C^{2}(\mathcal{A},M)\). A \(\mathbb{C}[\partial]\)-module homomorphism \(T:M\to\mathcal{A}\) is called a \(\phi\)**-twisted relative Rota-Baxter operator** if it satisfies

\[[T(m)_{\lambda}T(n)]=T(\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m+\phi_{\lambda}(T(m),T(n))),\text{ for }m,n\in M. \tag{4.11}\]

Let \((M;\rho)\) be a module over a Lie conformal algebra \(\mathcal{A}\) and \(\phi\) a 2-cocycle associated to the module \((M;\rho)\). Let \(\hat{\mu}+\hat{\phi}\) denote the structure of the quasi-twilled Lie conformal algebra \(\mathcal{A}\ltimes_{\phi}M\).

**Proposition 4.19**.: _A \(\mathbb{C}[\partial]\)-module homomorphism \(T:M\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator if and only if the lift \(\hat{T}\) is a Maurer-Cartan element of the \(L_{\infty}\)-algebra \((C^{*}(M,\mathcal{A}),d_{\hat{\mu}_{2}}=0,[-,-]_{\hat{\mu}_{1}},[-,-,-]_{\hat{\phi}_{1}})\) given in Theorem 3.14, namely,_

\[\frac{1}{2}[\hat{T},\hat{T}]_{\hat{\mu}_{1}}+\frac{1}{6}[\hat{T},\hat{T},\hat{T}]_{\hat{\phi}_{1}}=0,\]

_where \(\mu_{1}=\mu\), \(\mu_{2}=0\) and \(\phi_{1}=\phi\)._

Proof.: It follows from Proposition 4.16.

The author in [17] constructed a new \(L_{\infty}\)-algebra from an old \(L_{\infty}\)-algebra along with a Maurer-Cartan element. As a direct application, we have

**Proposition 4.20**.: _Let \(T\) be a \(\phi\)-twisted relative Rota-Baxter operator on a module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\)._
_Then \((C^{*}(M,\mathcal{A}),l_{1}^{T},l_{2}^{T},l_{3}^{T})\) is an \(L_{\infty}\)-algebra with trivial higher brackets, where_

\[l_{1}^{T}(f)=[\hat{T},\hat{f}]_{\hat{\mu}_{1}}+\frac{1}{2}[\hat{T},\hat{T},\hat{f}]_{\hat{\phi}_{1}},\quad l_{2}^{T}(f,g)=[\hat{f},\hat{g}]_{\hat{\mu}_{1}}+[\hat{T},\hat{f},\hat{g}]_{\hat{\phi}_{1}},\quad l_{3}^{T}(f,g,h)=[\hat{f},\hat{g},\hat{h}]_{\hat{\phi}_{1}},\]

_for all \(f\in C^{m}(M,\mathcal{A}),\ g\in C^{n}(M,\mathcal{A}),\ h\in C^{k}(M,\mathcal{A}).\) We call this a \(T\)_**-twisted \(L_{\infty}\)-algebra**_._

**Proposition 4.21**.: _Let \(T:M\to\mathcal{A}\) be a \(\phi\)-twisted relative Rota-Baxter operator. Then for any \(\mathbb{C}[\partial]\)-module homomorphism \(T^{\prime}:M\to\mathcal{A}\), \(T+T^{\prime}\) is also a \(\phi\)-twisted relative Rota-Baxter operator if and only if \(T^{\prime}\) is a Maurer-Cartan element of the \(T\)-twisted \(L_{\infty}\)-algebra from Proposition 4.20._

Proof.: It follows by a direct calculation.

By Proposition 4.19 and Corollary 4.17, we have

**Corollary 4.22**.: _Let \(T\) be a \(\phi\)-twisted relative Rota-Baxter operator. Then_

* \((M,[\cdot_{\lambda}\cdot]^{T,\phi})\) _is a Lie conformal algebra, where the_ \(\lambda\)_-bracket_ \([\cdot_{\lambda}\cdot]^{T,\phi}\) _is given by_ (4.12) \[[m_{\lambda}n]^{T,\phi}=\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m+\phi_{\lambda}(T(m),T(n)),\ \text{for}\ m,n\in M.\] _Furthermore,_ \(T\) _is a Lie conformal algebra homomorphism from_ \((M,[\cdot_{\lambda}\cdot]^{T,\phi})\) _to_ \(\mathcal{A}\)_._
* \((\mathcal{A}\oplus M,\Pi^{T}=\hat{\mu}_{1}^{T}+\hat{\mu}_{2}^{T}+\hat{\phi}_{1}^{T})\) _is a quasi-twilled Lie conformal algebra, where_ \[\hat{\mu}_{1}^{T}=\hat{\mu},\ \hat{\mu}_{2}^{T}=[\hat{\mu},\hat{T}]_{\text{NR}}+\frac{1}{2}[[\hat{\phi},\hat{T}]_{\text{NR}},\hat{T}]_{\text{NR}},\ \hat{\phi}_{1}^{T}=\hat{\phi}.\]

Given a \(\phi\)-twisted relative Rota-Baxter operator \(T\) on a module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\), we have a quasi-twilled Lie conformal algebra \(\mathcal{A}\bowtie_{\phi}M_{T,\phi}\) by twisting \(\mathcal{A}\ltimes_{\phi}M\) by \(T\), where \(M_{T,\phi}=(M,[\cdot_{\lambda}\cdot]^{T,\phi})\) is given by (4.12). Define \(\rho^{T}:M\to\text{Cend}(\mathcal{A})\) by

\[\rho^{T}(m)_{\lambda}a:=(\hat{\mu}_{2}^{T})_{\lambda}((0,m),(a,0))=[T(m)_{\lambda}a]+T(\rho(a)_{-\lambda-\partial}m)-T(\phi_{\lambda}(T(m),a)), \tag{4.13}\]

for \(a\in\mathcal{A},\ m\in M.\) By \([\hat{\mu}_{2}^{T},\hat{\mu}_{2}^{T}]_{\text{NR}}=0\) and Proposition 3.5, \((\mathcal{A};\rho^{T})\) is a module of \(M_{T,\phi}\). Furthermore, by (3.19), we have

**Proposition 4.23**.: _The Lie conformal algebra structure on \(\mathcal{A}\bowtie_{\phi}M_{T,\phi}\) is explicitly given by_

\[[(a,m)_{\lambda}(b,n)]_{\bowtie_{\phi}}= ([a_{\lambda}b]+\rho^{T}(m)_{\lambda}b-\rho^{T}(n)_{-\lambda-\partial}a,[m_{\lambda}n]^{T,\phi}+\rho(a)_{\lambda}n-\rho(b)_{-\lambda-\partial}m\]
\[+\phi_{\lambda}(T(m),b)+\phi_{\lambda}(a,T(n))+\phi_{\lambda}(a,b)),\]

_for \(a,b\in\mathcal{A}\) and \(m,n\in\ M.\)_

Assume that \(T:M\to\mathcal{A}\) is a \(\mathbb{C}[\partial]\)-module homomorphism. We denote the graph of \(T\) by \(\text{Gr}(T)\),

\[\text{Gr}(T)=\{(T(m),m)|m\in M\}.\]

**Proposition 4.24**.: _Let \((M;\rho)\) be a module over a Lie conformal algebra \(\mathcal{A}\) and \(\phi\) a \(2\)-cocycle in \(C^{2}(\mathcal{A},M)\)._
_Then \(T:M\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator if and only if \(\text{Gr}(T)\) is a subalgebra of \(\mathcal{A}\ltimes_{\phi}M.\)_

Proof.: For any \((T(m),m)\), \((T(n),n)\in\text{Gr}(T)\), we have

\[[(T(m),m)_{\lambda}(T(n),n)]=\big{(}[T(m)_{\lambda}T(n)],\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m+\phi_{\lambda}(T(m),T(n))\big{)}.\]

Hence \(T\) is a \(\phi\)-twisted relative Rota-Baxter operator if and only if \(\big{(}[T(m)_{\lambda}T(n)],\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m+\phi_{\lambda}(T(m),T(n))\big{)}\) is in \(\text{Gr}(T)[\lambda]\).

**Example 4.25**.: Let \(\omega:\mathcal{A}\to M\) be an invertible \(1\)-cochain in \(C^{1}(\mathcal{A},M)\). Set \(\phi=-\mathbf{d}\omega\). Then the inverse \(\omega^{-1}:M\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator. In fact, putting \(T=\omega^{-1}\), the condition (4.11) is equivalent to

\[\omega([T(m)_{\lambda}T(n)])=\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m+\phi_{\lambda}(T(m),T(n)),\ \forall\ m,n\in M.\]

This is the same as

\[\phi_{\lambda}(T(m),T(n))=-\rho(T(m))_{\lambda}n+\rho(T(n))_{-\lambda-\partial}m+\omega([T(m)_{\lambda}T(n)])=-(\mathbf{d}\omega)_{\lambda}(T(m),T(n)).\]

**Example 4.26**.: Let \(\mathcal{A}\) be a Lie conformal algebra and \(\phi\in C^{2}(\mathcal{A},\mathcal{A})\) defined by \(\phi_{\lambda}(a,b)=-[a_{\lambda}b]\), for \(a,b\in\mathcal{A}\). Since \(\phi\) is a \(2\)-cocycle, \(\mathrm{Id}:\mathcal{A}\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator.

**Definition 4.27**.: Let \(\mathcal{A}\) be a Lie conformal algebra. A \(\mathbb{C}[\partial]\)-module homomorphism \(N:\mathcal{A}\to\mathcal{A}\) is called a **Nijenhuis operator** on \(\mathcal{A}\) if the following condition is satisfied for all \(a,b\in\mathcal{A}\):

\[[N(a)_{\lambda}N(b)]=N([N(a)_{\lambda}b]+[a_{\lambda}N(b)]-N[a_{\lambda}b]). \tag{4.14}\]

**Lemma 4.28**.: ([38]) _Let \(N:\mathcal{A}\to\mathcal{A}\) be a Nijenhuis operator on a Lie conformal algebra \((\mathcal{A},[\cdot_{\lambda}\cdot])\). Then \(\mathcal{A}^{N}:=(\mathcal{A},[\cdot_{\lambda}\cdot]_{N})\) is also a Lie conformal algebra, where \([\cdot_{\lambda}\cdot]_{N}\) is defined by_

\[[a_{\lambda}b]_{N}=[N(a)_{\lambda}b]+[a_{\lambda}N(b)]-N[a_{\lambda}b],\ \text{for}\ a,b\in\mathcal{A}, \tag{4.15}\]

_and \(N\) is a Lie conformal algebra homomorphism from \(\mathcal{A}^{N}\) to \(\mathcal{A}\)._

**Example 4.29**.: Let \(N:\mathcal{A}\to\mathcal{A}\) be a Nijenhuis operator on a Lie conformal algebra \(\mathcal{A}\). By Lemma 4.28, \(N\) induces a new Lie conformal algebra structure \(\mathcal{A}^{N}\). Then \(\mathcal{A}\) becomes an \(\mathcal{A}^{N}\)-module by

\[\rho(a)_{\lambda}x=[N(a)_{\lambda}x],\ \text{for}\ a\in\mathcal{A}^{N},\ x\in\mathcal{A}.\]

With this module, the map \(\phi_{\lambda}(a,b):=-N[a_{\lambda}b]\) is a \(2\)-cocycle in \(C^{2}(\mathcal{A}^{N},\mathcal{A})\). Then the identity map \(\mathrm{Id}:\mathcal{A}\to\mathcal{A}^{N}\) is a \(\phi\)-twisted relative Rota-Baxter operator.

**Definition 4.30**.: _Let \(\mathcal{A}\) be a Lie conformal algebra. A \(\mathbb{C}[\partial]\)-module homomorphism \(R:\mathcal{A}\to\mathcal{A}\) is called a_ **Reynolds operator** _on \(\mathcal{A}\) if it satisfies_

\[[R(a)_{\lambda}R(b)]=R([R(a)_{\lambda}b]+[a_{\lambda}R(b)]-[R(a)_{\lambda}R(b)]),\ \text{for}\ a,b\in\mathcal{A}. \tag{4.16}\]

Notice that the last term \(-[R(a)_{\lambda}R(b)]\) in (4.16) is, up to sign, the \(\lambda\)-bracket on \(\mathcal{A}\), and \(\phi_{\lambda}(a,b)=-[a_{\lambda}b]\) is a \(2\)-cocycle by Example 4.26.
Therefore, each Reynolds operator \(R\) can be seen as a \(\phi\)-twisted relative Rota-Baxter operator with \(\phi_{\lambda}(a,b):=-[a_{\lambda}b]\) for all \(a,b\in\mathcal{A}\). It follows from (4.12) that \(R\) gives rise to a new Lie conformal algebra structure, denoted by \(\mathcal{A}^{R}\), on \(\mathcal{A}\) given by

\[[a_{\lambda}b]^{R}=[R(a)_{\lambda}b]+[a_{\lambda}R(b)]-[R(a)_{\lambda}R(b)],\ \text{for}\ a,b\in\mathcal{A}.\]

By (4.16), \(R\) is a Lie conformal algebra homomorphism from \(\mathcal{A}^{R}\) to \(\mathcal{A}\). If \(R\) is invertible, it follows from (4.16) that

\[R^{-1}([a_{\lambda}b])=[R^{-1}(a)_{\lambda}b]+[a_{\lambda}R^{-1}(b)]-[a_{\lambda}b],\ \text{for}\ a,b\in\mathcal{A}.\]

So \((R^{-1}-\mathrm{Id})([a_{\lambda}b])=[(R^{-1}-\mathrm{Id})(a)_{\lambda}b]+[a_{\lambda}(R^{-1}-\mathrm{Id})(b)]\). Namely, \(R^{-1}-\mathrm{Id}:\mathcal{A}\to\mathcal{A}\) is a \(1\)-cocycle. Conversely, if \(d:\mathcal{A}\to\mathcal{A}\) is a \(1\)-cocycle such that \(\mathrm{Id}+d\) is invertible, then \((\mathrm{Id}+d)^{-1}\) is a Reynolds operator on \(\mathcal{A}\). If \(\mathrm{Id}+d\) is not invertible but the series \((\mathrm{Id}+d)^{-1}=\sum_{n=0}^{\infty}(-1)^{n}d^{n}\) converges pointwise, then \((\mathrm{Id}+d)^{-1}\) is again a Reynolds operator on \(\mathcal{A}\). A more precise statement is given below by a verbatim repetition of the proof of [51, Proposition 2.8] in terms of the \(\lambda\)-bracket.

**Proposition 4.31**.: _Let \(\mathcal{A}\) be a Lie conformal algebra with a \(1\)-cocycle \(d\in C^{1}(\mathcal{A},\mathcal{A})\). If the series \(\sum_{n=0}^{\infty}(-1)^{n}d^{n}(x)\) is convergent for all \(x\in\mathcal{A}\), then \(\sum_{n=0}^{\infty}(-1)^{n}d^{n}\) is a Reynolds operator on \(\mathcal{A}\). This is the case when \(d\) is locally nilpotent._

A Reynolds operator on a Lie conformal algebra is said to be **nontrivial** if it is neither invertible nor equal to \(0\). Otherwise, it is called **trivial**.

**Example 4.32**.: The Virasoro Lie conformal algebra \(\operatorname{Vir}=\mathbb{C}[\partial]L\) is a free \(\mathbb{C}[\partial]\)-module generated by the symbol \(L\), satisfying \([L_{\lambda}L]=(\partial+2\lambda)L\). Assume that \(R\) is a Reynolds operator on \(\operatorname{Vir}\). Write \(R(L)=f(\partial)L\) for some \(f(\partial)\in\mathbb{C}[\partial]\). Substituting this into the defining relation of a Reynolds operator gives

\[f(-\lambda)f(\partial+\lambda)=(f(-\lambda)+f(\partial+\lambda)-f(-\lambda)f(\partial+\lambda))f(\partial).\]

Equating the terms of highest degree in \(\partial\) on both sides of the above equation, we obtain that \(f\) is a constant, say \(f=c\). Then \(c^{2}=2c^{2}-c^{3}\), so \(c=0\) or \(c=1\), i.e., \(R=0\) or \(R=\mathrm{Id}\). So \(R\) is trivial.

## 5. NS-Lie conformal algebras

In this section, we introduce the notion of an NS-Lie conformal algebra, which is a conformal analogue of the NS-Lie algebras given in [13]. We show that NS-Lie conformal algebras connect closely with Lie conformal algebras, conformal NS-algebras, twisted relative Rota-Baxter operators and Nijenhuis operators. Various examples of NS-Lie conformal algebras are given.

**Definition 5.1**.: Let \(\mathcal{A}\) be a \(\mathbb{C}[\partial]\)-module equipped with two binary \(\lambda\)-multiplications \(\circ_{\lambda}\) and \(\vee_{\lambda}\).
Then \(\mathcal{A}\) is called an **NS-Lie conformal algebra**, if \(\circ_{\lambda}\) and \(\vee_{\lambda}\) are conformal sesquilinear maps, and \(\vee_{\lambda}\) is skew-symmetric, i.e., \[a\vee_{\lambda}b=-b\vee_{-\lambda-\partial}a,\;\forall\;\;a,b\in\mathcal{A}, \tag{5.1}\] and the following axioms hold for all \(a,b,c\in\mathcal{A}\): \[(a\circ_{\lambda}b)\circ_{\lambda+\mu}c-a\circ_{\lambda}(b\circ_{\mu}c)-(b \circ_{\mu}a)\circ_{\lambda+\mu}c+b\circ_{\mu}(a\circ_{\lambda}c)+(a\vee_{ \lambda}b)\circ_{\lambda+\mu}c=0, \tag{5.2}\] \[a\vee_{\lambda}[b_{\mu}c]-[a_{\lambda}b]\vee_{\lambda+\mu}c-b\vee_{\mu}[a_{ \lambda}c]+a\circ_{\lambda}(b\vee_{\mu}c)-b\circ_{\mu}(a\vee_{\lambda}c)+c \circ_{-\lambda-\mu-\partial}(a\vee_{\lambda}b)=0, \tag{5.3}\] where the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]\) is defined by \[[a_{\lambda}b]=a\circ_{\lambda}b-b\circ_{-\lambda-\partial}a+a\vee_{\lambda}b,\;\forall\;a,b\in\mathcal{A}. \tag{5.4}\] **Remark 5.2**.: _If the binary \(\lambda\)-multiplication \(\circ_{\lambda}\) in Definition 5.1 is trivial, then \((\mathcal{A},\vee_{\lambda})\) is a Lie conformal algebra. If \(\vee_{\lambda}\) is trivial, then \((\mathcal{A},\circ_{\lambda})\) becomes a left-symmetric conformal algebra. Hence NS-Lie conformal algebras are a generalization of both Lie conformal algebras and left-symmetric conformal algebras. See [26] for details about left-symmetric conformal algebras._ By conformal sesquilinearity, (5.2) is equivalent to the following identity: \[\begin{split} 0=&(a\circ_{\lambda}b)\circ_{-\mu- \partial}c-a\circ_{\lambda}(b\circ_{-\mu-\partial}c)-(b\circ_{-\lambda- \partial}a)\circ_{-\mu-\partial}c+b\circ_{-\lambda-\mu-\partial}(a\circ_{ \lambda}c)\\ &+(a\vee_{\lambda}b)\circ_{-\mu-\partial}c.\end{split} \tag{5.5}\] **Definition 5.3**.: _Let \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) and \((\mathcal{B},\circ_{\lambda}^{\prime},\vee_{\lambda}^{\prime})\) be two_ NS_-Lie conformal algebras. A_ **homomorphism** _from \(\mathcal{A}\) to \(\mathcal{B}\) is a \(\mathbb{C}[\partial]\)-module homomorphism \(\phi\) satisfying_ \[\phi(a\circ_{\lambda}b)=\phi(a)\circ_{\lambda}^{\prime}\phi(b),\quad\phi(a \vee_{\lambda}b)=\phi(a)\vee_{\lambda}^{\prime}\phi(b),\quad\forall\;a,b\in \mathcal{A}.\] **Theorem 5.4**.: _Let \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) be an NS-Lie conformal algebra. Then \((\mathcal{A},[\cdot_{\lambda}\cdot])\) is a Lie conformal algebra, where the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]\) is given by (5.4), which is called the_ **sub-adjacent Lie conformal algebra** _of \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) and denoted by \(\mathcal{A}_{Lie}\). Furthermore, the \(\lambda\)-action of \(\mathcal{A}_{Lie}\) on \(\mathcal{A}\) defined by_ \[\rho(a)_{\lambda}x=a\circ_{\lambda}x,\text{ for }a\in\mathcal{A}_{Lie},\ x\in \mathcal{A} \tag{5.6}\] _gives a module of \(\mathcal{A}_{Lie}\) on \(\mathcal{A}\)._ Proof.: It is easy to see that the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]\) is conformal sesquilinear and skew-symmetric. 
To check the Jacobi identity, we compute separately (below we abbreviate \(a\ast_{\lambda}b:=[a_{\lambda}b]\)),

\[[a_{\lambda}[b_{\mu}c]]= [a_{\lambda}(b\circ_{\mu}c-c\circ_{-\mu-\partial}b+b\vee_{\mu}c)]\]
\[= a\circ_{\lambda}(b\circ_{\mu}c-c\circ_{-\mu-\partial}b+b\vee_{\mu}c)-(b\circ_{\mu}c-c\circ_{-\mu-\partial}b+b\vee_{\mu}c)\circ_{-\lambda-\partial}a+a\vee_{\lambda}(b\ast_{\mu}c),\]
\[[b_{\mu}[a_{\lambda}c]]= b\circ_{\mu}(a\circ_{\lambda}c-c\circ_{-\lambda-\partial}a+a\vee_{\lambda}c)-(a\circ_{\lambda}c-c\circ_{-\lambda-\partial}a+a\vee_{\lambda}c)\circ_{-\mu-\partial}b+b\vee_{\mu}(a\ast_{\lambda}c),\]
\[[[a_{\lambda}b]_{\lambda+\mu}c]= [(a\circ_{\lambda}b-b\circ_{-\lambda-\partial}a+a\vee_{\lambda}b)_{\lambda+\mu}c]\]
\[= (a\circ_{\lambda}b-b\circ_{-\lambda-\partial}a+a\vee_{\lambda}b)\circ_{\lambda+\mu}c-c\circ_{-\lambda-\mu-\partial}(a\circ_{\lambda}b\]
\[-b\circ_{-\lambda-\partial}a+a\vee_{\lambda}b)+(a\ast_{\lambda}b)\vee_{\lambda+\mu}c.\]

Thus, we have

\[[[a_{\lambda}b]_{\lambda+\mu}c]+[b_{\mu}[a_{\lambda}c]]-[a_{\lambda}[b_{\mu}c]]\]
\[= ((a\circ_{\lambda}b-b\circ_{-\lambda-\partial}a+a\vee_{\lambda}b)\circ_{\lambda+\mu}c-a\circ_{\lambda}(b\circ_{\mu}c)+b\circ_{\mu}(a\circ_{\lambda}c)) \tag{5.7}\]
\[+\left((b\circ_{\mu}c-c\circ_{-\mu-\partial}b+b\vee_{\mu}c)\circ_{-\lambda-\partial}a-b\circ_{\mu}(c\circ_{-\lambda-\partial}a)+c\circ_{-\lambda-\mu-\partial}(b\circ_{-\lambda-\partial}a)\right)\]
\[-\left((a\circ_{\lambda}c-c\circ_{-\lambda-\partial}a+a\vee_{\lambda}c)\circ_{-\mu-\partial}b-a\circ_{\lambda}(c\circ_{-\mu-\partial}b)-c\circ_{-\lambda-\mu-\partial}(a\circ_{\lambda}b)\right)\]
\[+((a\ast_{\lambda}b)\vee_{\lambda+\mu}c+b\vee_{\mu}(a\ast_{\lambda}c)-a\vee_{\lambda}(b\ast_{\mu}c)-c\circ_{-\lambda-\mu-\partial}(a\vee_{\lambda}b)\]
\[+b\circ_{\mu}(a\vee_{\lambda}c)-a\circ_{\lambda}(b\vee_{\mu}c)).\]

The first term in the RHS of (5.7) vanishes due to (5.2) and (2.3). The second and third terms in the RHS of (5.7) vanish due to (2.4) and (5.5). The last term in the RHS of (5.7) vanishes due to (5.3). This proves that \((\mathcal{A},[\cdot_{\lambda}\cdot])\) is a Lie conformal algebra. By conformal sesquilinearity of the \(\lambda\)-operation \(\circ_{\lambda}\), (5.6) satisfies the second condition of (2.5). By using (2.3) and (5.4), (5.2) can be written as

\[[a_{\lambda}b]\circ_{\lambda+\mu}c-a\circ_{\lambda}(b\circ_{\mu}c)+b\circ_{\mu}(a\circ_{\lambda}c)=0,\]

which implies that (5.6) satisfies the first condition of (2.5). This ends the proof.

We introduced conformal NS-algebras in [50] by analogy with Leroux's NS-algebras [36].

**Definition 5.5**.: Let \(\mathcal{A}\) be a \(\mathbb{C}[\partial]\)-module equipped with three binary \(\lambda\)-multiplications \(\succ_{\lambda},\prec_{\lambda}\) and \(\vee_{\lambda}\). Then \(\mathcal{A}\) is called a **conformal NS-algebra**, if \(\succ_{\lambda},\prec_{\lambda}\) and \(\vee_{\lambda}\) are conformal sesquilinear maps, and satisfy the following axioms for all \(x,y,z\in\mathcal{A}\):

\[x\succ_{\lambda}(y\succ_{\mu}z) =(x\times_{\lambda}y)\succ_{\lambda+\mu}z, \tag{5.8}\]
\[x\prec_{\lambda}(y\times_{\mu}z) =(x\prec_{\lambda}y)\prec_{\lambda+\mu}z, \tag{5.9}\]
\[x\succ_{\lambda}(y\prec_{\mu}z) =(x\succ_{\lambda}y)\prec_{\lambda+\mu}z, \tag{5.10}\]
\[x\succ_{\lambda}(y\vee_{\mu}z)-(x\times_{\lambda}y)\vee_{\lambda+\mu}z =(x\vee_{\lambda}y)\prec_{\lambda+\mu}z-x\vee_{\lambda}(y\times_{\mu}z), \tag{5.11}\]

where \(\times_{\lambda}\) is defined as

\[x\times_{\lambda}y=x\succ_{\lambda}y+x\prec_{\lambda}y+x\vee_{\lambda}y. \tag{5.12}\]
**Theorem 5.6**.: _Let \((\mathcal{A},\succ_{\lambda},\prec_{\lambda},\vee_{\lambda})\) be a conformal NS-algebra. Then \((\mathcal{A},\circ_{\lambda},\tilde{\vee}_{\lambda})\) forms an NS-Lie conformal algebra, where_

\[x\circ_{\lambda}y=x\succ_{\lambda}y-y\prec_{-\lambda-\partial}x,\ \ x\,\tilde{\vee}_{\lambda}\,y=x\vee_{\lambda}y-y\vee_{-\lambda-\partial}x,\ \text{for}\ x,y\in\mathcal{A}. \tag{5.13}\]

Proof.: It follows by a direct calculation. We omit the details.

The following theorem reveals a close connection between twisted relative Rota-Baxter operators and NS-Lie conformal algebras.

**Theorem 5.7**.: _Assume that \(M\) is a module over a Lie conformal algebra \(\mathcal{A}\), \(\phi\) is a \(2\)-cocycle in \(C^{2}(\mathcal{A},M)\), and \(T:M\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator. Then \(M\) becomes an NS-Lie conformal algebra under the following \(\lambda\)-multiplications:_

\[u\circ_{\lambda}v=T(u)_{\lambda}v,\ u\vee_{\lambda}v=\phi_{\lambda}(T(u),T(v)),\text{ for}\ u,v\in M. \tag{5.14}\]

Proof.: It is easy to check that \(\circ_{\lambda}\) and \(\vee_{\lambda}\) are conformal sesquilinear, and \(\vee_{\lambda}\) is skew-symmetric. For \(u,v,w\in M\), we have

\[(u\circ_{\lambda}v)\circ_{\lambda+\mu}w-u\circ_{\lambda}(v\circ_{\mu}w)-(v\circ_{\mu}u)\circ_{\lambda+\mu}w+v\circ_{\mu}(u\circ_{\lambda}w)\]
\[\overset{(5.14)}{=}T(T(u)_{\lambda}v)_{\lambda+\mu}w-T(u)_{\lambda}(T(v)_{\mu}w)-T(T(v)_{\mu}u)_{\lambda+\mu}w+T(v)_{\mu}(T(u)_{\lambda}w)\]
\[\overset{(2.5)}{=}T(T(u)_{\lambda}v)_{\lambda+\mu}w-T(T(v)_{\mu}u)_{\lambda+\mu}w-[T(u)_{\lambda}T(v)]_{\lambda+\mu}w\]
\[= T(T(u)_{\lambda}v-T(v)_{-\lambda-\partial}u)_{\lambda+\mu}w-[T(u)_{\lambda}T(v)]_{\lambda+\mu}w\]
\[\overset{(4.11)}{=}-T(\phi_{\lambda}(T(u),T(v)))_{\lambda+\mu}w\overset{(5.14)}{=}-(u\vee_{\lambda}v)\circ_{\lambda+\mu}w.\]

Hence (5.2) is valid. To show (5.3), we start with the cocycle condition of \(\phi\):

\[0= T(u)_{\lambda}\phi_{\mu}(T(v),T(w))-T(v)_{\mu}\phi_{\lambda}(T(u),T(w))+T(w)_{-\lambda-\mu-\partial}\phi_{\lambda}(T(u),T(v))\]
\[+\phi_{\lambda}(T(u),[T(v)_{\mu}T(w)])-\phi_{\mu}(T(v),[T(u)_{\lambda}T(w)])-\phi_{\lambda+\mu}([T(u)_{\lambda}T(v)],T(w)).\]

This, together with (5.14), gives

\[0= u\circ_{\lambda}(v\vee_{\mu}w)-v\circ_{\mu}(u\vee_{\lambda}w)+w\circ_{-\lambda-\mu-\partial}(u\vee_{\lambda}v)\]
\[+\phi_{\lambda}(T(u),[T(v)_{\mu}T(w)])-\phi_{\mu}(T(v),[T(u)_{\lambda}T(w)])-\phi_{\lambda+\mu}([T(u)_{\lambda}T(v)],T(w)). \tag{5.15}\]

On the other hand, we have

\[[T(u)_{\lambda}T(v)] =T(T(u)_{\lambda}v-T(v)_{-\lambda-\partial}u+\phi_{\lambda}(T(u),T(v)))\]
\[=T(u\circ_{\lambda}v-v\circ_{-\lambda-\partial}u+u\vee_{\lambda}v)=T(u\ast_{\lambda}v).\]

It follows that

\[\phi_{\lambda}(T(u),[T(v)_{\mu}T(w)]) =\phi_{\lambda}(T(u),T(v\ast_{\mu}w))=u\vee_{\lambda}(v\ast_{\mu}w),\]
\[\phi_{\mu}(T(v),[T(u)_{\lambda}T(w)]) =\phi_{\mu}(T(v),T(u\ast_{\lambda}w))=v\vee_{\mu}(u\ast_{\lambda}w),\]
\[\phi_{\lambda+\mu}([T(u)_{\lambda}T(v)],T(w)) =\phi_{\lambda+\mu}(T(u\ast_{\lambda}v),T(w))=(u\ast_{\lambda}v)\vee_{\lambda+\mu}w.\]

Plugging this back into (5.15), we obtain (5.3).

It follows immediately from Theorems 5.4 and 5.7 that if \(T:M\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator, then \(T\) equips \(M\) with a structure of Lie conformal algebra, which is exactly the one defined by (4.12).
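For instance, applying Theorem 5.7 to Example 4.26 (where \(T=\mathrm{Id}\), \(M=\mathcal{A}\) is the adjoint module and \(\phi_{\lambda}(a,b)=-[a_{\lambda}b]\)) endows \(\mathcal{A}\) with the NS-Lie conformal algebra structure \(u\circ_{\lambda}v=[u_{\lambda}v]\) and \(u\vee_{\lambda}v=-[u_{\lambda}v]\), whose sub-adjacent \(\lambda\)-bracket

\[u\circ_{\lambda}v-v\circ_{-\lambda-\partial}u+u\vee_{\lambda}v=-[v_{-\lambda-\partial}u]=[u_{\lambda}v]\]

recovers the original one, in agreement with (4.12).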
Let \(\mathcal{A}\) be a Lie conformal algebra with a module \((M;\rho)\), and \(\mathcal{A}^{\prime}\) a Lie conformal algebra with a module \((M^{\prime};\rho^{\prime})\). Suppose that \(T:M\to\mathcal{A}\) is a \(\phi\)-twisted relative Rota-Baxter operator and \(T^{\prime}:M^{\prime}\to\mathcal{A}^{\prime}\) is a \(\phi^{\prime}\)-twisted relative Rota-Baxter operator, where \(\phi\) and \(\phi^{\prime}\) are \(2\)-cocycles in \(C^{2}(\mathcal{A},M)\) and \(C^{2}(\mathcal{A}^{\prime},M^{\prime})\), respectively.

**Definition 5.8**.: A **morphism** _of twisted relative Rota-Baxter operators from \(T\) to \(T^{\prime}\) consists of a pair \((\chi,\psi)\) of a Lie conformal algebra homomorphism \(\chi:\mathcal{A}\to\mathcal{A}^{\prime}\) and a \(\mathbb{C}[\partial]\)-module homomorphism \(\psi:M\to M^{\prime}\) satisfying_

\[\chi\circ T=T^{\prime}\circ\psi,\ \ \rho^{\prime}(\chi(a))_{\lambda}\psi(m)=\psi(\rho(a)_{\lambda}m),\ \ \psi\phi_{\lambda}(a,b)=\phi^{\prime}_{\lambda}(\chi(a),\chi(b)), \tag{5.16}\]

_for all \(a,b\in\mathcal{A}\) and \(m\in M\). It is called an_ **isomorphism** _if \(\chi\) and \(\psi\) are both linear isomorphisms._

**Proposition 5.9**.: _With the notations above, if \((\chi,\psi)\) is a morphism from \(T\) to \(T^{\prime}\), then \(\psi:M\to M^{\prime}\) is an \(\mathrm{NS}\)-Lie conformal algebra homomorphism from \((M,\circ_{\lambda},\vee_{\lambda})\) to \((M^{\prime},\circ^{\prime}_{\lambda},\vee^{\prime}_{\lambda})\), where \(\circ_{\lambda},\ \vee_{\lambda},\ \circ^{\prime}_{\lambda}\) and \(\vee^{\prime}_{\lambda}\) are given by_

\[u\circ_{\lambda}v=T(u)_{\lambda}v,\quad u\vee_{\lambda}v=\phi_{\lambda}(T(u),T(v)),\ \text{for}\ u,v\in M;\]
\[u^{\prime}\circ^{\prime}_{\lambda}v^{\prime}=T^{\prime}(u^{\prime})_{\lambda}v^{\prime},\quad u^{\prime}\vee^{\prime}_{\lambda}v^{\prime}=\phi^{\prime}_{\lambda}(T^{\prime}(u^{\prime}),T^{\prime}(v^{\prime})),\ \text{for}\ u^{\prime},v^{\prime}\in M^{\prime}.\]

Proof.: By (5.14) and (5.16), we have

\[\psi(u\circ_{\lambda}v)=\psi(T(u)_{\lambda}v)=(\chi\circ T)(u)_{\lambda}\psi(v)=(T^{\prime}\circ\psi)(u)_{\lambda}\psi(v)=\psi(u)\circ^{\prime}_{\lambda}\psi(v),\]
\[\psi(u\vee_{\lambda}v)=\psi\phi_{\lambda}(T(u),T(v))=\phi^{\prime}_{\lambda}(\chi\circ T(u),\chi\circ T(v))=\phi^{\prime}_{\lambda}(T^{\prime}\circ\psi(u),T^{\prime}\circ\psi(v))=\psi(u)\vee^{\prime}_{\lambda}\psi(v),\]

for any \(u,v\in M\). Then the result follows from Theorem 5.7.

**Proposition 5.10**.: _Let \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) be an \(\mathrm{NS}\)-Lie conformal algebra. Define_

\[\phi_{\lambda}(a,b)=a\vee_{\lambda}b,\ \forall\ a,b\in\mathcal{A}.\]

_Then \(\phi\) is a \(2\)-cocycle of the Lie conformal algebra \(\mathcal{A}_{Lie}\) with coefficients in the module \(\mathcal{A}\) given by (5.6). Furthermore, the identity map \(\mathrm{Id}:\mathcal{A}\to\mathcal{A}_{Lie}\) is a \(\phi\)-twisted relative Rota-Baxter operator._

Proof.: By a direct calculation, we have

\[(\mathbf{d}\phi)_{\lambda,\mu}(a,b,c)= a\vee_{\lambda}[b_{\mu}c]-[a_{\lambda}b]\vee_{\lambda+\mu}c-b\vee_{\mu}[a_{\lambda}c]+a\circ_{\lambda}(b\vee_{\mu}c)\]
\[-b\circ_{\mu}(a\vee_{\lambda}c)+c\circ_{-\lambda-\mu-\partial}(a\vee_{\lambda}b)\overset{(5.3)}{=}0,\]

which proves that \(\phi\) is a \(2\)-cocycle. The rest follows from (5.4).

**Proposition 5.11**.: _Let \((\mathcal{A},[\cdot_{\lambda}\cdot])\) be a Lie conformal algebra with a Nijenhuis operator \(N\)._
Define_ \[a\circ_{\lambda}b=[N(a)_{\lambda}b],\ a\vee_{\lambda}b=-N[a_{\lambda}b],\ \forall\ a,b\in\mathcal{A}.\] _Then \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) is an \(\mathrm{NS}\)-Lie conformal algebra._

Proof.: By Example 4.29, the identity map \(\mathrm{Id}:\mathcal{A}\to\mathcal{A}^{N}\) is a \(\phi\)-twisted relative Rota-Baxter operator, where the \(\lambda\)-action of \(\mathcal{A}^{N}\) on \(\mathcal{A}\) and \(\phi\) are given by \[\rho(a)_{\lambda}x=[N(a)_{\lambda}x],\quad\phi_{\lambda}(a,b)=-N[a_{\lambda}b],\quad\forall\ a,b\in\mathcal{A}^{N},x\in\mathcal{A}.\] By Theorem 5.7, the \(\lambda\)-operations \(\circ_{\lambda}\) and \(\vee_{\lambda}\) defined by \[a\circ_{\lambda}b=\mathrm{Id}(a)_{\lambda}b=[N(a)_{\lambda}b],\quad a\vee_{\lambda}b=-N[\mathrm{Id}(a)_{\lambda}\mathrm{Id}(b)]=-N[a_{\lambda}b]\] make \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) into an \(\mathrm{NS}\)-Lie conformal algebra.

Similar to the properties of Nijenhuis operators on Lie algebras given in [29], we also have

**Proposition 5.12**.: _Let \((\mathcal{A},[\cdot_{\lambda}\cdot])\) be a Lie conformal algebra with a Nijenhuis operator \(N\). For all \(k,l\in\mathbb{N}\), we have_
1. \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{k}})\) _is a Lie conformal algebra;_
2. \(N^{l}\) _is also a Nijenhuis operator on the Lie conformal algebra_ \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{k}})\)_;_
3. _The Lie conformal algebras_ \((\mathcal{A},([\cdot_{\lambda}\cdot]_{N^{k}})_{N^{l}})\) _and_ \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{k+l}})\) _coincide;_
4. _The Lie conformal algebras_ \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{k}})\) _and_ \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{l}})\) _are compatible, that is, any linear combination of_ \([\cdot_{\lambda}\cdot]_{N^{k}}\) _and_ \([\cdot_{\lambda}\cdot]_{N^{l}}\) _still makes_ \(\mathcal{A}\) _into a Lie conformal algebra;_
5. \(N^{l}\) _is a Lie conformal algebra homomorphism from_ \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{k+l}})\) _to_ \((\mathcal{A},[\cdot_{\lambda}\cdot]_{N^{k}})\)_._

Proof.: It follows by a straightforward calculation. We omit the details.

By Propositions 5.11 and 5.12, we have

**Corollary 5.13**.: _Let \((\mathcal{A},[\cdot_{\lambda}\cdot])\) be a Lie conformal algebra and \(N\) a Nijenhuis operator on \(\mathcal{A}\). For any \(k,l\in\mathbb{N}\), \((\mathcal{A},\circ_{\lambda}^{k,l},\vee_{\lambda}^{k,l})\) is an NS-Lie conformal algebra, where \(\circ_{\lambda}^{k,l}\) and \(\vee_{\lambda}^{k,l}\) are given by_ \[a\circ_{\lambda}^{k,l}b=[N^{k}(a)_{\lambda}b]_{N^{l}},\;a\vee_{\lambda}^{k,l}b=-N^{k}[a_{\lambda}b]_{N^{l}},\;\text{for}\;a,b\in\mathcal{A}.\]

**Example 5.14**.: _Let \(\mathcal{A}=\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\) be a twilled Lie conformal algebra. If \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{2}\) are the corresponding projections of \(\mathcal{A}\) onto \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), respectively, then any linear combination of \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{2}\) is a Nijenhuis operator on \(\mathcal{A}\). Furthermore, for any \(k,l\in\mathbb{N}\), \((\mathcal{A},\circ_{\lambda}^{k,l},\vee_{\lambda}^{k,l})\) is an NS-Lie conformal algebra, where \(\circ_{\lambda}^{k,l}\) and \(\vee_{\lambda}^{k,l}\) are given by_ \[a\circ_{\lambda}^{k,l}b=k[\mathfrak{p}_{1}(a)_{\lambda}b]+l[\mathfrak{p}_{2}(a)_{\lambda}b],\;a\vee_{\lambda}^{k,l}b=-k\mathfrak{p}_{1}[a_{\lambda}b]-l\mathfrak{p}_{2}[a_{\lambda}b],\;\text{for}\;a,b\in\mathcal{A}.\]

**Example 5.15**.: _Let \(\mathcal{A}=\mathcal{A}_{1}\bowtie\mathcal{A}_{2}\) be a twilled Lie conformal algebra.
For any \(k_{1},k_{2}\in\mathbb{C}\), define \(N:\mathcal{A}\to\mathcal{A}\) by \(N|_{\mathcal{A}_{i}}=k_{i}\,\mathrm{id}_{\mathcal{A}_{i}}\), \(i=1,2\). Then \(N\) is a Nijenhuis operator on \(\mathcal{A}\), and \((\mathcal{A},\circ_{\lambda},\vee_{\lambda})\) is an NS-Lie conformal algebra, where \(\circ_{\lambda}\) and \(\vee_{\lambda}\) are given by_ \[a\circ_{\lambda}b=k_{1}[\mathfrak{p}_{1}(a)_{\lambda}b]+k_{2}[\mathfrak{p}_{2}(a)_{\lambda}b],\;a\vee_{\lambda}b=-k_{1}\mathfrak{p}_{1}[a_{\lambda}b]-k_{2}\mathfrak{p}_{2}[a_{\lambda}b],\;\text{for}\;a,b\in\mathcal{A}.\]

## 6. Cohomology and deformations of twisted relative Rota-Baxter operators

In this section, we give the cohomology of twisted relative Rota-Baxter operators and use this cohomology to study infinitesimal deformations of twisted relative Rota-Baxter operators.

### Cohomology of twisted relative Rota-Baxter operators

Let \(T\) be a \(\phi\)-twisted relative Rota-Baxter operator on a module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\). We have shown that \(M^{T,\phi}=(M,[\cdot_{\lambda}\cdot]^{T,\phi})\) is a Lie conformal algebra, where the \(\lambda\)-bracket \([\cdot_{\lambda}\cdot]^{T,\phi}\) is given by (4.12). Furthermore, \(\rho^{T}:M\to\mathrm{Cend}(\mathcal{A})\) defined by (4.13) gives a module of \(M^{T,\phi}\). Therefore, we obtain a cohomology for the Lie conformal algebra \(M^{T,\phi}\) with coefficients in the module \((\mathcal{A};\rho^{T})\). More precisely, let \(C^{0}(M,\mathcal{A})=\mathcal{A}/\partial\mathcal{A}\), and for \(k\geq 1\), let \(C^{k}(M,\mathcal{A})\) be the space of \(\mathbb{C}\)-linear maps from \(M^{\otimes k}\) to \(\mathbb{C}[\lambda_{1},\cdots,\lambda_{k-1}]\otimes\mathcal{A}\) satisfying (2.8)–(2.10). Denote by \(C^{*}(M,\mathcal{A})=\oplus_{k\in\mathbb{Z}_{+}}C^{k}(M,\mathcal{A})\). The coboundary operator \(\mathbf{d}_{T}:C^{0}(M,\mathcal{A})\to C^{1}(M,\mathcal{A})\) is given by \[(\mathbf{d}_{T}\bar{a})(m)=\rho^{T}(m)_{-\partial}a=[T(m)_{-\partial}a]+T(l_{-\partial}(a,m)-\phi_{-\partial}(T(m),a)), \tag{6.1}\] where \(\bar{a}\in\mathcal{A}/\partial\mathcal{A}\), \(m\in M\), and \(l_{\lambda}(a,m)=\rho(a)_{-\lambda-\partial}m\).
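The operator (6.1) is indeed well defined on the quotient \(C^{0}(M,\mathcal{A})=\mathcal{A}/\partial\mathcal{A}\). A minimal check, assuming the usual conformal sesquilinearity rules \([x_{\lambda}\partial a]=(\lambda+\partial)[x_{\lambda}a]\) and \(\phi_{\lambda}(x,\partial a)=(\lambda+\partial)\phi_{\lambda}(x,a)\) (and likewise for \(l\)), under which every term vanishes at \(\lambda=-\partial\): \[[T(m)_{-\partial}\partial a]=\big((\lambda+\partial)[T(m)_{\lambda}a]\big)\big|_{\lambda=-\partial}=0,\quad l_{-\partial}(\partial a,m)=0,\quad\phi_{-\partial}(T(m),\partial a)=0,\] so that \((\mathbf{d}_{T}\,\overline{\partial a})(m)=0\) and \(\mathbf{d}_{T}\bar{a}\) depends only on the class \(\bar{a}\).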
For \(f\in C^{k}(M,\mathcal{A})\) with \(k\geq 1\), \(\mathbf{d}_{T}f\in C^{k+1}(M,\mathcal{A})\) is given by \[(\mathbf{d}_{T}f)_{\lambda_{1},\cdots,\lambda_{k}}(m_{1},\cdots,m_{k+1})\] \[= \sum_{i=1}^{k}(-1)^{i+1}[T(m_{i})_{\lambda_{i}}f_{\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\lambda_{k}}(m_{1},\cdots,\hat{m_{i}},\cdots,m_{k+1})]\] \[+\sum_{i=1}^{k}(-1)^{i+1}T\big(\rho(f_{\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\lambda_{k}}(m_{1},\cdots,\hat{m_{i}},\cdots,m_{k+1}))_{-\lambda_{i}-\partial}m_{i}\big)\] \[+\sum_{i=1}^{k}(-1)^{i+1}T\phi_{\lambda_{i}}\big(T(m_{i}),f_{\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\lambda_{k}}(m_{1},\cdots,\hat{m_{i}},\cdots,m_{k+1})\big)+\sum_{i,j=1,i<j}^{k}(-1)^{k+i+j+1}\] \[\times f_{\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\hat{\lambda_{j}},\cdots,\lambda_{k},\lambda_{k+1}^{\dagger}}(m_{1},\cdots,\hat{m_{i}},\cdots,\hat{m_{j}},\cdots,m_{k+1},(T(m_{i})_{\lambda_{i}}m_{j}-T(m_{j})_{-\lambda_{i}-\partial}m_{i}+\phi_{\lambda_{i}}(T(m_{i}),T(m_{j}))))\] \[+(-1)^{k}[T(m_{k+1})_{\lambda_{k+1}^{\dagger}}f_{\lambda_{1},\cdots,\lambda_{k-1}}(m_{1},\cdots,m_{k})]+(-1)^{k}T\big(\rho(f_{\lambda_{1},\cdots,\lambda_{k-1}}(m_{1},\cdots,m_{k}))_{-\lambda_{k+1}^{\dagger}-\partial}m_{k+1}\big)\] \[+(-1)^{k}T\phi_{\lambda_{k+1}^{\dagger}}\big(T(m_{k+1}),f_{\lambda_{1},\cdots,\lambda_{k-1}}(m_{1},\cdots,m_{k})\big)+\sum_{i=1}^{k}(-1)^{i}\] \[\times f_{\lambda_{1},\cdots,\hat{\lambda_{i}},\cdots,\lambda_{k}}(m_{1},\cdots,\hat{m_{i}},\cdots,m_{k},T(m_{i})_{\lambda_{i}}m_{k+1}+T(m_{k+1})_{-\lambda_{i}-\partial}m_{i}+\phi_{\lambda_{i}}(T(m_{i}),T(m_{k+1}))),\] where \(m_{1},\cdots,m_{k+1}\in M\) and \(\lambda_{k+1}^{\dagger}=-\sum_{j=1}^{k}\lambda_{j}-\partial\).

**Definition 6.1**.: _Let \(T:M\to\mathcal{A}\) be a \(\phi\)-twisted relative Rota-Baxter operator. The cohomology of the cochain complex \((C^{*}(M,\mathcal{A}),\mathbf{d}_{T})\) is called the_ **cohomology of the \(\phi\)-twisted relative Rota-Baxter operator** \(T\)_. The corresponding \(k\)-th cohomology group, which we denote by \(H^{k}_{T}(M,\mathcal{A})\), is called the_ **\(k\)-th cohomology group** _for the \(\phi\)-twisted relative Rota-Baxter operator \(T\)._

**Remark 6.2**.: _Since a relative Rota-Baxter operator is a \(\phi\)-twisted relative Rota-Baxter operator with \(\phi=0\), the above cohomology of \(\phi\)-twisted relative Rota-Baxter operators recovers the cohomology of relative Rota-Baxter operators in the case \(\phi=0\)._

Let \(T:M\to\mathcal{A}\) be a \(\phi\)-twisted relative Rota-Baxter operator. We have shown in Proposition 4.20 that \((C^{*}(M,\mathcal{A}),I_{1}^{T},I_{2}^{T},I_{3}^{T})\) is an \(L_{\infty}\)-algebra with trivial higher brackets. Therefore, \(I_{1}^{T}\circ I_{1}^{T}=0\). Furthermore, the following result holds.

**Proposition 6.3**.: _For \(f\in C^{k}(M,\mathcal{A})\) with \(k\geq 1\), we have \(\mathbf{d}_{T}f=(-1)^{k}I_{1}^{T}(f)\)._

Proof.: It follows by a direct calculation. We omit the details.

### Infinitesimal deformations of twisted relative Rota-Baxter operators

**Definition 6.4**.: _Let \((M;\rho)\) be a module over a Lie conformal algebra \(\mathcal{A}\) and \(T:M\to\mathcal{A}\) a \(\phi\)-twisted relative Rota-Baxter operator.
An_ **infinitesimal deformation** _of \(T\) is a \(t\)-parameterized sum \(T_{t}=T+t\mathfrak{T}\) for some \(\mathfrak{T}\in C^{1}(M,\mathcal{A})\) such that, for \(m,n\in M\), the following equation holds:_ \[[T_{t}(m)_{\lambda}T_{t}(n)]\equiv T_{t}\big(\rho(T_{t}(m))_{\lambda}n-\rho(T_{t}(n))_{-\lambda-\partial}m+\phi_{\lambda}(T_{t}(m),T_{t}(n))\big)\ (\mathrm{mod}\ t^{2}).\] _In this case, we also say that \(\mathfrak{T}\) generates an infinitesimal deformation of \(T\)._

It is straightforward to check that \(\mathfrak{T}\) generates an infinitesimal deformation of \(T\) if and only if, for \(m,n\in M\), the following condition holds: \[[T(m)_{\lambda}\mathfrak{T}(n)]+[\mathfrak{T}(m)_{\lambda}T(n)]= \mathfrak{T}\big(\rho(T(m))_{\lambda}n-\rho(T(n))_{-\lambda-\partial}m+\phi_{\lambda}(T(m),T(n))\big)\] \[+T\big(\rho(\mathfrak{T}(m))_{\lambda}n-\rho(\mathfrak{T}(n))_{-\lambda-\partial}m+\phi_{\lambda}(\mathfrak{T}(m),T(n))+\phi_{\lambda}(T(m),\mathfrak{T}(n))\big). \tag{6.2}\] This is equivalent to saying that \(\mathfrak{T}\) is a \(1\)-cocycle in the cohomology of \(T\), namely, \(\mathbf{d}_{T}(\mathfrak{T})=0\).

**Definition 6.5**.: _Two infinitesimal deformations \(T_{t}=T+t\mathfrak{T}\) and \(T^{\prime}_{t}=T+t\mathfrak{T}^{\prime}\) of a \(\phi\)-twisted relative Rota-Baxter operator \(T\) are said to be_ **equivalent** _if there exists an element \(\bar{a}\in\mathcal{A}/\partial\mathcal{A}\) such that the pair of maps \((\chi_{t},\psi_{t})\) defined by_ \[\chi_{t}(x)=x-t[x_{-\partial}a],\ \ \psi_{t}(m)=m+tl_{-\partial}(a,m)-t\phi_{-\partial}(T(m),a),\] _satisfies_ \[\chi_{t}\circ T_{t}\equiv T^{\prime}_{t}\circ\psi_{t},\ \ \rho(\chi_{t}(x))_{\lambda}\psi_{t}(m)\equiv\psi_{t}(\rho(x)_{\lambda}m),\ \ \psi_{t}\circ\phi_{\lambda}(x,y)\equiv\phi_{\lambda}(\chi_{t}(x),\chi_{t}(y)),\] _for \(x,y\in\mathcal{A}\) and \(m\in M\), where "\(\equiv\)" means "equality modulo \(t^{2}\)"._

Notice that \((\chi_{t},\psi_{t})\) is well defined, i.e., independent of the choice of a representative \(a\) of \(\bar{a}\), since \([x_{-\partial}\partial a^{\prime}]\), \(l_{-\partial}(\partial a^{\prime},m)\) and \(\phi_{-\partial}(T(m),\partial a^{\prime})\) vanish for any \(a^{\prime}\in\mathcal{A}\) due to conformal sesquilinearity. An infinitesimal deformation \(T_{t}=T+t\mathfrak{T}\) of \(T\) is said to be **trivial** if \(T_{t}\) is equivalent to \(T\).

Now suppose that \(T_{t}=T+t\mathfrak{T}\) and \(T^{\prime}_{t}=T+t\mathfrak{T}^{\prime}\) are equivalent infinitesimal deformations of \(T\). By the condition \(\chi_{t}\circ T_{t}\equiv T^{\prime}_{t}\circ\psi_{t}\ (\mathrm{mod}\ t^{2})\), we have \[\mathfrak{T}(m)-\mathfrak{T}^{\prime}(m)=[T(m)_{-\partial}a]+T(l_{-\partial}(a,m)-\phi_{-\partial}(T(m),a)). \tag{6.3}\] By the condition \(\rho(\chi_{t}(x))_{\lambda}\psi_{t}(m)\equiv\psi_{t}(\rho(x)_{\lambda}m)\ (\mathrm{mod}\ t^{2})\), we have \[\rho(x)_{\lambda}\phi_{-\partial}(T(m),a)=\phi_{-\partial}(T(\rho(x)_{\lambda}m),a). \tag{6.4}\] By the condition \(\psi_{t}\circ\phi_{\lambda}(x,y)\equiv\phi_{\lambda}(\chi_{t}(x),\chi_{t}(y))\ (\mathrm{mod}\ t^{2})\), we have \[l_{-\partial}(a,\phi_{\lambda}(x,y))=\phi_{-\partial}(T\phi_{\lambda}(x,y),a)-\phi_{\lambda}(x,[y_{-\partial}a])-\phi_{\lambda}([x_{-\partial}a],y). \tag{6.5}\] It follows from (6.3) that \[\mathfrak{T}(m)-\mathfrak{T}^{\prime}(m)=(\mathbf{d}_{T}\,\bar{a})(m),\quad\text{for}\ m\in M.
\tag{6.6}\] Hence we have the following result:

**Theorem 6.6**.: _Let \(T_{t}=T+t\mathfrak{T}\) and \(T^{\prime}_{t}=T+t\mathfrak{T}^{\prime}\) be two equivalent infinitesimal deformations of the \(\phi\)-twisted relative Rota-Baxter operator \(T\). Then \(\mathfrak{T}\) and \(\mathfrak{T}^{\prime}\) determine the same cohomology class in \(H^{1}_{T}(M,\mathcal{A})\)._

**Definition 6.7**.: _Let \(T\) be a \(\phi\)-twisted relative Rota-Baxter operator on a module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\). An element \(\bar{a}\in\mathcal{A}/\partial\mathcal{A}\) is called a_ **Nijenhuis element** _associated to \(T\) if a representative element \(a\) satisfies (6.4) and (6.5)._

Denote by \(\mathrm{Nij}(T)\) the set of Nijenhuis elements associated to the \(\phi\)-twisted relative Rota-Baxter operator \(T\). It is easy to see from (6.3)–(6.5) that a trivial infinitesimal deformation of \(T\) produces a Nijenhuis element. Notably, the converse is also valid, as the following theorem shows.

**Theorem 6.8**.: _Let \(T\) be a \(\phi\)-twisted relative Rota-Baxter operator on a module \((M;\rho)\) over a Lie conformal algebra \(\mathcal{A}\). For any \(\bar{a}\in\mathrm{Nij}(T)\), \(T_{t}=T+t\mathfrak{T}\) with \(\mathfrak{T}=\mathbf{d}_{T}(\bar{a})\) is a trivial infinitesimal deformation of \(T\)._

Proof.: Since \(\mathfrak{T}=\mathbf{d}_{T}(\bar{a})\), we have \(\mathbf{d}_{T}(\mathfrak{T})=0\), and thus condition (6.2) is valid. Evidently, (6.3)–(6.5) are satisfied, and therefore \(\mathfrak{T}\) generates a trivial infinitesimal deformation of \(T\).

**Acknowledgements.** This research was supported by the National Key Research and Development Program of China (2022T150109) and the Fundamental Research Funds for the Central Universities (2022FRFK060025, 2412022QD033).
2304.01232
Adapt a Generic Human-Centered AI Design Framework in Children's Context
Through systematically analyzing the literature on designing AI-based technologies, we extracted design implications and synthesized them into a generic human-centered design framework for AI technologies to better support human needs and mitigate their concerns. When adapting the framework to children's context, understanding their specific needs, behaviors, experiences, and social environments is needed. Therefore, we are working on projects to explore tailored design considerations for children, such as through investigating children's use of existing AI-based toys and learning technologies. By participating in the ACM CHI 2023 Workshop on "Child-Centred AI Design: Definition, Operation, and Considerations," we hope to learn more about how other researchers in this field approach designing child-centered AI technologies, exchange ideas on the research landscape of children and AI, and explore the possibility to develop a practical child-centered design framework of AI technologies for technology designers and developers.
Zhibin Zhou, Junnan Yu
2023-04-03T03:44:17Z
http://arxiv.org/abs/2304.01232v1
# Adapt a Generic Human-Centered AI Design Framework in Children's Context

###### Abstract

Through systematically analyzing the literature on designing AI-based technologies, we extracted design implications and synthesized them into a generic human-centered design framework for AI technologies to better support human needs and mitigate their concerns.

Through systematically analyzing the literature on designing AI-based technologies, we extracted design implications and synthesized them into a generic human-centered design framework for AI technologies to better support human needs and mitigate their concerns (Figure 1) [1]. As the figure shows, the framework consists of four dimensions: Machine Learning, Stakeholders, Context, and UX Values. Each dimension further includes components and characteristics impacting user experiences.

Figure 1: The generic human-centered AI design framework for AI technologies [1]

Although not explicitly situated in children's context, the framework has revealed important design and research directions for designing AI-based technologies for children, to name a few in different dimensions:
* _Data privacy and security (Machine Learning)_: Appropriate measures are needed to ensure children's data is used ethically and responsibly. Also, we need to raise children's and caregivers' awareness of what data AI systems will use in which ways.
* _Ethical considerations (Stakeholders)_: There are many ethical considerations when developing AI systems for children, especially potential harmful consequences, such as unfair algorithms, reinforced biases, and resulting abuses. We will need a systematic exploration of ethical considerations for children's AI technologies, including existing ones for general technologies and new ones unique to AI systems.
* _Diverse abilities and needs (UX Values)_: Even children of a similar age have different developmental and cognitive abilities, not to mention children of different ages. How can technology be designed to accommodate these differences? There is still limited data on how children interact with various AI applications, making it challenging to develop and test AI systems appropriate for their needs and abilities.
* _Social networks (Context)_: Children's technology use is typically mediated and regulated by their caregivers, especially parents. Accordingly, caregivers' expectations, concerns, and interventions regarding their children's use of AI technologies should be comprehensively investigated and considered in the design.

When adapting the framework to children's context, understanding their specific needs, behaviors, experiences, and social environments is needed. Therefore, we are working on projects to explore tailored design considerations for children, such as through investigating children's use of existing AI-based toys and learning technologies. By participating in the ACM CHI 2023 Workshop on _"Child-Centred AI Design: Definition, Operation, and Considerations,"_ we hope to learn more about how other researchers in this field approach designing child-centered AI technologies, exchange ideas on the research landscape of children and AI, and explore the possibility to develop a practical child-centered design framework of AI technologies for technology designers and developers.

CCS Concepts: **Human-centered computing \(\rightarrow\) Interaction design theory, concepts and paradigms**; **HCI theory, concepts and models**.
Additional Key Words and Phrases: AI for UK; Children; Design; Model **ACM Reference Format**: Zhibin Zhou and Junnan Yu. 2023. Adapt a Generic Human-Centered AI Design Framework in Children's Context. In _CHI 2023 Workshop on Child-centred AI Design: Definition, Operation and Considerations, April 23, 2023, Hamburg, Germany_. ACM, New York, NY, USA, 2 pages.
2306.08309
Taming Reversible Halftoning via Predictive Luminance
Traditional halftoning usually drops colors when dithering images with binary dots, which makes it difficult to recover the original color information. We proposed a novel halftoning technique that converts a color image into a binary halftone with full restorability to its original version. Our novel base halftoning technique consists of two convolutional neural networks (CNNs) to produce the reversible halftone patterns, and a noise incentive block (NIB) to mitigate the flatness degradation issue of CNNs. Furthermore, to tackle the conflicts between the blue-noise quality and restoration accuracy in our novel base method, we proposed a predictor-embedded approach to offload predictable information from the network, which in our case is the luminance information resembling from the halftone pattern. Such an approach allows the network to gain more flexibility to produce halftones with better blue-noise quality without compromising the restoration quality. Detailed studies on the multiple-stage training method and loss weightings have been conducted. We have compared our predictor-embedded method and our novel method regarding spectrum analysis on halftone, halftone accuracy, restoration accuracy, and the data embedding studies. Our entropy evaluation evidences our halftone contains less encoding information than our novel base method. The experiments show our predictor-embedded method gains more flexibility to improve the blue-noise quality of halftones and maintains a comparable restoration quality with a higher tolerance for disturbances.
Cheuk-Kit Lau, Menghan Xia, Tien-Tsin Wong
2023-06-14T07:27:06Z
http://arxiv.org/abs/2306.08309v3
# Taming Reversible Halftoning via Predictive Luminance

###### Abstract

Traditional halftoning usually drops colors when dithering images with binary dots, which makes it difficult to recover the original color information. We proposed a novel halftoning technique that converts a color image into a binary halftone with full restorability to its original version. Our novel base halftoning technique consists of two convolutional neural networks (CNNs) to produce the reversible halftone patterns, and a noise incentive block (NIB) to mitigate the flatness degradation issue of CNNs. Furthermore, to tackle the conflicts between the blue-noise quality and restoration accuracy in our novel base method, we proposed a predictor-embedded approach to offload predictable information from the network, which in our case is the luminance information resembling from the halftone pattern. Such an approach allows the network to gain more flexibility to produce halftones with better blue-noise quality without compromising the restoration quality. Detailed studies on the multiple-stage training method and loss weightings have been conducted. We have compared our predictor-embedded method and our novel method regarding spectrum analysis on halftone, halftone accuracy, restoration accuracy, and the data embedding studies. Our entropy evaluation evidences our halftone contains less encoding information than our novel base method. The experiments show our predictor-embedded method gains more flexibility to improve the blue-noise quality of halftones and maintains a comparable restoration quality with a higher tolerance for disturbances.

Reversible halftoning, deep learning, blue-noise.

## 1 Introduction

Halftoning is commonly used in the printing industry [1] to reproduce tone with limited colors, e.g., black and white, due to cost considerations. The original image's color and fine details are inevitably lost during this process, which makes the originals nearly impossible to recover from these degraded halftones. Even state-of-the-art inverse halftoning methods [2, 3] can only recover an approximate grayscale version, since the color is usually dropped before halftoning. Apparently, resolving this dilemma requires a forward-looking halftoning technique that retains the information necessary for restoration. In this paper, we conducted a thorough study to explore this problem.

Traditional halftoning methods distribute halftone dots mainly for tone reproduction. We observe that this target still permits a certain degree of perturbation in the desired binary pattern, as evidenced in Fig. 1. This indicates the possibility of utilizing such a degree of freedom for additional usage, i.e., embedding the potentially missing color information and fine details. Formally, this brings out a new concept, i.e., reversible halftoning, which converts a color image to a halftone that can be restored to the original color version. Inspired by invertible grayscale [4], we adopt the invertible generative model to formulate our problem. However, generating quality halftones is more challenging than decolorization. The challenges lie in the flatness degradation of CNNs in halftoning and the difficulty of achieving vivid visual simulation and accurate information embedding with 1-bit pixels. To address flatness degradation, we propose a Noise Incentive Block (NIB) that introduces spatial variation to the feature space while keeping the information intact.
To achieve the binary halftone, we propose a binary gate that uses a gradient-propagation trick to allow training with quantization. However, as reported in our preliminary study [5], the binary encoding space is limited, which forces the blue-noise quality to be sacrificed for restoration accuracy. Inspired by the predictive coding concept [6], we promote the encoding framework by exploiting the predictive power of the inverse halftone module.

Fig. 1: Observation: the halftone variants of (a), (b), and (c) present similar visual quality but with different binary patterns, as the overlaid RGB image visualized in (d) shows. It shows the possibility of modulating the patterns for additional usage.

The intuition is that most luminance information can be inferred from the halftone, so removing luminance information from the encoding stage offers more capacity for blue-noise realization. The model is trained end-to-end with highly mixed objectives, formulated as three loss terms: the halftone loss, the restoration loss, and the luminance loss. Particularly, we propose a guidance-aware training scheme to circumvent the tricky convergence issues of multi-objective optimization. Extensive evaluation and an ablation study demonstrate that the proposed predictive encoding model allows a good balance among visual simulation, the blue-noise profile, and restoration accuracy for reversible halftoning. The trained model achieves very competitive performance against traditional halftoning algorithms in halftoning quality while still maintaining decent restoration accuracy for the original color image.

The preliminary version of this manuscript presented two distinct contributions. Firstly, we introduced a novel method for reversible halftoning that enhances the functionality of existing halftoning applications; this method circumvents the ill-posed inverse halftoning problem at its source. Secondly, we proposed a model-agnostic plug-in, the noise incentive block, which effectively addresses the flatness degradation of CNNs. In this manuscript, our primary focus is to promote the invertible generation framework with a predictive coding concept. Our objective is to demonstrate the efficacy of this framework in reducing the encoding burden and improving the embedded halftone quality.

## 2 Background

### _Image Halftoning_

Digital halftoning has been widely studied over the past decades. The goal is to render images with only two pixel values, black and white, creating an illusion of the continuous tone of the original image through the spatial arrangement of black and white dots. Traditional deterministic approaches include ordered dithering [7, 8, 9], error diffusion [10, 11, 12], dot diffusion [13], and direct binary search [14]. They aim to produce halftone images that preserve the local tone of the original image with minimal artifacts. Since humans are perceptually more aware of artifacts in low-frequency areas, an ideal halftone image should possess the blue-noise property, which corresponds to visually pleasing patterns [1] with minimal low-frequency components [15]. There are several ways to achieve this, such as perturbed error diffusion [1], blue-noise masks [15, 16], diffusion parameter set optimization [11, 17], and tile-based methods [18]. Although focusing on blue-noise rendering can produce smooth and evenly distributed dot patterns, fine details such as edges and complex structures will be blurred.
Many works aim to improve halftone images using edge enhancement [19, 20, 21, 22]. Pang et al. [23] first introduced structural similarity and tonal similarity into the optimization objective, and Chang et al. [24] subsequently optimized the error-diffusion algorithm with structural similarity. Some neural-network-based approaches [25, 26] produce halftone images in a deterministic manner.

### _Inverse Halftoning_

In the early printing industry, many images in newspapers, magazines, and books were halftone prints. Inverse halftoning is dedicated to restoring the continuous tone of images from halftone images. It is an ill-posed problem because fine details are lost in the halftoning process. The simplest method is to process the halftone image with a low-pass filter [27, 28, 29]; however, such a method also removes edge information. Kite et al. [30] proposed a kernel function built from local gradients to preserve high-frequency details. Xiong et al. [31] proposed to extract edge information and discard background noise via wavelet decomposition. Some works reformulate the continuous-tone restoration problem as a projection onto convex sets (POCS) [32, 33]. Ting and Riskin [34] proposed using a look-up table (LUT) to obtain a temporary grayscale image. Mese and Vaidyanathan [35] further proposed restoring the grayscale image using a LUT without any linear filtering techniques. Both approaches improve the efficiency of restoring continuous-tone images, and many dictionary-learning-based approaches have been proposed since then [36, 37, 38, 39, 40, 41]. Yue and Chen [42] proposed a Hopfield-neural-network-based [43] optimization model for inverse halftoning. Huang et al. [44] proposed using a radial basis function neural network to restore the continuous tone from the halftone input. However, the quality of inverse halftoning is highly dependent on the underlying halftoning method. Recently, deep learning approaches have been explored. Xiao et al. [45] and Gao et al. [46] proposed inverse halftoning via a U-Net structure with convolution layers. Xia and Wong [2] improved the restoration quality by introducing residual learning layers to further predict enhanced details. Kim and Park [3] proposed a generative adversarial network (GAN) with object-category prediction and edge-information extraction. Besides restoring grayscale images, restoring color from halftone images is harder, because more information than luminance alone must be filled in. Yen et al. [47] restored color images by concatenating inverse halftoning and colorization stages. Such a method requires extra information to hint the network to predict color from the intermediate grayscale image.

### _Reversible Generation_

The reversible generation topic has been widely studied in the data hiding field. Typical applications include hiding watermarks or copyright declarations in images [48, 49, 50]. Researchers have also explored methods to hide color information in the grayscale version of an image. Queiroz and Braun [51] proposed hiding the chrominance channels in subbands from a wavelet transform. Xu and Chan [52] proposed hiding the chrominance channels specifically in high-frequency areas of the grayscale version via error-diffusion techniques. Recently, CNNs have gained massive success in image processing tasks. By considering the grayscale image as a latent representation of the color image, Xia et al.
[4] proposed an encoding-and-decoding framework to generate reversible grayscale images that can be reversed back to their color version. Ye et al. [53] further proposed using dual feature extraction to improve the restoration quality. A similar framework is adopted in other tasks, such as image resampling [54, 55] and image retargeting [56]. Another approach, invertible neural networks (INNs) [57, 58, 59, 60, 61, 62, 63, 64], generates latent representations without loss of information; however, it relies on explicitly structured network architectures, a constraint that generally makes the training tricky and unstable. In our preliminary study [5], we adopted the invertible generation model of [4], and the limited encoding space of the binary pattern caused a trade-off between blue-noise quality and restoration quality. In this paper, we promote the encoding framework with a predictive coding concept, i.e., removing the luminance information from the encoding stage and inferring it from the halftone pattern, which facilitates a practically better balance between visual quality and data-embedding accuracy.

## 3 Reversible Halftoning

We aim to learn reversible binary patterns for halftoning color images, which must offer visual pleasantness and embed restoration-necessary information at the same time. The key idea is to encode the color information into the halftone image and restore the color image by decoding the halftone image. We first adopted an autoencoder design, where the latent feature is represented by the halftone patterns, to approach the problem. However, the halftone patterns have to fulfill certain objectives: 1) the distribution of dots should perceptually resemble the continuous tone of the grayscale version; 2) the distribution of dots should maintain high blue-noise quality; and 3) the color information should be embedded into the distribution of dots. This poses a challenge to the autoencoder approach, because the latent feature is not just a representation of the embedded information but must fulfill all three objectives simultaneously.

### _Embedding Framework with Predictive Luminance_

**Concept of Predictive Coding.** The concept of predictive coding has been described in different areas. In neuroscience, "predictive coding" suggests that the brain solves inverse problems via an internal model of the world [65, 66]; it provides an explanation of how our brain receives and reduces redundant signals. The same idea was also established in the signal-processing domain. The key idea is to compress data by discarding information and to restore the data by predicting the discarded information back. Predictable information shall be excluded from the compressed data; the compressed data should only include the residual error between the predicted and the actual values. Such an approach significantly increases the compression ratio. Predictive coding appears in various applications, such as image compression [67], temporal video compression [68] and representation learning [69, 70]. Our problem is similar to the data compression setting, where information is compressed (encoded) and restored (decoded). Our base autoencoder method suffers from the drawback of encoding all information into the halftone pattern: since we train the network to encode and decode information in RGB space, the encoder will encode as much information in RGB as it can.
However, due to the binary pixel level and the requirement that the halftone image resemble the continuous tone of its input, the space available for encoding is further limited; in our base method, blue-noise quality is sacrificed as a result. If we remove some information from this limited encoding space but put it back at the restoration stage, the network gains more freedom to produce halftone patterns while maintaining its restoration ability. On the other hand, inverse halftoning has been long studied and is well developed; state-of-the-art work [2] can predict the continuous tone from halftone images with fine details. We can therefore offload the luminance information from the encoding-decoding pipeline, constraining the network to sample the subspace of chrominance only. In the restoration stage, we extend the network with a predictor module, an inverse halftone module, to restore the offloaded luminance information. In this manuscript, we aim to improve the blue-noise quality of our halftone images through this spared encoding space. We extend the design established in \(\text{Ours}/_{base}\)[5]. Our network consists of three main components (Fig. 2):
* An encoder that encodes color information into the generated halftone image;
* A predictor that predicts the luminance channel from the encoded halftone image;
* A decoder that restores the chrominance channels from the encoded halftone image.

Given an RGB image \(I_{c}\), we construct a reversible halftone image \(O_{h}\) by the encoder \(\mathrm{E}\) and the binary gate \(\mathrm{B}\): \[\tilde{O}_{h} =\mathrm{E}(\,\mathrm{Nib}(I_{c})\,) \tag{1}\] \[O_{h} =\mathrm{B}(\tilde{O}_{h}) \tag{2}\] The details of the noise incentive block \(\mathrm{Nib}(\cdot)\) are discussed in Section 3.2.1. The encoder generates a _pseudo halftone image_\(\tilde{O}_{h}\), in which each pixel is a real value ranging from 0 to 1. The binary gate quantizes the pixels of the pseudo halftone image from real values to either 0 or 1. Then we feed \(O_{h}\) from (2) into two networks: a decoder network \(\mathrm{D}\) to restore the chrominance channels \(O_{c}^{ch}\), and a predictor network \(\mathrm{P}\) to predict the luminance channel \(O_{c}^{l}\): \[O_{c}^{ch} =\mathrm{D}(O_{h}) \tag{3}\] \[O_{c}^{l} =\mathrm{P}(O_{h}) \tag{4}\] Finally, we obtain the restored color image \(O_{c}\) by concatenating these three channels and converting to the RGB color space. Our color-space conversion follows the standard specified in [71].

### _Network Architecture_

We adopt a U-shaped architecture for both the encoder and decoder networks. Both networks share a similar structure, containing three downscale blocks, three upscale blocks, four residual blocks, and two convolution blocks. We adopt U-Net as the network backbone because of its enlarged receptive field; other qualified CNN architectures may also work. We adopt the model of [2] as our predictor module; any other inverse halftone module may also work. Additionally, we propose two special designs within this network: the noise incentive block, to mitigate the flatness degradation introduced by CNNs; and the binary gate, to encourage the network to generate near-binary pixels. The base architecture, which does not include the predictor module, is denoted as \(\text{Ours}/_{base}\) in this manuscript.
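To make the data flow of (1)–(4) concrete, the following PyTorch sketch composes the four stages at inference time. It is an illustrative sketch, not the authors' released code: the module objects (`encoder`, `decoder`, `predictor`), the noise-channel realization of `nib`, and the `ycbcr_to_rgb` helper are assumed stand-ins, and the binary gate uses the common straight-through trick discussed in Section 3.2.2.

```python
import torch

def nib(x):
    # Noise Incentive Block (one plausible realization): concatenate a random
    # noise map so flat regions carry spatial variation, while the original
    # image channels are kept intact.
    return torch.cat([x, torch.rand_like(x[:, :1])], dim=1)

def binary_gate(x):
    # Eq. (2): hard 0/1 quantization; gradients pass straight through.
    hard = (x > 0.5).float()
    return x + (hard - x).detach()

@torch.no_grad()
def reversible_halftone(encoder, predictor, decoder, img_rgb, ycbcr_to_rgb):
    pseudo = encoder(nib(img_rgb))   # Eq. (1): pseudo halftone, values in [0, 1]
    half = binary_gate(pseudo)       # Eq. (2): strict binary halftone O_h
    lum = predictor(half)            # Eq. (4): predicted luminance channel
    chroma = decoder(half)           # Eq. (3): two restored chrominance channels
    return half, ycbcr_to_rgb(torch.cat([lum, chroma], dim=1))  # restored O_c
```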
#### 3.2.1 Noise Incentive Block

We uncovered a phenomenon that we refer to as "flatness degradation," which arises from the convolutional paradigm with spatially shared kernels when presented with flat inputs. On a flat input, a convolution amounts to a scaling operation that applies the same parameters everywhere and produces a constant signal, thereby impeding the ability of CNNs to dither a constant grayness. This, in turn, hinders the formulation of the blue-noise profile, which is primarily measured over constant grayness. To address this issue, we propose the Noise Incentive Block (NIB), which introduces spatial variation to the feature representation while preserving the original input. By preprocessing the color image before passing it to the encoder, our dithering network is able to generate binary halftones in flat regions. The NIB also enables us to formulate the blue-noise profile through low-frequency constraints on dithered constant grayness. Examples of NIB-equipped results are shown in Fig. 16 in the supplementary.

#### 3.2.2 Binary Gate

Another special design for the dithering network is the binary gate \(\mathrm{B}(\cdot)\), which quantizes the network output \(\tilde{O}_{h}\) to a strictly binary image \(O_{h}=\mathrm{B}(\tilde{O}_{h})\). We explicitly adopt a binary gate because a soft non-binary penalty is insensitive to tiny deviations, i.e., near-0 or near-1 valued pixels, which are vulnerable to quantization when stored as a 1-bit bitmap and thus hurt the restoration accuracy. However, one obstacle should be noted: the binary gate is non-differentiable. To enable joint training, we use the Straight-Through Estimator [72] for the binarization when calculating the gradients.

#### 3.2.3 Predictor

Fig. 3 shows an overview of the predictor module. We notice that halftone patterns inherently convey luminance and structural information, regardless of whether they are encoded with color information or not. As a result, we adopted Xia's [2] inverse halftone module to predict continuous luminance information, allowing us to concentrate on encoding chrominance information. The predictor consists of two key components: the content aggregation block, which incorporates three downscale blocks, three upscale blocks, and four residual blocks; and the detail enhancement block, which employs eight residual blocks to improve the predicted luminance details.

Fig. 2: Overview of our network architecture with the embedded luminance predictor. \(\oplus\) denotes the concatenation operation.

Fig. 3: Overview of the predictor architecture [2]. \(\oplus\) denotes the addition operation.

### _Loss Function_

We trained our network with the following loss functions: the halftone loss \(\mathcal{L}_{half}\), the restoration loss \(\mathcal{L}_{restore}\), and the luminance loss \(\mathcal{L}_{lumin}\). We trained our network in multiple stages; the detailed combinations of loss functions and their corresponding coefficients are discussed in Section 3.4.

#### 3.3.1 Halftoning

We adopt the halftone loss \(\mathcal{L}_{half}\) to train the network to generate the desired reversible halftone image. Our halftone loss is formulated as: \[\mathcal{L}_{half}=\alpha\cdot\mathcal{L}_{bin}+\beta\cdot\mathcal{L}_{tone}+\gamma\cdot\mathcal{L}_{blue} \tag{5}\] where \(\mathcal{L}_{bin}\) denotes the binary loss, \(\mathcal{L}_{tone}\) denotes the tone loss, and \(\mathcal{L}_{blue}\) denotes the blue-noise loss.
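The flatness degradation described in Section 3.2.1 is easy to reproduce in a few lines. The snippet below is a hypothetical demonstration with random, untrained convolutions, not an experiment from the paper: a constant-grayness input yields a constant output away from the padded borders, whereas concatenating an NIB-style noise channel restores spatial variation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
flat = torch.full((1, 1, 64, 64), 0.8)                  # constant grayness 0.8

conv = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 1, 3, padding=1))
out = conv(flat)
# Interior pixels all see identical neighborhoods, so the output is constant:
print(out[..., 4:-4, 4:-4].std().item())                # exactly 0.0

conv_nib = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))
noisy = torch.cat([flat, torch.rand_like(flat)], dim=1)  # NIB-style input
print(conv_nib(noisy)[..., 4:-4, 4:-4].std().item())    # clearly > 0
```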
Let \(\tilde{O}_{h}\) be the _pseudo halftone image_ generated by the encoder \(\mathrm{E}\) before the binary gate \(\mathrm{B}\). Since the quantization of \(\tilde{O}_{h}\) cannot be differentiated, the binarization loss plays a crucial role in encouraging the network to produce binary intensity values in the halftone image. It is formulated as \[\mathcal{L}_{bin}=||\mathrm{B}(\tilde{O}_{h})-\tilde{O}_{h}||_{1} \tag{6}\] where \(||\cdot||_{1}\) denotes the \(L_{1}\) norm and \(\mathrm{B}(\cdot)\) denotes the binary gate. Based on the tone-similarity concept proposed by [23], we apply the tone loss \(\mathcal{L}_{tone}\) to encourage the halftone image \(O_{h}\) to resemble the tone of the input image. It is formulated as follows: \[\mathcal{L}_{tone}=||G(I_{gray})-G(O_{h})||_{2} \tag{7}\] where \(G(\cdot)\) denotes a Gaussian filter with kernel size \(11\times 11\) and sigma \(2.0\), \(I_{gray}\) denotes the grayscale version of the color input \(I_{c}\), and \(||\cdot||_{2}\) denotes the \(L_{2}\) norm. To train the network to produce halftones with the blue-noise property, we adopt the blue-noise loss \(\mathcal{L}_{blue}\) suggested by [5]. Its basic idea is to restrict the network to generate minimal low-frequency components, because they are more noticeable to the human eye. Therefore, we prepared a set of plain-color images \(\mathcal{P}\). In each training iteration, after the color image from the dataset has been passed to the network, we randomly draw a plain-color image \(p\in\mathcal{P}\). A halftone image \(z_{p}\) is obtained by passing \(p\) into the network. The blue-noise loss is formulated as \[\mathcal{L}_{blue}=||[DCT(z_{p})-DCT(p_{gray})]\odot M||_{2} \tag{8}\] where \(p_{gray}\) denotes the grayscale version of \(p\), \(DCT(\cdot)\) denotes the discrete cosine transformation, and \(M\) denotes a binary mask. We set \(M\) to only allow the first 3.8% of low-frequency DCT coefficients to pass through. Compared to the preliminary version [5] of this manuscript, we dropped the structure loss suggested by [23], since it has no significant effect on our training outcome.

#### 3.3.2 Restoration

We construct the restoration loss as \[\mathcal{L}_{restore}=\zeta\cdot\mathcal{L}_{chromin}+\eta\cdot\mathcal{L}_{percep} \tag{9}\] where \(\mathcal{L}_{chromin}\) denotes the chrominance loss and \(\mathcal{L}_{percep}\) denotes the perceptual loss. The chrominance loss trains the decoder to extract chrominance information from the encoded halftone image. Given the restored chrominance channels \(O_{c}^{ch}\), the chrominance loss is formulated as \[\mathcal{L}_{chromin}=||I_{c}^{ch}-O_{c}^{ch}||_{2} \tag{10}\] The perceptual loss trains the network to resemble color signals at the perceptual level. We adopt the perceptual loss \(\mathcal{L}_{percep}\) suggested by [5], which is formulated as \[\mathcal{L}_{percep}=||\Psi(I_{c})-\Psi(O_{c})||_{2} \tag{11}\] where \(\Psi(\cdot)\) denotes the latent feature extracted from the conv4_4 layer of the pre-trained VGG-19 model [73]. The luminance loss trains the predictor to generate a continuous luminance channel from the halftone image. Since we adopted the inverse halftone module from [2], we take the full loss function of [2] as our luminance loss \(\mathcal{L}_{lumin}\).
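As a reference for how (7) and (8) can be realized, here is a self-contained PyTorch sketch. The Gaussian kernel size and sigma follow the text; the square low-frequency corner covering roughly 3.8% of the coefficients and the matrix-form DCT-II are our assumptions about details the paper leaves open, so treat this as an illustration rather than the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def gaussian_kernel(size=11, sigma=2.0):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def tone_loss(halftone, gray, kernel=gaussian_kernel()):
    # Eq. (7): compare Gaussian-filtered halftone against filtered grayscale.
    blur = lambda x: F.conv2d(x, kernel, padding=kernel.shape[-1] // 2)
    return torch.norm(blur(gray) - blur(halftone))

def dct_matrix(n):
    # Orthonormal DCT-II basis: C @ x @ C.T is the 2-D DCT of a square image x.
    i = torch.arange(n, dtype=torch.float32)
    c = torch.cos(math.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    c[0] /= math.sqrt(2)
    return c * math.sqrt(2 / n)

def blue_noise_loss(z_p, p_gray, low_freq_ratio=0.038):
    # Eq. (8): penalize only low-frequency DCT deviations on a plain-color patch
    # (assumes a single-image batch of shape 1 x 1 x n x n).
    n = z_p.shape[-1]
    c = dct_matrix(n).to(z_p)
    diff = c @ (z_p - p_gray)[0, 0] @ c.T     # DCT is linear, so diff of DCTs
    k = int(n * math.sqrt(low_freq_ratio))    # k*k corner ~ 3.8% of coefficients
    mask = torch.zeros_like(diff)
    mask[:k, :k] = 1.0
    return torch.norm(diff * mask)
```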
\[\mathcal{L}_{content}=||\hat{O}_{c}^{l}-I_{gray}||_{2} \tag{12}\] \[\mathcal{L}_{full}=w_{a}||\Psi(O_{c}^{l})-\Psi(I_{gray})||_{2}+||O_{c}^{l}-I_{gray}||_{1} \tag{13}\] \[\mathcal{L}_{lumin}=\mathcal{L}_{content}+w_{b}\mathcal{L}_{full} \tag{14}\] where \(\hat{O}_{c}^{l}\) denotes the initial predicted grayscale image from the content aggregation module in [2]. We set the coefficients to the default values stated in [2], i.e., \(w_{a}=2.0\times 10^{-6}\) and \(w_{b}=1.5\).

### _Training Strategy_

Training the whole model from scratch is vulnerable to local minima because of the challenging optimization target. To circumvent this problem, we adopt a warm-up training scheme. In the first stage, we warm up the dithering network alone, so that it can generate visually pleasant halftone images. To stabilize the training, the binary gate is temporarily removed. Unfortunately, this relaxation still fails to guarantee satisfactory halftones, as shown in Fig. 4(b), and it is even associated with slow convergence, as shown by the green curve in Fig. 20 in the supplementary. To boost the training, we propose explicitly providing a reference halftone image \(I_{h}\) to guide the training. For simplicity, classical error diffusion [11] is employed as the reference. However, directly measuring the pixel-wise difference between the predicted halftone and the reference does not work, since per-pixel inspection can never capture the intrinsic features of binary halftone patterns.

_Halftone Pattern Measurement._ Inspired by perceptual loss [74], we propose to measure the halftone pattern difference in a continuous feature domain. We pretrained an inverse halftoning network \(\mathrm{F}\), a U-shaped architecture with three downscale blocks, four residual blocks, and three upscale blocks, to capture the halftone patterns in the continuous feature domain. Accordingly, we formulate the guidance loss \(\mathcal{L}_{G}\) as \[\mathcal{L}_{G}=||\mathrm{F}(O_{h})-\mathrm{F}(I_{h})||_{2} \tag{15}\] Then, we perform the warm-up training on the dithering network with the combined loss: \[\mathcal{L}_{stage1}=\mathcal{L}_{half}+\mathcal{L}_{G} \tag{16}\] where we set \(\alpha=0.1\), \(\beta=0.6\), \(\gamma=0.3\). The red curve in Fig. 20 shows the high training efficiency: with only 28 epochs, the network is able to generate decent visual results, as shown in Fig. 4(c). In the second stage, we freeze the predictor module and train the encoder and decoder networks to learn the desired halftone pattern. The whole model is trained under the following combination of loss functions until the loss converges: \[\mathcal{L}_{stage2}=\mathcal{L}_{half}+\mathcal{L}_{restore}+\epsilon\cdot\mathcal{L}_{G} \tag{17}\] By isolating the predictor module during training, we ensure that the learning of the encoder does not involve luminance information; only chrominance information is encoded into the halftone. We set \(\alpha=0.4\), \(\beta=0.6\), \(\gamma=0.9\), \(\epsilon=0.3\), \(\zeta=1\) and \(\eta=0.00002\) empirically. It is worth noting that we set \(\gamma\) to 0.9 instead of 0.3 as in \(\text{Ours}/_{base}\); the detailed analysis and reasoning are discussed in Section 4.5. We still have to use the guidance loss \(\mathcal{L}_{G}\) here: as we experimented, if we dropped this loss, the halftone lost its structures and became over-smoothed. Fig. 4(d) shows an example of training without the guidance loss in stage two. In the final stage, we fine-tune the predictor module.
We adopted the inverse halftone module from [2] as our predictor, and it was trained with the loss function specified in (14) only; the encoder and decoder are frozen in this stage. Separating stage two and stage three ensures that the encoder only encodes chrominance information into \(O_{h}\). The decoder only outputs two channels, compared to \(\text{Ours}/_{base}\), which outputs three; hence, the restoration burden for luminance has been shifted to the predictor. This modification allows the encoder to generate \(O_{h}\) with better blue-noise quality instead of sacrificing it. Therefore, our proposed method generates halftones with better tone resemblance and blue-noise quality, while maintaining restoration quality comparable to our base design \(\text{Ours}/_{base}\).

## 4 Experimental Results

We trained the warm-up stage for 28 epochs, until the model generated decent visual results; we then trained the second and final stages until both corresponding losses converged, which took 87 and 50 epochs, respectively. The whole training takes a total of 165 epochs. Our predictor-embedded method obviously contains more parameters on the restoration side than our base method \(\text{Ours}/_{base}\). Therefore, to further justify the effectiveness of the predictor module, we compare our method with different variations of \(\text{Ours}/_{base}\) in this section. The color error maps, in this paper and the supplementary, are generated by normalizing pixels from [0,255] to [0,1] and computing the L1 distance between the images in the RGB color space.

### _Dataset_

We evaluated our method on the VOC2012 dataset [75]. It contains 17,125 color images. We cropped and resized all images to \(256\times 256\) and randomly split the image set into 13,758 training images and 3,367 validation images.

### _Comparison with traditional halftoning_

Following the practice in [23], tone consistency is measured by the PSNR between the Gaussian-filtered halftone and the Gaussian-filtered luminance channel of the input, and structure consistency is measured by the SSIM between the halftone and the luminance channel of the input. We experimented with 3,367 grayscale images (decolorized from our validation set), as existing halftoning methods can only dither grayscale images. Two classical halftoning methods that generate high-quality halftones are selected as our competitors: Ostromoukhov's method [11] and the structure-aware halftoning method [23]. In our experiment, the structure-aware halftoning method is used with default parameters for quantitative evaluation, while case-by-case tuned results are provided for visual comparison. The statistics are tabulated in Table I. Among all, our method achieves the best combined performance in tone similarity (PSNR) and structure similarity (SSIM). Fig. 5 shows examples on a gray ramp and on images with structures. Our halftone resembles the continuous tone but is not as smooth as the traditional methods, because we traded off blue-noise quality for the encoded color information. However, our halftone visual quality is comparable with the traditional methods on images with structures: our method achieves better structure than the error-diffusion method [11] and less rigid patterns than the structure-aware method [23]. We further compared our method with some state-of-the-art halftoning methods [76, 17].
Fig. 6 shows that our method produces fewer "worm effects" than [11, 23] but still produces checkerboard patterns compared to those improved methods [76, 17]. To analyze the blue-noise quality of our halftone images, we adopted the common analysis methods of [1]: we selected the classical error-diffusion methods [10, 11] as our competitors and analyzed halftone images obtained from constant-grayness images in terms of their _power spectrum_ and _radially averaged power spectrum_. The grayness is set to 0.8. The power spectrum indicates the frequency amplitude in 2-D; since the frequency amplitude of a halftone is supposed to be radially symmetric, the radially averaged power spectrum visualizes the 2-D power spectrum in 1-D space.

\begin{table} \begin{tabular}{c|c c} \hline \hline **Methods** & **PSNR** & **SSIM** \\ \hline Ostromoukhov method & 41.728 & 0.1007 \\ Structure-aware halftoning & 21.803 & 0.0340 \\ Ours & 34.444 & 0.1094 \\ \hline \hline \end{tabular} \end{table} TABLE I: Quantitative evaluation of halftone images in terms of mean PSNR and SSIM values. Higher PSNR/SSIM indicates better quality.

Fig. 4: Halftones generated by models trained with and without the guidance loss in different stages. (a) Error diffusion; (b) warm-up training for 130 epochs w/o guidance loss; (c) warm-up training for 28 epochs with guidance loss; (d) our stage two w/o guidance loss; (e) our stage two with guidance loss.

According to [1], for a good halftone with the blue-noise property, the radial frequency graph should have 1) low amplitude in low-frequency areas; 2) a peak transition region at the principle frequency; and 3) a flat high-frequency region. We adopted the principle frequency defined in [77]. Fig. 7 illustrates the power spectrum and radially averaged power spectrum of the converted halftone images. Our method produces low amplitude in low-frequency regions, similar to [10, 11]. Also, we observed that our peak is closer to the principle frequency, and the shape of the curve resembles that of the classical method [11]. The frequency analysis across different gray levels is located in Fig. 15 in the supplementary.

### _Evaluation on Reversible Halftoning_

**Blue-noise quality** We evaluate the blue-noise quality of the halftones generated by our method, which includes the predictor module, and by our base method \(\text{Ours}/_{base}\) on color input images. Fig. 8 (top row) shows halftone examples produced by \(\text{Ours}/_{base}\) and our method. Both methods preserve structural details; our halftone patterns produce smoother surfaces and fewer "grid-like" structures in low-variance areas, which indicates a better blue-noise property in our halftones. The improvement in blue-noise quality is much more evident in Fig. 9(a): our halftone dissolves the "grid-like" patterns and is visually smoother than \(\text{Ours}/_{base}\), with comparable restoration quality. More examples on the color ramp are provided in Fig. 21 in the supplementary. By observing the spectrum analysis results of \(\text{Ours}/_{base}\) and our method in Fig. 7, we can see that our halftone resembles the transition peak of [11] more closely than \(\text{Ours}/_{base}\) in the power spectrum. Although the peak region is not as wide as in [11] and is shifted right from the principle frequency, it approaches the principle frequency more closely than our base methods. Hence, our method with a predictor module extends the model's ability to produce halftone images with better blue-noise quality.
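For readers who wish to reproduce the spectrum analysis of Fig. 7, a radially averaged power spectrum can be computed as below. This NumPy sketch is an illustrative re-implementation of the standard blue-noise methodology [1], not the authors' evaluation code; the principal-frequency formula is the one commonly attributed to [77].

```python
import numpy as np

def radially_averaged_psd(halftone):
    # Power spectrum of a square binary halftone patch, averaged over
    # concentric rings of radial frequency, with the DC component removed.
    h = halftone - halftone.mean()
    psd = np.abs(np.fft.fftshift(np.fft.fft2(h))) ** 2
    n = h.shape[0]
    yy, xx = np.indices(psd.shape)
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=psd.ravel()) / np.maximum(counts, 1)
    return radial[: n // 2]          # one value per radial-frequency bin

def principle_frequency(g):
    # Principal frequency for gray level g (in units of the dot pitch), per [77].
    return np.sqrt(g) if g <= 0.5 else np.sqrt(1.0 - g)
```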
**Restoration quality** We compare our method on grayscale and color image inputs. We take two state-of-the-art methods as our competitors: PRL-Net [2] as the baseline grayscale candidate and ColTran [78] as the baseline color candidate. PRL-Net [2] generates grayscale from the error-diffused halftone, and ColTran [78] colorizes the grayscale from [2] to obtain the color version. Since PRL-Net can only restore grayscale images, we prepared 3,367 grayscale images (decolorized from our testing dataset) for the grayscale comparison.

Fig. 5: Qualitative comparison of halftone images with intensive structures. (a) Ostromoukhov; (b) Structure-aware; and (c) Ours.

Fig. 6: Qualitative comparison of halftone patterns on 'Lena'. The example of [76] is directly taken from [76]. (a) Floyd-Steinberg; (b) Ostromoukhov; (c) Zhou and Fang [17]; (d) [76]; and (e) Ours.

Fig. 7: Spectrum analysis of various halftone results for constant grayness 0.8. From top to bottom: power spectrum, radially averaged power spectrum density, and anisotropy. The green dashed line indicates the principle frequency. (a) Floyd-Steinberg; (b) Ostromoukhov; (c) Ours/base; and (d) Ours.

Table II presents the statistics of both PSNR and SSIM. Our superiority lies in restoration in the color domain. Our method avoids the ill-posed problem of color choice and improves color segmentation with the encoded color information. ColTran [78] experiences a drop in PSNR due to differences in color choice from the ground truth. Fig. 10 shows an example of ColTran [78] failing to segment color sections properly: our method is able to produce the pink and green colors in the corresponding areas where ColTran [78] fails. In fact, the ability of ColTran [78] to guess colors lies in its training batches, while our method retrieves the color information from the halftone patterns. Furthermore, we compare our method with \(\text{Ours}/_{base}\) to evaluate the effectiveness of the predictor module. The example in Fig. 8 (bottom row) demonstrates that our improved encoded halftone has restoration ability comparable to \(\text{Ours}/_{base}\). By adopting the predictor module, we achieve the same level of restoration quality while improving the blue-noise quality of the halftone. As our halftone becomes smoother with less encoded information, the predictor module fills in the missing luminance information by "guessing"; therefore, our restoration power maintains a level comparable to \(\text{Ours}/_{base}\) when the "guess" is correct. We notice restoration artifacts at extreme dark luminance, such as Y=1 in Fig. 21. We believe this is caused by the inverse halftone module being trained on images with structural complexity rather than plain colors. Nonetheless, our restoration quality is comparable on average and applicable in real-world cases.

### _Data embedding study_

We adopt the concept of entropy from information theory to estimate the information encoded in our halftone patterns.
### _Data embedding study_ We adopt the concept of entropy from information theory to estimate the information encoded in our halftone patterns. The information content of a source can be measured in terms of entropy [79]. We claim that our method encodes less information than \(\text{Ours}/_{base}\). One way to evaluate the estimated entropy is to compare signals via lossless compression. In lossless compression, redundant signals are replaced by shorter code words [80]. Therefore, sources with less information, and hence less "surprise" in their signals, should obtain a higher compression rate [81, 79]. Since compressing images with different spatial arrangements yields variance in compression rates, we rotated and flipped all 3,367 images in all four directions before evaluating their compression rates. By expanding the image set in this way, we could ensure that the compression rates we compared were representative of each image's general compression characteristics. Finally, we compressed all the halftone images into a single ZIP archive using the universal zip library [82] for comparison purposes. Table III shows the respective compression rates. The classical error-diffusion method obtains the highest compression ratio, while all variations of our base methods obtain lower compression rates. Our predictor-embedded method sits between the error-diffusion method and our base methods. This experiment further supports our claim.
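A small sketch of this compression-based entropy estimate is shown below; we use zlib as a stand-in for the zip library cited as [82], so the absolute ratios will differ from the paper's.

```python
import zlib
import numpy as np

def compression_ratio(halftone):
    """halftone: 2-D array of {0, 1}; a higher ratio suggests that
    less information is encoded in the dot pattern."""
    raw = np.packbits(halftone.astype(np.uint8)).tobytes()
    return len(raw) / len(zlib.compress(raw, 9))

def augmented_ratio(halftone):
    # Rotate and flip before compressing, mirroring the augmentation
    # used to make the rates representative of each image.
    variants = [np.rot90(halftone, k) for k in range(4)]
    variants += [np.fliplr(v) for v in variants[:4]]
    return float(np.mean([compression_ratio(v) for v in variants]))
```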
Furthermore, we analyze how the encoded information is embedded, and how robust it is, by applying several typical disturbances to the generated halftones, including flipping, partial removal, and random impulse noise. Fig. 11 illustrates the restored color examples from the augmented halftones. Fig. 11(c) shows that, under a regional mask, the color in the unmasked regions is restored similarly to the original. This indicates that our color data are encoded locally rather than globally. However, the restored structure is also blurred; we believe this stems from the limited prediction accuracy of the predictor. The incorrect color restored from the flipped halftone in Fig. 11(b) indicates that the encoded information is directionally sensitive. Although both \(\text{Ours}/_{base}\) and \(\text{Ours}\) cannot restore correct colors from the flipped halftones, our restored version contains fewer structural diagonal artifacts. Fig. 12(a) shows a comparison of the grayscale version of the restored image. Fig. 12(b) further shows that our method has a higher tolerance to noise than our base method, which indicates good potential for real-world applications. Since most of the structural information is reconstructed from luminance, and we offload that work to the predictor, the encoded information only affects color correctness. Therefore, our restored color from flipped halftones contains fewer structural artifacts than \(\text{Ours}/_{base}\)'s. Our method also shows a higher tolerance to random noise than \(\text{Ours}/_{base}\). Ground-truth images and detailed comparisons are provided in Fig. 22 in the supplementary. We studied the importance of the isolation training strategy. We released the predictor and trained all three modules end-to-end in the second stage, skipping stage three. The result is denoted as \(\text{Ours}/_{end-to-end}\) in Table IV. We can see that there is an improvement in color restoration. However, the halftone accuracy remains at the same level as \(\text{Ours}/_{base}\). This is because, during backpropagation, both the predictor and the decoder provide learning signals to the encoder. With the increased parameters on the restoration side, an improvement in color restoration is expected. Therefore, isolating the predictor when training the encoder is a necessary step, which indicates the significance of our two-stage strategy. Finally, we compare the effectiveness of blue-noise manipulation between our base and predictor-embedded methods. For a fair comparison, we trained our base method with doubled layers in the decoder module, denoted as \(\text{Ours}/_{base}^{L}\), to match the parameter size of the predictor-embedded approach. It is worth noting that we propose setting the coefficient \(\gamma=0.3\) in our base method because we found that \(\mathcal{L}_{blue}\) and \(\mathcal{L}_{restore}\) conflict with each other. However, with the predictor-embedded approach, we can push the value of \(\gamma\) to 0.9. Table V shows a detailed comparison across the variations of \(\gamma\) during training. We can see that even if we increase the weight of the blue-noise loss, \(\text{Ours}/_{base}\) stays at the same level of quality in terms of halftone accuracy and restoration accuracy. Fig. 13 shows the spectrum analysis for models trained with increased \(\gamma\). Even with a larger parameter size in the decoder and a higher blue-noise loss weight on \(\text{Ours}/_{base}\), the anisotropy was suppressed, but the model still struggles to produce a transition peak around the principal frequency in the power spectrum. We can see that both Fig. 13(a) and Fig. 13(b) produce high intensity in the high-frequency area. This is because, without changing the encoded content, the suppression of low frequencies introduced by the blue-noise loss \(\mathcal{L}_{blue}\) forces the model to encode information into the high-frequency area. \(\text{Ours}/_{base}\) thus reaches its limit in improving the blue-noise property. With the predictor approach, our model quickly raises the bar for halftone tone consistency and restoration accuracy at the default weighting of 0.3. This is because, with less information to encode, the model tends to improve the tone of the halftone patterns when we set the default \(\mathcal{L}_{tone}\) coefficient \(\beta=0.6\). Since we aim to improve the blue-noise quality of our halftone images, we take \(\gamma=0.9\) as our final proposed version. Fig. 14 compares halftone images generated with a heavier tone-consistency weight versus a heavier blue-noise weight. The ablation study of the NIB block can be found in the supplementary. ## 5 Conclusion We propose a novel reversible halftoning technique with high restoration ability and state-of-the-art visual quality.
Our approach is a strong alternative to traditional halftoning methods and eliminates the need to tackle the ill-posed inverse halftoning problem. To extend the ability of the reversible model, we introduce a predictive module that alleviates the encoding trade-off between the blue-noise property and the hidden color information. Our formulation of the blue-noise loss as a low-frequency constraint on constant-grayness patterns ensures the visual pleasantness of the halftone patterns. We also propose a method to modulate the priorities of the different loss terms in three stages to handle the challenging optimization landscape. Our experiments demonstrate the advantages of our approach and highlight the improvement achieved by the predictor strategy. We believe our contributions to reversible halftoning and the predictor approach will inspire future work in this field. Fig. 14: Examples of halftones generated with different tone and blue-noise loss weights. (a) Input; (b) \(\beta=0.6,\gamma=0.3\); and (c) \(\beta=0.6,\gamma=0.9\). Fig. 13: Spectrum analysis between \(\text{Ours}/_{base}\) and Ours with increased blue-noise loss weight. (a) \(\text{Ours}/_{base},\gamma=0.9\); (b) \(\text{Ours}/_{base}^{L}\), \(\gamma=0.9\); and (c) Ours. \begin{table} \begin{tabular}{l|l|l l|l l} \hline \hline & & \multicolumn{2}{c|}{**Halftone**} & \multicolumn{2}{c}{**Restoration**} \\ \cline{3-6} **Methods** & \(\gamma\) & **PSNR** & **SSIM** & **PSNR** & **SSIM** \\ \hline \(\text{Ours}/_{base}\) & 0.3 & 30.678 & 0.1426 & 28.234 & 0.6408 \\ & 0.9 & 30.124 & 0.1838 & 27.666 & 0.6524 \\ \hline \(\text{Ours}/_{base}^{L}\) & 0.3 & 29.927 & 0.2422 & 28.276 & 0.6331 \\ & 0.9 & 28.744 & 0.1536 & 21.353 & 0.4489 \\ \hline \multirow{2}{*}{Ours} & 0.3 & 36.194 & 0.1081 & 29.362 & 0.7025 \\ & 0.9 & 32.282 & 0.1133 & 27.451 & 0.6036 \\ \hline \hline \end{tabular} \end{table} TABLE V: Ablation study on variations of \(\mathcal{L}_{blue}\)'s coefficient \(\gamma\).
2305.11308
MCD: A Model-Agnostic Counterfactual Search Method For Multi-modal Design Modifications
Designers may often ask themselves how to adjust their design concepts to achieve demanding functional goals. To answer such questions, designers must often consider counterfactuals, weighing design alternatives and their projected performance. This paper introduces Multi-objective Counterfactuals for Design (MCD), a computational tool that automates and streamlines the counterfactual search process and recommends targeted design modifications that meet designers' unique requirements. MCD improves upon existing counterfactual search methods by supporting multi-objective requirements, which are crucial in design problems, and by decoupling the counterfactual search and sampling processes, thus enhancing efficiency and facilitating objective trade-off visualization. The paper showcases MCD's capabilities in complex engineering tasks using three demonstrative bicycle design challenges. In the first, MCD effectively identifies design modifications that quantifiably enhance functional performance, strengthening the bike frame and saving weight. In the second, MCD modifies parametric bike models in a cross-modal fashion to resemble subjective text prompts or reference images. In a final multidisciplinary case study, MCD tackles all the quantitative and subjective design requirements introduced in the first two problems, while simultaneously customizing a bike design to an individual rider's biomechanical attributes. By exploring hypothetical design alterations and their impact on multiple design objectives, MCD recommends effective design modifications for practitioners seeking to make targeted enhancements to their designs. The code, test problems, and datasets used in the paper are available to the public at decode.mit.edu/projects/counterfactuals/.
Lyle Regenwetter, Yazan Abu Obaideh, Faez Ahmed
2023-05-18T21:10:58Z
http://arxiv.org/abs/2305.11308v2
# Counterfactuals for Design: A Model-Agnostic Method For Design Recommendations ###### Abstract We introduce Multi-Objective Counterfactuals for Design (MCD), a novel method for counterfactual optimization in design problems. Counterfactuals are hypothetical situations that can lead to a different decision or choice. In this paper, the authors frame the counterfactual search problem as a design recommendation tool that can help identify modifications to a design, leading to better functional performance. MCD improves upon existing counterfactual search methods by supporting multi-objective queries, which are crucial in design problems, and by decoupling the counterfactual search and sampling processes, thus enhancing efficiency and facilitating objective trade-off visualization. The paper demonstrates MCD's core functionality using a two-dimensional test case, followed by three case studies of bicycle design that showcase MCD's effectiveness in real-world design problems. In the first case study, MCD excels at recommending modifications to query designs that can significantly enhance functional performance, such as weight savings and improvements to the structural safety factor. The second case study demonstrates that MCD can work with a pre-trained language model to effectively suggest design changes based on a subjective text prompt. Lastly, the authors task MCD with increasing a query design's similarity to a target image and text prompt while simultaneously reducing weight and improving structural performance, demonstrating MCD's performance on a complex multimodal query. Overall, MCD has the potential to provide valuable recommendations for practitioners and design automation researchers looking for answers to their "What if?" questions by exploring hypothetical design modifications and their impact on multiple design objectives. The code, test problems, and datasets used in the paper are available to the public at decode.mit.edu/projects/counterfactuals/. ## I Introduction Modifying existing designs to generate new ones is an essential aspect of various engineering sectors, such as aerospace, automotive, architecture, pharmaceuticals, consumer goods, and many others. Design modification significantly impacts the performance, efficiency, and safety of engineered systems. Effective methods for design modification can lead to more sustainable and environmentally friendly technologies, better transportation systems, and safer infrastructure. Furthermore, improved design modification methods can enable cost savings and improved efficiency, making products more accessible and affordable for society. However, coming up with good design modifications can be challenging, as it requires navigating huge design spaces and making numerous trade-offs between competing objectives. Often, there are too many design attributes and potential modifications to consider. Not surprisingly, designers often struggle with the available choices and may ask themselves, "What if?". As a designer, the ability to ask "What if?" questions is crucial in the iterative process of design modification. By exploring hypothetical scenarios, designers can identify opportunities to improve design performance and functionality. However, answering "What if?" questions can be challenging, as it requires considering an extensive range of potential modifications and their effects on multiple design objectives.
Counterfactuals are a powerful reasoning tool that allows designers to ask such questions by exploring hypothetical design modifications and their impact on multiple design objectives. A counterfactual is a hypothetical situation that depicts what could have happened if a specific event or action did not occur. It requires envisioning an alternate reality where a different choice or decision was made and analyzing the differences in results. Counterfactuals are often employed in reasoning, decision-making, and causal inference. They aid in comprehending the impact of particular events or actions on outcomes and in considering the ramifications of various choices. Counterfactuals are typically employed to understand how an outcome would change given a different set of actions. This style of counterfactual can be applied to design problems to answer questions like: "How would the performance of this design change if I modified this particular attribute?" There are many tools to predict these 'classic' counterfactuals, such as simulations and predictive models. In this work, we instead consider an 'inverse' counterfactual problem, which asks: "What events would have needed to occur to result in this other outcome?" In design contexts, this often equates to the question: "What attributes of my design would I need to change to achieve a particular performance target, design classification, or functional requirement?" This paper proposes an approach to answer such 'inverse' counterfactual hypotheticals using multi-objective optimization. Our proposed approach, Multi-Objective Counterfactuals for Design (MCD), allows users to input a design and a set of desired attributes, then recommends targeted modifications to the design to achieve these attributes. It identifies these modifications by querying a set of attribute predictors in a directed search procedure dictated by an evolutionary algorithm. We demonstrate how predictors ranging from machine learning regressors to text embedding models can support target attributes ranging from functional performance targets to subjective text requirements. MCD can be viewed as an AI design assistant that allows users to ask challenging objective and subjective questions about an existing design, such as: "What modifications would it take to make this product 10% lighter?", "What would make my design look like this other concept?", or "How would my design need to change to look more sleek and futuristic?" By enabling designers to interact with AI systems simply and intuitively, counterfactuals open the door to more successful human-AI collaboration by enhancing and accelerating the design process. A block diagram demonstrating MCD's anticipated usage scenario is shown in Figure 1. A body of research particularly relevant to our work is counterfactual explanations, originally developed as a tool to interpret black-box machine learning (ML) models. Counterfactual explanations allow practitioners to understand the behavior of otherwise uninterpretable models by asking questions about counterfactual scenarios. A classic motivating example for counterfactual explanations involves a model that is deciding whether to approve a loan, where the applicant may ask: "What would I need to change for this model to approve my application?" Broadly speaking, these counterfactuals answer a very versatile question: "Hypothetically, what would I need to change about the input to my model for it to predict another outcome?"
Many of the common challenges that designers face can be framed as such a question. For example, given a model that predicts the functional performance of a design, a designer can ask how to change the design to achieve some desired functional performance. Despite this, counterfactual explanations have not yet been used in design engineering problems, to the best of our knowledge1. Footnote 1: A search for the term "counterfactual explanations" on the entire ASME digital collection, which includes design venues such as the IDETC conference and the Journal of Mechanical Design, returned zero results as of March 10, 2023. In this paper, we showcase our MCD method and demonstrate that counterfactual search is a simple yet powerful AI-driven design tool that real designers can leverage for a variety of tasks. To do so, we make several key contributions, which we summarize as follows: 1. We introduce Multi-Objective Counterfactuals for Design (MCD), a new method to search for counterfactual design modifications to achieve desired outcomes. We formulate MCD as a multi-objective search problem to minimize the magnitude and extent of the modifications, encourage proximity to the data manifold, and satisfy user-provided multi-modal requirements. 2. We demonstrate that MCD effectively suggests targeted design modifications to improve the functional performance of query designs, illustrating that counterfactual search could be viewed as an effective design recommendation tool. 3. We present the first text- and image-based counterfactual search in design using the Contrastive Language-Image Pre-training (CLIP) method. These cross-modal queries were previously not possible with existing counterfactual methods. 4. We demonstrate that MCD can effectively handle multimodal queries, including a mixed-variable text, image, and parametric query, the first example of multimodal queries to a counterfactual search model, to our knowledge. ## II Background Counterfactuals are a useful tool for investigating causality and forecasting the potential outcomes of different actions. Counterfactuals have been extensively used in various fields, including psychology, philosophy, social sciences, and machine learning, as they offer a valuable tool for examining causality and understanding the consequences of actions [1]. In psychology, counterfactual thinking has been studied in relation to emotions, such as regret and disappointment. In philosophy, counterfactuals have been used to explore questions of determinism and free will. In social sciences, counterfactual analysis is widely used to evaluate the impact of policies and interventions. Counterfactual explanations are also gaining traction in the field of machine learning as a means to improve the interpretability and fairness of machine learning models. In this literature review, we discuss three key areas that relate closely to our work -- 1) explainability and counterfactuals in machine learning, 2) multi-objective optimization approaches to counterfactuals, and 3) a multi-modal, zero-shot machine learning model that enables us to capture user requirements. Fig. 1: Multi-Objective Counterfactuals for Design (MCD) is a human-AI collaborative design recommendation tool. Users provide an initial design and a set of counterfactual attributes they would like to achieve. MCD queries a set of attribute predictors to search for a set of diverse modifications to the original design that achieve the counterfactual attributes.
### _Explainability and Counterfactuals in Machine Learning_ Counterfactual explanations are frequently used as a machine learning explainability tool. In machine learning, particularly deep learning, predictions are often mysterious and intractable. To remedy this intractability, a wealth of machine learning 'explainability' tools have been proposed in recent years. One common approach involves determining the sensitivity of the output with respect to the various input parameters (features), a technique known as 'feature importance.' Some popular methods in this category include Local Interpretable Model-Agnostic Explanations (LIME) [2] and Shapley Additive Explanations (SHAP) [3]. In the design automation community, these methods are often used to determine which design parameters have outsized impacts on design performance [4, 5, 6, 7] or which parameters are important for relationships between products [8]. Another common approach to explainability involves visualizing a model's decisions in some way. This technique lends itself well to data modalities that are easily appreciated visually, such as images, for which saliency maps are a common explainability method [9, 10, 11]. Counterfactuals were first proposed for machine learning (ML) explainability by Wachter _et al._[12]. Since then, researchers have proposed a wealth of counterfactual explanation approaches, which Verma _et al._[1] and Guidotti _et al._[13] review. Among the popular methods are Diverse Counterfactual Explanations (DiCE) [14], Feasible and Actionable Counterfactual Explanations (FACE) [15], and Multi-Objective Counterfactuals (MOC) [16]. Counterfactuals make for a great explainability tool since they allow users to intuitively understand the ML model's internal decision thresholds (i.e. "Where does my model start predicting a different outcome?"). Much like LIME and SHAP, many counterfactuals take a localized approach, with some even fitting local approximations to the data manifold to guide their explanations [17]. Counterfactuals have most commonly been proposed for tabular data, but have also been applied to images [18] and text [19], among other modalities. In general, good counterfactual explanations should typically demonstrate the following properties: 1. **Validity:** First and foremost, a good counterfactual explanation should result in the desired outcome. Depending on the nature of the problem, this desired outcome may be a class, an inequality, a range, an exact equality, or some combination of the above. For example, if we are querying a model that predicts the mass of a design and we specify a range of 2-3 kg, a proposed counterfactual should have a predicted mass in this range. 2. **Sparsity:** Good counterfactuals should be easy to realize, meaning that they should not change many features of the query. Sparsity refers to the number of features that must be modified to realize a counterfactual. 3. **Proximity:** While the number of modifications needed to realize a counterfactual is an important consideration, the extent of these modifications is also important. In simple terms, we would like counterfactuals to be as similar to the query as possible. This is typically quantified as a distance to the original query. 4. **Manifold Proximity:** In the classic usage of counterfactuals as an ML explainer, a predictive model has been trained on a dataset and is being iteratively queried by the counterfactual model. 
If queries lie too far from the data manifold on which the predictor was trained, predictions (and by extension counterfactuals) will no longer be accurate. In other use cases where the counterfactual is not explaining a statistical model, manifold proximity may not be desirable. 5. **Actionability:** In many problems, certain input parameters may not be changeable, but will nonetheless play a role in the output of the model. For example, the weight of the rider will play a significant role in the structural loading of a bicycle. However, when designing a bicycle, we can't choose to simply make the rider lighter. A good counterfactual explanation should only modify actionable features. Several works, such as [15], have also proposed more nuanced methods to handle actionability. 6. **Causality:** Features in a dataset may be causally linked, implying that changing one feature may necessitate changing another. In general, establishing causality is difficult. However, in design, we may be aware of causal relationships thanks to our fundamental understanding of the physics relating various input variables. For example, selecting a denser material for a given design may necessitate increasing the weight, provided that the geometry remains unchanged. This has clear ramifications for effective counterfactuals, which should ideally capture and respect any known causal relations in the problem. A strong counterfactual method should undoubtedly generate high-quality counterfactuals. However, good counterfactual methods should also exhibit several properties that may not be reflected in the strength of individual counterfactual examples themselves: 1. **Diverse Sets:** As emphasized in [14], it may be highly desirable to generate diverse sets of counterfactuals. This gives the user a wealth of options, ideally with different actionable requirements to achieve the query objective. 2. **Model-Agnosticism:** Ideally, the algorithms used for the generation of counterfactual explanations should treat the model as a black box and interact with the model only through its predict function [1]. These "model-agnostic" algorithms allow for wider applicability and code reuse. Notably, model-agnostic approaches do not rely on gradient information from the predictor but may be less sample-efficient than methods that leverage gradients, when available. Researchers have also adopted counterfactuals in a recommender-system setting. Tran _et al._[20] review the use of counterfactual explanations in recommendation systems and propose a method to generate counterfactuals for recommender systems. While related, this work differs slightly from our proposed use case, in which the counterfactual-generating model is the recommender. Regenwetter [21] also briefly investigates leveraging Diverse Counterfactual Explanations [14] for design recommendations, citing challenges due to the limitations of single-objective queries. ### _Multi-Objective Counterfactual Explanations_ Counterfactual explanations can be viewed as an optimization problem, and can similarly be implemented using an optimization algorithm. Many methods summarize the optimization objective as a weighted sum of the different objectives discussed earlier. However, another approach instead frames the counterfactual search process as a classic multi-objective optimization problem. Dandl _et al._[16] were the first to formalize this parallel between counterfactual explanations and multi-objective optimization (MOO) in Multi-Objective Counterfactuals (MOC). 
By handling objectives individually rather than as a single aggregated objective, MOC realizes a key benefit of Multi-Objective Optimization, namely the ability to generate non-dominated sets of counterfactual explanations. Whereas a single-objective approach returns a counterfactual that optimizes for a statically weighted aggregation of objectives, the non-dominated set allows designers to adaptively select counterfactuals based on their specific search priorities, which typically depend on the problem at hand. Multi-Objective Counterfactuals (MOC) [16] is a primary inspiration for Multi-Objective Counterfactuals for Design (MCD). However, MCD expands on MOC in several key directions. Chiefly, despite its name, MOC does not inherently support multi-objective queries. Furthermore, MOC does not distinguish between hard and soft constraints, despite the fact that this functionality is ingrained in the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) [22] that MOC is built around. MCD addresses these gaps while also decoupling the optimization and sampling steps, and introducing new ways to integrate counterfactuals with a multi-modal, zero-shot machine learning model. Since the overarching goal of MCD is not to explain predictors, but rather to search for design recommendation counterfactuals, we refer to the problem as 'counterfactual search.' Note that unlike counterfactual explanations, counterfactual search does not require ML predictors and can work with many types of forward models. It also has the additional goals of manifold similarity and meeting multi-objective multi-modal requirements. ### _Cross-Modal Design Recommendations_ The multitude of data modalities spanned by design data remains a prominent challenge in data-driven design [23, 24, 25]. Though a model explained by a counterfactual method may make predictions in one modality, users may instead prefer to query targets in an entirely different modality. We will demonstrate in this paper that MCD can be used in conjunction with rendering pipelines and trained language models to generate counterfactuals for a parametric model using images or even text prompts. In this way, counterfactuals can capture complex and abstract user requirements in a 'zero-shot' fashion, requiring no additional training to understand the context of the prompts. To provide context for this discussion, we will introduce a brief background on relevant subjects in cross-modal learning. When handling data of modalities like graphs [26], images [27], 3D geometry [28], text [29, 30], and mixed modalities, a common general technique involves mapping datapoints to a vector space. This effectively links datapoints of that modality to points in the vector space. Two or more modalities can then be linked by creating shared embeddings for the modalities using the same vector space. Shared text-image embeddings are an example of cross-modal embeddings that have garnered significant attention in recent years [31]. Radford _et al._[32] propose one of the most widely used models for text-image shared embeddings, called Contrastive Language-Image Pretraining (CLIP). CLIP trains a text embedding and an image embedding model simultaneously on a dataset of text-image pairs. The models are rewarded for mapping matching pairs to similar embedding vectors and mapping non-matching pairs to dissimilar embedding vectors. In our second and third case studies, we will be leveraging pre-trained CLIP models to query counterfactuals using text prompts.
Next, we move on to discuss our methodology. ## III Methodology In this section, we discuss the construction of the optimization algorithm behind MCD, emphasizing the constraints, objectives, and operators used. We then present our approach for sampling diverse sets of counterfactuals and discuss how we decouple the optimization from the final sampling step. Finally, we demonstrate the capabilities of MCD on a simple 2D problem. ### _Objectives_ Optimization algorithms typically seek to find constraint-satisfying solutions that achieve optimal objective scores. We will first discuss how objectives are defined in MCD, then go on to discuss constraints. Broadly, we consider two types of objectives: objectives related to counterfactual quality, and user-specified auxiliary objectives (often used for soft constraints). The former draw on the work of Dandl _et al._[16], who, among other things, leverage Gower distance [33] and the number of changed features as optimization objectives in MOC. 1. **Gower Distance:** Gower distance [33] is a metric that indicates the distance between mixed-feature data points. Its use as an objective tackles the issue of "proximity" introduced in Sec. II-A. The Gower distance between a d-dimensional counterfactual \(p\) and a query \(q\) is given in terms of their feature values \(p_{i}\) and \(q_{i}\) for \(i\in[1...d]\), as: \[f_{pr}(p,q)=\frac{1}{d}\sum_{i=1}^{d}\delta_{G}(p_{i},q_{i})\] (1) \(\delta_{G}(p_{i},q_{i})\) is a function that depends on the feature type and is given as: \[\delta_{G}(p_{i},q_{i})=\begin{cases}\frac{1}{\hat{R}_{i}}|p_{i}-q_{i}|&\text{if feature $i$ is numerical}\\ \mathbb{1}_{p_{i}\neq q_{i}}&\text{if feature $i$ is categorical}\end{cases}\] (2) Here, \(\hat{R}_{i}\) is the range of feature \(i\) observed in the dataset. 2. **Changed Feature Ratio:** This objective calculates the proportion of features that the proposed counterfactual, \(p\), modifies from the query, \(q\). Its use as an objective tackles the issue of "sparsity" introduced in Sec. II-A. \[f_{sp}(p,q)=\frac{||p-q||_{0}}{d}=\frac{1}{d}\sum_{i=1}^{d}\mathbb{1}_{p_{i}\neq q_{i}}\] (3) 3. **Average Gower Distance:** To measure the "manifold proximity" discussed in Sec. II-A, Dandl _et al._[16] calculate the average Gower distance to the \(k\) nearest observed data points \(s^{1}...s^{k}\) from the dataset \(S\): \[f_{mp}(p,S)=\frac{1}{k}\sum_{i=1}^{k}\frac{1}{d}\sum_{j=1}^{d}\delta_{G}(p_{j},s_{j}^{i})\] (4) 4. **Problem-Specific Objectives:** Just as the user may specify non-negotiable requirements for the model outcome (hard constraints), they may also specify objectives (\(f_{1}(p)...f_{M}(p)\)) that they would like to satisfy, and later specify targets for these objectives during sampling. These auxiliary objectives are directly included as optimization objectives in NSGA-II.
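For illustration, a minimal sketch of the three counterfactual-quality objectives (Eqs. 1-4) for purely numerical features might look as follows; categorical features would instead use the indicator branch of Eq. 2.

```python
import numpy as np

def gower(p, q, ranges):
    """Eq. 1: mean per-feature distance, each scaled by its range R_i."""
    return float(np.mean(np.abs(p - q) / ranges))

def changed_feature_ratio(p, q):
    """Eq. 3: fraction of features the counterfactual modifies."""
    return float(np.mean(p != q))

def avg_gower_to_manifold(p, dataset, ranges, k=5):
    """Eq. 4: mean Gower distance to the k nearest dataset points."""
    dists = np.array([gower(p, s, ranges) for s in dataset])
    return float(np.mean(np.sort(dists)[:k]))
```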
### _Constraints_ In a counterfactual search, a variety of optimization constraints may be present. Constraints are considered non-negotiable and always take precedence over objectives. In practice, many optimization algorithms, including the variant of the NSGA-II algorithm driving MCD, prioritize resolving constraint violations before proceeding to the optimization of objectives. MCD considers several types of constraints: 1. **Variable and Constant Features:** Like many counterfactual models, we implement a mechanism to constrain which features are allowed to be modified by a counterfactual, as specified by the user. This addresses the challenge of "actionability" introduced in Sec. II-A. We call the set of actionable features \(A\). 2. **Model Output Constraints:** Users querying a counterfactual method may have requirements for the output of their model. In most counterfactual search approaches, these requirements are treated as non-negotiable hard constraints to satisfy the "validity" property introduced in Sec. II-A. MCD supports such hard requirements, which are handled as constraints in NSGA-II2, but does not require them. We consider any output with a constraint as belonging to a set \(B\) and require that \(L_{b}\leq f_{b}(p)\leq U_{b}\,\forall\,b\in B\). Alternatively, we also allow users to specify soft constraints in the form of additional optimization objectives, paired with targets to be used during sampling. Footnote 2: By default, we expect queries in the form of inequalities. Since range and equality constraints (or objectives) can be specified using two inequalities, we find this to be an adequately versatile interface for most types of constraints. In rare cases where users need to specify complex constraints, such as disjoint ranges, they can do so by creating a custom constraint function and passing it in as a black box. 3. **Domain-Specific Constraint Functions:** There are cases in which certain hard constraints are known a priori. MCD can be configured to respect such hard constraints through user-specified black-box constraint functions. Domain-specific constraints can be used for a variety of different purposes, including encoding causality relations into the optimization, as discussed in Sec. II-A. We specify these constraint functions as \(g_{1}(p)...g_{K}(p)\) and, for simplicity, assume they are satisfied when \(g_{k}(p)\geq 0\). ### _Formulation as MOO problem_ In summary, we express the multi-objective optimization problem in terms of the variables, sets, and functions defined above as follows: \[\text{minimize: }f_{i}(p),\ \forall\ i\in\{pr,sp,mp,1,...,M\} \tag{5}\] \[\text{subject to: }f_{j}(p)-L_{j}\geq 0,\,U_{j}-f_{j}(p)\geq 0,\ \forall\ j\in B,\] \[g_{k}(p)\geq 0,\ \forall\ k\in\{1,...,K\},\] \[p_{l}=q_{l},\ \forall\ l\notin A\] ### _Algorithm_ Any gradient- or non-gradient-based multi-objective optimization method could be used in MCD. To demonstrate our results in this paper, we leverage the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) [22] as the backend of MCD. NSGA-II is a multi-objective genetic algorithm that boasts several innovative features, such as non-dominated sorting for elitist selection, crowding distance to encourage diversity, and genetic operators such as tournament selection, simulated binary crossover, and polynomial mutation. We use an implementation of NSGA-II from [34], including the mixed-variable selection, crossover, and mutation functions provided. The initial population always consists of the query and a set of randomly sampled points from the dataset or the user-specified design space boundaries. In problems with continuous variables, we find that, without any precautions to maintain the exact parameter values from the original query, these values tend to get 'lost' and can never be exactly reconstructed, hurting the sparsity objective of counterfactuals. To allow the algorithm to 'rediscover' the exact parameter values from the query, we introduce a custom operator that randomly reverts individual parameter values back to the query's values with a certain probability.
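As an illustration of this query-reversion operator, a hedged sketch written against pymoo's custom-mutation interface (pymoo is a common open-source NSGA-II implementation, possibly the one cited as [34]) might look as follows; exact class and method signatures differ between pymoo versions, and this assumes a purely continuous decision vector.

```python
import numpy as np
from pymoo.core.mutation import Mutation

class RevertToQueryMutation(Mutation):
    """With probability `prob`, reset a variable to the query's exact
    value, so that sparse counterfactuals remain reachable."""

    def __init__(self, query, prob=0.05):
        super().__init__()
        self.query = np.asarray(query, dtype=float)
        self.prob = prob

    def _do(self, problem, X, **kwargs):
        X = X.copy()
        mask = np.random.random(X.shape) < self.prob
        X[mask] = np.broadcast_to(self.query, X.shape)[mask]
        return X
```

This operator would be applied alongside (not instead of) the standard polynomial mutation, so that exploration is preserved while exact query values can still be recovered.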
### _Sampling_ In contrast to other counterfactual search approaches, our method decouples the optimization and sampling steps. Conventionally, a user has to decide on the priorities between the various objectives (e.g., proximity, diversity, manifold proximity) before running the optimization. This is impractical, as these objectives are challenging to select intuitively and must often be chosen through trial and error. For example, a designer might realize that the generated counterfactuals are much too different from the query to be practically realizable. By avoiding repeated optimization, our method can save significant computational expense and, as we will discuss in Sec. V, enable users to quickly consider counterfactuals from different regions of the objective landscape. We decouple the search and sampling process as follows: 1. Given a query, a set of constraints, and objectives, the optimizer generates a collection of candidate counterfactuals by running NSGA-II. 2. The sampling algorithm collects a set of objective priority weights and optional targets from the user. By collecting these weights after optimization, MCD allows rapid counterfactual sampling under different objective weights without the need for re-optimization, unlike other approaches. 3. Each candidate counterfactual is assigned an aggregate quality score, which is calculated as a sum of the individual objective scores, weighted by their priority. For any objectives with specified targets, the Design Target Achievement Index [35] is used to quantify target achievement before factoring into the aggregate score. The aggregate score \(S(p)\) of a counterfactual candidate \(p\) is given in terms of the objective priority weights \(w_{pr},w_{sp},w_{mp}\) by: \[S(p)=w_{pr}f_{pr}(p,q)+w_{sp}f_{sp}(p,q)+w_{mp}f_{mp}(p,S)+DTAI(p,t,\alpha,\beta)\] (6) Here, \(DTAI(p,t,\alpha,\beta)\) is the Design Target Achievement Index of the candidate given auxiliary objective targets, \(t\), priority weights, \(\alpha\), and decay parameters, \(\beta\)[35]. 4. A performance-weighted diversity matrix is calculated using a Gower distance-based similarity kernel to evaluate the similarity between counterfactuals. Matrix entries are calculated as a function of the aggregate scores and a diversity parameter \(w_{d}\), as: \[D_{i,j}=\delta_{G}(i,j)\left(S(i)S(j)\right)^{\frac{1}{w_{d}}}\] (7) 5. A diverse set of high-performing counterfactuals is sampled from this matrix using k-greedy diverse sampling [36]. If the user requests only a single counterfactual instead of a diverse set, the candidate with the highest aggregate quality score is returned. Fig. 2: Counterfactual sets returned for three query designs under different weightings of counterfactual quality objectives. Performance space constraints are indicated on the plots. Valid counterfactuals must simultaneously meet both constraints.
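A condensed sketch of this decoupled sampling step (Eqs. 6-7) is shown below. The `scores` are assumed to be precomputed aggregate qualities \(S(p)\) with higher taken as better, and the greedy farthest-point rule is one simple reading of the k-greedy diverse sampling of [36], not necessarily its exact procedure.

```python
import numpy as np

def sample_diverse(scores, gower_matrix, w_d, n):
    """scores: (N,) aggregate scores; gower_matrix: (N, N) distances."""
    D = gower_matrix * np.outer(scores, scores) ** (1.0 / w_d)  # Eq. 7
    chosen = [int(np.argmax(scores))]         # seed with the best score
    while len(chosen) < n:
        rest = [i for i in range(len(scores)) if i not in chosen]
        # Add the candidate farthest (in weighted distance) from the set.
        chosen.append(max(rest, key=lambda i: min(D[i, j] for j in chosen)))
    return chosen
```

Because this step is cheap relative to the NSGA-II run, the same candidate set can be resampled under many different priority settings.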
### _Showcasing Functionality on 2D Examples_ Before showcasing the capabilities of MCD on real design datasets, we first demonstrate its performance on a simple two-dimensional problem for ease of visualization. We select a challenging two-objective problem and sample synthetic data. We then query three different designs, D1-3, and specify the same challenging constraint criterion for each query, which is only satisfiable in four small disjoint regions of the space. Mathematically, we constrain the performance space values \(Y_{1}\) and \(Y_{2}\) such that \(0.4\leq Y_{1}\leq 0.6\) and \(Y_{2}\geq 0.6\). In simple terms, any valid counterfactual must lie near the star-shaped contour on the left and strictly within the circle on the right in the contour plots in Fig. 2. We consider four choices of objective weights: 1. First, we examine a fairly "balanced" selection of objective weights (\(w_{pr}=0.5,\,w_{sp}=0.2,\,w_{mp}=0.5,\,w_{d}=0.2\)) in Fig. 2a. In this setting, the sampled counterfactual sets achieve a balance of proximity, diversity, and sparsity. 2. Next, we consider a setting where proximity is prioritized over the other objectives (\(w_{pr}=50,\,w_{sp}=0.2,\,w_{mp}=0.05,\,w_{d}=0.2\)) in Fig. 2b. In this setting, most counterfactuals in each set are sampled from the mode nearest the queries, though counterfactuals are still diversified within these modes. 3. We next consider a case where diversity is given precedence over the other objectives (\(w_{pr}=0.5,\,w_{sp}=0.2,\,w_{mp}=0.5,\,w_{d}=20\)) in Fig. 2c. In this case, the sampled counterfactual sets are very well distributed across the feasible regions of the space. 4. Finally, we consider the case where sparsity is given the highest priority (\(w_{pr}=0.5,\,w_{sp}=20,\,w_{mp}=0.5,\,w_{d}=0.2\)) in Fig. 2d. Many sampled counterfactuals change only one parameter from the query, when possible. Each of these subsets is sampled from the same set of counterfactual candidates with no re-optimization necessary. Now, having demonstrated MCD's functionality on a simple 2D problem, we move on to a more complex real-world design problem: bike frame design. ## IV Case Study 1: Design Refinement using Structural Performance Queries In our first case study, we consider the counterfactual: "What if my design were 30% lighter?" Specifically, we consider a bicycle frame design problem where we are trying to improve the structural properties and reduce the weight of a query design. We use a regression model trained on the FRAMED dataset, which consists of Finite Element (FE) simulation results from 4500 community-designed bike frames [37], including weight, safety factors, and deflections under various loading conditions. The trained regression model is an AutoGluon tabular AutoML regressor [38] intended to accurately predict the structural performance of bicycle frames. To illustrate MCD's capabilities, we feed it three variants of the same query. The first has a single objective: finding counterfactuals that reduce the predicted mass of a given design. The second has two competing objectives: maximize a design's safety factor while minimizing its mass. The third has the same objectives as the second but restricts MCD to only vary a more constrained and actionable set of features. In each example, we query the same design: a steel-tube road bike with minor structural inefficiencies. These inefficiencies largely stem from a down tube with insufficient wall thickness, requiring other components to be over-engineered. This bike has a safety factor3 of 1.24 and a mass of 4.26 kg, so our primary objective is to reduce the mass. Each optimization ran for 100 generations with a population size of 500. Footnote 3: We use the predicted safety factor in FRAMED's in-plane loading scenario [37]. **Single-objective query:** In the first variant, MCD was tasked with finding counterfactuals that reduced the mass of the original design from 4.26 kg to under 3 kg. MCD successfully discovered hundreds of valid counterfactuals and sampled a set of five diverse counterfactuals that had, on average, a mass of 2.3 kg, as tabulated in Table I.
Although MCD succeeded in its explicitly stated objective, a closer look reveals that it did nothing to remedy the wall thickness issue in the down tube, and as a consequence of weight savings in other parts of the sampled frames, the average safety factor across the sampled counterfactuals was an abysmal 0.48. This disregard for secondary objectives is characteristic of many existing single-objective counterfactual search algorithms and illustrates why MCD's novel support of multi-objective queries is so essential for design problems. Our next example showcases how to leverage multi-objective queries to avoid these issues. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multirow{2}{*}{Material} & \multicolumn{2}{l}{Stack} & \multicolumn{1}{c}{Down Tube} & \multicolumn{1}{c}{Safety} & \multicolumn{1}{c}{Frame} \\ & & (mm) &... & \multicolumn{1}{c}{Thick. (mm)} & \multicolumn{1}{c}{Factor} & \multicolumn{1}{c}{Mass (kg)} \\ \hline Query & Steel & 565.6 &... & 0.52 & 1.24 & 4.26 \\ \hline CF 1 & Steel & 570.8 &... & 0.52 & 0.52 & 1.99 \\ CF 2 & Steel & 565.6 &... & 0.52 & 0.27 & 1.64 \\ CF 3 & Steel & 565.6 &... & 0.52 & 0.76 & 2.48 \\ CF 4 & Steel & 565.6 &... & 0.52 & 0.64 & 2.69 \\ CF 5 & Aluminum & 522.6 &... & 0.52 & 0.22 & 2.70 \\ \hline \hline \end{tabular} \end{table} TABLE I: Generated counterfactuals for variant 1 (34 columns omitted). Like many single-objective counterfactual engines, MCD tends to achieve single-objective queries at the expense of secondary objectives. MCD's unique support of multi-objective queries remedies this problem. **Bi-objective query:** In the second variant, a second objective was introduced: increase the safety factor to a minimum value of 1.5. Again, MCD successfully discovered numerous counterfactuals, and the diverse 5-bike set sampled this time had an average mass of 2.4 kg and a safety factor of 1.7, as shown in Table II. This time, MCD realized that the bike could be made significantly more weight-efficient by increasing the down tube wall thickness to relieve structural stress on other components, allowing them to be lightened. However, it also changed the material of the bike from steel to aluminum or titanium in four of the five counterfactuals, a modification that would likely carry a significant cost increase and may thus be unactionable. In the presence of a cost prediction model, MCD could consider cost as another query objective. However, even without such a model, MCD can be ordered to leave certain design parameters unchanged, as we demonstrate in our final example. **Bi-objective query with constraints:** In the third variant, MCD was no longer allowed to vary the frame material. It proceeded to find tens of valid designs through variations in certain tube diameters, lengths, and other structural configurations. From these valid designs, a 5-bike set was sampled that had an average mass of 2.5 kg and an average safety factor of 1.8, as shown in Table III. Through these examples, we have attempted to demonstrate that MCD excels at handling multi-objective performance queries and can be used in such a setting to recommend performance-enhancing design modifications.
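To make the third variant concrete, a hedged sketch of how such a query might be written down is shown below; the two predictors are placeholders standing in for the AutoGluon models of [37], and the feature names are illustrative, not FRAMED's actual schema.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CounterfactualQuery:
    query: Dict[str, object]                    # design to be modified
    constraints: List[Callable] = field(default_factory=list)
    frozen_features: List[str] = field(default_factory=list)

def predict_mass(design):     # placeholder structural-performance model
    return 4.26 if design["down_tube_thickness"] < 1.0 else 2.5

def predict_safety(design):   # placeholder safety-factor model
    return 1.24 if design["down_tube_thickness"] < 1.0 else 1.6

q = {"material": "steel", "stack": 565.6, "down_tube_thickness": 0.52}
spec = CounterfactualQuery(
    query=q,
    constraints=[lambda p: predict_mass(p) <= 3.0,     # mass under 3 kg
                 lambda p: predict_safety(p) >= 1.5],  # SF at least 1.5
    frozen_features=["material"],      # p_l = q_l for features outside A
)
```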
In our next example, we consider a scenario in which more abstract text queries are provided instead of hard performance constraints. ## V Case Study 2: Modifying Designs using Cross-Modal Text Queries In this case study, we examine subjective counterfactuals like: "What if my design looked more 'cyberpunk' themed?" Classically, counterfactual search requires a query in the same data modality as the predictive model. This can be constraining, since it may be more natural in many cases to place queries in a different data modality, especially if that modality is more intuitive for human users. This is often the case for images or text, which are much more easily understood by humans compared to tabular or parametric data. Accordingly, we demonstrate how we can query MCD in a cross-modal setting using text prompts. ### _Methodology: Case Study 2_ To enable cross-modal queries, we construct an objective evaluation function comprised of several key building blocks: * To begin, we require a rendered image of a bicycle design. We construct an automated rendering pipeline that works in conjunction with the BikeCAD software to generate an image of a bicycle given a parametric vector. * We then calculate an embedding for the generated bike image using a pre-trained CLIP model, introduced in Sec. II-C, that maps the generated bike renders to a vector embedding space. * Next, we compute the embedding vector for a target text prompt using a pre-trained CLIP text embedding model. * Finally, we calculate the cosine similarity between the two 512-dimensional embedding vectors. In this case study, this entire objective evaluation pipeline serves as the predictor for counterfactual search. By generating counterfactuals that minimize the resulting cosine distance (i.e., maximize cosine similarity), the optimizer ensures that generated counterfactuals better match the given text prompts. We select a subset of the BIKED [5] dataset's parameter space to consider during optimization and choose a generic red road bike design as the query design. We choose two text prompts as optimization objectives: "A futuristic black cyberpunk-style road racing bicycle" and "A sturdy compact bright blue mountain bike with thick tires." Because the demands of human designers are often difficult to quantify using traditional parametric methods, the first text prompt was selected to be highly subjective. The second prompt is less subjective, offering details about design features, but stops short of explicit design guidelines. In this context, the user is effectively asking questions like: "How would my red road bike design change if I wanted it to look more like a black cyberpunk-style bike?" We optimize for 400 generations with a population size of 100. Next, we perform a series of sampling operations with different objective weights, as shown in Fig. 3. By selecting the optimal bikes over a sweep of different objective weights, we can visualize the best bikes under numerous configurations of objective priorities. Counterfactual quality objective weights in the \(i^{th}\) row were chosen as: \[w_{1}=w_{2}=w_{3}=\frac{0.2}{2^{i}} \tag{8}\] In this way, counterfactuals with better proximity, sparsity, and manifold proximity were prioritized toward the top of the grid, while counterfactuals were given more leeway to deviate from the query design and data manifold toward the bottom. Diversity weight, \(w_{d}\), was irrelevant, as only one design was sampled for each combination of objective weights. Similarly, auxiliary objective weights in the \(j^{th}\) column were set through the DTAI objective weighting parameter, \(\alpha\), in terms of the number of columns, \(n\) (in our case 6), as: \[\alpha_{1}=1.5^{n-j},\,\alpha_{2}=1.5^{j-1} \tag{9}\] These objectives allowed similarity to the first text prompt to take precedence on the left edge of the grid and similarity to the second text prompt to take precedence on the right. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multirow{2}{*}{Material} & Stack & \multicolumn{2}{c}{Down Tube} & Safety & Frame \\ & & (mm) & \multicolumn{1}{c}{\(\cdots\)} & Thick. (mm) & Factor & Mass (kg) \\ \hline Query & Steel & 565.6 & \(\cdots\) & 0.52 & 1.24 & 4.26 \\ \hline CF 1 & Aluminum & 565.0 & \(\cdots\) & 2.20 & 1.91 & 2.81 \\ CF 2 & Titanium & 561.6 & \(\cdots\) & 2.46 & 1.82 & 2.21 \\ CF 3 & Aluminum & 532.2 & \(\cdots\) & 1.81 & 1.58 & 1.75 \\ CF 4 & Titanium & 563.5 & \(\cdots\) & 3.92 & 1.60 & 2.23 \\ CF 5 & Steel & 565.6 & \(\cdots\) & 2.48 & 1.65 & 2.87 \\ \hline \hline \end{tabular} \end{table} TABLE II: Generated counterfactuals for variant 2 (34 columns omitted). By querying multiple objectives simultaneously, MCD avoided the safety factor issue that occurred in variant 1. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multirow{2}{*}{Material} & Stack & \multicolumn{2}{c}{Down Tube} & Safety & Frame \\ & & (mm) & \multicolumn{1}{c}{\(\cdots\)} & Thick. (mm) & Factor & Mass (kg) \\ \hline Query & Steel & 565.6 & \(\cdots\) & 0.52 & 1.24 & 4.26 \\ \hline CF 1 & Steel & 565.6 & \(\cdots\) & 2.44 & 2.05 & 2.93 \\ CF 2 & Steel & 601.7 & \(\cdots\) & 3.38 & 2.06 & 2.31 \\ CF 3 & Steel & 565.6 & \(\cdots\) & 3.22 & 1.58 & 2.71 \\ CF 4 & Steel & 601.7 & \(\cdots\) & 2.12 & 1.61 & 1.87 \\ CF 5 & Steel & 565.6 & \(\cdots\) & 3.35 & 1.56 & 2.82 \\ \hline \hline \end{tabular} \end{table} TABLE III: Generated counterfactuals for variant 3 (34 columns omitted). When restricted from modifying the frame material, MCD is still able to recommend design modifications that meet the safety factor and mass targets. Fig. 3: Visualization of the objective manifold for cross-modal counterfactual selection. Designs sampled from the top of the manifold prioritize proximity, sparsity, and manifold proximity. Designs in the left and right corners prioritize similarity to two respective text prompts. Heatmaps show individual objective scores (lighter is better). ### _Discussion: Case Study 2_ As expected, models at the top of the grid are appreciably similar to the red bike; some were essentially indistinguishable. Bikes further down the grid become progressively more visually different, which is corroborated by the objective scores, as shown in Figs. 3f-3h. Bikes in the lower left corner of the grid can be subjectively identified as more similar to "A futuristic black cyberpunk-style road racing bicycle." Among the key modifications are a color change and a shift to tri-spoke wheels, which may be more on-theme for a 'cyberpunk-style' bike. Similarity to the text prompt as evaluated by CLIP agrees, as shown in Fig. 3d. Likewise, bikes towards the bottom right corner of the grid can be subjectively identified as more similar to "A sturdy compact bright blue mountain bike with thick tires." Bikes in this corner have the slanted down tube which is characteristic of mountain bikes; have the requested color change; and have a thick rear tire. Notably, the model either does not discover a modification to the front tire or does not find that such a modification improves similarity to the prompt. Also, the models maintain the dropped handlebars present on the query, which are characteristic of road bikes. Nevertheless, similarity to the text prompt as evaluated by CLIP was found to be best in this corner, as shown in Fig. 3e. In this case study, we demonstrated that MCD effectively handles multi-objective cross-modal prompts. Next, we move on to consider a challenging multi-modal query case as our final case study.
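To illustrate the text-similarity objective of Sec. V-A, a sketch using the openly available ViT-B/32 CLIP weights via Hugging Face transformers is shown below; the paper does not state which CLIP variant or library it uses, so these are assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_distance(render_path, prompt):
    """Returns 1 - cosine similarity, so that NSGA-II can minimize it."""
    image = Image.open(render_path).convert("RGB")
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    t = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    v = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return 1.0 - (t @ v.T).item()
```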
## VI Case Study 3: Modifying Designs using Multimodal Text, Image, and Parametric Queries In this final case study, we examine hybrid counterfactuals like: "What if my design were lighter, looked more 'cyberpunk' themed, had better structural properties, and looked like this other design?" Having considered multi-objective cross-modal queries in the previous case study, we now present our most challenging case study. This time, we provide a multi-objective multi-modal query consisting of a target text prompt, an image, a frame safety factor, and a frame mass. ### _Methodology: Case Study 3_ To calculate image and text similarity, we leverage the rendering pipeline and pre-trained CLIP model used in case study 2. To calculate structural performance, we use the AutoGluon [38] model trained in [37], which was used in case study 1. We again select the same generic red road bike as our query design and select "A futuristic black cyberpunk-style road racing bicycle" as our text prompt. For our target image, we select an image of a Fuji Wendigo 1.1 mountain bike, which closely matches the second text description from the previous case study. Like the last case study, we sample designs in a grid, as shown in Fig. 4, based on a variable objective weighting scheme. We select a spread of DTAI objective weighting parameter (\(\alpha\)) values in terms of the \(i^{th}\) row and \(j^{th}\) column as follows: \[\alpha_{\text{text}}=2^{n-j},\,\alpha_{\text{image}}=2^{j},\quad\alpha_{\text{sf}}=1.5^{n-i-1},\,\alpha_{\text{mass}}=1.5^{i} \tag{10}\] This time, we hold the counterfactual quality objective weights constant at: \[w_{1}=w_{2}=w_{3}=0.05 \tag{11}\]
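For reference, the grid weighting schedule of Eqs. (10)-(11) can be written as a small helper; `i` and `j` index the rows and columns of the grid in Fig. 4, and `n = 6` is our assumption based on case study 2.

```python
def case_study_3_weights(i, j, n=6):
    alphas = {
        "text":  2.0 ** (n - j),      # text similarity dominates at left
        "image": 2.0 ** j,            # image similarity dominates at right
        "sf":    1.5 ** (n - i - 1),  # safety factor dominates at top
        "mass":  1.5 ** i,            # frame mass dominates at bottom
    }
    w_quality = 0.05                  # w_1 = w_2 = w_3, Eq. (11)
    return alphas, w_quality
```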
Unsurprisingly, we see various design infeasibilities in these bikes, such as colliding components. Unless explicitly prevented using constraints, such infeasibilities are typically more common as counterfactuals fall further from the data manifold. Though some of the generated counterfactuals suffer from infeasibilities, we have demonstrated that MCD can provide meaningful counterfactuals in high-dimensional (i.e., 4 auxiliary objectives and 3 counterfactual quality objectives) and multimodal objective spaces. Next, we proceed to discuss MCD's limitations.

Fig. 4: Visualization of the objective manifold for multimodal counterfactual selection. Designs sampled towards the top and bottom of the manifold prioritize safety factor and weight, respectively. Designs sampled towards the left and right edges prioritize similarity to a target text prompt and target image, respectively. Heatmaps show individual objective scores (lighter is better). Designs that fall far outside of the data manifold struggle with component overlap and other infeasibility issues.

## VII Limitations MCD makes several key contributions to counterfactual optimization methods for designers, such as incorporating multiple objectives. However, it also has a few limitations. In model-agnostic configurations, MCD must use a gradient-free optimizer, preventing it from leveraging gradient information, even if some of the predictive models are differentiable. While this gradient-free approach allows MCD to support nondifferentiable predictors and avoid local minima, it potentially makes MCD less sample-efficient than similar gradient-based approaches. Another key limitation stems from the difficulty of genetic algorithms in handling a large number of objectives. Because MCD adds three counterfactual quality objectives to the objective space, it slightly exacerbates the dimensionality issue of multi-objective genetic algorithms. Future work will explore MCD variants that leverage gradient information and many-objective optimization methods to address these limitations. Additionally, we would like to acknowledge certain limitations with the text-based queries presented in the last two case studies. Though CLIP embeddings can capture more abstract and subjective ideas, they struggle to capture fine-grained technical details of designs. As such, we recommend that users with highly technical constraints specify them parametrically, instead of through text. However, as machine learning models continue to improve, querying counterfactual models for precise technical details through text and images may improve significantly. ## VIII Conclusion In this paper, we have introduced Multi-Objective Counterfactuals for Design (MCD), a specialized counterfactual optimization method for design tasks. We first discussed previous counterfactual optimization approaches, many stemming from machine learning explainability literature. We then identified key limitations with existing works, particularly their inability to sample multi-objective queries and the inherent coupling of the optimization and sampling process. Next, we demonstrated using 2D examples how MCD solves these two challenges. We presented a bicycle frame optimization problem and showed how MCD's support of multi-objective queries allows it to recommend meaningful modifications to a query design that improve structural performance.
We then identified that although previous counterfactual search models have not supported cross-modal queries, advancements in multi-modal learning now make it reasonable to query counterfactuals in different data modalities. Next, we showcased how MCD can be queried with text prompts, and illustrated how MCD's decoupling of optimization and sampling allows it to visualize complex objective manifolds without re-optimization. Finally, we asked MCD to generate counterfactuals given a multimodal text, image, and parameter query. By effectively recommending design modifications to match these queries, MCD demonstrated that it can support complex multimodal queries. All in all, MCD is a valuable tool for designers looking to optimize their designs and design automation researchers looking to interact intuitively with their models. We are excited to release our code and examples at [http://decode.mit.edu/projects/counterfactuals/](http://decode.mit.edu/projects/counterfactuals/) and anticipate a variety of interesting use cases across the community. ## IX Acknowledgments We would like to thank Amin Heyrani Nobari for his contributions to the image rendering pipeline that enabled much of the cross-modal work presented. We would also like to thank Tyler Butler for his feedback and edits.
2306.10546
Correction Factor of FWER for Normal Distribution in Nearly Independent Setup
In this paper, we have attempted to study the behaviour of the family wise error rate (FWER) for Bonferroni's procedure in a nearly independent setup for normal distribution. In our search for a suitable correlation penalty, it has been noted that the root mean square (RMS) of correlations is not appropriate under this setup, as opposed to the study of Efron (2007). We have provided a suitable correction factor for deviation from independence and approximated the FWER under this nearly independent setup.
Nabaneet Das, Subir K. Bhandari
2023-06-18T12:49:53Z
http://arxiv.org/abs/2306.10546v1
# Correction factor of FWER for normal distribution in nearly independent setup ###### Abstract In this paper, we have attempted to study the behaviour of the family wise error rate (FWER) for Bonferroni's procedure in a nearly independent setup for normal distribution. In our search for a suitable correlation penalty, it has been noted that the root mean square (RMS) of correlations is not appropriate under this setup, as opposed to the study of Efron (2007). We have provided a suitable correction factor for deviation from independence and approximated the FWER under this nearly independent setup. ## 1 Introduction Dependence among hypotheses in simultaneous testing problems has caused great concern among researchers. Although many efforts have been made to generalize the existing methods under dependence (Yekutieli and Benjamini (1999), Benjamini et al. (2001), Sarkar et al. (2002), Sarkar (2008), Efron (2007), Efron (2012), etc.), little literature is available which explicates the effect of dependence on the existing methods. Sarkar (2008) discusses false discovery rate (FDR) control under dependence. Efron (2007), in his study of empirical Bayes methods, has shown that the correlation penalty depends on the root mean square (RMS) of correlations. An excellent review of the whole literature can be found in Efron (2012). When correlation is present, these methods show some undesirable characteristics (being too conservative in some cases when positive correlation is present); see Yekutieli and Benjamini (1999), Sarkar (2008), Genovese et al. (2006), Sun and Tony Cai (2009). The conservative nature of these methods results in loss of power. Hence, it is imperative to study the error rate of these methods in order to rectify the effect of dependence. In a recent paper, Das and Bhandari (2021) have shown that under the equicorrelated normal setup with correlation \(\rho\), the widely used Bonferroni method asymptotically controls the family-wise error rate (FWER) at level \(\alpha(1-\rho)\) instead of \(\alpha\), and Dey (2021) further improves this upper bound by showing that the asymptotic FWER is zero if correlations are bounded away from zero. Dey (2021) also studies the FWER under normal distribution in more general setups (non-asymptotic and non-equicorrelated setups) and confirms the conservative nature of Bonferroni's method. In a simulation study, Das and Bhandari (2020) have pointed out the exact behaviour of the FWER and FDR of Bonferroni's method and the Benjamini-Hochberg FDR control for moderate to large numbers of hypotheses under normal distribution. In this work, we have also considered the jointly normal distribution and we shall focus on approximating the FWER of Bonferroni's method under the "nearly independent" setup. We have provided an asymptotic correction factor to the FWER of Bonferroni's method under this setup. This article is organized as follows: in Section 2, we introduce the framework of our work. In Section 3, we show the motivation for considering the "nearly independent" setup, and in Section 4 we approximate the FWER under this setup and provide a correction factor which accounts for the deviation from independence. ## 2 Description of the problem Let \(X_{1},X_{2},\ldots\) be a sequence of observations and the null hypotheses are \[H_{0i}:X_{i}\sim N(0,1)\;\;i=1,2,\ldots\] Here we have considered one-sided tests (this means \(H_{0i}\) is rejected for large values of \(X_{i}\), say \(X_{i}>c\)).
A classical measure of the type-I error is the FWER, which is the probability of falsely rejecting at least one null hypothesis (this happens if \(X_{i}>c\) for some \(i\), with the probability computed under the intersection null hypothesis \(H_{0}=\bigcap\limits_{i=1}^{n}H_{0i}=\{X_{i}\sim N(0,1)\,\forall\,i=1,2,\ldots,n\}\)). Then, \[\textbf{FWER}=\mathrm{P}(\text{At least one false rejection})=\mathrm{P}\Big{(}\bigcup\limits_{i=1}^{n}\{X_{i}>c\}\ \Big{|}\ H_{0}\Big{)}\] Suppose \(Corr(X_{i},X_{j})=\rho_{ij}\;\;\forall\;i\neq j\) and \(\Sigma\) is the corresponding correlation matrix. We assume the following about the correlation matrix \(\Sigma\): \[\rho_{ij}=O\left(\frac{1}{n^{\beta}}\right)\;\forall\;i\neq j\text{ for some }\beta>0\] For convenience, we shall call this setup the "**nearly independent setup**". Suppose \(P(X_{i}>c\;|\;H_{0i})=\alpha_{n}\). Then, under the assumption of independence of the hypotheses (i.e. if \(\Sigma\) is the identity matrix), we have \(FWER=1-(1-\alpha_{n})^{n}\). We shall study the value of the FWER under the nearly independent setup. Bonferroni's method is based on \(\alpha_{n}=\frac{\alpha}{n}\). So, we shall consider the setup where \(\lim_{n\rightarrow\infty}n\alpha_{n}=\alpha\). ## 3 Motivation behind the nearly independent setup Efron (2007), in his study of empirical Bayes methods, has pointed out that the correlation penalty on summary statistics based on the empirical c.d.f. depends on the root mean square (RMS) of correlations, \(\big{(}\frac{1}{n(n-1)}\sum\limits_{i\neq j}\rho_{ij}^{2}\big{)}^{1/2}\). A natural question arising from this result is: "Does the RMS of correlations act as a correlation penalty in our framework as well?" If this were true, then we would have \(\text{FWER}\sim\text{FWER under independence as }\frac{1}{n(n-1)}\sum\limits_{i\neq j}\rho_{ij}^{2}\to 0\). We can construct an easy counterexample to answer this question. Let \(M_{n}(\rho)\) denote the \(n\times n\) equicorrelation matrix with correlation \(\rho\) and \(\mathbf{0}_{n}\) denote the \(n\times n\) matrix with all zero entries. Consider the \(n^{2}\times n^{2}\) block-diagonal matrix with \(n\) blocks \(M_{n}(\rho)\) on the diagonal: \[\Sigma=\begin{pmatrix}M_{n}(\rho)&\mathbf{0}_{n}&\ldots&\mathbf{0}_{n}\\ \mathbf{0}_{n}&M_{n}(\rho)&\ldots&\mathbf{0}_{n}\\ \ldots&\ldots&&\ldots\\ \mathbf{0}_{n}&\mathbf{0}_{n}&\ldots&M_{n}(\rho)\end{pmatrix}\] This is the correlation matrix of the \(n^{2}\) variables \((X_{1},\ldots,X_{n^{2}})\) such that: * \((X_{1},\ldots,X_{n}),(X_{n+1},\ldots,X_{2n}),\ldots,(X_{n^{2}-n+1},\ldots,X_{n^{2}})\) are independent. * In each block \((X_{(k-1)n+1},\ldots,X_{kn})\) we have \(Corr(X_{i},X_{j})=\rho\) for \(i\neq j\) (for \(k=1,\ldots,n\)). If we apply the Bonferroni(\(\alpha\)) method in this setup, then \(\alpha_{n}=\frac{\alpha}{n^{2}}\). By an approach similar to that of Das and Bhandari (2021), we can argue that \[1-FWER\geq\Big{(}\big{(}1-\tfrac{\alpha}{n^{2}}\big{)}-(1-\rho)\big{[}\big{(}1-\tfrac{\alpha}{n^{2}}\big{)}-\big{(}1-\tfrac{\alpha}{n^{2}}\big{)}^{n}\big{]}\Big{)}^{n}=\big{(}1-\tfrac{\alpha}{n^{2}}\big{)}^{n}\Big{[}1-(1-\rho)\big{[}1-\big{(}1-\tfrac{\alpha}{n^{2}}\big{)}^{n-1}\big{]}\Big{]}^{n}\] Since \(1-(1-x)^{k}\leq kx\ \ \forall\,0<x<1\), this implies \[1-FWER\geq\big{(}1-\tfrac{\alpha}{n^{2}}\big{)}^{n}\Big{[}1-\frac{(1-\rho)(n-1)\alpha}{n^{2}}\Big{]}^{n}\to e^{-\alpha(1-\rho)}\text{ as }n\rightarrow\infty\] Under this setup, we finally have \(FWER\leq\alpha(1-\rho)\) asymptotically. It is interesting to note that: * In this setup, the mean of the absolute values of the correlations is \(\frac{n\binom{n}{2}\rho}{\binom{n^{2}}{2}}=O(\frac{1}{n})\) and so is the mean square of correlations.
Although the root mean square (RMS) of correlations becomes smaller and smaller, the FWER is not close to the FWER under independence. So, the correlation penalty depends neither on the RMS of the correlations nor on the mean of their absolute values. * It is also interesting to note that, whenever \(\max\limits_{i\neq j}|\rho_{ij}|\) is bounded away from \(0\), it is possible to find a setup where the FWER differs widely from the one obtained under independence. So, if we ask ourselves "how far can we go from independence without losing much?", then the above example suggests bounding the absolute values of the correlations. Under the nearly independent setup, we bound the absolute values by the order \(\frac{1}{n^{\beta}}\) for some \(\beta>0\). ## 4 FWER under the nearly independent setup Let us denote the FWER under correlation matrix \(\Sigma\) by \(FWER(\Sigma)\). **Theorem 4.1**: _Fix a sufficiently large integer \(K\). Then, under the nearly independent setup, the FWER can be approximated in the following way:_ \[FWER(\Sigma)\sim\sum\limits_{i=1}^{K}\frac{(-1)^{i-1}\alpha^{i}}{i!}+\frac{c^{2}\bar{\rho}}{2}\sum\limits_{i=2}^{K}(-1)^{i-1}\frac{\alpha^{i}}{(i-2)!}\] _where \(\bar{\rho}=\frac{1}{n(n-1)}\sum\limits_{i\neq j}\rho_{ij}\) is the mean of the correlations._ **Remarks** :- It is interesting to note that \(\sum\limits_{i=1}^{K}\frac{(-1)^{i-1}\alpha^{i}}{i!}\approx 1-e^{-\alpha}\) for sufficiently large \(K\). Also, \(\lim_{n\rightarrow\infty}1-(1-\alpha_{n})^{n}=1-e^{-\alpha}\) is the limiting form of the FWER under independence. So, if we use a large enough \(K\) for this approximation, then the first term is nearly equal to the FWER in the independent case, and the second term \(\frac{c^{2}\bar{\rho}}{2}\sum\limits_{i=2}^{K}(-1)^{i-1}\frac{\alpha^{i}}{(i-2)!}\) acts as a correction factor, i.e. the amount of deviation from independence. Under the nearly independent setup, \(\bar{\rho}=O(\frac{1}{n^{\beta}})\), so the correction factor is very small. Before we proceed with the proof of Theorem 4.1, the following important theorem must be stated. **Theorem 4.2**: _Multivariate Mills' Ratio (Savage (1962)). Let \(\mathbf{X}\sim N_{m}(\mathbf{0}_{m},V)\) and \(M=V^{-1}\). Set \(F(\mathbf{a},M)=P(\mathbf{X}>\mathbf{a})\) and \(f(\mathbf{a},M)=\frac{|M|^{\frac{1}{2}}}{(2\pi)^{\frac{m}{2}}}\exp(-\frac{1}{2}\mathbf{a}^{T}M\mathbf{a})\). Let \(\mathbf{\Delta}=\mathbf{a}^{T}M\) (i.e. \(\Delta_{i}=\sum_{j}a_{j}m_{ji}\), \(i=1,2,\ldots,m\)). If \(\Delta_{i}>0\;\forall\,i\), then_ \[1-\frac{1}{2}\sum\limits_{i}\sum\limits_{j}\frac{m_{ij}(1+\delta_{ij})}{\Delta_{i}\Delta_{j}}<\frac{F(\mathbf{a},M)}{f(\mathbf{a},M)\big{(}\prod\limits_{i=1}^{m}\Delta_{i}\big{)}^{-1}}<1\] _Here \(\delta_{ij}\) denotes Kronecker's delta._ **Proof of Theorem 4.1** :- \[\mathbf{FWER}\left(\Sigma\right)=\sum\limits_{i=1}^{n}P(X_{i}>c)-\sum\limits_{i\neq j}P(X_{i}>c,X_{j}>c)+\sum\limits_{i,j,k\text{ distinct}}P(X_{i}>c,X_{j}>c,X_{k}>c)-\cdots+(-1)^{n-1}P(X_{1}>c,\ldots,X_{n}>c)\] Fix any \(1\leq k\leq K\ll n\), where \(K\) is a fixed positive integer. **Lemma 4.3**: _Let \(1\leq i_{1}<\cdots<i_{k}\leq n\). Then_ \[P(X_{i_{1}}>c,\ldots,X_{i_{k}}>c)\sim f(c\mathbf{1}_{k})\Big{(}\prod\limits_{i=1}^{k}\Delta_{i}\Big{)}^{-1}\] _(where \(\Delta=c\mathbf{1}_{k}^{T}W^{-1}\), \(W\) is the correlation matrix of \((X_{i_{1}},X_{i_{2}},\ldots,X_{i_{k}})\), and \(f(\cdot)\) is the density function of the \(N_{k}(\mathbf{0}_{k},W)\) distribution)._ Proof :- Let \(W=I+R\).
\(\Rightarrow W^{-1}=(I+R)^{-1}=I-R+o(R)\) By the nearly independent assumption, we have \((I+R)^{-1}\approx(I-R)\). So, \(\Delta=c\mathbf{1}_{k}^{T}W^{-1}\approx c\mathbf{1}_{k}^{T}(I-R)\). \(\Rightarrow\Delta_{i}=c(1-\sum\limits_{j\neq i}\rho_{ji})>0\quad\forall\,i\) (by the nearly independent assumption). Now it is clear that \(1-\frac{1}{2}\sum\limits_{i}\sum\limits_{j}\frac{m_{ij}(1+\delta_{ij})}{\Delta_{i}\Delta_{j}}=1-O(\frac{1}{c^{2}})\) (this is because the sum runs over a fixed number of terms, namely \(k^{2}\) terms). Since \(c\rightarrow\infty\) as \(n\rightarrow\infty\), we can say that \(\lim_{n\rightarrow\infty}\frac{P(X_{i_{1}}>c,\ldots,X_{i_{k}}>c)}{f(c\mathbf{1}_{k})(\prod\limits_{i=1}^{k}\Delta_{i})^{-1}}=1\), and hence the result. **Approximation of \(f(c\mathbf{1}_{k})(\prod\limits_{i=1}^{k}\Delta_{i})^{-1}\)** By the nearly independent assumption on \(\Sigma\), we can say that \(|I+R|\sim 1\). Observe that \(f(c\mathbf{1}_{k})\sim\frac{1}{(2\pi)^{\frac{k}{2}}}\exp(-\frac{c^{2}}{2}\mathbf{1}_{k}^{T}(I-R)\mathbf{1}_{k})=\frac{1}{(2\pi)^{\frac{k}{2}}}\exp(-\frac{kc^{2}}{2})\exp(\frac{c^{2}}{2}\sum\limits_{l\neq m}\rho_{lm})\) Since \(\rho_{lm}=O(\frac{1}{n^{\beta}})\) and \(c^{2}=O(\log n)\), this implies \(c^{2}\sum\limits_{l\neq m}\rho_{lm}=o(1)\). We know that \(e^{x}\sim 1+x\) for sufficiently small \(x\). So, \[\exp\Big{(}\frac{c^{2}}{2}\sum\limits_{l\neq m}\rho_{lm}\Big{)}\sim 1+\frac{c^{2}}{2}\sum\limits_{l\neq m}\rho_{lm}\] \(\Rightarrow f(c\mathbf{1}_{k})\sim\frac{1}{(2\pi)^{\frac{k}{2}}}\exp(-\frac{kc^{2}}{2})(1+\frac{c^{2}}{2}\sum\limits_{l\neq m}\rho_{lm})\) and \(\prod\limits_{i=1}^{k}\Delta_{i}=c^{k}\prod\limits_{i=1}^{k}(1-\sum\limits_{j\neq i}\rho_{ji})\). Since \(\rho_{ij}=O(\frac{1}{n^{\beta}})\), this implies \(\prod\limits_{i=1}^{k}(1-\sum\limits_{j\neq i}\rho_{ji})\sim 1\). Hence, \[f(c\mathbf{1}_{k})\Big{(}\prod\limits_{i=1}^{k}\Delta_{i}\Big{)}^{-1}\approx\frac{1+\frac{c^{2}}{2}\sum\limits_{l\neq m}\rho_{lm}}{(2\pi)^{\frac{k}{2}}\,c^{k}\exp(\frac{kc^{2}}{2})}\] Recall that \(\Phi(-c)=\alpha_{n}\sim\frac{\alpha}{n}\).
If \(\phi(\cdot)\) denotes the standard normal density, then for large enough \(c\), \[\phi(c)\sim c\Phi(-c)\Rightarrow n\sim\alpha\sqrt{2\pi}\,c\,e^{\frac{c^{2}}{2}}\] This implies, for any finite \(k\), \(f(c\mathbf{1}_{k})\big{(}\prod\limits_{i=1}^{k}\Delta_{i}\big{)}^{-1}\sim(\frac{\alpha}{n})^{k}(1+\frac{c^{2}}{2}\sum\limits_{l\neq m}\rho_{lm})\) In particular, \[P(X_{i_{1}}>c,\ldots,X_{i_{k}}>c)\sim\Big{(}\frac{\alpha}{n}\Big{)}^{k}\Big{(}1+\frac{c^{2}}{2}\sum\limits_{l\neq m\in\{i_{1},\ldots,i_{k}\}}\rho_{lm}\Big{)}\] **Approximation of \(FWER(\Sigma)\) under the nearly independent setup** \[\mathbf{FWER}\left(\Sigma\right)=\sum\limits_{i=1}^{n}P(X_{i}>c)-\sum\limits_{i\neq j}P(X_{i}>c,X_{j}>c)+\sum\limits_{i,j,k\text{ distinct}}P(X_{i}>c,X_{j}>c,X_{k}>c)-\cdots+(-1)^{n-1}P(X_{1}>c,\ldots,X_{n}>c)\] Clearly, \(P(X_{i}>c)=\Phi(-c)=\alpha_{n}\) and hence \(\sum\limits_{i=1}^{n}P(X_{i}>c)=n\alpha_{n}\sim\alpha\) For a fixed positive integer \(K\) (\(K>2\)), we have \[FWER(\Sigma)=n\alpha_{n}+\sum\limits_{i=2}^{K}\sum\limits_{\begin{subarray}{c}u_{1},\ldots,u_{i}\\ \text{distinct}\end{subarray}}(-1)^{i-1}P(X_{u_{1}}>c,\ldots,X_{u_{i}}>c)+\sum\limits_{i=K+1}^{n}\sum\limits_{\begin{subarray}{c}u_{1},\ldots,u_{i}\\ \text{distinct}\end{subarray}}(-1)^{i-1}P(X_{u_{1}}>c,\ldots,X_{u_{i}}>c)\] Clearly, \[n\alpha_{n}+\sum\limits_{i=2}^{K}\sum\limits_{u_{1},\ldots,u_{i}\text{ distinct}}(-1)^{i-1}P(X_{u_{1}}>c,\ldots,X_{u_{i}}>c)\sim n\alpha_{n}+\sum\limits_{i=2}^{K}\sum\limits_{u_{1},\ldots,u_{i}\text{ distinct}}(-1)^{i-1}\Big{(}\frac{\alpha}{n}\Big{)}^{i}\Big{(}1+\frac{c^{2}}{2}\sum\limits_{l\neq m\in\{u_{1},\ldots,u_{i}\}}\rho_{lm}\Big{)}\] \[\sim\alpha+\sum\limits_{i=2}^{K}(-1)^{i-1}\frac{\binom{n}{i}\alpha^{i}}{n^{i}}+\frac{c^{2}}{2}\sum\limits_{i=2}^{K}(-1)^{i-1}\Big{(}\frac{\alpha}{n}\Big{)}^{i}\binom{n-2}{i-2}\Big{(}\sum\limits_{l\neq m}\rho_{lm}\Big{)}\] \[\sim\sum\limits_{i=1}^{K}\frac{(-1)^{i-1}\alpha^{i}}{i!}+\frac{c^{2}}{2n^{2}}\sum\limits_{i=2}^{K}(-1)^{i-1}\frac{\alpha^{i}}{(i-2)!}\Big{(}\sum\limits_{l\neq m}\rho_{lm}\Big{)}\sim\sum\limits_{i=1}^{K}\frac{(-1)^{i-1}\alpha^{i}}{i!}+\frac{c^{2}\bar{\rho}}{2}\sum\limits_{i=2}^{K}(-1)^{i-1}\frac{\alpha^{i}}{(i-2)!}\] We ignore the tail term \(\sum\limits_{i=K+1}^{n}\sum\limits_{u_{1},\ldots,u_{i}\text{ distinct}}(-1)^{i-1}P(X_{u_{1}}>c,\ldots,X_{u_{i}}>c)\) in \(FWER(\Sigma)\) and approximate the FWER by the following formula: \[FWER(\Sigma)\sim\sum\limits_{i=1}^{K}\frac{(-1)^{i-1}\alpha^{i}}{i!}+\frac{c^{2}\bar{\rho}}{2}\sum\limits_{i=2}^{K}(-1)^{i-1}\frac{\alpha^{i}}{(i-2)!}\] **Remarks** :- It is interesting to note that taking the \(\rho_{ij}\)'s of order \(O(\frac{1}{n^{\beta}})\) is not necessary for the proof. It works for any order faster than \(\frac{1}{c^{2}}\), i.e. \(\frac{1}{\log n}\). ## 5 Simulation results Bonferroni's procedure controls the FWER at the desired level under independence. We examine the FWER in the nearly independent setup and compare it with the corrected value of the FWER as per Theorem 4.1. For these simulations we have considered \(n=5000\) and \(\beta=0.4,0.6,0.8,1\). The simulations are repeated at four levels of significance (\(\alpha\)), namely \(\alpha=0.01,0.05,0.1,0.2\). For each combination, the actual FWER is estimated based on 10,000 replications and the corrected value as per Theorem 4.1 is computed based on \(K=15\).
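A minimal version of such a simulation might look as follows. This is our own sketch, not the authors' code: the equicorrelated choice \(\rho_{ij}=n^{-\beta}\) for all \(i\neq j\) is an assumption made here for concreteness (the nearly independent setup only bounds the order of the correlations), and the replication count is kept small for memory.

```python
import numpy as np
from scipy.special import factorial
from scipy.stats import norm

def corrected_fwer(alpha, c, rho_bar, K=15):
    """FWER approximation from Theorem 4.1: the first sum is close to
    1 - exp(-alpha); the second is the correction for dependence."""
    i = np.arange(1, K + 1)
    indep = np.sum((-1.0) ** (i - 1) * alpha**i / factorial(i))
    j = np.arange(2, K + 1)
    corr = 0.5 * c**2 * rho_bar * np.sum(
        (-1.0) ** (j - 1) * alpha**j / factorial(j - 2))
    return indep + corr

def mc_fwer(n, beta, alpha, reps=2_000, seed=0):
    """Monte Carlo FWER of Bonferroni(alpha) when rho_ij = n^(-beta)
    for all i != j (an equicorrelated instance of the setup)."""
    rng = np.random.default_rng(seed)
    rho = n ** (-beta)
    c = norm.ppf(1 - alpha / n)           # so that Phi(-c) = alpha / n
    z0 = rng.standard_normal((reps, 1))   # shared factor => equicorrelation
    z = rng.standard_normal((reps, n))
    x = np.sqrt(rho) * z0 + np.sqrt(1 - rho) * z
    return np.mean(x.max(axis=1) > c), corrected_fwer(alpha, c, rho)

est, approx = mc_fwer(n=5000, beta=0.6, alpha=0.05)
print(f"MC: {est:.4f}  Thm 4.1: {approx:.4f}  indep: {1 - np.exp(-0.05):.4f}")
```

For \(\alpha=0.05\) and \(\beta=0.6\), the correction term is tiny, so the Monte Carlo estimate should land close to the independence value \(1-e^{-\alpha}\approx 0.0488\), consistent with the correction factor being \(O(n^{-\beta})\).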
2308.06850
S3C2 Summit 2023-06: Government Secure Supply Chain Summit
Recent years have shown increased cyber attacks targeting less secure elements in the software supply chain and causing fatal damage to businesses and organizations. Past well-known examples of software supply chain attacks are the SolarWinds or log4j incidents that have affected thousands of customers and businesses. The US government and industry are equally interested in enhancing software supply chain security. On June 7, 2023, researchers from the NSF-supported Secure Software Supply Chain Center (S3C2) conducted a Secure Software Supply Chain Summit with a diverse set of 17 practitioners from 13 government agencies. The goal of the Summit was two-fold: (1) to share our observations from our previous two summits with industry, and (2) to enable sharing between individuals at the government agencies regarding practical experiences and challenges with software supply chain security. For each discussion topic, we presented our observations and take-aways from the industry summits to spur conversation. We specifically focused on the Executive Order 14028, software bill of materials (SBOMs), choosing new dependencies, provenance and self-attestation, and large language models. The open discussions enabled mutual sharing and shed light on common challenges that government agencies see as impacting government and industry practitioners when securing their software supply chain. In this paper, we provide a summary of the Summit.
William Enck, Yasemin Acar, Michel Cukier, Alexandros Kapravelos, Christian Kästner, Laurie Williams
2023-08-13T21:51:28Z
http://arxiv.org/abs/2308.06850v1
# S3C2 Summit 2023-06: Government Secure Supply Chain Summit ###### Abstract. Recent years have shown increased cyber attacks targeting less secure elements in the software supply chain and causing fatal damage to businesses and organizations. Past well-known examples of software supply chain attacks are the SolarWinds or log4j incidents that have affected thousands of customers and businesses. The US government and industry are equally interested in enhancing software supply chain security. On June 7, 2023, researchers from the NSF-supported Secure Software Supply Chain Center (S3C2) conducted a Secure Software Supply Chain Summit with a diverse set of 17 practitioners from 13 government agencies. The goal of the Summit was two-fold: (1) to share our observations from our previous two summits with industry, and (2) to enable sharing between individuals at the government agencies regarding practical experiences and challenges with software supply chain security. For each discussion topic, we presented our observations and take-aways from the industry summits to spur conversation. We specifically focused on the Executive Order 14028, software bill of materials (SBOMs), choosing new dependencies, provenance and self-attestation, and large language models. The open discussions enabled mutual sharing and shed light on common challenges that government agencies see as impacting government and industry practitioners when securing their software supply chain. In this paper, we provide a summary of the Summit. Full panel questions can be found at the beginning of each section and in the appendix. software supply chain, open source, secure software engineering ## 1. Introduction Recent years have shown increased cyber attacks targeting less secure elements in the software supply chain and causing fatal damage to businesses and organizations. Past well-known examples of software supply chain attacks are the SolarWinds or log4j incidents that have affected thousands of customers and businesses. On June 7, 2023, two researchers from the NSF-supported Secure Software Supply Chain Center (S3C2) conducted a one-day Secure Software Supply Chain Summit with a diverse set of 17 practitioners from 13 government agencies. The goal of the Summit was two-fold: (1) to share our observations from our previous two summits with industry (Brands, 2017; Brands, 2017), and (2) to enable sharing between individuals at the government agencies regarding practical experiences and challenges with software supply chain security. Summit participants were recruited from 13 government agencies with interests in the security of software. Attendance was limited to keep the event small enough that honest communication between participants could flow. The Summit was conducted under the Chatham House Rules, which state that all participants are free to use the information discussed, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed. As such, none of the participating government agencies are identified in this paper. The Summit consisted of five discussion topics led by one of the S3C2 researchers. Before the Summit, participants completed a survey to vote on the five discussion topics. As such, the panels reflected the topics of interest to the government agencies. The remaining topics discussed in the industry summits were also briefly presented but with only minor discussion. The questions posed to the panelists appear in the Appendix.
The two researchers (two professors) took notes on the discussions and created the first draft summary of the discussion based on these notes. The draft was then reviewed by the other authors of this paper, who are also S3C2 researchers and experts in software supply chain security. The next seven sections provide a summary of the Secure Supply Chain Summit. ## 2. Executive Order Executive Order 14028 (Brands, 2021), issued on May 12, 2021, charges organizations supplying critical software to the US government with improving the security and integrity of their software and the software supply chain. Most organizations need to make procedural, operational, and cultural changes as a result. ### What is the Goal? In contrast to the industry summits, the participants of the government summit are consumers of the EO requirements. One participant noted that the EO does not apply to in-house developed software. The participants reflected on what they see through their interactions with contractors. The discussion began by reflecting on the ambiguity concerns raised in the industry summits (Brands, 2017; Brands, 2017). One participant acknowledged that when it comes to operationalizing the EO, a lot is still in flight. Some deadlines of the EO have been missed, and there is uncertainty about when things will go into effect. Another participant gave valuable insight into the ambiguity problems raised by industry: "Ambiguity from a government perspective also means choice for the company." The government is careful not to be too prescriptive about specific technologies, which may change faster than government requirements. The discussion also revealed a sense that industry has latched too much onto Software Bill of Materials (SBOMs) (see Section 3) and not enough onto other aspects, including attestations (see Section 5). For example, the EO requires attestation that companies follow the NIST Secure Software Development Framework (SSDF) (Bordes et al., 2017). The attestation process is currently very manual; however, there is potential to automate many aspects of it. One participant noted that companies are starting to automate different aspects of the attestation requirements and that the first one to do so will define the landscape. Finally, there were comments and concerns about the fact that the EO only states that SBOMs should be created and does not describe what should be done with that data. On the one hand, a participant stated that the EO lacks a clearly articulated problem statement. SBOMs have existed. Specifications have existed. There is an incredible amount of literature that we are not using in the workshops. On the other hand, there was the recognition that each system is different. A risk for one might be different from a risk for another. This makes it complicated to prescribe specific SBOM uses. One participant noted that a key value of the EO is that we are speaking openly that we are operating in an adversarial environment. ### Open Questions At the end of the panel, some open questions remained: * How can attestations be made more automated and scalable? Companies are starting to do this already. The first to do so will define the landscape. ## 3. Software Bill of Materials (SBOM) An SBOM is a nested inventory of 'ingredients' that make up the software component or product that helps to identify and keep track of third-party components of a software system.
The EO states that any company that sells software to the federal government must issue a complete SBOM that complies with the National Telecommunications and Information Administration (NTIA) Minimum Elements (Bordes et al., 2017). Vulnerability Exploitability eXchange (VEX) (Bordes et al., 2017) information was touched upon throughout the discussion. An SBOM can be accompanied by a VEX (Bordes et al., 2017) addendum, which is a form of security advisory that indicates whether a product or products are affected by a known vulnerability or vulnerabilities. ### Current State of the Practice Increasingly, tool vendors and package managers are providing the capability to generate SBOMs. Participants acknowledged that SBOM output is not consistent between tools. Additionally, tool support is lagging for embedded software. Participants expressed a desire for SBOMs to be signed, which is not currently the state of the practice. Tools to consume and provide actionable information from SBOMs are lagging behind tools to produce SBOMs. As a result, participants acknowledged it could take years to use SBOMs effectively to aid in response and recovery from a cyber event. Some expressed concern that information in SBOMs may make it easier for adversaries. ### Open Questions At the end of the panel, some open questions remained: * What are the use cases for SBOM? What is the benefit gained by having an SBOM? * How will SBOMs be shared, including across agencies, so that companies do not have to share separately with each agency they do business with? * What other kinds of Bills of Materials should be considered, such as Firmware Bill of Materials (FBOM) or Hardware Bill of Materials (HBOM)? ## 4. Choosing Dependencies Open source dependencies vary widely in quality, maintenance, origin, and licenses. Every dependency introduces value and risk, and once a dependency is incorporated into a project, it is often hard to replace. Therefore, it is important to have a policy that governs how software developers may choose new dependencies. ### Consumers of Software "Dependencies" can be thought of at different levels. In contrast to the industry summits, participants at the government summit are more often on the consumer side of software production. They do not focus much on choosing or updating specific software library dependencies. Their contractors do this. That said, participants felt that the increased attention to the software supply chain has led to a significant increase in awareness of how much open source software they use. This awareness leads to a better understanding of what they need to manage. Having policies around choosing dependencies is also vastly different in the government space. In contrast to an industry policy, a policy in a government organization comes with its own bureaucracy, and bureaucracy can stifle innovation. Therefore, there was significant hesitance toward a permission-based culture around software development. One participant reflected on a government policy decades ago that only allowed software development using the Ada programming language. Finally, the participants discussed what it meant to use software dependencies from foreign countries, i.e., anything not developed in the United States. There was discussion about efforts to identify and manage when software comes from embargoed countries. However, there was also an awareness that adversaries can find ways around those mechanisms.
Overall, there was the sentiment that "foreign" doesn't mean bad; it means risk. ### Open Questions * How can we be objective about the risk of dependencies with foreign origins? ## 5. Self-Attestation and Provenance As discussed in Section 2, Executive Order 14028 requires government contractors to attest to (1) conformity with secure software development practices as well as (2) the integrity and provenance of open-source software used within any portion of a product. In the context of the software supply chain, provenance refers not only to the identity that created each dependency and transitive dependency, but also to the process through which each software component was built. For example, provenance is a key part of the SLSA framework (Bordes et al., 2016), and systems such as in-toto can be used to capture and communicate provenance information. Footnote 1: [https://in-toto.io/](https://in-toto.io/) ### Attestation Earlier summit discussion (see Section 2) touched upon industry's concerns about ambiguity in self-attestation requirements. Contractors are asked to self-attest conformance to NIST's Secure Software Development Framework (SSDF) (Bordes et al., 2016). However, one participant noted that the SSDF is "fuzzy buckets." There is not a lot of specificity and many degrees of freedom. What it means to attest to the SSDF covers a wide range, which is why industry is looking for something more concrete. Participants also compared the SSDF conformance requirements to the (arguably) failed rollout of the original Cybersecurity Maturity Model Certification (CMMC) requirements. Ultimately, self-attestation was not working for the DoD, so CMMC started requiring external parties to come in. However, participants noted that the SSDF is different from CMMC, because industry is already working towards machine-consumable ways of tracking the information that is part of the self-attestation. Of specific note were efforts in the IETF and the OpenSSF's supply chain integrity group. Many participants agreed that the SSDF and attestation are far more foundational to security than the SBOM. Similar to the discussion of SBOM production and consumption (see Section 3), participants acknowledged that processes to consume and share attestation data were lagging further behind the production of attestation data. ### Provenance The participants had mixed feelings on provenance in the software supply chain. For example, one participant noted that some partners really like to know who is contributing to projects. However, others were concerned that policies based on provenance information simply are not enforceable. There was also recognition that sometimes critically needed software might have origins buried within it that violate policies. This is just the nature of open-source software. Some participants also raised the sentiment that it is more important to verify quality than to consider provenance. As an anecdote, a participant noted that we should not have cared about Kaspersky from a provenance perspective. Instead, the real concerns emerged when you considered the _functionality_ it was performing. Another participant recalled incidents that were identified not because of provenance information, but because of a whistle-blower. Finally, there were comparisons to provenance in hardware supply chains. There was a discussion of supply chain illumination tools, with concerns about false-positive rates resulting from their probabilistic risk assessment approaches.
Given that many of these tools consider contractual elements, they apply more to closed-source software than to open-source software. ### Open Questions At the end of the panel, some open questions remained: * To what extent can the evaluation of conformance to the SSDF be automated? * To what extent should we focus on provenance versus other indicators of security? * How will attestation data be shared and consumed? ## 6. Large language models (LLMs) Within the last year, Large Language Model (LLM)-based systems, such as ChatGPT, are increasingly being used for automated code generation. ### Security Concerns The popularity of LLM-based systems for code generation raises the security concern that adversaries may taint the training data such that vulnerable code is generated and infiltrated into software systems. A participant expressed concerns that the public's accelerated use of LLMs can lead to large-scale data exfiltration. LLM system users regularly pull LLM output into their products and regularly contribute their own proprietary data to training data through their queries. Participants also expressed concern about social engineering of LLM-based tools. For example, a participant knew of a country that input a policy into an LLM system for language translation. Now that country's official policy may be considered to be generated by a computer and may have subtle unintended differences, compared with the original, that could cause unrest. There is concern about the use of LLMs to manipulate public opinion and society. Some government agencies do not allow LLM use on government computers. ### Open Questions At the end of the panel, some open questions remained: * Is it possible to quantify the vulnerabilities in LLM training data? * How can LLMs be used to aid in software supply chain security? Can LLMs enable whole proof generation and proof repair? ## 7. Miscellaneous discussion Throughout the Summit, several additional topics were discussed, primarily originating from the sharing of the two prior industry summits. ### Updating Dependencies Most software uses a plethora of third-party dependencies that provide common functionality and help developers with productivity. However, these dependencies add an additional layer of complexity and lead to an ecosystem of direct and transitive dependencies that each piece of software relies on. A security vulnerability in a third-party dependency can lead to cascading issues, and the dependency needs to be updated to the newly released fixed version as soon as possible. As primarily software consumers, the government participants acknowledged the complexities of software suppliers needing to update dependencies. They expressed being even more nervous about updating vulnerable containers than about software components, because the processes for scanning and updating containers are less mature than those for components. ### Secure Build and Deploy Various build platforms and CI/CD tools support developers in automating the parts of software development related to building, testing, and deploying. These platforms further help in enhancing software build integrity by establishing documented and consistent build environments, isolating build processes, and generating verifiable provenance. Participants shared that agencies may have a process for evaluating and "blessing" tools. The agencies may enforce that vendors use approved tools in their CI/CD pipeline. ## 8. Executive Summary
Though some government agencies have large software development teams, most are primarily consumers of software and, therefore, consumers of the software development practices and artifacts mandated by the EO. The participants acknowledged that industrial organizations may be confused by the lack of specificity in the EO, including guidance on SBOM sharing and attestation production. Work is underway to produce the specificity, but the lack of guidance may also provide implementation flexibility for organizations. For example, a company that solves the automation of attestation information may be influential in establishing this specificity. Participants acknowledged the complexities organizations face in choosing and updating dependencies and provided a realistic perspective that an adversary in an embargoed country can find the means to disguise the geographic origins of a dependency. Finally, LLMs are considered both a security risk and a potential aid in securing the software supply chain. ## 9. Acknowledgements A big thank you to all Summit participants. We are very grateful for being able to hear about your valuable experiences and suggestions. The Summit was organized and recorded by Laurie Williams and William Enck. This material is based upon work supported by the National Science Foundation Grant Nos. 2207008, 2206859, 2206865, and 2206921. These grants support the Secure Software Supply Chain Center (S3C2), consisting of researchers at North Carolina State University, Carnegie Mellon University, University of Maryland, and George Washington University. Any opinions expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2305.09547
Coherent distributions on the square – extreme points and asymptotics
Let $\mathcal{C}$ denote the family of all coherent distributions on the unit square $[0,1]^2$, i.e. all those probability measures $\mu$ for which there exists a random vector $(X,Y)\sim \mu$, a pair $(\mathcal{G},\mathcal{H})$ of $\sigma$-fields and an event $E$ such that $X=\mathbb{P}(E|\mathcal{G})$, $Y=\mathbb{P}(E|\mathcal{H})$ almost surely. In this paper we examine the set $\mathrm{ext}(\mathcal{C})$ of extreme points of $\mathcal{C}$ and provide its general characterisation. Moreover, we establish several structural properties of finitely-supported elements of $\mathrm{ext}(\mathcal{C})$. We apply these results to obtain the asymptotic sharp bound $$\lim_{\alpha \to \infty} \alpha\cdot \Big(\sup_{(X,Y)\in \mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\Big) = \frac{2}{e}.$$
Stanisław Cichomski, Adam Osękowski
2023-05-16T15:42:43Z
http://arxiv.org/abs/2305.09547v1
# Coherent distributions on the square – extreme points and asymptotics ###### Abstract. Let \(\mathcal{C}\) denote the family of all coherent distributions on the unit square \([0,1]^{2}\), i.e. all those probability measures \(\mu\) for which there exists a random vector \((X,Y)\sim\mu\), a pair \((\mathcal{G},\mathcal{H})\) of \(\sigma\)-fields and an event \(E\) such that \(X=\mathbb{P}(E|\mathcal{G})\), \(Y=\mathbb{P}(E|\mathcal{H})\) almost surely. In this paper we examine the set \(\mathrm{ext}(\mathcal{C})\) of extreme points of \(\mathcal{C}\) and provide its general characterisation. Moreover, we establish several structural properties of finitely-supported elements of \(\mathrm{ext}(\mathcal{C})\). We apply these results to obtain the asymptotic sharp bound \[\lim_{\alpha\to\infty}\alpha\cdot\Big{(}\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\Big{)}=\frac{2}{e}.\] ## 1. Introduction Let \(\mu\) be a probability measure on the unit square \([0,1]^{2}\). Following [12], this measure is called _coherent_, if it is the joint distribution of a two-variate random vector \((X,Y)\) defined on some arbitrary probability space \((\Omega,\mathcal{F},\mathbb{P})\), such that \[X=\mathbb{P}(E|\mathcal{G})\quad\text{and}\quad Y=\mathbb{P}(E|\mathcal{H}),\quad\text{almost surely,}\] for some measurable event \(E\in\mathcal{F}\) and two sub-\(\sigma\)-fields \(\mathcal{G},\mathcal{H}\subset\mathcal{F}\). Throughout the text, the class of all coherent probability measures will be denoted by \(\mathcal{C}\); for the sake of convenience (and with a slight abuse of notation), we will also write \((X,Y)\in\mathcal{C}\) to indicate that the distribution of a random vector \((X,Y)\) is coherent. Coherent measures enjoy the following nice interpretation. Suppose that two experts provide their personal estimates on the likelihood of some random event \(E\), and assume that the knowledge of the first and the second expert is represented by the \(\sigma\)-algebras \(\mathcal{G}\) and \(\mathcal{H}\), respectively. Then a natural idea to model the predictions of the experts is to use conditional expectations: this leads to the random variables \(X\) and \(Y\) as above. The importance of coherent distributions stems from their numerous applications in statistics (cf. [12, 13, 17, 19]) and economics (consult [1, 2, 3, 15]). Coherent distributions are also closely related to graph theory and combinatorial matrix theory, see for instance [4, 7, 11, 20]. Moreover, there has been a substantial purely probabilistic advancement on this subject during the last decade, see [5, 6, 8, 9, 10, 21]. The main interest, both in applied and theoretical considerations, involves bounding the maximal discrepancy of coherent vectors measured by different functionals. A canonical result of this type is the following threshold bound of Burdzy and Pal [5]. **Theorem 1.1**.: _For any parameter \(\delta\in(\frac{1}{2},1]\), we have_ \[\sup_{(X,Y)\in\mathcal{C}}\mathbb{P}(|X-Y|\geq\delta)=\frac{2(1-\delta)}{2-\delta}. \tag{1.1}\] For a generalisation of (1.1) to \(n\)-variate coherent vectors, consult [9]. Another important example is the expectation bound established independently in [3, 7].
**Theorem 1.2**.: _We have_ \[\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|=\frac{1}{2}. \tag{1.2}\] The main contribution of this paper is the identification of the asymptotic behaviour of the higher moments \(\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\) as \(\alpha\to\infty\). **Theorem 1.3**.: _We have_ \[\lim_{\alpha\to\infty}\alpha\cdot\Big{(}\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\Big{)}=\frac{2}{e}. \tag{1.3}\] Our argument rests on a structural analysis of the extreme points of \(\mathcal{C}\), carried out in terms of the following auxiliary class of pairs of measures. Given a nonnegative Borel measure \(\mu\) on \([0,1]^{2}\), we write \(\mu^{x}\) and \(\mu^{y}\) for its marginal distributions on \([0,1]\). **Definition 1.4**.: Let \(\mathcal{R}\) denote the class of all pairs \((\mu,\nu)\) of nonnegative Borel measures on \([0,1]^{2}\) satisfying \[\int_{A}(1-x)\,\mathrm{d}\mu^{x}=\int_{A}x\,\mathrm{d}\nu^{x}\quad\text{and}\quad\int_{B}(1-y)\,\mathrm{d}\mu^{y}=\int_{B}y\,\mathrm{d}\nu^{y}\] for all Borel sets \(A,B\in\mathcal{B}([0,1])\). **Proposition 1.5**.: _A probability measure \(m\) on \([0,1]^{2}\) is coherent if and only if \(m=\mu+\nu\) for some \((\mu,\nu)\in\mathcal{R}\)._ **Definition 1.6**.: For a fixed \(m\in\mathcal{C}\), consider the class \[\mathcal{R}(m)\ =\ \{(\mu,\nu)\in\mathcal{R}:\ m=\mu+\nu\}.\] Any element \((\mu,\nu)\in\mathcal{R}(m)\) will be called a _representation_ of a coherent distribution \(m\). By the very definition, both \(\mathcal{C}\) and \(\mathcal{R}\), and hence also \(\mathcal{R}(m)\), are convex sets. To proceed, let us distinguish the ordering in the class of measures, which will often be used in our considerations below. Namely, for two Borel measures \(\mu_{1},\mu_{2}\) supported on the unit square, we will write \(\mu_{1}\leq\mu_{2}\) if we have \(\mu_{1}(A)\leq\mu_{2}(A)\) for all \(A\in\mathcal{B}([0,1]^{2})\). **Definition 1.7**.: Let \(m\in\mathcal{C}\).
We say that the representation \((\mu,\nu)\) of \(m\) is

* _unique_, if for every \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) with \(m=\tilde{\mu}+\tilde{\nu}\), we have \(\tilde{\mu}=\mu\) and \(\tilde{\nu}=\nu\);
* _minimal_, if for all \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) with \(\tilde{\mu}\leq\mu\) and \(\tilde{\nu}\leq\nu\), there exists \(\alpha\in[0,1]\) such that \((\tilde{\mu},\tilde{\nu})=\alpha\cdot(\mu,\nu)\).

With these notions at hand, we will give the following general characterisation of \(\text{ext}(\mathcal{C})\). **Theorem 1.8**.: _Let \(m\) be a coherent distribution on \([0,1]^{2}\). Then \(m\) is extremal if and only if the representation of \(m\) is unique and minimal._ This statement will be established in the next section. Then, in Section 3, we concentrate on extremal coherent measures with finite support. Let \(\text{ext}_{f}(\mathcal{C})=\{\eta\in\text{ext}(\mathcal{C}):|\text{supp}(\eta)|<\infty\}\). Theorem 1.8 will enable us to deduce several structural properties of \(\text{ext}_{f}(\mathcal{C})\); most importantly, as conjectured in [21], we show that the support of \(\eta\in\text{ext}_{f}(\mathcal{C})\) cannot contain any axial cycles. Here is the definition. **Definition 1.9**.: The sequence \(\big{(}(x_{i},y_{i})\big{)}_{i=1}^{2n}\) with values in \([0,1]^{2}\) is called an _axial cycle_, if all points \((x_{i},y_{i})\) are distinct, the endpoint coordinates \(x_{1}\) and \(x_{2n}\) coincide, and we have \[x_{2i}=x_{2i+1}\quad\text{and}\quad y_{2i-1}=y_{2i}\quad\text{for all}\ \,i.\] Remarkably, the same 'no axial cycle' property holds true for extremal doubly stochastic measures (permutons); for the relevant discussion, see [16]. Next, in Section 4, we apply our previous results and obtain the following reduction towards Theorem 1.3. Namely, for all \(\alpha\geq 1\), we have \[\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\ =\ \sup_{\tilde{\mathbf{z}}}\ \sum_{i=1}^{n}z_{i}\Big{|}\frac{z_{i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}^{\alpha}. \tag{1.4}\] Here the supremum is taken over all \(n\) and all sequences \(\,\tilde{\mathbf{z}}=(z_{0},z_{1},\ldots,z_{n+1})\) such that \(z_{0}=z_{n+1}=0\), \(z_{i}>0\) for all \(i=1,\,2,\,\ldots,\,n\), and \(\sum_{i=1}^{n}z_{i}=1\). Finally, using several combinatorial arguments and reductions, we prove Theorem 1.3 by a direct analysis of the right-hand side of (1.4). ## 2. Coherent measures, Representations Let \(\mathcal{M}([0,1]^{2})\) and \(\mathcal{M}([0,1])\) denote the spaces of nonnegative Borel measures on \([0,1]^{2}\) and \([0,1]\), respectively. For \(\mu\in\mathcal{M}([0,1]^{2})\), let \(\mu^{x},\mu^{y}\in\mathcal{M}([0,1])\) be defined by \[\mu^{x}(A)=\mu(A\times[0,1])\quad\text{ and }\quad\mu^{y}(B)=\mu([0,1]\times B),\] for all Borel subsets \(A,B\in\mathcal{B}([0,1])\). We begin with the following characterisation of \(\mathcal{C}\). **Proposition 2.1**.: _Let \(m\in\mathcal{M}([0,1]^{2})\). The measure \(m\) is a coherent distribution if and only if it is the joint distribution of a two-variate random vector \((X,Y)\) such that_ \[X=\mathbb{E}(Z|X)\quad\text{and}\quad Y=\mathbb{E}(Z|Y)\quad\text{almost surely}\] _for some random variable \(Z\) with \(0\leq Z\leq 1\)._ Proof.: This is straightforward; see [6, 7]. Recall the definition of the class \(\mathcal{R}\) formulated in the previous section. Let us study the connection between this class and the family of all coherent distributions.
Proof of Proposition 1.5.: First, we show that the decomposition \(m=\mu+\nu\) exists for all \(m\in\mathcal{C}\). Indeed, by virtue of Proposition 2.1, we can find a random vector \((X,Y)\sim m\) defined on some probability space \((\Omega,\mathcal{F},\mathbb{P})\), such that \(X=\mathbb{E}(Z|X)\) and \(Y=\mathbb{E}(Z|Y)\) for some random variable \(Z\in[0,1]\). For a set \(C\in\mathcal{B}([0,1]^{2})\), we put \[\mu(C)=\int_{\{(X,Y)\in C\}}Z\;\mathrm{d}\mathbb{P}\quad\text{ and }\quad\nu(C)=\int_{\{(X,Y)\in C\}}(1-Z)\;\mathrm{d}\mathbb{P}. \tag{2.1}\] Then the equality \(m=\mu+\nu\) is evident. Furthermore, for a fixed \(A\in\mathcal{B}([0,1])\), we have \[\int_{\{X\in A\}}X\;\mathrm{d}\mathbb{P}\;=\;\int_{\{X\in A\}}Z\;\mathrm{d}\mathbb{P}\;=\;\int_{A}1\;\mathrm{d}\mu^{x}, \tag{2.2}\] where the first equality is due to \(X=\mathbb{E}(Z|X)\) and the second is a consequence of (2.1). Moreover, we may also write \[\int_{\{X\in A\}}X\;\mathrm{d}\mathbb{P}\;=\;\int_{A\times[0,1]}x\;\mathrm{d}m\;=\;\int_{A}x\;\mathrm{d}\mu^{x}+\int_{A}x\;\mathrm{d}\nu^{x}. \tag{2.3}\] Combining (2.2) and (2.3), we get \[\int_{A}(1-x)\;\mathrm{d}\mu^{x}\;=\;\int_{A}x\;\mathrm{d}\nu^{x},\] for all \(A\in\mathcal{B}([0,1])\). The symmetric condition (the second requirement in Definition 1.4) is shown analogously. This completes the first part of the proof. Now, pick a probability measure \(m\) on \([0,1]^{2}\) such that \(m=\mu+\nu\) for some \((\mu,\nu)\in\mathcal{R}\). We need to show that \(m\) is coherent. To this end, consider the probability space \(([0,1]^{2},\mathcal{B}([0,1]^{2}),m)\) and the random variables \(X,Y:[0,1]^{2}\to[0,1]\) defined by \[X(x,y)=x\quad\text{and}\quad Y(x,y)=y,\quad x,\,y\in[0,1].\] Additionally, let \(Z\) denote the Radon-Nikodym derivative of \(\mu\) with respect to \(m\): we have \(0\leq Z\leq 1\)\(m\)-almost surely and \(\mu(C)=\int_{C}Z\,\mathrm{d}m\) for all \(C\in\mathcal{B}([0,1]^{2})\). Again by Proposition 2.1, it is sufficient to verify that \(X=\mathbb{E}(Z|X)\) and \(Y=\mathbb{E}(Z|Y)\). By symmetry, it is enough to show the first equality. Fix \(A\in\mathcal{B}([0,1])\) and note that \[\int_{\{X\in A\}}X\;\mathrm{d}m\;=\;\int_{A\times[0,1]}x\;\mathrm{d}m\;=\;\int_{A}x\;\mathrm{d}\mu^{x}+\int_{A}x\;\mathrm{d}\nu^{x}. \tag{2.4}\] Similarly, we also have \[\int_{\{X\in A\}}Z\;\mathrm{d}m\;=\;\int_{A\times[0,1]}Z\;\mathrm{d}m\;=\;\mu(A\times[0,1])\;=\;\int_{A}1\;\mathrm{d}\mu^{x}. \tag{2.5}\] Finally, note that by \((\mu,\nu)\in\mathcal{R}\), the right-hand sides of (2.4) and (2.5) are equal. Therefore we obtain the identity \[\int_{\{X\in A\}}X\;\mathrm{d}m\;=\;\int_{\{X\in A\}}Z\;\mathrm{d}m\] for arbitrary \(A\in\mathcal{B}([0,1])\). This yields the claim. We turn our attention to the characterisation of \(\mathrm{ext}(\mathcal{C})\) stated in the previous section. Proof of Theorem 1.8, the implication '\(\Rightarrow\)'.: Let \(m\) be an extremal coherent measure and suppose, on the contrary, that \((\mu_{1},\nu_{1})\) and \((\mu_{2},\nu_{2})\) are two different elements of \(\mathcal{R}(m)\). We will prove that \(m-\mu_{1}+\mu_{2}\) and \(m-\mu_{2}+\mu_{1}\) are also coherent distributions. Because of \[m\;=\;\frac{1}{2}(m-\mu_{1}+\mu_{2})\;+\;\frac{1}{2}(m-\mu_{2}+\mu_{1}),\] we will obtain a contradiction with the assumed extremality of \(m\). By symmetry, it is enough to show that \((m-\mu_{1}+\mu_{2})\in\mathcal{C}\).
To this end, by virtue of Proposition 1.5, it suffices to check that \(m-\mu_{1}+\mu_{2}\) is a probability measure and \((\mu_{2},m-\mu_{1})\in\mathcal{R}\). First, note that \(\nu_{1}=m-\mu_{1}\) is nonnegative and fix an arbitrary \(A\in\mathcal{B}([0,1])\). As \((\mu_{1},\nu_{1})\) and \((\mu_{2},\nu_{2})\) are representations of \(m\), Definition 1.4 gives \[\int_{A}1\;\mathrm{d}\mu_{1}^{x}\;=\;\int_{A}x\;(\mathrm{d}\nu_{1}^{x}+ \mathrm{d}\mu_{1}^{x})\;=\;\int_{A}x\;\mathrm{d}m^{x},\] and \[\int_{A}1\;\mathrm{d}\mu_{2}^{x}\;=\;\int_{A}x\;(\mathrm{d}\nu_{2}^{x}+ \mathrm{d}\mu_{2}^{x})\;=\;\int_{A}x\;\mathrm{d}m^{x}, \tag{2.6}\] so \(\mu_{1}^{x}(A)=\mu_{2}^{x}(A)\). Similarly, we can deduce that \(\mu_{1}^{y}=\mu_{2}^{y}\), which means that marginal distributions of \(\mu_{1}\) and \(\mu_{2}\) are equal. This, together with \(m-\mu_{1}\geq 0\), proves that \(m-\mu_{1}+\mu_{2}\) is a probability measure. Next, using (2.6) and \(\mu_{1}^{x}=\mu_{2}^{x}\), we can also write \[\int_{A}(1-x)\;\mathrm{d}\mu_{2}^{x}\;=\;\int_{A}x\;\mathrm{d}m^{x}-\int_{A}x \;\mathrm{d}\mu_{1}^{x}\;=\;\int_{A}x\;\mathrm{d}(m-\mu_{1})^{x}. \tag{2.7}\] In the same way we get \[\int_{B}(1-y)\;\mathrm{d}\mu_{2}^{y}\;=\;\int_{B}y\;\mathrm{d}(m-\mu_{1})^{y}, \tag{2.8}\] for all \(B\in\mathcal{B}([0,1])\). By (2.7) and (2.8), we obtain that \((\mu_{2},m-\mu_{1})\in\mathcal{R}\) and this completes the proof of the uniqueness. To show the minimality, let \(m\) be an extremal coherent measure with the representation \((\mu,\nu)\) (which is unique, as we have just proved). Consider any nonzero \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) with \(\tilde{\mu}\leq\mu\) and \(\tilde{\nu}\leq\nu\). Then, by the very definition of \(\mathcal{R}\), we have \((\mu-\tilde{\mu},\nu-\tilde{\nu})\in\mathcal{R}\). Therefore, by Proposition 1.5, we get \[\alpha^{-1}(\tilde{\mu}+\tilde{\nu}),\;(1-\alpha)^{-1}(m-\tilde{\mu}-\tilde{ \nu})\;\in\mathcal{C},\] where \(\alpha=(\tilde{\mu}+\tilde{\nu})([0,1]^{2})\in(0,1]\). We have the identity \[m\;=\;\alpha\cdot\Big{(}\alpha^{-1}(\tilde{\mu}+\tilde{\nu})\Big{)}+(1- \alpha)\cdot\Big{(}(1-\alpha)^{-1}(m-\tilde{\mu}-\tilde{\nu})\Big{)}, \tag{2.9}\] which combined with the extremality of \(m\) yields \(m=\alpha^{-1}(\tilde{\mu}+\tilde{\nu})=\alpha^{-1}\tilde{\mu}+\alpha^{-1} \tilde{\nu}\). But \((\alpha^{-1}\tilde{\mu},\alpha^{-1}\tilde{\nu})\) belongs to \(\mathcal{R}\), since \((\tilde{\mu},\tilde{\nu})\) does, and hence \((\alpha^{-1}\tilde{\mu},\alpha^{-1}\tilde{\nu})\) is a representation of \(m\). By the uniqueness, we deduce that \((\tilde{\mu},\tilde{\nu})=\alpha\cdot(\mu,\nu)\). Proof of Theorem 1.8, the implication '\(\Leftarrow\)'.: Let \(m\) be a coherent distribution with the unique and minimal representation \((\mu,\nu)\). To show that \(m\) is extremal, consider the decomposition \(m=\beta\cdot m_{1}+(1-\beta)\cdot m_{2}\) for some \(m_{1},m_{2}\in\mathcal{C}\) and \(\beta\in(0,1)\). Moreover, let \((\mu_{1},\nu_{1})\in\mathcal{R}(m_{1})\) and \((\mu_{2},\nu_{2})\in\mathcal{R}(m_{2})\). By the convexity of \(\mathcal{R}\), we have \[(\mu^{\prime},\nu^{\prime}):=\;(\beta\mu_{1}+(1-\beta)\mu_{2},\;\beta\nu_{1}+( 1-\beta)\nu_{2})\;\in\mathcal{R}(m) \tag{2.10}\] and hence, by the uniqueness, we get \((\mu^{\prime},\nu^{\prime})=(\mu,\nu)\). Then, directly by (2.10), we have \[\beta\mu_{1}\leq\mu\;\;\;\mbox{and}\;\;\;\beta\nu_{1}\leq\nu. \tag{2.11}\] Combining this with the minimality of \((\mu,\nu)\), we get \((\beta\mu_{1},\beta\nu_{1})=\alpha(\mu,\nu)\) for some \(\alpha\in[0,1]\). 
Since \(m=\mu+\nu\) and \(m_{1}=\mu_{1}+\nu_{1}\) are probability measures, this gives \(\alpha=\beta\) and hence \((\mu_{1},\nu_{1})=(\mu,\nu)\). This implies \(m=m_{1}\) and completes the proof. ## 3. Extreme points with finite support In this section we study the geometric structure of the supports of measures belonging to \(\operatorname{ext}_{f}(\mathcal{C})=\{\eta\in\operatorname{ext}(\mathcal{C}):|\operatorname{supp}(\eta)|<\infty\}\). Our key result is presented in Theorem 3.7 - we prove that the support of an extremal coherent distribution cannot contain any axial cycles (see Definition 1.9). Let us emphasize that this property was originally conjectured in [21]. We start with a simple combinatorial observation: it is straightforward to check that certain special 'alternating' cycles are forbidden. **Definition 3.1**.: Let \(\eta\) be a coherent distribution with a unique representation \((\mu,\nu)\) and let \(\big{(}(x_{i},y_{i})\big{)}_{i=1}^{2n}\) be an axial cycle contained in \(\operatorname{supp}(\eta)\). Then \(\big{(}(x_{i},y_{i})\big{)}_{i=1}^{2n}\) is an alternating cycle if \[(x_{2i+1},y_{2i+1})\in\operatorname{supp}(\mu)\quad\text{and}\quad(x_{2i},y_{2i})\in\operatorname{supp}(\nu),\] for all \(i=1,2,\ldots,n\) (with the convention \(x_{2n+1}=x_{1},\,y_{2n+1}=y_{1}\)). **Proposition 3.2**.: _If \(\eta\in\operatorname{ext}_{f}(\mathcal{C})\), then \(\operatorname{supp}(\eta)\) does not contain any alternating cycles._ Proof.: Let \(\eta\) be a coherent distribution with a unique representation \((\mu,\nu)\) and a finite support. Additionally, assume that \(\big{(}(x_{i},y_{i})\big{)}_{i=1}^{2n}\) is an alternating cycle contained in \(\operatorname{supp}(\eta)\). Let \(\delta\) be the smaller of the two numbers \[\min_{0\leq i\leq n-1}\mu(x_{2i+1},y_{2i+1})\quad\text{ and }\quad\min_{1\leq i\leq n}\nu(x_{2i},y_{2i})\] (for brevity, in what follows we will skip the parentheses and write \(\mu(a,b)\), \(\nu(a,b)\) instead of \(\mu(\{(a,b)\})\), \(\nu(\{(a,b)\})\), respectively). By Definition 3.1, we have \(\delta>0\). Now, consider the transformation \((\mu,\nu)\mapsto(\mu^{\prime},\nu^{\prime})\) described by the following requirements (outlined by the arrows in Figure 1):

Figure 1. An example of an alternating cycle. Red points represent probability masses in \(\operatorname{supp}(\mu)\), while blue points indicate probability masses in \(\operatorname{supp}(\nu)\). Arrows outline a possible transformation of the representation \((\mu,\nu)\).

1. for \(i=0,1,\ldots,n-1\), put \[\mu^{\prime}(x_{2i+1},y_{2i+1}):=\mu(x_{2i+1},y_{2i+1})-\delta,\qquad\nu^{\prime}(x_{2i+1},y_{2i+1}):=\nu(x_{2i+1},y_{2i+1})+\delta,\]
2. for \(i=1,2,\ldots,n\), put \[\mu^{\prime}(x_{2i},y_{2i}):=\mu(x_{2i},y_{2i})+\delta,\qquad\nu^{\prime}(x_{2i},y_{2i}):=\nu(x_{2i},y_{2i})-\delta,\]
3. for \((x,y)\not\in\{(x_{i},y_{i}):1\leq i\leq 2n\}\), set \[\mu^{\prime}(x,y)=\mu(x,y),\qquad\nu^{\prime}(x,y)=\nu(x,y).\]

Note that \(\mu\) and \(\mu^{\prime}\), as well as \(\nu\) and \(\nu^{\prime}\), have the same marginal distributions and hence \((\mu^{\prime},\nu^{\prime})\in\mathcal{R}\). We also have \(\mu^{\prime}+\nu^{\prime}=\mu+\nu=\eta\) and thus \((\mu^{\prime},\nu^{\prime})\in\mathcal{R}(\eta)\). This contradicts the uniqueness of the representation \((\mu,\nu)\) and shows that \(\operatorname{supp}(\eta)\) cannot contain an alternating cycle. By Theorem 1.8, this ends the proof. Before the further combinatorial analysis, we need to introduce some useful auxiliary notation.
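First, though, a quick computational aside: the cycle shift used in the proof of Proposition 3.2 is easy to verify directly. The following minimal Python sketch (the rectangle cycle, the masses and the helper `marginals` are invented purely for illustration and do not come from the text) checks that the shifted pair \((\mu^{\prime},\nu^{\prime})\) has the same marginals as \((\mu,\nu)\) and the same sum \(\mu+\nu=\eta\):

```python
# Numerical check of the mass shift from the proof of Proposition 3.2.
from fractions import Fraction as F

# An axial cycle on the corners of a rectangle: y1 = y2, x2 = x3, y3 = y4, x4 = x1.
cycle = [(F(1, 8), F(1, 4)), (F(1, 2), F(1, 4)),
         (F(1, 2), F(3, 4)), (F(1, 8), F(3, 4))]

# Odd positions (1-based) carry mu-mass, even positions carry nu-mass,
# as in Definition 3.1; the masses below are made up.
mu = {p: F(1, 10) for p in cycle}
nu = {p: F(1, 10) for p in cycle}

delta = F(1, 20)
mu2, nu2 = dict(mu), dict(nu)
for j, p in enumerate(cycle):
    s = -delta if j % 2 == 0 else delta   # -delta on odd points, +delta on even points
    mu2[p] = mu[p] + s
    nu2[p] = nu[p] - s

def marginals(w):
    mx, my = {}, {}
    for (x, y), mass in w.items():
        mx[x] = mx.get(x, F(0)) + mass
        my[y] = my.get(y, F(0)) + mass
    return mx, my

assert marginals(mu) == marginals(mu2)                         # mu-marginals preserved
assert marginals(nu) == marginals(nu2)                         # nu-marginals preserved
assert all(mu[p] + nu[p] == mu2[p] + nu2[p] for p in cycle)    # eta unchanged
print("shifted pair has the same marginals and the same sum")
```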
For \(\mu,\nu\in\mathcal{M}([0,1]^{2})\) with \(|\operatorname{supp}(\mu+\nu)|<\infty\), we define a quotient function \(q_{(\mu,\nu)}:\operatorname{supp}(\mu+\nu)\to[0,1]\) by \[q_{(\mu,\nu)}(x,y)=\frac{\mu(x,y)}{\mu(x,y)+\nu(x,y)}.\] In what follows, we will omit the subscripts and write \(q\) for \(q_{(\mu,\nu)}\) whenever the choice for \((\mu,\nu)\) is clear from the context. **Proposition 3.3**.: _Let \(\mu,\nu\in\mathcal{M}([0,1]^{2})\) and \(|\operatorname{supp}(\mu+\nu)|<\infty\). Then \((\mu,\nu)\in\mathcal{R}\) if and only if the following conditions hold simultaneously:_ * _for every_ \(x\) _satisfying_ \(\mu(\{x\}\times[0,1])+\nu(\{x\}\times[0,1])>0\)_, we have_ (3.1) \[\sum_{\begin{subarray}{c}y\in[0,1],\\ (x,y)\in\operatorname{supp}(\mu+\nu)\end{subarray}}q(x,y)\frac{\mu(x,y)+\nu(x,y)}{\mu(\{x\}\times[0,1])+\nu(\{x\}\times[0,1])}\ =\ x,\] * _for every_ \(y\) _satisfying_ \(\mu([0,1]\times\{y\})+\nu([0,1]\times\{y\})>0\)_, we have_ (3.2) \[\sum_{\begin{subarray}{c}x\in[0,1],\\ (x,y)\in\operatorname{supp}(\mu+\nu)\end{subarray}}q(x,y)\frac{\mu(x,y)+\nu(x,y)}{\mu([0,1]\times\{y\})+\nu([0,1]\times\{y\})}\ =\ y,\] _where sums in (3.1) and (3.2) are well defined - in both cases, there is only a finite number of nonzero summands._ Proof.: Due to \(|\operatorname{supp}(\mu+\nu)|<\infty\), this is a simple consequence of Definition 1.4. Next, we will require an additional distinction between three different types of points. **Definition 3.4**.: Let \((\mu,\nu)\in\mathcal{R}\). A point \((x,y)\in\operatorname{supp}(\mu+\nu)\) is said to be * a lower out point, if \(q(x,y)<\min(x,y)\); * an upper out point, if \(q(x,y)>\max(x,y)\); * a cut point, if it is not an out point, i.e. \[x\leq q(x,y)\leq y\quad\text{ or }\quad y\leq q(x,y)\leq x.\] Finally, for the sake of completeness, we include a formal definition of an axial path. **Definition 3.5**.: The sequence \(\big{(}(x_{i},y_{i})\big{)}_{i=1}^{n}\) with terms in \([0,1]^{2}\) is called an axial path if * all points \((x_{i},y_{i})\) are distinct; * we have \(x_{i+1}=x_{i}\) or \(y_{i+1}=y_{i}\) for all \(i\); * there are at most two points on any horizontal or vertical line. To develop some intuition, it is convenient to inspect the example given below. **Example 3.6**.: Let \(m\) be a probability measure given by \[m\Big{(}\frac{1}{8},\frac{1}{4}\Big{)}=\frac{84}{196},\ \ \ m\Big{(}\frac{1}{2},\frac{1}{4} \Big{)}=\frac{14}{196},\ \ \ m\Big{(}\frac{1}{2},\frac{3}{4}\Big{)}=\frac{14}{196},\ \ \ m\Big{(}\frac{7}{8},\frac{3}{4}\Big{)}= \frac{84}{196}.\] There are five observations, which will be discussed separately. (i) Consider the decomposition \(m=\mu+\nu\), where \((\mu,\nu)\) is determined by the quotient function \[q\Big{(}\frac{1}{8},\frac{1}{4}\Big{)}=\frac{1}{8},\ \ \ q\Big{(}\frac{1}{2},\frac{1}{4} \Big{)}=1,\ \ \ q\Big{(}\frac{1}{2},\frac{3}{4}\Big{)}=0,\ \ \ q\Big{(}\frac{7}{8},\frac{3}{4}\Big{)}= \frac{7}{8}.\] Using Proposition 3.3, we can check that \((\mu,\nu)\in\mathcal{R}\). For instance, for \(y=\frac{1}{4}\) we get \[\frac{q(\frac{1}{8},\frac{1}{4})\cdot m(\frac{1}{8},\frac{1}{4})+q(\frac{1}{2},\frac{1}{4})\cdot m(\frac{1}{2},\frac{1}{4})}{m(\frac{1}{8},\frac{1}{4})+m( \frac{1}{2},\frac{1}{4})}\ =\ \frac{\frac{1}{8}\cdot\frac{84}{196}+1\cdot\frac{14}{19 6}}{\frac{84}{196}+\frac{14}{196}}\ =\ \frac{1}{4}, \tag{3.3}\] which agrees with (3.2). As a direct consequence, by Proposition 1.5, we have \(m\in\mathcal{C}\). 
(ii) Observe that \((\frac{1}{8},\frac{1}{4})\) and \((\frac{7}{8},\frac{3}{4})\) are cut points, \((\frac{1}{2},\frac{1}{4})\) is an upper out point and \((\frac{1}{2},\frac{3}{4})\) is a lower out point. Moreover, \(\operatorname{supp}(m)\) is an axial path without cycles - see Figure 2.

Figure 2. Support of a coherent distribution \(m\). Purple points (endpoints of the path) are cut points. The red point represents a mass in \(\operatorname{supp}(\mu)\) and is an upper out point. The blue point indicates a mass in \(\operatorname{supp}(\nu)\) and is a lower out point.

(iii) Notably, \((\mu,\nu)\) is the unique representation of \(m\). Indeed, \((\frac{1}{8},\frac{1}{4})\) is the only point in \(\operatorname{supp}(m)\) with \(x\)-coordinate equal to \(\frac{1}{8}\) and hence \(q(\frac{1}{8},\frac{1}{4})=\frac{1}{8}\). Accordingly, \(q(\frac{1}{2},\frac{1}{4})=1\) is now a consequence of (3.3). The derivation of \(q(\frac{1}{2},\frac{3}{4})=0\) and \(q(\frac{7}{8},\frac{3}{4})=\frac{7}{8}\) follows from an analogous computation. (iv) Moreover, the representation \((\mu,\nu)\) is minimal; let \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) satisfy \(\tilde{\mu}\leq\mu\) and \(\tilde{\nu}\leq\nu\). Suppose that \((\frac{1}{8},\frac{1}{4})\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu})\). Again, as \((\frac{1}{8},\frac{1}{4})\) is the only point in \(\mathrm{supp}(m)\) with \(x\)-coordinate equal to \(\frac{1}{8}\), we get \(q_{(\tilde{\mu},\tilde{\nu})}(\frac{1}{8},\frac{1}{4})=\frac{1}{8}\). Next, assume that \((\frac{1}{2},\frac{1}{4})\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu})\). As \(\tilde{\nu}(\frac{1}{2},\frac{1}{4})\leq\nu(\frac{1}{2},\frac{1}{4})=0\), we have \(q_{(\tilde{\mu},\tilde{\nu})}(\frac{1}{2},\frac{1}{4})=1\). Likewise, we can check that \[q_{(\tilde{\mu},\tilde{\nu})}(x,y)\ =\ q_{(\mu,\nu)}(x,y)\quad\text{for all }(x,y)\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu}). \tag{3.4}\] By Proposition 3.3 and the equation (3.4), we easily obtain that \(\tilde{\mu}+\tilde{\nu}=0\) or \(\mathrm{supp}(\tilde{\mu}+\tilde{\nu})=\mathrm{supp}(m)\). For example,

* if \((\frac{1}{2},\frac{1}{4})\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu})\), then (3.1) gives \((\frac{1}{2},\frac{3}{4})\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu})\);
* if \((\frac{1}{2},\frac{3}{4})\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu})\), then (3.2) yields \((\frac{7}{8},\frac{3}{4})\in\mathrm{supp}(\tilde{\mu}+\tilde{\nu})\).

Therefore, if \(\tilde{\mu}+\tilde{\nu}\neq 0\), then the measure \(\tilde{\mu}+\tilde{\nu}\) is supported on the same set as \(m\) and \(q_{(\tilde{\mu},\tilde{\nu})}\equiv q_{(\mu,\nu)}\). For the same reason, i.e. using Proposition 3.3 and the path structure of \(\mathrm{supp}(m)\), it follows that \(\tilde{\mu}+\tilde{\nu}=\alpha\cdot m\) for some \(\alpha\in[0,1]\). For instance, by (3.2) for \(y=\frac{1}{4}\), we get \[\frac{\frac{1}{8}\cdot\tilde{m}(\frac{1}{8},\frac{1}{4})+1\cdot\tilde{m}(\frac{1}{2},\frac{1}{4})}{\tilde{m}(\frac{1}{8},\frac{1}{4})+\tilde{m}(\frac{1}{2},\frac{1}{4})}\ =\ \frac{1}{4},\] where \(\tilde{m}=\tilde{\mu}+\tilde{\nu}\). Hence \(\tilde{m}(\frac{1}{8},\frac{1}{4})\tilde{m}(\frac{1}{2},\frac{1}{4})^{-1}=m(\frac{1}{8},\frac{1}{4})m(\frac{1}{2},\frac{1}{4})^{-1}=\frac{84}{14}\). (v) By the above analysis and Theorem 1.8, we conclude that \(m\in\mathrm{ext}_{f}(\mathcal{C})\). We are now ready to demonstrate the central result of this section.
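Before doing so, let us note that the computations in Example 3.6 are easy to confirm mechanically. The following minimal Python sketch (the helper `check` and its layout are ours, not from the text) verifies the barycentre conditions (3.1)-(3.2) of Proposition 3.3 for the measure \(m\) and the quotient function \(q\) from part (i):

```python
# Numerical double-check of Example 3.6.
from fractions import Fraction as F

m = {(F(1, 8), F(1, 4)): F(84, 196), (F(1, 2), F(1, 4)): F(14, 196),
     (F(1, 2), F(3, 4)): F(14, 196), (F(7, 8), F(3, 4)): F(84, 196)}
q = {(F(1, 8), F(1, 4)): F(1, 8), (F(1, 2), F(1, 4)): F(1),
     (F(1, 2), F(3, 4)): F(0),    (F(7, 8), F(3, 4)): F(7, 8)}

def check(axis):
    # axis = 0 checks (3.1) (vertical lines, value x);
    # axis = 1 checks (3.2) (horizontal lines, value y).
    for c in {p[axis] for p in m}:
        pts = [p for p in m if p[axis] == c]
        avg = sum(q[p] * m[p] for p in pts) / sum(m[p] for p in pts)
        assert avg == c, (axis, c, avg)

check(0); check(1)
print("(mu, nu) determined by q is a representation of m, so m is coherent")
```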
**Theorem 3.7**.: _If \(\eta\in\mathrm{ext}_{f}(\mathcal{C})\), then \(\mathrm{supp}(\eta)\) is an axial path without cycles._ Let us briefly explain the main idea of the proof. For \(\eta\in\mathrm{ext}_{f}(\mathcal{C})\), we inductively construct a special axial path contained in \(\mathrm{supp}(\eta)\), which does not contain any cut points (apart from the endpoints). We show that the axial path obtained in this process is acyclic and involves all points from \(\mathrm{supp}(\eta)\). Proof of Theorem 3.7.: Fix \(\eta\in\mathrm{ext}_{f}(\mathcal{C})\) and let \((\mu,\nu)\) be the unique representation of \(\eta\). Denote by \(\mathcal{L}(\eta)\) and \(\mathcal{U}(\eta)\) the sets of lower and upper out points, respectively. Choose any \((x_{0},y_{0})\in\mathrm{supp}(\eta)\). We will consider two separate cases now: **Case I:**\((x_{0},y_{0})\) is an out point. With no loss of generality, we can assume that \((x_{0},y_{0})\in\mathcal{L}(\eta)\). We then use the following inductive procedure. \(1^{\circ}\) Suppose we have successfully found \((x_{n},y_{n})\in\mathcal{L}(\eta)\) and it is the first time we have chosen a point with the \(x\)-coordinate equal to \(x_{n}\). Since \((x_{n},y_{n})\in\mathcal{L}(\eta)\), we have \(q(x_{n},y_{n})<x_{n}\). By (3.1), there must exist a point \((x_{n+1},y_{n+1})\in\mathrm{supp}(\eta)\) such that \(x_{n+1}=x_{n}\) and \(q(x_{n+1},y_{n+1})>x_{n}\). We pick one such point and add it at the end of the path. If \((x_{n+1},y_{n+1})\) is a cut point or an axial cycle was just created, we exit the loop. Otherwise, note that \((x_{n+1},y_{n+1})\in\mathcal{U}(\eta)\). Go to \(2^{\circ}\). \(2^{\circ}\) Assume we have successfully found \((x_{n},y_{n})\in\mathcal{U}(\eta)\) and it is the first time we have chosen a point with the \(y\)-coordinate equal to \(y_{n}\). Since \((x_{n},y_{n})\in\mathcal{U}(\eta)\), we have \(q(x_{n},y_{n})>y_{n}\). By (3.2), there must exist a point \((x_{n+1},y_{n+1})\in\mathrm{supp}(\eta)\) such that \(y_{n+1}=y_{n}\) and \(q(x_{n+1},y_{n+1})<y_{n}\). We pick one such point and add it at the end of the path. If \((x_{n+1},y_{n+1})\) is a cut point or an axial cycle was just created, we exit the loop. Otherwise, note that \((x_{n+1},y_{n+1})\in\mathcal{L}(\eta)\). Go to \(1^{\circ}\). As \(|\mathrm{supp}(\eta)|<\infty\), the procedure terminates after a finite number of steps (denote it by \(k\)) and produces an axial path \(\big{(}(x_{i},y_{i})\big{)}_{i=0}^{k}\) contained in \(\mathrm{supp}(\eta)\); to be more precise, it is also formally possible that \((x_{k},y_{k})\) is a third point on some horizontal or vertical line (in such a case we have obtained an axial cycle). By the construction of the loop, the point \((x_{k},y_{k})\) is either an endpoint of an axial cycle or a cut point. Let us show that the first alternative is impossible. First, we clearly have \(\mathcal{L}(\eta)\subset\mathrm{supp}(\nu)\) and \(\mathcal{U}(\eta)\subset\mathrm{supp}(\mu)\); moreover, by construction, the points added in step \(1^{\circ}\) are upper out points and the points added in step \(2^{\circ}\) are lower out points, so the interior points of the path belong alternately to \(\mathrm{supp}(\mu)\) and \(\mathrm{supp}(\nu)\); see Figure 3.

Figure 3. An example of an axial path constructed by the algorithm. Symbols \(\vee,\wedge\) are placed next to lower (\(\vee\)) and upper (\(\wedge\)) out points. The purple point \((x_{k},y_{k})\) is the endpoint of the path. Red points represent probability masses in \(\mathrm{supp}(\mu)\), while blue points indicate probability masses in \(\mathrm{supp}(\nu)\).

Figure 4. An example of an axial path \(\Gamma\) constructed after the second run of the algorithm. Purple points \((x_{k},y_{k})\) and \((x_{-l},y_{-l})\) (endpoints of \(\Gamma\)) are cut points. Red points represent probability masses in \(\mathrm{supp}(\mu)\), while blue points indicate probability masses in \(\mathrm{supp}(\nu)\).
Next, assume that \((x_{k-1},y_{k-1})\in\mathcal{U}(\eta)\). This means that \((x_{k},y_{k})\) was found in step 2\({}^{\circ}\) and \(q(x_{k},y_{k})<y_{k-1}\leq 1\). Therefore \((x_{k},y_{k})\in\mathrm{supp}(\nu)\) and there exists an alternating cycle in \(\mathrm{supp}(\eta)\). However, this is not possible because of Proposition 3.2. If \((x_{k-1},y_{k-1})\in\mathcal{L}(\eta)\), the argument is analogous. We have shown that \((x_{k},y_{k})\) is a cut point. Set \(\Gamma_{+}=\bigcup_{i=1}^{k}\{(x_{i},y_{i})\}\). Moving on, we can return to the starting point \((x_{0},y_{0})\) and repeat the above construction in the reverse direction. By switching the roles of the \(x\)- and \(y\)-coordinates in steps 1\({}^{\circ}\) and 2\({}^{\circ}\), we produce another axial path \((x_{i},y_{i})_{i=0}^{-l}\). Set \(\Gamma_{-}=\bigcup_{i=-1}^{-l}\{(x_{i},y_{i})\}\) and \[\Gamma\ =\ \Gamma_{+}\cup\{(x_{0},y_{0})\}\cup\Gamma_{-}.\] Repeating the same arguments as before, we show that \((x_{-l},y_{-l})\) is a cut point and \(\Gamma\) is an axial path without cycles, see Figure 4. It remains to verify that \(\mathrm{supp}(\eta)=\Gamma\). This will be accomplished by showing that there exists \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) with \(\tilde{\mu}\leq\mu\), \(\tilde{\nu}\leq\nu\) and \(\mathrm{supp}(\tilde{\mu}+\tilde{\nu})=\Gamma\). This will give the claim: by the minimality of the representation \((\mu,\nu)\), we will deduce that \(\tilde{\mu}+\tilde{\nu}=\alpha\cdot\eta\) for some \(\alpha\in(0,1]\), and hence \(\mathrm{supp}(\tilde{\mu}+\tilde{\nu})=\mathrm{supp}(\eta)\). We begin with the endpoints of \(\Gamma\). As \((x_{k},y_{k})\) is a cut point, there exists \(\gamma\in[0,1]\) such that \(q(x_{k},y_{k})=\gamma x_{k}+(1-\gamma)y_{k}\). We can write \[\eta(x_{k},y_{k})\ =\ \eta^{\prime}(x_{k},y_{k})+\eta^{\prime\prime}(x_{k},y_{k}), \tag{3.5}\] where \(\eta^{\prime}(x_{k},y_{k})=\gamma\eta(x_{k},y_{k})\) and \(\eta^{\prime\prime}(x_{k},y_{k})=(1-\gamma)\eta(x_{k},y_{k})\). Set \[\mu^{\prime}(x_{k},y_{k})=x_{k}\eta^{\prime}(x_{k},y_{k})\quad\text{and}\quad\mu^{\prime\prime}(x_{k},y_{k})=y_{k}\eta^{\prime\prime}(x_{k},y_{k}). \tag{3.6}\] By (3.5) and (3.6), we have \[\mu^{\prime}(x_{k},y_{k})+\mu^{\prime\prime}(x_{k},y_{k})=\Big{(}x_{k}\gamma+y_{k}(1-\gamma)\Big{)}\eta(x_{k},y_{k})=\mu(x_{k},y_{k}). \tag{3.7}\] Equations (3.5) and (3.7) have a clear and convenient interpretation. Namely, we can visualize them as 'cutting' the point \((x_{k},y_{k})\) into two separate points: \((x_{k},y_{k})^{\prime}\) with mass \(\eta^{\prime}(x_{k},y_{k})\) and \((x_{k},y_{k})^{\prime\prime}\) with mass \(\eta^{\prime\prime}(x_{k},y_{k})\). Moreover, calculating their quotient functions independently, we get \(q^{\prime}(x_{k},y_{k})=x_{k}\) and \(q^{\prime\prime}(x_{k},y_{k})=y_{k}\). Performing the same 'cut' operation on \((x_{-l},y_{-l})\), we can divide this point into \((x_{-l},y_{-l})^{\prime}\) and \((x_{-l},y_{-l})^{\prime\prime}\) such that \(q^{\prime}(x_{-l},y_{-l})=x_{-l}\) and \(q^{\prime\prime}(x_{-l},y_{-l})=y_{-l}\). Observe that \((x_{k},y_{k})\) and \((x_{k-1},y_{k-1})\) have exactly one common coordinate, say \(y_{k}=y_{k-1}\).
Consequently, \((x_{k},y_{k})\) is the only point in \(\Gamma\) with \(x\)-coordinate equal to \(x_{k}\). Additionally, by (3.2) and \((x_{k-1},y_{k-1})\in\mathcal{U}(\eta)\), this means that \(q(x_{k},y_{k})\neq y_{k}\) and \(\gamma>0\). Hence \(\eta^{\prime}(x_{k},y_{k})>0\). Similarly, suppose that \(y_{-l}=y_{-l+1}\) (as presented in Figure 4; for other configurations of endpoints, we proceed by analogy). Thus, \((x_{-l},y_{-l})\) is the only point in \(\Gamma\) with \(x\)-coordinate equal to \(x_{-l}\). By (3.2) and \((x_{-l+1},y_{-l+1})\in\mathcal{L}(\eta)\), we have \(\eta^{\prime}(x_{-l},y_{-l})>0\). Next, consider the following function \(\tilde{q}:\Gamma\to[0,1]\) uniquely determined by the following requirements: 1. \(\tilde{q}(x_{k},y_{k})=x_{k}\) (if \(y_{k}=y_{k-1}\), as we have assumed) or \(\tilde{q}(x_{k},y_{k})=y_{k}\) (in the case when \(x_{k}=x_{k-1}\)), 2. \(\tilde{q}(x_{-l},y_{-l})=x_{-l}\) (if \(y_{-l}=y_{-l+1}\), as we have assumed) or \(\tilde{q}(x_{-l},y_{-l})=y_{-l}\) (in the case when \(x_{-l}=x_{-l+1}\)), 3. \(\tilde{q}(x,y)=0\) for all \((x,y)\in\Gamma\cap\mathcal{L}(\eta)\), 4. \(\tilde{q}(x,y)=1\) for all \((x,y)\in\Gamma\cap\mathcal{U}(\eta)\). Set \(\delta=\min(a,b,c,d)\), where \[a=\eta^{\prime}(x_{k},y_{k})\ \ \text{(if $y_{k}=y_{k-1}$)}\quad\text{or} \quad a=\eta^{\prime\prime}(x_{k},y_{k})\ \ \text{(if $x_{k}=x_{k-1}$),}\] \[b=\eta^{\prime}(x_{-l},y_{-l})\ \ \text{(if $y_{-l}=y_{-l+1}$)}\quad \text{or}\quad b=\eta^{\prime\prime}(x_{-l},y_{-l})\ \ \text{(if $x_{-l}=x_{-l+1}$),}\] \[c=\min_{(x,y)\in\Gamma\cap\mathcal{L}(\eta)}\nu(x,y),\ \ \ d=\min_{(x,y)\in\Gamma\cap\mathcal{U}(\eta)}\mu(x,y).\] Then \(\delta>0\), which follows from the previous discussion. Finally, using the acyclic path structure of \(\Gamma\) and Proposition 3.3 (just as in Example 3.6), we are able to find a pair \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) with \(\operatorname{supp}(\tilde{\mu}+\tilde{\nu})=\Gamma\) and a quotient function \(q_{(\tilde{\mu},\tilde{\nu})}=\tilde{q}\). Letting \[\beta\ =\ \delta\cdot\Big{(}\max_{(x,y)\in\Gamma}(\tilde{\mu}+\tilde{\nu}) (x,y)\Big{)}^{-1},\] we see that \(\beta\tilde{\mu}\leq\mu\) and \(\beta\tilde{\nu}\leq\nu\), as desired. **Case II:**\((x_{0},y_{0})\) is a cut point. Suppose that \(x_{0}=y_{0}\) and \(q(x_{0},x_{0})=x_{0}\). Put \[\tilde{\mu}=\mathbb{1}_{\{(x_{0},x_{0})\}}x_{0}\eta(x_{0},y_{0})\quad\text{and }\quad\tilde{\nu}=\mathbb{1}_{\{(x_{0},x_{0})\}}(1-x_{0})\eta(x_{0},y_{0}).\] We have \((\tilde{\mu},\tilde{\nu})\in\mathcal{R}\) and \(\tilde{\mu}\leq\mu\), \(\tilde{\nu}\leq\nu\). Hence \(\operatorname{supp}(\eta)=\{(x_{0},x_{0})\}\). Next, assume that \(x_{0}\neq y_{0}\). In that case, \(q(x_{0},y_{0})\) cannot be equal to both \(x_{0}\) and \(y_{0}\) at the same time. This means that we can proceed just as in Case I (at least in one direction). The only difference is that we have already located one of the cut points - there is no need to apply the procedure twice. From the proof provided, we can deduce yet another significant conclusion. **Corollary 3.8**.: _If \(\eta\in\operatorname{ext}_{f}(\mathcal{C})\), then \(q(x,y)=0\) for all \((x,y)\in\mathcal{L}(\eta)\) and \(q(x,y)=1\) for all \((x,y)\in\mathcal{U}(\eta)\). Except for the endpoints of this axial path (which are cut points), \(\operatorname{supp}(\eta)\) consists of lower and upper out points, appearing alternately._ Proof.: Note that \(\mathcal{L}(\eta)\) and \(\mathcal{U}(\eta)\) are well defined as the representation of \(\eta\) is unique. 
The statement follows directly from the proof of Theorem 3.7. ## 4. Asymptotic estimate Equipped with the machinery developed in the previous sections, we are ready to establish the asymptotic estimate (1.3). We need to clarify how the properties of \(\operatorname{ext}_{f}(\mathcal{C})\) covered in the preceding part apply to this problem. Referring to the prior notation, we will write \[(X,Y)\in\mathcal{C}_{f}\quad\text{ or }\quad(X,Y)\in\operatorname{ext}_{f}( \mathcal{C}),\] to indicate that the distribution of a random vector \((X,Y)\) is a coherent (or an extremal coherent) measure with finite support. **Proposition 4.1**.: _For any \(\alpha>0\), we have_ \[\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\ \ =\ \sup_{(X,Y)\in \mathcal{C}_{f}}\mathbb{E}|X-Y|^{\alpha}.\] Proof.: Fix any \((X,Y)\in\mathcal{C}\). As shown in [5, 7], there exists a sequence \((X_{n},Y_{n})\in\mathcal{C}_{f}\) such that \[\max\Big{\{}|X-X_{n}|,|Y-Y_{n}|\Big{\}}\ \leq\ \frac{1}{n},\quad\text{ for all }n=1,\,2,\,\ldots \tag{4.1}\] almost surely. Consequently, by dominated convergence and (4.1), we obtain \[\mathbb{E}|X-Y|^{\alpha}\ \ =\ \lim_{n\to\infty}\mathbb{E}|X_{n}-Y_{n}|^{\alpha},\] and thus \[\mathbb{E}|X-Y|^{\alpha}\ \ \leq\ \sup_{n\in\mathbb{N}}\ \mathbb{E}|X_{n}-Y_{n}|^{\alpha} \ \leq\ \sup_{(X,Y)\in\mathcal{C}_{f}}\mathbb{E}|X-Y|^{\alpha}.\] This proves the '\(\leq\)'-inequality, while in the reversed direction it is evident. Next, we will apply the celebrated Krein-Milman theorem, see [18]. **Theorem 4.2** (Krein-Milman).: _A compact convex subset of a Hausdorff locally convex topological vector space is equal to the closed convex hull of its extreme points._ The above statement enables us to restrict the analysis of the estimate (1.3) to extremal measures. Precisely, we have the following statement. **Proposition 4.3**.: _For any \(\alpha>0\), we have_ \[\sup_{(X,Y)\in\mathcal{C}_{f}}\mathbb{E}|X-Y|^{\alpha}\ \ =\ \ \sup_{(X,Y)\in \operatorname{ext}_{f}(\mathcal{C})}\mathbb{E}|X-Y|^{\alpha}.\] Proof.: Let \(Z=C([0,1]^{2},\mathbb{R})\); then \(Z^{*}\) is the space of finite signed Borel measures with the total variation norm \(\|\cdot\|_{\mathrm{TV}}\). Let us equip \(Z^{*}\) with the topology of weak\({}^{*}\) convergence. Under this topology, \(Z^{*}\) is a Hausdorff and a locally convex space. For a fixed \(m\in\mathcal{C}_{f}\), let \[\mathcal{C}_{m}\ =\ \{m^{\prime}\in\mathcal{C}_{f}:\ \mathrm{supp}(m^{\prime}) \subseteq\mathrm{supp}(m)\}\] denote the family of coherent distributions supported on the subsets of \(\mathrm{supp}(m)\). Firstly, observe that \(\mathcal{C}_{m}\) is convex. Secondly, we can easily verify that \(\operatorname{ext}(\mathcal{C}_{m})=\mathcal{C}_{m}\cap\operatorname{ext}_{f }(\mathcal{C})\). Plainly, if \(m^{\prime}\in\mathcal{C}_{m}\) and \(m^{\prime}=\alpha\cdot m_{1}+(1-\alpha)\cdot m_{2}\) for some \(\alpha\in(0,1)\) and \(m_{1},m_{2}\in\mathcal{C}\), then \(\mathrm{supp}(m^{\prime})=\mathrm{supp}(m_{1})\cup\mathrm{supp}(m_{2})\) and we must have \(m_{1},m_{2}\in\mathcal{C}_{m}\). Hence \(\operatorname{ext}(\mathcal{C}_{m})\subset\operatorname{ext}_{f}(\mathcal{C})\), whereas \(\operatorname{ext}_{f}(\mathcal{C})\cap\mathcal{C}_{m}\subset\operatorname{ ext}(\mathcal{C}_{m})\) is obvious. Moreover, we claim that \(\mathcal{C}_{m}\) is compact in the weak\({}^{*}\) topology. Indeed, by the Banach-Alaoglu theorem, \[B_{Z^{*}}\ =\ \{\mu\in Z^{*}:\ \|\mu\|_{\mathrm{TV}}\leq 1\}\] is weak\({}^{*}\) compact. 
As \(\mathcal{C}_{m}\subset B_{Z^{*}}\), it remains to check that \(\mathcal{C}_{m}\) is weak\({}^{*}\) closed. We can write \(\mathcal{C}_{m}=\mathcal{C}\cap\mathcal{P}_{m}\), where \(\mathcal{P}_{m}\) stands for the set of all probability measures supported on the subsets of \(\mathrm{supp}(m)\). Note that \(\mathcal{P}_{m}\) is clearly weak\({}^{*}\) closed. Lastly, coherent distributions on \([0,1]^{2}\) are also weak\({}^{*}\) closed, as demonstrated in [6]. Thus, by Krein-Milman theorem, there exists a sequence \((m_{n})_{n=1}^{\infty}\) with values in \(\mathcal{C}_{m}\), satisfying \[m_{n}\ =\ \beta_{1}^{(n)}\eta_{1}^{(n)}+\beta_{2}^{(n)}\eta_{2}^{(n)}+\cdots+ \beta_{k_{n}}^{(n)}\eta_{k_{n}}^{(n)}, \tag{4.2}\] where \(\ \eta_{1}^{(n)},\ldots,\eta_{k_{n}}^{(n)}\in\operatorname{ext}(\mathcal{C}_{m}) \ \text{and}\ \beta_{1}^{(n)},\ldots,\beta_{k_{n}}^{(n)}\) are positive numbers summing up to \(1\), such that \[\int_{[0,1]^{2}}f\ \mathrm{d}m_{n}\ \longrightarrow\ \int_{[0,1]^{2}}f\ \mathrm{d}m, \tag{4.3}\] for all bounded, continuous functions \(f:[0,1]^{2}\to\mathbb{R}\). Put \(f(x,y)=|x-y|^{\alpha}\). By (4.3) and (4.2), we have \[\int_{[0,1]^{2}}|x-y|^{\alpha}\;\mathrm{d}m \leq \sup_{n\in\mathbb{N}}\int_{[0,1]^{2}}|x-y|^{\alpha}\;\mathrm{d}m_ {n}\] \[\leq \sup_{\begin{subarray}{c}n\in\mathbb{N},\\ 1\leq i\leq k_{n}\end{subarray}}\;\int_{[0,1]^{2}}|x-y|^{\alpha}\;\mathrm{d} \eta_{i}^{(n)}\] \[\leq \sup_{\eta\in\mathrm{ext}_{f}(\mathcal{C})}\int_{[0,1]^{2}}|x-y| ^{\alpha}\;\mathrm{d}\eta,\] and hence \[\sup_{(X,Y)\in\mathcal{C}_{f}}\mathbb{E}|X-Y|^{\alpha} \leq \sup_{(X,Y)\in\mathrm{ext}_{f}(\mathcal{C})}\mathbb{E}|X-Y|^{ \alpha}.\] The reverse inequality is clear. Now, we have the following significant reduction. Denote by \(\mathcal{S}\) the family of all finite sequences \(\mathbf{z}=(z_{0},z_{1},\ldots,z_{n+1}),\ n\in\mathbb{N}\), with \(z_{0}=z_{n+1}=0\), \(\sum_{i=1}^{n}z_{i}=1\) and \(z_{i}>0\) for \(i=1,2,\ldots,n\). We emphasize that \(n=n(\mathbf{z})\), the length of \(\mathbf{z}\), is also allowed to vary. In what follows, we will write \(n\) instead of \(n(\mathbf{z})\); this should not lead to any confusion. **Proposition 4.4**.: _For any \(\alpha\geq 1\), we have_ \[\sup_{(X,Y)\in\mathrm{ext}_{f}(\mathcal{C})}\mathbb{E}|X-Y|^{\alpha} = \sup_{\mathbf{z}\in\mathcal{S}}\;\sum_{i=1}^{n}z_{i}\Big{|}\frac{z_ {i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}^{\alpha}. \tag{4.4}\] Proof.: Consider an arbitrary \(\eta\in\mathrm{ext}_{f}(\mathcal{C})\) and let \((\mu,\nu)\) be its unique representation. Recall, based on Theorem 3.7, that \(\mathrm{supp}(\eta)\) is an axial path without cycles. Set \(\mathrm{supp}(\eta)=\{(x_{i},y_{i})\}_{i=1}^{n}\) and let \(q:\mathrm{supp}(\eta)\to[0,1]\) be the quotient function associated with \((\mu,\nu)\). In this setup, by (3.1) and (3.2), we can write \[\int_{[0,1]^{2}}|x-y|^{\alpha}\;\mathrm{d}\eta = \sum_{i=1}^{n}z_{i}\Big{|}\frac{q_{i-1}z_{i-1}+q_{i}z_{i}}{z_{i-1 }+z_{i}}-\frac{q_{i}z_{i}+q_{i+1}z_{i+1}}{z_{i}+z_{i+1}}\Big{|}^{\alpha}, \tag{4.5}\] where \(z_{0}=z_{n+1}=0\), \(q_{0}=q_{n+1}=0\), \[q_{i}=q(x_{i},y_{i})\quad\text{and}\quad z_{i}=\eta(x_{i},y_{i}),\quad\text{ for all }\;i=1,\,2,\,\ldots,n.\] Note that if \(n=1\), then both sides of (4.5) are equal to zero; hence \(\eta\) does not bring any contribution to (4.4). Hence, from now on, we will assume that \(n\geq 2\). 
Notice that by Corollary 3.8, the sequence \((q_{1},q_{2},\ldots,q_{n})\) is given by \[(q_{1},0,1,0,1,\ldots,q_{n})\quad\text{or}\quad(q_{1},1,0,1,0,\ldots,q_{n})\] - except for \(q_{1}\) and \(q_{n}\), \((q_{2},\ldots,q_{n-1})\) is simply an alternating binary sequence. Furthermore, the right-hand side of (4.5) is the sum of \[P(q_{1}):=\;z_{1}\Big{|}q_{1}-\frac{q_{1}z_{1}+q_{2}z_{2}}{z_{1}+z_{2}}\Big{|}^{\alpha}\;+\;z_{2}\Big{|}\frac{q_{1}z_{1}+q_{2}z_{2}}{z_{1}+z_{2}}-\frac{q_{2}z_{2}+q_{3}z_{3}}{z_{2}+z_{3}}\Big{|}^{\alpha} \tag{4.6}\] and some other terms not involving \(q_{1}\). Since \(\alpha\geq 1\), \(P\) is a convex function on \([0,1]\) and hence it is maximized by some \(q_{1}^{\prime}\in\{0,1\}\); in the case of \(P(0)=P(1)\), we choose \(q_{1}^{\prime}\) arbitrarily. Depending on \(q_{1}^{\prime}\), we shall now perform one of the following transformations \((q,z)\mapsto(\tilde{q},\tilde{z})\): a. If \(q_{1}^{\prime}\neq q_{2}\), we let \(\tilde{n}=n\), \(\tilde{q}_{1}=q_{1}^{\prime}\) and \(\tilde{q}_{i}=q_{i}\) for \(i\in\{0\}\cup\{2,3,\ldots,n+1\}\), \(\tilde{z}_{i}=z_{i}\) for \(i\in\{0,1,\ldots,n+1\}\). This operation only changes \(q_{1}\) into \(q_{1}^{\prime}\) - we increase the right-hand side of (4.5) by "correcting" the quotient function on the first atom. b. If \(q_{1}^{\prime}=q_{2}\), we take \(\tilde{n}=n-1\), \(\tilde{q}_{0}=0\), \(\tilde{z}_{0}=0\) and \[\tilde{q}_{i}=q_{i+1},\qquad\tilde{z}_{i}=\frac{z_{i+1}}{z_{2}+z_{3}+\ldots+z_{n}}\qquad\quad\mbox{ for}\quad i\in\{1,2,\ldots,\tilde{n}+1\}.\] This modification removes the first atom and rescales the remaining ones. It is easy to see that for the transformed sequences \((\tilde{q},\tilde{z})\), the right-hand side of (4.5) does not decrease. Performing a similar transformation for the last summand in (4.5) (depending on \(q_{n}^{\prime}\) and \(q_{n-1}\)) we obtain a pair of sequences \((\tilde{q},\tilde{z})\), such that \((\tilde{q}_{1},\ldots,\tilde{q}_{\tilde{n}})\) is an alternating binary sequence and \[\begin{split}\int_{[0,1]^{2}}|x-y|^{\alpha}\;\mathrm{d}\eta&\leq\sum_{i=1}^{\tilde{n}}\tilde{z}_{i}\Big{|}\frac{\tilde{q}_{i-1}\tilde{z}_{i-1}+\tilde{q}_{i}\tilde{z}_{i}}{\tilde{z}_{i-1}+\tilde{z}_{i}}-\frac{\tilde{q}_{i}\tilde{z}_{i}+\tilde{q}_{i+1}\tilde{z}_{i+1}}{\tilde{z}_{i}+\tilde{z}_{i+1}}\Big{|}^{\alpha}\\ &=\sum_{i=1}^{\tilde{n}}\tilde{z}_{i}\Big{|}\frac{\tilde{z}_{i}}{\tilde{z}_{i-1}+\tilde{z}_{i}}-\frac{\tilde{z}_{i}}{\tilde{z}_{i}+\tilde{z}_{i+1}}\Big{|}^{\alpha}\\ &\leq\sup_{\tilde{\mathbf{z}}}\;\sum_{i=1}^{n}z_{i}\Big{|}\frac{z_{i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}^{\alpha}\end{split}\] (the equality in the middle line uses the fact that \((\tilde{q}_{1},\ldots,\tilde{q}_{\tilde{n}})\) alternates between \(0\) and \(1\)), which proves the inequality '\(\leq\)' in (4.4). The reverse bound follows by a straightforward construction, involving measures with quotient functions equal to \(0\) or \(1\) (see (4.5)). We require some further notation. Given \(\alpha>0\), let \(\Phi_{\alpha}:\mathcal{S}\to[0,1]\) be defined by \[\Phi_{\alpha}(z)\ =\ \sum_{i=1}^{n}z_{i}\Big{|}\frac{z_{i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}^{\alpha}.\] By the preceding discussion, for \(\alpha\geq 1\) we have \[\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\;=\;\sup_{z\in\mathcal{S}}\Phi_{\alpha}(z),\] and our main problem amounts to the identification of \[\limsup_{\alpha\to\infty}\Big{[}\alpha\cdot\sup_{z\in\mathcal{S}}\Phi_{\alpha}(z)\Big{]}. \tag{4.7}\] It will later become clear that the \(\limsup\) in (4.7) can be replaced by an ordinary limit. We begin by making some introductory observations.
**Definition 4.5**.: Fix \(\alpha\geq 1\) and let \(\mathbf{z}=(z_{0},z_{1},\ldots,z_{n+1})\) be a generic element of \(\mathcal{S}\). For \(1\leq i\leq n\), we say that the term (component) \(z_{i}\) of \(\mathbf{z}\) is _significant_ if \[\sqrt{\alpha}\cdot z_{i-1}<z_{i}\quad\mbox{ and }\quad\sqrt{\alpha}\cdot z_{i}<z_{i+1},\] or \[z_{i-1}>\sqrt{\alpha}\cdot z_{i}\quad\mbox{ and }\quad z_{i}>\sqrt{\alpha}\cdot z_{i+1}.\] The set of all significant components of \(\mathbf{z}\) will be denoted by \(\phi_{\alpha}(\mathbf{z})\). Whenever a component \(z_{i}\) of \(\mathbf{z}\) (\(1\leq i\leq n\)) is not significant, we say that \(z_{i}\) is _negligible_. The terms \(z_{0}\) and \(z_{n+1}\) will be treated as neither significant nor negligible. Now we will show that the contribution of all negligible terms of \(z\) to the total sum \(\Phi_{\alpha}(z)\) vanishes in the limit \(\alpha\to\infty\). Precisely, we have the following. **Proposition 4.6**.: _For \(\alpha\geq 1\) and \(z\in\mathcal{S}\), we have_ \[\Phi_{\alpha}(z)\ \leq\ \Psi_{\alpha}(z)\;+\;\Big{|}1-\frac{1}{1+\sqrt{\alpha}}\Big{|}^{\alpha},\] _where \(\Psi_{\alpha}:\mathcal{S}\to[0,1]\) is defined by_ \[\Psi_{\alpha}(z)\ =\ \sum_{z_{i}\in\phi_{\alpha}(z)}z_{i}\Big{|}\frac{z_{i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}^{\alpha}.\] Proof.: Since \(z_{1}+z_{2}+\cdots+z_{n}=1\), it is sufficient to show that \[\Big{|}\frac{z_{i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}\;\;\leq\;\;\Big{|}1-\frac{1}{1+\sqrt{\alpha}}\Big{|}, \tag{4.8}\] for all negligible components \(z_{i}\). Assume that (4.8) does not hold. Since the ratios \(z_{i}/(z_{i-1}+z_{i})\) and \(z_{i}/(z_{i}+z_{i+1})\) take values in \([0,1]\), we must have \[\min\Big{\{}\frac{z_{i}}{z_{i-1}+z_{i}},\frac{z_{i}}{z_{i}+z_{i+1}}\Big{\}}\ <\ \frac{1}{1+\sqrt{\alpha}} \tag{4.9}\] and \[\max\Big{\{}\frac{z_{i}}{z_{i-1}+z_{i}},\frac{z_{i}}{z_{i}+z_{i+1}}\Big{\}}\ >\ \frac{\sqrt{\alpha}}{1+\sqrt{\alpha}}. \tag{4.10}\] It remains to note that a component \(z_{i}\) fulfilling (4.9) and (4.10) is significant. It is also useful to consider some special arrangements consisting of three successive components \((z_{i-1},z_{i},z_{i+1})\) of the generic sequence \(z\in\mathcal{S}\). **Definition 4.7**.: Let \(\mathbf{z}=(z_{0},z_{1},\ldots,z_{n+1})\) be an element of \(\mathcal{S}\). For \(1\leq i\leq n\), we say that a subsequence \((z_{i-1},z_{i},z_{i+1})\) of \(\mathbf{z}\) is

* a _split_, if \(z_{i-1}>z_{i}<z_{i+1}\),
* a _peak_, if \(z_{i-1}<z_{i}>z_{i+1}\).

In what follows, let \(\mathcal{S}^{\prime}\) be the subset of all those \(z\in\mathcal{S}\) which satisfy:

1. \(z_{i-1}\neq z_{i}\,\) for all \(\,i\in\{1,2,\ldots,n+1\}\),
2. there are no split subsequences in \(z\),
3. there is exactly one peak in \(z\),
4. there is exactly one negligible component \(z_{j_{0}}\) in \(z\), and \(z_{j_{0}}\) is the center of the unique peak \((z_{j_{0}-1},z_{j_{0}},z_{j_{0}+1})\).

**Proposition 4.8**.: _For \(\alpha\geq 1\), we have_ \[\sup_{z\in\mathcal{S}}\Psi_{\alpha}(z)\ \leq\ \sup_{z\in\mathcal{S}^{\prime}}\Psi_{\alpha}(z).\] Proof.: Let us start by outlining the structure of the proof. Pick an arbitrary \(z\in\mathcal{S}\).
We will gradually improve \(z\) by a series of subsequent combinatorial reductions \[z\longrightarrow z^{(1)}\longrightarrow z^{(2)}\longrightarrow z^{(3)}\longrightarrow z^{(4)},\] such that \[\Psi_{\alpha}(z)\;\leq\;\Psi_{\alpha}(z^{(i)})\;\leq\;\Psi_{\alpha}(z^{(j)})\quad\text{ for }\quad 1\leq i\leq j\leq 4,\] and \(z^{(i)}\) will satisfy the requirements from \(1.\) to \(i.\) in the definition of \(\mathcal{S}^{\prime}\). This will give \(\Psi_{\alpha}(z)\leq\Psi_{\alpha}(z^{(4)})\) for some \(z^{(4)}\in\mathcal{S}^{\prime}\) and the claim will be proved. 1. \(z\to z^{(1)}\). Put \(z=(z_{0},z_{1},\ldots,z_{n+1})\). If \(z_{i-1}\neq z_{i}\) for all \(i\in\{1,2,\ldots,n+1\}\), then we are done. Otherwise, let \(i_{0}\) be the smallest index without this property. As \(z_{0}=0\) and \(z_{1}\) is strictly positive, we must have \(i_{0}>1\). Analogously, we have \(i_{0}<n+1\). Consequently, observe that \(z_{i_{0}-1}\) and \(z_{i_{0}}\) are negligible. Examine the transformation \(z\mapsto\tilde{z}\), \[(\ldots,z_{i_{0}-1},z_{i_{0}},z_{i_{0}+1},\ldots)\ \longrightarrow\ w^{-1}\cdot(\ldots,z_{i_{0}-1},z_{i_{0}+1},\ldots),\qquad w\ =\ 1-z_{i_{0}}, \tag{4.11}\] which removes \(z_{i_{0}}\) and rescales the remaining elements. If \(z_{i_{0}+1}\in\phi_{\alpha}(z)\), then \(w^{-1}z_{i_{0}+1}\) will remain a significant component of \(\tilde{z}\). The contribution of \(z_{i_{0}+1}\) (and all the other significant components of \(z\)) to the overall sum will grow by a factor of \(w^{-1}>1\). The contribution of \(z_{i_{0}-1}\) to \(\Psi_{\alpha}(z)\) is zero and it can only increase if \(z_{i_{0}-1}\) becomes significant. Therefore \(\Psi_{\alpha}(z)\leq\Psi_{\alpha}(\tilde{z})\). After a finite number of such operations, we obtain a sequence \(z^{(1)}\) for which \(1.\) holds. 2. \(z^{(1)}\to z^{(2)}\). Set \(z^{(1)}=(z^{(1)}_{i})_{i=0}^{n+1}\) and suppose that \((z^{(1)}_{i_{0}-1},z^{(1)}_{i_{0}},z^{(1)}_{i_{0}+1})\) is a split for some \(i_{0}\in\{2,3,\ldots,n-1\}\) - by the definition of split configuration, \(i_{0}\) must be greater than \(1\) and smaller than \(n\). Accordingly, note that \(z^{(1)}_{i_{0}}\) is negligible and consider the preliminary modification \(z^{(1)}\mapsto\hat{z}^{(1)}\) given by \[(\ldots,z^{(1)}_{i_{0}-1},z^{(1)}_{i_{0}},z^{(1)}_{i_{0}+1},\ldots)\ \longrightarrow\ (\ldots,z^{(1)}_{i_{0}-1},0,z^{(1)}_{i_{0}+1},\ldots),\] which changes \(z^{(1)}_{i_{0}}\) into \(0\) (so \(\hat{z}^{(1)}\not\in\mathcal{S}\): we will handle this later). As \(z^{(1)}_{i_{0}-1}>z^{(1)}_{i_{0}}\), we have \[\Bigg{|}\frac{z^{(1)}_{i_{0}-1}}{z^{(1)}_{i_{0}-2}+z^{(1)}_{i_{0}-1}}-\frac{z^{(1)}_{i_{0}-1}}{z^{(1)}_{i_{0}-1}+z^{(1)}_{i_{0}}}\Bigg{|}\ \ <\ \ \Bigg{|}\frac{z^{(1)}_{i_{0}-1}}{z^{(1)}_{i_{0}-2}+z^{(1)}_{i_{0}-1}}-1\Bigg{|}, \tag{4.12}\] provided that \(z^{(1)}_{i_{0}-1}\in\phi_{\alpha}(z^{(1)})\). Similarly, as \(z^{(1)}_{i_{0}}<z^{(1)}_{i_{0}+1}\), we get \[\Bigg{|}\frac{z^{(1)}_{i_{0}+1}}{z^{(1)}_{i_{0}}+z^{(1)}_{i_{0}+1}}-\frac{z^{(1)}_{i_{0}+1}}{z^{(1)}_{i_{0}+1}+z^{(1)}_{i_{0}+2}}\Bigg{|}\ \ <\ \ \Bigg{|}1-\frac{z^{(1)}_{i_{0}+1}}{z^{(1)}_{i_{0}+1}+z^{(1)}_{i_{0}+2}}\Bigg{|}, \tag{4.13}\] as long as \(z^{(1)}_{i_{0}+1}\in\phi_{\alpha}(z^{(1)})\). By (4.12) and (4.13), with a slight abuse of notation (the domain of \(\Psi_{\alpha}\) formally does not contain \(\hat{z}^{(1)}\), but we may extend the definition of \(\Psi_{\alpha}(\hat{z}^{(1)})\) in a straightforward way), we can write \(\Psi_{\alpha}(z^{(1)})\leq\Psi_{\alpha}(\hat{z}^{(1)})\).
Now, let us denote \[\hat{z}^{(1,\leftarrow)}\ =\ (0,\hat{z}^{(1)}_{1},\ldots,\hat{z}^{(1)}_{i_{0}-1},0)\] and \[\hat{z}^{(1,\rightarrow)}\ =\ (0,\hat{z}^{(1)}_{i_{0}+1},\ldots,\hat{z}^{(1)}_{n},0).\] In other words, sequences \(\hat{z}^{(1,\leftarrow)}\) and \(\hat{z}^{(1,\rightarrow)}\) are two consecutive parts of \(\hat{z}^{(1)}\) and we can restore \(\hat{z}^{(1)}\) by glueing their corresponding zeroes together. Moreover, after normalizing them by the weights \[w^{(1,\leftarrow)}=\sum_{i=1}^{i_{0}-1}\hat{z}^{(1)}_{i}\ \ \ \ \mbox{and}\ \ \ \ w^{(1,\rightarrow)}=\sum_{i=i_{0}+1}^{n}\hat{z}^{(1)}_{i},\] we get \((w^{(1,\leftarrow)})^{-1}\hat{z}^{(1,\leftarrow)}\), \((w^{(1,\rightarrow)})^{-1}\hat{z}^{(1,\rightarrow)}\in\mathcal{S}\). Next, in this setup, we are left with \[\Psi_{\alpha}(\hat{z}^{(1)}) = w^{(1,\leftarrow)}\cdot\Psi_{\alpha}\left(\frac{\hat{z}^{(1, \leftarrow)}}{w^{(1,\leftarrow)}}\right)\] \[+ w^{(1,\rightarrow)}\cdot\Psi_{\alpha}\left(\frac{\hat{z}^{(1, \rightarrow)}}{w^{(1,\rightarrow)}}\right)\] \[\leq \max\left\{\Psi_{\alpha}\left(\frac{\hat{z}^{(1,\leftarrow)}}{w^ {(1,\leftarrow)}}\right),\Psi_{\alpha}\left(\frac{\hat{z}^{(1,\rightarrow)}}{ w^{(1,\rightarrow)}}\right)\right\},\] where we have used \(w^{(1,\leftarrow)}+w^{(1,\rightarrow)}=1.\) Let \[\tilde{z}^{(1)}\ =\ \arg\max\left\{\Psi_{\alpha}(z):\ z\in\left\{\frac{\hat{z}^{(1, \leftarrow)}}{w^{(1,\leftarrow)}},\frac{\hat{z}^{(1,\rightarrow)}}{w^{(1, \rightarrow)}}\right\}\right\}.\] By the construction, we have \(\Psi_{\alpha}(z^{(1)})\leq\Psi_{\alpha}(\tilde{z}^{(1)})\), the new sequence \(\tilde{z}^{(1)}\) is shorter than \(z^{(1)}\) and \(\tilde{z}^{(1)}\) contains less split configurations than \(z^{(1)}\). After repeating this procedure (\(z^{(1)}\mapsto\tilde{z}^{(1)}\)) multiple times, we acquire a new sequence \(z^{(2)}\) obeying \(1.\) and \(2.\) 3. \(z^{(2)}\to z^{(3)}\). Surprisingly, it is enough to put \(z^{(3)}=z^{(2)}\). Indeed, we can show that sequence \(z^{(2)}\) already satisfies the third condition. First, suppose that \((z^{(2)}_{j_{0}-1},z^{(2)}_{j_{0}},z^{(2)}_{j_{0}+1})\) and \((z^{(2)}_{j_{1}-1},z^{(2)}_{j_{1}},z^{(2)}_{j_{1}+1})\) are two different peaks with indices \(j_{0}<j_{1}\). Hence, as \(z^{(2)}_{j_{0}}>z^{(2)}_{j_{0}+1}\) and \(z^{(2)}_{j_{1}-1}<z^{(2)}_{j_{1}}\), there is at least one point \(i_{0}\in\{j_{0}+1,\ldots,j_{1}-1\}\) at which we are forced to "flip" the direction of the previous inequality sign: \[z^{(2)}_{j_{0}-1}<z^{(2)}_{j_{0}}>z^{(2)}_{j_{0}+1}>\cdots>z^{(2)}_{i_{0}}< \cdots<z^{(2)}_{j_{1}-1}<z^{(2)}_{j_{1}}>z^{(2)}_{j_{1}+1}.\] Equivalently, this means that \((z^{(2)}_{i_{0}-1},z^{(2)}_{i_{0}},z^{(2)}_{i_{0}+1})\) is a split configuration. This contradicts our initial assumptions about \(z^{(2)}\) (the requirement \(2.\) is not met) and proves that there is at most one peak in \(z^{(2)}.\) Second, we have \[0=z^{(2)}_{0}<z^{(2)}_{1}\quad\text{and}\quad z^{(2)}_{n}>z^{(2)}_{n+1}=0,\] so there exists a point \(j_{0}\) at which the direction of the inequalities must be changed from '\(<\)' to '\(>\)'. Thus, there is at least one peak in \(z^{(2)}.\) 4. \(z^{(3)}\to z^{(4)}.\) Let \(z^{(3)}=(z^{(3)}_{i})^{n+1}_{i=0}\) and assume that \((z^{(3)}_{j_{0}-1},z^{(3)}_{j_{0}},z^{(3)}_{j_{0}+1})\) is the unique peak of \(z^{(3)}\): \[0<z^{(3)}_{1}<\cdots<z^{(3)}_{j_{0}-1}<z^{(3)}_{j_{0}}>z^{(3)}_{j_{0}+1}> \cdots>z^{(3)}_{n}>0. \tag{4.14}\] Further reasoning is similar to the previous ones (from points \(1.\) and \(2.\)), so we will just sketch it. 
If the requirement \(4.\) is not satisfied, pick a negligible component \(z^{(3)}_{i_{0}}\) with \(i_{0}\neq j_{0}\). Next, apply the transformation \(z^{(3)}\mapsto\tilde{z}^{(3)}\) defined by (4.11), i.e. remove \(z^{(3)}_{i_{0}}\) and rescale the remaining components. Thanks to the 'single peak' structure (4.14), all the significant components of \(z^{(3)}\) remain significant for \(\tilde{z}^{(3)}\). The terms associated with components \(z^{(3)}_{i}\in\phi_{\alpha}(z^{(3)})\backslash\{z^{(3)}_{i_{0}-1},z^{(3)}_{i_{0}+1}\}\) are not changed (and their contribution grows after the rescaling). The summands corresponding to \(z^{(3)}_{i_{0}-1}\) and \(z^{(3)}_{i_{0}+1}\) can only increase, just as in (4.12) and (4.13). Therefore \(\Psi_{\alpha}(z^{(3)})\leq\Psi_{\alpha}(\tilde{z}^{(3)})\). After several repetitions and discarding of all unnecessary negligible components (beyond the central \(z_{j_{0}}\)), we finally obtain the desired sequence \(z^{(4)}\in\mathcal{S}^{\prime}\). We proceed to the proof of our main result. Proof of Theorem 1.3.: We start with the lower estimate, for which the argument is simpler. By Proposition 4.4 and reformulation (4.7), for \(\alpha>2\) we have \[\alpha\cdot\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\ =\ \alpha\cdot\sup_{z\in\mathcal{S}}\Phi_{\alpha}(z)\ \geq\ \alpha\cdot\Phi_{\alpha}\left(0,\frac{1}{\alpha},\frac{\alpha-2}{\alpha},\frac{1}{\alpha},0\right)\ =\ \alpha\cdot\frac{2}{\alpha}\left|1-\frac{1}{\alpha-1}\right|^{\alpha}\xrightarrow{\alpha\to\infty}\ \frac{2}{e}.\] Now we turn our attention to the upper estimate. By Propositions 4.6 and 4.8, we get \[\alpha\cdot\sup_{(X,Y)\in\mathcal{C}}\mathbb{E}|X-Y|^{\alpha}\ \leq\ \alpha\cdot\left(\left|1-\frac{1}{1+\sqrt{\alpha}}\right|^{\alpha}\ +\ \sup_{z\in\mathcal{S}^{\prime}}\Psi_{\alpha}(z)\right).\] Next, because of \[\lim_{\alpha\to\infty}\alpha\cdot\left|1-\frac{1}{1+\sqrt{\alpha}}\right|^{\alpha}\ =\ 0,\] it is enough to provide an asymptotic estimate for \(\alpha\cdot\sup_{z\in\mathcal{S}^{\prime}}\Psi_{\alpha}(z)\). Fix an arbitrary \(z=(z_{0},z_{1},\ldots,z_{n+1})\in\mathcal{S}^{\prime}\) and let \(z_{j_{0}}\) be the center of the unique peak contained in \(z\): \[0<z_{1}<\cdots<z_{j_{0}-1}<z_{j_{0}}>z_{j_{0}+1}>\cdots>z_{n}>0.\] As \(z_{j_{0}}\) is the only negligible component contained in \(z\), we have \[\sqrt{\alpha}\cdot z_{i}<z_{i+1}\quad\text{ for }\quad 1\leq i\leq j_{0}-1,\] and \[z_{i-1}>\sqrt{\alpha}\cdot z_{i}\quad\text{ for }\quad j_{0}+1\leq i\leq n.\] In particular, we get \(0\leq z_{j_{0}-1},z_{j_{0}+1}<1/\sqrt{\alpha}\). Consequently, we can write \(\Psi_{\alpha}(z)=A+B+C\), where \[A\ =\ \sum_{|i-j_{0}|>2}z_{i}\Big{|}\frac{z_{i}}{z_{i-1}+z_{i}}-\frac{z_{i}}{z_{i}+z_{i+1}}\Big{|}^{\alpha},\] \[B\ =\ z_{j_{0}-2}\Big{|}\frac{z_{j_{0}-2}}{z_{j_{0}-3}+z_{j_{0}-2}}-\frac{z_{j_{0}-2}}{z_{j_{0}-2}+z_{j_{0}-1}}\Big{|}^{\alpha}+z_{j_{0}+2}\Big{|}\frac{z_{j_{0}+2}}{z_{j_{0}+1}+z_{j_{0}+2}}-\frac{z_{j_{0}+2}}{z_{j_{0}+2}+z_{j_{0}+3}}\Big{|}^{\alpha}\] and \[C\ =\ z_{j_{0}-1}\Big{|}\frac{z_{j_{0}-1}}{z_{j_{0}-2}+z_{j_{0}-1}}-\frac{z_{j_{0}-1}}{z_{j_{0}-1}+z_{j_{0}}}\Big{|}^{\alpha}\ +\ z_{j_{0}+1}\Big{|}\frac{z_{j_{0}+1}}{z_{j_{0}}+z_{j_{0}+1}}-\frac{z_{j_{0}+1}}{z_{j_{0}+1}+z_{j_{0}+2}}\Big{|}^{\alpha}.\] We will examine these three parts separately.
_The term \(A\)._ Since \(z_{i}/(z_{i-1}+z_{i})\) and \(z_{i}/(z_{i}+z_{i+1})\) belong to \([0,1]\), we may write \[A\ \leq\ \sum_{i=1}^{j_{0}-3}z_{i}\ +\ \sum_{i=j_{0}+3}^{n}z_{i}\ <\ z_{j_{0}-3}\cdot\sum_{i=0}^{j_{0}-4}\left(\frac{1}{\sqrt{\alpha}}\right)^{i}\ +\ z_{j_{0}+3}\cdot\sum_{i=0}^{n-j_{0}-3}\left(\frac{1}{\sqrt{\alpha}}\right)^{i}\ <\ (z_{j_{0}-1}+z_{j_{0}+1})\cdot\frac{1}{\alpha}\cdot\sum_{i=0}^{\infty}\left(\frac{1}{\sqrt{\alpha}}\right)^{i}\ <\ \frac{2}{\alpha\sqrt{\alpha}}\cdot\sum_{i=0}^{\infty}\left(\frac{1}{\sqrt{\alpha}}\right)^{i}\ =\ \frac{2}{\alpha(\sqrt{\alpha}-1)}\] and hence \[\alpha\cdot A\ <\ \frac{2}{\sqrt{\alpha}-1}\ \xrightarrow{\alpha\to\infty}\ 0.\] _The term \(B\)._ We have \[B\ \leq\ z_{j_{0}-2}\Big{|}1-\frac{z_{j_{0}-2}}{z_{j_{0}-2}+z_{j_{0}-1}}\Big{|}^{\alpha}\ +\ z_{j_{0}+2}\Big{|}\frac{z_{j_{0}+2}}{z_{j_{0}+1}+z_{j_{0}+2}}-1\Big{|}^{\alpha}\ <\ z_{j_{0}-2}\left|1-\frac{z_{j_{0}-2}}{z_{j_{0}-2}+\frac{1}{\sqrt{\alpha}}}\right|^{\alpha}\ +\ z_{j_{0}+2}\left|\frac{z_{j_{0}+2}}{\frac{1}{\sqrt{\alpha}}+z_{j_{0}+2}}-1\right|^{\alpha}\ \leq\ 2\cdot\sup_{x\in[0,1]}x\left|1-\frac{x}{x+\frac{1}{\sqrt{\alpha}}}\right|^{\alpha}\ =\ \frac{2}{\sqrt{\alpha}(\alpha-1)}\cdot\left(1-\frac{1}{\alpha}\right)^{\alpha}.\] This yields \[\alpha\cdot B\ <\ \frac{2\sqrt{\alpha}}{\alpha-1}\cdot\left(1-\frac{1}{\alpha}\right)^{\alpha}\ \xrightarrow{\alpha\to\infty}\ 0.\] _The term \(C\)._ Finally, we observe that \[C\ \leq\ z_{j_{0}-1}\Big{|}1-\frac{z_{j_{0}-1}}{z_{j_{0}-1}+z_{j_{0}}}\Big{|}^{\alpha}\ +\ z_{j_{0}+1}\Big{|}\frac{z_{j_{0}+1}}{z_{j_{0}}+z_{j_{0}+1}}-1\Big{|}^{\alpha}\ \leq\ z_{j_{0}-1}\left|1-z_{j_{0}-1}\right|^{\alpha}\ +\ z_{j_{0}+1}\left|z_{j_{0}+1}-1\right|^{\alpha}\ \leq\ 2\cdot\sup_{x\in[0,1]}x\left|1-x\right|^{\alpha}\ =\ \frac{2}{\alpha+1}\cdot\left(1-\frac{1}{\alpha+1}\right)^{\alpha}.\] Consequently, we obtain \[\alpha\cdot C\ \leq\ \frac{2\alpha}{\alpha+1}\cdot\left(1-\frac{1}{\alpha+1}\right)^{\alpha}\ \xrightarrow{\alpha\to\infty}\ \frac{2}{e}.\] The estimates for \(A\), \(B\) and \(C\) give the desired upper bound. The proof is complete.
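The lower-bound witness from the proof above is also easy to explore numerically. The following minimal Python sketch (the function name is ours) evaluates \(\alpha\cdot\Phi_{\alpha}\) at the sequence \(\big(0,\frac{1}{\alpha},\frac{\alpha-2}{\alpha},\frac{1}{\alpha},0\big)\) and illustrates the convergence to \(2/e\) asserted by Theorem 1.3:

```python
# Numerical illustration of the asymptotics in Theorem 1.3.
import math

def phi(z, alpha):
    # Phi_alpha(z) for z = (z_0, z_1, ..., z_{n+1}) with z_0 = z_{n+1} = 0
    return sum(z[i] * abs(z[i] / (z[i - 1] + z[i])
                          - z[i] / (z[i] + z[i + 1])) ** alpha
               for i in range(1, len(z) - 1))

for alpha in [10, 100, 1000, 10000]:
    z = (0.0, 1 / alpha, (alpha - 2) / alpha, 1 / alpha, 0.0)
    print(alpha, alpha * phi(z, alpha))   # approaches 2/e from below
print("2/e =", 2 / math.e)
```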
2310.13971
A well-balanced second-order finite volume approximation for a coupled system of granular flow
A well-balanced second-order finite volume scheme is proposed and analyzed for a $2 \times 2$ system of non-linear partial differential equations which describes the dynamics of growing sandpiles created by a vertical source on a flat, bounded rectangular table in multiple dimensions. To derive a second-order scheme, we combine a MUSCL-type spatial reconstruction with a strong stability preserving Runge-Kutta time stepping method. The resulting scheme is ensured to be well-balanced through a modified limiting approach that allows the scheme to reduce to a well-balanced first-order scheme near the steady state while maintaining the second-order accuracy away from it. The well-balanced property of the scheme is proven analytically in one dimension and demonstrated numerically in two dimensions. Additionally, numerical experiments reveal that the second-order scheme reduces finite time oscillations, takes fewer time iterations for achieving the steady state and gives sharper resolutions of the physical structure of the sandpile, as compared to the existing first-order schemes of the literature.
Aekta Aggarwal, Veerappa Gowda G. D., Sudarshan Kumar K
2023-10-21T11:15:33Z
http://arxiv.org/abs/2310.13971v2
# A well-balanced second-order finite volume approximation for a coupled system of granular flow ###### Abstract A second-order finite volume scheme is proposed and analyzed for a \(2\times 2\) system of non-linear partial differential equations. These equations model the dynamics of growing sandpiles created by a vertical source on a flat, bounded rectangular table in multiple dimensions. The well-balancedness of the scheme is ensured through a modified limitation approach allowing the scheme to reduce to a well-balanced first-order scheme near the steady state while maintaining the second-order accuracy away from it. The well-balanced property of the scheme is proven analytically in one dimension and demonstrated numerically in two dimensions. It is also shown through the numerical experiments that the second-order scheme reduces the finite time oscillations, takes fewer time iterations for achieving the steady state and gives sharper resolutions of the physical structure of the sandpile, as compared to the first-order schemes existing in the literature. _Keywords:_ Hamilton-Jacobi, Well-Balanced, Discontinuous Flux, Sandpile, Balance Laws ## 1 Introduction The study of the dynamics of granular matter has been gaining interest among applied mathematicians in the last few years. A wide array of models can be found in the literature, ranging from kinetic models to hyperbolic differential equations. For a detailed discussion of these models, refer to [23] and the references therein. This area of research has seen numerous endeavors focused on the theoretical aspects of differential equations, as evidenced by works such as [13, 14, 15, 17, 19, 31, 28]. Additionally, considerable efforts have been devoted to numerically approximating these proposed models, as seen in [21, 22, 1, 3]. In this work, our focus is on the model equations introduced in [28], commonly referred to as the Hadeler and Kuttler (**HK**) model. This model comprises a coupled system of non-linear partial differential equations and is widely recognized for describing the evolution of a sandpile formed by pouring dry sand grains onto a flat and bounded table surface denoted as \(\Omega\). The sandpile's evolution is governed by these equations under the influence of a time-independent non-negative vertical source represented by \(f\in L^{1}(\Omega)\). It is assumed that all sand grains are uniform in size, thus disregarding phenomena like segregation or pattern formation. Additionally, external factors such as wind or stress fields within the bulk of the medium are not taken into account. The **(HK)** model reads as: \[u_{t}=(1-|\nabla u|)v\quad\text{in }\ \Omega\times(0,T], \tag{1}\] \[v_{t}-\nabla.(v\nabla u)=-(1-|\nabla u|)v+f\quad\text{in }\ \Omega\times(0,T], \tag{2}\] \[u(\mathbf{x},0)=u_{0}(\mathbf{x}),\;\;v(\mathbf{x},0)=v_{0}(\mathbf{x})\quad\text{in }\ \Omega, \tag{3}\] where \(u(\mathbf{x},t)\) denotes the local height of the pile containing the grains at rest and is called the _standing_ layer, and \(v(\mathbf{x},t)\) denotes the _rolling_ layer, formed only by the grains that roll on the surface of the pile until they are captured by the standing layer. Further, the boundary \(\partial\Omega\) can be split into two parts: \(\Gamma_{o}\), an open non-empty subset of \(\partial\Omega\) where the sand can fall down from the table, and \(\Gamma_{w}=\partial\Omega\setminus\Gamma_{o}\), where the sand is blocked by a wall.
From a modelling point of view, a wall of arbitrary height can be imagined on \(\Gamma_{w}\) so that no sand can trespass this wall, while on \(\Gamma_{o}\), the table is "open". If \(\Gamma_{o}=\partial\Omega\), then the problem is called the _open table problem_; otherwise it is called the _partially open table problem_. The system (1)-(3) is supplemented with the following boundary conditions: \[u=0\;\;\;\text{in}\;\;\;\Gamma_{o},\;\;v\frac{\partial u}{\partial\nu}=0\;\;\;\text{in}\;\;\;\;\Gamma_{w}; \tag{4}\] a detailed discussion can be found in [28, 21, 1, 3]. For stability reasons, \(|\nabla u|\) cannot exceed 1. Moreover, at any equilibrium, the profile of \(|\nabla u|\) must be maximal where transport occurs (that is, where \(v>0\)). The exchange of the grains between the two layers occurs through an exchange term \((|\nabla u|-1)v\), which is independent of the slope orientation and can be characterized as erosion/deposition. The equilibrium of the system (1)-(4) is given by: \[\begin{array}{rl}-\nabla.(v\nabla u)&=f\;\;\;\text{in}\;\Omega,\\ \left|\nabla u\right|&=1\;\;\;\text{on}\;\{v>0\},\\ \left|\nabla u\right|&\leq 1,\;\;\;u,v\geq 0\;\;\text{in}\;\;\;\Omega,\\ u&=0\;\;\;\;\text{in}\;\;\Gamma_{o},\;v\frac{\partial u}{\partial\nu}=0\;\;\text{in}\;\;\Gamma_{w}.\end{array} \tag{5}\] A complete mathematical theory for the existence of solutions of the (**HK**) model at finite time and at equilibrium is still not completely settled and is not covered by the standard existence and uniqueness results available for hyperbolic balance laws; see [5, 6, 33, 28, 13, 14, 15] for some limited results. There has also been some recent interest in the slow erosion limit of the model in one dimension, see [16, 7, 18, 27, 10] and references therein. There have been numerous studies in the last decade to devise robust numerical schemes approximating the (**HK**) model, with the ability to preserve the discrete steady states and the physical properties of the model efficiently, see [21, 23, 1, 3]. In this context, finite difference schemes capturing the discrete steady states were proposed and analyzed in [21, 23], and well-balanced finite volume schemes were developed in [1, 3] using the basic principles of conservation laws with discontinuous flux. The class of finite volume schemes based on conservation laws with discontinuous flux has been used in the last decade for various real life applications (see [11, 12, 34, 2]). The schemes proposed in [1, 3] were shown to be well-balanced and capable of capturing the sharp crests at the equilibrium state more efficiently than existing methods. However, these schemes exhibit oscillations near the initial condition, persisting for a significant duration, and this leads to a delay in reaching the steady state. These issues give rise to an important question: is it possible to control these oscillations and reduce the time in reaching the steady state by moving to a high-order scheme? Simultaneously, it also puts forth the question of whether such high-order schemes could result in a sharper resolution of the discrete steady state. In various scenarios, well-balanced schemes for hyperbolic systems have been of keen interest for the past few decades, see [8, 26, 25, 29, 9, 30, 24, 20] and references therein. It has been observed that capturing moving steady states or those with complex structures, like that of (5), can be a challenging task in general. To the best of our knowledge, there have been no studies on second-order schemes for (5) in the existing literature.
However, in the case of the shallow water equations, second-order schemes were proposed and analyzed in [30, 24, 20]. It has been noted in these studies that the second-order extensions may not inherently possess the well-balanced property, and an adaptation algorithm is required to ensure it. In this work, to derive a second-order scheme we employ a MUSCL-type spatial reconstruction [35] along with a strong stability preserving Runge-Kutta time stepping method [2], which is essentially an extension of the first-order scheme of [1, 3]. In §4, we illustrate that this second-order scheme is not well-balanced for the state variable \(v\). To overcome this difficulty, we modify the proposed second-order scheme with an adaptation procedure similar to that of [20], to develop a well-balanced second-order scheme. The procedure involves a modified limitation strategy in the linear reconstruction of the approximate solution at each time step. We establish that the resulting scheme is well-balanced and able to accurately capture the discrete steady states; notably, it exhibits reduced oscillations at the beginning, reaching the steady state faster than the first-order scheme.

The rest of this paper is organized as follows: In Section 2, we focus on deriving the second-order numerical scheme and provide a concise overview of the first-order numerical scheme proposed in [1, 3]. The stability analysis for the second-order scheme is presented in Section 3. The discussion of the well-balanced property of the second-order scheme is outlined in Section 4. In Section 5, we elucidate the second-order adaptive scheme and analytically establish its well-balancedness. Section 6 deals with the extension of the first-order scheme from one dimension to two dimensions, along with the adaptation procedure in the two-dimensional context. In Section 7, we provide numerical examples in both one and two dimensions to showcase the performance of the proposed second-order adaptive scheme in comparison to the non-adaptive second-order scheme and the first-order schemes of [1, 3]. Finally, we draw our conclusions in §8.

## 2 Numerical schemes in one-dimension

We now present the numerical algorithm approximating (1)-(3). First, we briefly review the first-order finite volume schemes of [1, 3] and then present the second-order scheme. Let \(\Omega=[0,1]\). As in [1], we rewrite (1)-(3) as follows:

\[u_{t}+F^{\alpha}(\alpha,v) = 0 \tag{6}\]
\[v_{t}+F^{v}(\alpha,v,B)_{x}-F^{\alpha}(\alpha,v) = 0 \tag{7}\]

where \(\alpha=u_{x}\), \(B(x)=\int_{0}^{x}f(y)dy\), \(F^{\alpha}(\alpha,v)=(|\alpha|-1)v\), and \(F^{v}(\alpha,v,B)=-\alpha v-B(x)\). For \(\Delta x,\Delta t>0\), and \(\lambda:=\Delta t/\Delta x\), consider equidistant spatial grid points \(x_{i+\frac{1}{2}}:=i\Delta x\) for non-negative integers \(i\in\{0,1,\ldots,M\}=:\mathcal{M}\) and temporal grid points \(t^{n}:=n\Delta t\) for non-negative integers \(n\in\{0,1,\ldots,N_{T}\}=:\mathcal{N}_{T}\), such that \(x_{1/2}=0\), \(x_{M+\frac{1}{2}}=1\) and \(T=t^{N_{T}}\). Let \(\chi_{i}(x)\) denote the indicator function of \(C_{i}:=[x_{i-1/2},x_{i+1/2})\), where \(x_{i}:=0.5(x_{i-1/2}+x_{i+1/2})\) denotes the cell centre, and let \(\chi^{n}(t)\) denote the indicator function of \(C^{n}:=[t^{n},t^{n+1})\). Let \(C_{i}^{n}=C_{i}\times C^{n}\). Let \(u_{i+1/2}^{n}\) be an approximation of the solution \(u\) calculated at the grid point \(x_{i+1/2}\) at time \(t^{n}\).
For each \((i,n)\in\mathcal{M}\times\mathcal{N}_{T}\), define

\[\alpha_{i}^{n}:=\frac{u_{i+1/2}^{n}-u_{i-1/2}^{n}}{\Delta x},\ v_{i}^{n}:=\frac{1}{\Delta x}\int_{C_{i}}v(x,t^{n})dx,\]

as the approximations of \(\alpha,v\) in the cell \(C_{i}^{n}\).

### First-order scheme

We revisit the first-order schemes formulated in [1, 3]. The first-order finite volume scheme for the system (1)-(3) is given by

\[u_{i+1/2}^{n+1} =u_{i+1/2}^{n}-\Delta tG_{i+1/2}^{n},\quad(i,n)\in(\mathcal{M}\setminus\{M,0\})\times\mathcal{N}_{T},\]
\[v_{i}^{n+1} =v_{i}^{n}-\lambda(H_{i+1/2}^{n}-H_{i-1/2}^{n})+\Delta tS_{i}^{n},\quad(i,n)\in(\mathcal{M}\setminus\{0\})\times\mathcal{N}_{T}, \tag{8}\]
\[u_{j+1/2}^{0} =0,\ v_{i}^{0}=0,\quad(j,i)\in\mathcal{M}\times(\mathcal{M}\setminus\{0\}).\]

Further,

\[u_{1/2}^{n}=0=u_{M+1/2}^{n},\quad n\in\mathcal{N}_{T},\]

in the case of \(\Gamma_{o}=\partial\Omega=\{0,1\}\), and

\[u_{M+1/2}^{n+1}=\left\{\begin{array}{ll}u_{M-1/2}^{n}&\text{ if }\quad D_{f}=[X_{1},X_{2}],X_{2}<1,\\ u_{M-1/2}^{n}+\Delta x\ \max(\alpha_{M-1}^{n},0)&\text{ if }\quad D_{f}=[X_{1},X_{2}],X_{2}=1,\end{array}\right.\quad n\in\mathcal{N}_{T}\setminus\{N_{T}\},\]

with \(\Gamma_{o}=\{0\}\). Further, \(G_{i+1/2}^{n}\) and \(H_{i+1/2}^{n}\) are the numerical fluxes associated with the fluxes \(F^{\alpha}\) and \(F^{v}\), respectively, at \((x_{i+1/2},t^{n})\), and are given by:

\[G_{i+1/2}^{n}=G(\alpha_{i}^{n},v_{i}^{n},\alpha_{i+1}^{n},\ v_{i+1}^{n}),\ H_{i+1/2}^{n}=H(\alpha_{i}^{n},v_{i}^{n},\alpha_{i+1}^{n},v_{i+1}^{n},B_{i},B_{i+1}),\]

with

\[G(a,b,c,d) = \max\{(|\max\{a,0\}|-1)b,(|\min\{c,0\}|-1)d\}, \tag{9}\]

\[H(a,b,c,d,e_{1},e_{2}) = \begin{cases}(-ab-e_{1}),&-a\geq 0,-c\geq 0,\\ (-cd-e_{2}),&-a<0,-c\leq 0,\\ \frac{(-ce_{1}+ae_{2})}{(c-a)},&-a<0,-c>0,\\ (-ab-e_{1}),&b>d\ \text{and}\ \ -a\geq 0,-c\leq 0,\\ (-cd-e_{2}),&b<d\ \text{and}\ \ -a\geq 0,-c\leq 0,\\ -0.5(ab+cd+e_{1}+e_{2}),&b=d\ \text{and}\ \ -a\geq 0,-c\leq 0.\end{cases} \tag{10}\]

Further, \(S_{i}^{n}\) can be taken as

\[S_{i}^{n}=v_{i}^{n}(|\alpha_{i}^{n}|-1),\]

or

\[S_{i}^{n}=0.5(G_{i+1/2}^{n}+G_{i-1/2}^{n}).\]

In the case of \(\Gamma_{o}=\partial\Omega\), as done in [1], for each \(n\in\mathcal{N}_{T}\) we set the boundary conditions weakly as

\[\begin{split}& G_{1/2}^{n}=0,\ G_{M+1/2}^{n}=0,\\ & H_{1/2}^{n}=-\alpha_{1}^{n}v_{1}^{n}-B_{1},\ H_{M+1/2}^{n}=-\alpha_{M}^{n}v_{M}^{n}-B_{M}.\end{split} \tag{11}\]

In the case of \(\Gamma_{o}=\{0\}\), as done in [3], for each \(n\in\mathcal{N}_{T}\) we set the boundary conditions weakly as

\[\begin{split}& G_{1/2}^{n}=0,\\ & H_{M+1/2}^{n}=\left\{\begin{array}{cc}-\alpha_{M}^{n}v_{M}^{n}-B_{M}&\text{ if }\ \alpha_{M}^{n}\leq 0,\\ -B_{M+1}&\text{ if }\ \alpha_{M}^{n}>0.\end{array}\right.\end{split} \tag{12}\]

### Second-order scheme

We now describe the direct second-order extension of the first-order scheme described in the previous section. On each \(C_{i}^{n}\), we construct a piecewise linear function \(z_{\Delta x}\), defined by

\[z_{\Delta x}(t,x):=z_{i}^{n}+\frac{(x-x_{i})}{\Delta x}(z_{i+1/2L}^{n}-z_{i-1/2R}^{n}),\;\;z=\alpha,v,B,\]

where

\[z_{i+1/2L}^{n}:=z_{i}^{n}+0.5Dz_{i}^{n},\;z_{i-1/2R}^{n}:=z_{i}^{n}-0.5Dz_{i}^{n} \tag{13}\]

with

\[Dz_{i}^{n} = 2\theta\,\text{minmod}\left(z_{i}-z_{i-1},\frac{z_{i+1}-z_{i-1}}{2},z_{i+1}-z_{i}\right),\;\theta\in[0,1]. \tag{14}\]

Note that \(\theta=0\) gives the first-order scheme, while \(\theta=0.5\) gives the usual minmod limiter.
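To make the above definitions concrete, the following Python sketch implements the flux functions (9)-(10) and the limited slope (14); the names `G`, `H`, `minmod3` and `slope` are illustrative, and the branches of `H` are evaluated in the order in which they appear in (10), with the first matching case taken.

```python
def G(a, b, c, d):
    # Numerical flux (9) for the slope equation:
    # G(a,b,c,d) = max{ (|max(a,0)| - 1) b, (|min(c,0)| - 1) d }.
    return max((abs(max(a, 0.0)) - 1.0) * b,
               (abs(min(c, 0.0)) - 1.0) * d)

def H(a, b, c, d, e1, e2):
    # Numerical flux (10) for the rolling layer; first matching branch wins.
    if -a >= 0 and -c >= 0:
        return -a * b - e1
    if -a < 0 and -c <= 0:
        return -c * d - e2
    if -a < 0 and -c > 0:
        return (-c * e1 + a * e2) / (c - a)
    # remaining branches: -a >= 0 and -c <= 0, distinguished by b vs d
    if b > d:
        return -a * b - e1
    if b < d:
        return -c * d - e2
    return -0.5 * (a * b + c * d + e1 + e2)

def minmod3(p, q, r):
    # Standard three-argument minmod: the argument of smallest magnitude
    # when all three share a sign, and zero otherwise.
    if p > 0 and q > 0 and r > 0:
        return min(p, q, r)
    if p < 0 and q < 0 and r < 0:
        return max(p, q, r)
    return 0.0

def slope(z, i, theta=0.5):
    # Limited slope D z_i^n of (14) at an interior index i.
    return 2.0 * theta * minmod3(z[i] - z[i - 1],
                                 0.5 * (z[i + 1] - z[i - 1]),
                                 z[i + 1] - z[i])
```

With \(\theta=0\) the slope vanishes and the reconstruction (13) returns the cell averages, recovering the first-order scheme, in agreement with the remark above.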
For each \(i\), we can define \(\sigma_{i}^{L}\) and \(\sigma_{i}^{R}\) with \(0\leq\sigma_{i}^{L},\sigma_{i}^{R}\leq 1\) such that

\[\left\{\begin{array}{l}z_{i+1/2L}^{n}=z_{i}^{n}+\theta\sigma_{i}^{R}(z_{i+1}^{n}-z_{i}^{n}),\\ z_{i+1/2R}^{n}=z_{i+1}^{n}-\theta\sigma_{i+1}^{L}(z_{i+1}^{n}-z_{i}^{n}).\end{array}\right. \tag{15}\]

This implies, for all \(0\leq\theta\leq 1\),

\[\min\{z_{i}^{n},z_{i+1}^{n}\}\leq z_{i+1/2L}^{n},z_{i+1/2R}^{n}\leq\max\{z_{i}^{n},z_{i+1}^{n}\}.\]

We now define the Runge-Kutta steps of the second-order scheme.

**RK step-1**: Define

\[\begin{split}\alpha_{i}^{n,*}&:=\alpha_{i}^{n}-\lambda(G_{i+1/2}^{n}-G_{i-1/2}^{n})\\ &=\frac{\alpha_{i+1/2L}^{n}+\alpha_{i-1/2R}^{n}}{2}-\lambda(G_{i+1/2}^{n}-G_{i-1/2}^{n}),\\ v_{i}^{n,*}&:=v_{i}^{n}-\lambda(H_{i+1/2}^{n}-H_{i-1/2}^{n})+\Delta tS_{i+1/2L}^{n}\\ &=\frac{v_{i+1/2L}^{n}+v_{i-1/2R}^{n}}{2}-\lambda(H_{i+1/2}^{n}-H_{i-1/2}^{n})+\Delta tS_{i+1/2L}^{n}\end{split} \tag{16}\]

where

\[G_{i+1/2}^{n} =G(\alpha_{i+1/2L}^{n},v_{i+1/2L}^{n},\alpha_{i+1/2R}^{n},v_{i+1/2R}^{n}),\]
\[H_{i+1/2}^{n} =H(\alpha_{i+1/2L}^{n},v_{i+1/2L}^{n},\alpha_{i+1/2R}^{n},v_{i+1/2R}^{n},B_{i+1/2L},B_{i+1/2R}),\]

and where \(G(a,b,c,d)\) and \(H(a,b,c,d,e_{1},e_{2})\) are given by (9) and (10) respectively. Further, \(S_{i+1/2L}^{n}=g(\alpha_{i+1/2L}^{n},v_{i+1/2L}^{n})\), where \(g(a,b)=(|a|-1)b\).

**RK step-2:** Knowing \(\alpha_{i}^{n,*}\) and \(v_{i}^{n,*}\) from RK step-1, we now construct the corresponding \(\alpha_{i+1/2L}^{n,*},\alpha_{i+1/2R}^{n,*},v_{i+1/2L}^{n,*}\) and \(v_{i+1/2R}^{n,*}\) by using (13)-(14) for \(z=\alpha^{*},v^{*}\) and \(B^{*}\). Now define

\[\begin{split}\alpha_{i}^{n,**}&:=\alpha_{i}^{n,*}-\lambda(G_{i+1/2}^{n,*}-G_{i-1/2}^{n,*})\\ &=\frac{\alpha_{i+1/2L}^{n,*}+\alpha_{i-1/2R}^{n,*}}{2}-\lambda(G_{i+1/2}^{n,*}-G_{i-1/2}^{n,*}),\\ v_{i}^{n,**}&:=v_{i}^{n,*}-\lambda(H_{i+1/2}^{n,*}-H_{i-1/2}^{n,*})+\Delta tS_{i+1/2L}^{n,*}\\ &=\frac{v_{i+1/2L}^{n,*}+v_{i-1/2R}^{n,*}}{2}-\lambda(H_{i+1/2}^{n,*}-H_{i-1/2}^{n,*})+\Delta tS_{i+1/2L}^{n,*}\end{split} \tag{17}\]

where

\[\begin{split} G_{i+1/2}^{n,*}&=G(\alpha_{i+1/2L}^{n,*},v_{i+1/2L}^{n,*},\alpha_{i+1/2R}^{n,*},v_{i+1/2R}^{n,*}),\\ H_{i+1/2}^{n,*}&=H(\alpha_{i+1/2L}^{n,*},v_{i+1/2L}^{n,*},\alpha_{i+1/2R}^{n,*},v_{i+1/2R}^{n,*},B_{i+1/2L}^{*},B_{i+1/2R}^{*}),\\ S_{i+1/2L}^{n,*}&=g(\alpha_{i+1/2L}^{n,*},v_{i+1/2L}^{n,*}),\end{split}\]

with \(G(a,b,c,d)\) and \(H(a,b,c,d,e_{1},e_{2})\) given by (9) and (10) respectively.

**RK final step:**

\[\begin{split}\alpha_{i}^{n+1}&:=\frac{\alpha_{i}^{n}+\alpha_{i}^{n,**}}{2},\\ v_{i}^{n+1}&:=\frac{v_{i}^{n}+v_{i}^{n,**}}{2}.\end{split} \tag{18}\]

### Boundary conditions for second-order scheme

From the point of view of implementation, we describe the treatment of the boundary conditions for \(u\) and \(v\). In the case when \(\Gamma_{o}=\partial\Omega\), for the variable \(u\), we set the boundary conditions weakly as

\[G_{1/2}^{n}=G_{M+1/2}^{n}=0.\]

Additionally, since in the first-order scheme the boundary condition for \(v\) is not needed in the computation of \(v_{1}^{n}\) and \(v_{M}^{n}\), we similarly avoid the usage of ghost cells in the second-order scheme for \(v^{n+1}\) in \(C_{1}\) and \(C_{M}\).
Hence, we set \(Dz_{1}^{n}=Dz_{M}^{n}=0\) for each of the quantities \(\alpha\), \(v\) and \(B\) while computing the interior fluxes \(G_{3/2}^{n}\) and \(G_{M-1/2}^{n}\) as well as the boundary fluxes \(H_{1/2}\) and \(H_{M+1/2}\). Eventually, the boundary conditions for the \(v\) variable read as:

\[H_{1/2}^{n}=-\alpha_{1}^{n}v_{1}^{n}-B_{1},\quad H_{M+1/2}^{n}=-\alpha_{M}^{n}v_{M}^{n}-B_{M}.\]

In the case of \(\Gamma_{o}=\{0\}\), at the left boundary \(x_{\frac{1}{2}}\) the same boundary condition as described above is imposed. For the boundary at \(x_{M+\frac{1}{2}}\), we define the boundary condition as follows. We recall the first-order approximations at the right boundary:

\[u_{M+1/2}^{n+1}=\left\{\begin{array}{ccc}u_{M-1/2}^{n},&\text{ if}&D_{f}=[X_{1},X_{2}],X_{2}<1,\\ u_{M-1/2}^{n}+\Delta x\max(\alpha_{M-1}^{n},0),&\text{ if}&D_{f}=[X_{1},X_{2}],X_{2}=1.\end{array}\right.\]

The corresponding second-order approximation to \(u\) is given by

\[u_{M+1/2}^{n+1}=\left\{\begin{array}{ccc}u_{M-1/2}^{n},&\text{ if}&D_{f}=[X_{1},X_{2}],X_{2}<1,\\ u_{M-1/2}^{n}+\Delta x\max(\alpha_{M-1/2L}^{n},0),&\text{ if}&D_{f}=[X_{1},X_{2}],X_{2}=1.\end{array}\right.\]

On the other hand, we set the slope \(Dz_{M}^{n}=0\) in the last cell \(C_{M}\). Consequently, the boundary flux \(H_{M+\frac{1}{2}}\) is given exactly as in (12). Note that these boundary conditions are imposed in each stage of the RK time stepping.

## 3 Stability results in one-dimension

In this section, we prove that the numerical solution obtained by (16)-(18) is consistent with the physical properties of the growing pile for the totally open table problem. Results for the partially open table problem can be obtained along similar lines following [3].

**Theorem 1**: _Let \(f\geq 0\) in \(\Omega\). Assume that \(\sup|\alpha^{0}|\leq 1\), \(u^{0}\geq 0\) and \(v^{0}\geq 0\). Then the numerical scheme (16)-(18), under the CFL conditions_

\[\lambda\max_{i}v_{i}^{n}\leq 1/2, \tag{19}\]
\[\lambda\leq 1/2-\Delta t, \tag{20}\]

_satisfies the following properties:_

1. \(\sup|\alpha^{n+1}|\leq 1\), \(u^{n+1}\geq u^{n}\geq 0\), and
2. \(v^{n+1}\geq 0\).

Proof: Let

\[\alpha_{i}^{n,*} = \frac{\alpha_{i+1/2L}^{n}+\alpha_{i-1/2R}^{n}}{2}-\lambda(G_{i+1/2}^{n}-G_{i-1/2}^{n})\]
\[= K(\alpha_{i-1/2L}^{n},\alpha_{i-1/2R}^{n},\alpha_{i+1/2L}^{n},\alpha_{i+1/2R}^{n},v_{i-1/2L}^{n},v_{i-1/2R}^{n},v_{i+1/2L}^{n},v_{i+1/2R}^{n}).\]

We now show that \(K\) is monotonically increasing in \(\alpha_{i-1/2L}^{n},\alpha_{i-1/2R}^{n},\alpha_{i+1/2L}^{n},\alpha_{i+1/2R}^{n}\) under the CFL condition (19). For each \(i\), we have \(\frac{\partial G_{i+1/2}^{n}}{\partial\alpha_{i-1/2L}^{n}}=0\),

\[\frac{\partial G_{i+1/2}^{n}}{\partial\alpha_{i+1/2L}^{n}}=\left\{\begin{array}{ll}v_{i+1/2L}^{n}&\mbox{if }\,G_{i+1/2}^{n}=(|\max(\alpha_{i+1/2L}^{n},0)|-1)v_{i+1/2L}^{n}\mbox{ and }\alpha_{i+1/2L}^{n}>0,\\ 0&\mbox{otherwise,}\end{array}\right. \tag{21}\]

and

\[\frac{\partial G_{i+1/2}^{n}}{\partial\alpha_{i+1/2R}^{n}}=\left\{\begin{array}{ll}-v_{i+1/2R}^{n}&\mbox{if }\,G_{i+1/2}^{n}=(|\min(\alpha_{i+1/2R}^{n},0)|-1)v_{i+1/2R}^{n}\mbox{ and }\alpha_{i+1/2R}^{n}<0,\\ 0&\mbox{otherwise,}\end{array}\right. \tag{22}\]

which shows that \(G_{i+1/2}^{n}\) is non-decreasing in \(\alpha_{i+1/2L}^{n}\) and non-increasing in \(\alpha_{i+1/2R}^{n}\).
Hence, we have

\[\frac{\partial K}{\partial\alpha_{i-1/2L}^{n}}=\lambda\frac{\partial G_{i-1/2}^{n}}{\partial\alpha_{i-1/2L}^{n}}\geq 0,\ \ \frac{\partial K}{\partial\alpha_{i+1/2R}^{n}}=-\lambda\frac{\partial G_{i+1/2}^{n}}{\partial\alpha_{i+1/2R}^{n}}\geq 0.\]

Now, by (19), we have

\[\frac{\partial K}{\partial\alpha_{i+1/2L}^{n}}=0.5-\lambda\frac{\partial G_{i+1/2}^{n}}{\partial\alpha_{i+1/2L}^{n}}=0.5-\lambda v_{i+1/2L}^{n}\geq 0\mbox{ and }\ \ \frac{\partial K}{\partial\alpha_{i-1/2R}^{n}}=0.5+\lambda\frac{\partial G_{i-1/2}^{n}}{\partial\alpha_{i-1/2R}^{n}}=0.5-\lambda v_{i-1/2R}^{n}\geq 0.\]

This proves that \(K\) is increasing in each of its variables \(\alpha_{i-1/2R}^{n},\alpha_{i-1/2L}^{n},\alpha_{i+1/2L}^{n}\) and \(\alpha_{i+1/2R}^{n}\) under the CFL condition (19). This implies

\[-1 = K(-1,-1,-1,1,v_{i-1/2L}^{n},v_{i-1/2R}^{n},v_{i+1/2L}^{n},v_{i+1/2R}^{n})\]
\[\leq K(\alpha_{i-1/2L}^{n},\alpha_{i-1/2R}^{n},\alpha_{i+1/2L}^{n},\alpha_{i+1/2R}^{n},v_{i-1/2L}^{n},v_{i-1/2R}^{n},v_{i+1/2L}^{n},v_{i+1/2R}^{n})=\alpha_{i}^{n,*}\]
\[\leq K(1,1,1,1,v_{i-1/2L}^{n},v_{i-1/2R}^{n},v_{i+1/2L}^{n},v_{i+1/2R}^{n})=1,\]

that is, \(-1\leq\alpha_{i}^{n,*}\leq 1\). By setting

\[\alpha_{i}^{n,**}=K(\alpha_{i-1/2L}^{n,*},\alpha_{i-1/2R}^{n,*},\alpha_{i+1/2L}^{n,*},\alpha_{i+1/2R}^{n,*},v_{i-1/2L}^{n,*},v_{i-1/2R}^{n,*},v_{i+1/2L}^{n,*},v_{i+1/2R}^{n,*}),\]

it can be shown similarly that \(-1\leq\alpha_{i}^{n,**}\leq 1\) under the CFL condition (19). Then, by (18), we have

\[-1\leq\alpha_{i}^{n+1}\leq 1 \tag{23}\]

under the CFL condition (19). Let

\[u_{i+1/2}^{n,*}=\Delta x\sum_{j=0}^{i}\alpha_{j}^{n,*}=\Delta x\sum_{j=0}^{i}\alpha_{j}^{n}-\Delta tG_{i+1/2}^{n}. \tag{24}\]

Since \(-1\leq\alpha^{n}\leq 1\), we have \(G^{n}_{i+1/2}\leq 0\). This implies \(u^{n,*}_{i+1/2}\geq\Delta x\sum_{j=0}^{i}\alpha^{n}_{j}\). By a similar argument we can show that \(u^{n,**}_{i+1/2}=\Delta x\sum_{j=0}^{i}\alpha^{n,**}_{j}\geq\Delta x\sum_{j=0}^{i}\alpha^{n,*}_{j}\). Now

\[u^{n+1}_{i+1/2} =\Delta x\sum_{j=0}^{i}\alpha^{n+1}_{j}=\frac{\Delta x}{2}\sum_{j=0}^{i}(\alpha^{n}_{j}+\alpha^{n,**}_{j})\]
\[\geq\frac{\Delta x}{2}\sum_{j=0}^{i}(\alpha^{n}_{j}+\alpha^{n,*}_{j})\geq\frac{\Delta x}{2}\sum_{j=0}^{i}(\alpha^{n}_{j}+\alpha^{n}_{j})\]
\[=\Delta x\sum_{j=0}^{i}\alpha^{n}_{j}=u^{n}_{i+1/2}\geq 0.\]

This proves 1. Now consider the equation

\[v^{n,*}_{i}=v^{n}_{i}-\lambda(H^{n}_{i+1/2}-H^{n}_{i-1/2})+\Delta tv^{n}_{i+1/2L}(|\alpha^{n}_{i+1/2L}|-1).\]

Let

\[v^{n,*}_{i}=L(\alpha^{n}_{i-1/2L},\alpha^{n}_{i-1/2R},\alpha^{n}_{i+1/2L},\alpha^{n}_{i+1/2R},v^{n}_{i-1/2L},v^{n}_{i-1/2R},v^{n}_{i+1/2L},v^{n}_{i+1/2R}).\]

To prove that \(L\) is non-decreasing in each of the variables \(v^{n}_{i-1/2L},v^{n}_{i-1/2R},v^{n}_{i+1/2L}\) and \(v^{n}_{i+1/2R}\), we consider the following two cases; the other cases follow similarly.

1. Case 1: \(-\alpha^{n}_{i-1/2L}\geq 0,\,-\alpha^{n}_{i-1/2R}\geq 0,\,-\alpha^{n}_{i+1/2L}\geq 0,\,-\alpha^{n}_{i+1/2R}>0\). Then \(H^{n}_{i+1/2}=-\alpha^{n}_{i+1/2L}v^{n}_{i+1/2L}-B_{i+1/2L}\) and \(H^{n}_{i-1/2}=-\alpha^{n}_{i-1/2L}v^{n}_{i-1/2L}-B_{i-1/2L}\).
This implies that

\[L =\frac{v^{n}_{i+1/2L}+v^{n}_{i-1/2R}}{2}-\lambda(-\alpha^{n}_{i+1/2L}v^{n}_{i+1/2L}-B_{i+1/2L})\]
\[\quad+\lambda(-\alpha^{n}_{i-1/2L}v^{n}_{i-1/2L}-B_{i-1/2L})+\Delta tv^{n}_{i+1/2L}(|\alpha^{n}_{i+1/2L}|-1),\]

which implies that

\[\frac{\partial L}{\partial v^{n}_{i-1/2L}}=\frac{\partial H^{n}_{i-1/2}}{\partial v^{n}_{i-1/2L}}=-\lambda\alpha^{n}_{i-1/2L}\geq 0,\]
\[\frac{\partial L}{\partial v^{n}_{i+1/2R}}=0,\qquad \frac{\partial L}{\partial v^{n}_{i-1/2R}}=0.5\geq 0,\]
\[\frac{\partial L}{\partial v^{n}_{i+1/2L}}=0.5-\lambda(-\alpha^{n}_{i+1/2L})+\Delta t(|\alpha^{n}_{i+1/2L}|-1)\]
\[=0.5-\lambda|\alpha^{n}_{i+1/2L}|+\Delta t(|\alpha^{n}_{i+1/2L}|-1).\]

This implies that

\[\frac{\partial L}{\partial v^{n}_{i+1/2L}}\geq 0\ \ \text{if}\ \ |\alpha^{n}_{i+1/2L}|\lambda\leq 1/2+\Delta t(|\alpha^{n}_{i+1/2L}|-1).\]

Since \(|\alpha^{n}_{i+1/2L}|\leq 1\), a sufficient condition for \(L\) to be monotone is

\[\lambda\leq 1/2-\Delta t.\]

2. Case 2: \(-\alpha^{n}_{i-1/2L}\geq 0,\,-\alpha^{n}_{i-1/2R}\leq 0,\,-\alpha^{n}_{i+1/2L}\leq 0,\,-\alpha^{n}_{i+1/2R}\leq 0\). We prove the case \(-\alpha^{n}_{i-1/2R}<0\), \(v^{n}_{i-1/2R}=v^{n}_{i-1/2L}\); the other cases can be proved similarly. Now,

\[H^{n}_{i+1/2} =-\alpha^{n}_{i+1/2R}v^{n}_{i+1/2R}-B_{i+1/2R},\]
\[H^{n}_{i-1/2} =\frac{-\alpha^{n}_{i-1/2L}v^{n}_{i-1/2L}-B_{i-1/2L}-\alpha^{n}_{i-1/2R}v^{n}_{i-1/2R}-B_{i-1/2R}}{2}.\]

Hence,

\[L =\frac{v_{i+1/2L}^{n}+v_{i-1/2R}^{n}}{2}+\lambda\Big{(}\frac{-\alpha_{i-1/2L}^{n}v_{i-1/2L}^{n}-B_{i-1/2L}-\alpha_{i-1/2R}^{n}v_{i-1/2R}^{n}-B_{i-1/2R}}{2}\Big{)}\]
\[\quad-\lambda(-\alpha_{i+1/2R}^{n}v_{i+1/2R}^{n}-B_{i+1/2R})+\Delta tv_{i+1/2L}^{n}(\alpha_{i+1/2L}^{n}-1).\]

Therefore

\[\frac{\partial L}{\partial v_{i-1/2L}^{n}} =\frac{\partial H_{i-1/2}^{n}}{\partial v_{i-1/2L}^{n}}=-\frac{\lambda\alpha_{i-1/2L}^{n}}{2}\geq 0,\]
\[\frac{\partial L}{\partial v_{i+1/2R}^{n}} =\lambda\alpha_{i+1/2R}^{n}\geq 0,\]
\[\frac{\partial L}{\partial v_{i+1/2L}^{n}} =0.5+\Delta t(\alpha_{i+1/2L}^{n}-1)\geq 0,\]
\[\frac{\partial L}{\partial v_{i-1/2R}^{n}} =0.5-\frac{\lambda\alpha_{i-1/2R}^{n}}{2}\geq 0,\]

by (20). This proves that \(L\) is non-decreasing in each of the \(v^{n}\) variables under the condition (20), and therefore we have

\[0 \leq L(\alpha_{i-1/2L}^{n},\alpha_{i-1/2R}^{n},\alpha_{i+1/2L}^{n},\alpha_{i+1/2R}^{n},v_{i-1/2L}^{n},0,0,0)\]
\[\leq L(\alpha_{i-1/2L}^{n},\alpha_{i-1/2R}^{n},\alpha_{i+1/2L}^{n},\alpha_{i+1/2R}^{n},v_{i-1/2L}^{n},v_{i-1/2R}^{n},v_{i+1/2L}^{n},v_{i+1/2R}^{n})=v_{i}^{n,*}.\]

Hence \(v_{i}^{n,*}\geq 0\). In exactly the same way we can show that \(v_{i}^{n,**}\geq 0\). Then, by (18), it follows easily that \(v_{i}^{n+1}\geq 0\). This completes the proof of 2.

## 4 Well-balancedness for the open table problem in one-dimension

We now show that the second-order scheme (16)-(18) is not well-balanced in general, i.e. the numerical scheme does not capture the steady state solution \((\overline{u},\overline{v})\) given by (5) exactly.

**Lemma 1**: _The second-order scheme (16)-(18) is well-balanced in the state variable \(u\) but not in \(v\)._

**Proof** We take the particular case of \(f=1\). In this case, the exact steady state solutions are given by

\[(\overline{\alpha},\overline{v})(x)=\left\{\begin{array}{lcl}(1,0.5-x)&\mbox{ if }&x\in[0,0.5),\\ (-1,x-0.5)&\mbox{ if }&x\in(0.5,1].\end{array}\right. \tag{25}\]
To show that the scheme is not well-balanced, we substitute the discrete form of (25), \((\overline{\alpha}_{i},\overline{v}_{i})_{i\in\mathcal{M}\setminus\{0\}}\), in the second-order scheme as initial datum at \(t=t^{n}\) and show that

\[(\overline{\alpha}_{i}^{n+1},\overline{v}_{i}^{n+1})\neq(\overline{\alpha},\overline{v})_{i}^{n}=(\overline{\alpha},\overline{v})_{i}.\]

For \(f=1\) in \([0,1]\), set

\[(\overline{\alpha}_{i},\overline{v}_{i})=\begin{cases}(1,0.5-x_{i}),&i\leq K,\\ (-1,x_{i}-0.5),&i>K,\end{cases}\]

with \(x_{K+1/2}=0.5\). It is clear that \(D\overline{\alpha}_{i}^{n}=0\), implying that \(\overline{\alpha}_{i+1/2L}^{n}=\overline{\alpha}_{i}=\overline{\alpha}_{i-1/2R}^{n}\). This further gives \(G_{i+1/2}^{n}=G(\overline{\alpha}_{i+1/2L}^{n},\overline{v}_{i+1/2L}^{n},\overline{\alpha}_{i+1/2R}^{n},\overline{v}_{i+1/2R}^{n})=0\). This implies

\[\overline{\alpha}_{i}^{n,*}=\overline{\alpha}_{i}. \tag{26}\]

Again, \(\overline{\alpha}_{i+1/2L}^{n,*}=\overline{\alpha}_{i}^{n,*}=\overline{\alpha}_{i-1/2R}^{n,*}\), resulting in

\[\overline{\alpha}_{i}^{n+1}=0.5(\overline{\alpha}_{i}+\overline{\alpha}_{i})=\overline{\alpha}_{i}. \tag{27}\]

Now, let us look at \(\overline{v}\). We have

\[D\overline{v}_{i}^{n}=2\theta\begin{cases}-\Delta x,&i\leq K-1,\\ \Delta x,&i\geq K+2,\\ \text{minmod}(\overline{v}_{K+1}-\overline{v}_{K},\ 0.5(\overline{v}_{K+2}-\overline{v}_{K}),\Delta x),&i=K+1,\\ \text{minmod}(\Delta x,\ 0.5(\overline{v}_{K+1}-\overline{v}_{K-1}),\overline{v}_{K+1}-\overline{v}_{K}),&i=K.\end{cases}\]

Since \(x_{K+1/2}=0.5(x_{K}+x_{K+1})\) and \(x_{K+1/2}=0.5\), we have

\[\overline{v}_{K+1}-\overline{v}_{K}=x_{K+1}-0.5-(0.5-x_{K})=2x_{K+1/2}-1=0.\]

This implies

\[D\overline{v}_{i}^{n}=2\theta\begin{cases}0,&i=K,K+1,\\ -\Delta x,&i\leq K-1,\\ \Delta x,&i\geq K+2.\end{cases}\]

We have

\[\overline{v}_{i}^{n,*} = \overline{v}_{i}-\lambda(H_{i+1/2}^{n}-H_{i-1/2}^{n})+\Delta tS_{i+1/2L}^{n}.\]

Since \(|\overline{\alpha}_{i+1/2L}^{n}|=1\), we have \(S_{i+1/2L}^{n}=0\) for all \(i\in\mathcal{M}\).
Also,

\[H_{i+1/2}^{n}=\begin{cases}\overline{v}_{i+1/2L}^{n}-B_{i},&i\geq K+1,\\ -\overline{v}_{i+1/2R}^{n}-B_{i+1},&i\leq K-1,\\ 0.5(B_{i}-B_{i+1})-B_{i},&i=K,\end{cases}\]

which implies

\[H_{i+1/2}^{n}=\begin{cases}-0.5+\theta\Delta x,&i\geq K+2,\\ -0.5,&i=K,K+1,\\ -0.5+\theta\Delta x,&i\leq K-1.\end{cases}\]

Hence,

\[\overline{v}_{i}^{n,*}=\begin{cases}\overline{v}_{i},&i\geq K+3,\ i=K+1,\ i\leq K-1,\\ \overline{v}_{i}+\lambda\theta\Delta x=\overline{v}_{i}+\theta\Delta t,&i=K,\\ \overline{v}_{i}-\lambda\theta\Delta x=\overline{v}_{i}-\theta\Delta t,&i=K+2,\end{cases}\]

which implies

\[D\overline{v}_{i}^{n,*}=2\theta\begin{cases}0,&i=K+1,\\ \Delta x,&i\geq K+4,\\ -\Delta x,&i\leq K-2,\\ \text{minmod}\left(-\Delta x,-\Delta x+\frac{\Delta t}{2}\right),&i=K-1,\\ \text{minmod}\left(-\Delta x+\frac{\Delta t}{2},-\frac{\Delta t}{2}\right),&i=K,\\ \text{minmod}\left(\Delta x,\Delta x-\frac{\Delta t}{2},\Delta x+\frac{\Delta t}{2}\right),&i=K+2,\\ \text{minmod}\left(\Delta x+\frac{\Delta t}{4},\Delta x+\frac{\Delta t}{2},\Delta x+\Delta t\right),&i=K+3,\end{cases}\]

which implies

\[D\overline{v}_{i}^{n,*}=2\theta\begin{cases}0,&i=K+1,\\ \Delta x,&i\geq K+4,\\ -\Delta x,&i\leq K-2,\\ -\Delta x+\frac{\Delta t}{2},&i=K-1,\\ -\frac{\Delta t}{2},&i=K,\\ \Delta x-\frac{\Delta t}{2},&i=K+2,\\ \Delta x+\frac{\Delta t}{4},&i=K+3.\end{cases}\]

We have

\[\overline{v}_{i}^{n,**} = \overline{v}_{i}^{n,*}-\lambda(H_{i+1/2}^{n,*}-H_{i-1/2}^{n,*})+\Delta tS_{i+1/2L}^{n,*}.\]

Since \(|\overline{\alpha}_{i+1/2L}^{n,*}|=1\), we have \(S_{i+1/2L}^{n,*}=0\) for all \(n,i\). Also,

\[H_{i+1/2}^{n,*}=\begin{cases}\overline{v}_{i+1/2L}^{n,*}-B_{i},&i\geq K+1,\\ -\overline{v}_{i+1/2R}^{n,*}-B_{i+1},&i\leq K-1,\\ 0.5(B_{i}-B_{i+1})-B_{i},&i=K,\end{cases}\]

which implies

\[H_{i+1/2}^{n,*}=\begin{cases}-0.5,&i=K,K+1,\\ -0.5-3\theta\frac{\Delta t}{2}+\theta\Delta x,&i=K+2,\\ -0.5+\theta(\Delta x+\frac{\Delta t}{4}),&i=K+3,\\ -0.5+\theta\Delta x,&i\geq K+4,\\ -0.5+\theta(-\Delta x+\frac{\Delta t}{2}),&i=K-1,\\ -0.5-\theta\Delta x,&i\leq K-2.\end{cases}\]

Now,

\[\overline{v}_{i}^{n,**}=\overline{v}_{i}^{n,*}-\lambda(H_{i+1/2}^{n,*}-H_{i-1/2}^{n,*}),\]

and hence

\[\overline{v}_{i}^{n,**}=\begin{cases}\overline{v}_{i}^{n,*},&i=K+1,\\ \overline{v}_{i}^{n,*}+3\theta\frac{\lambda\Delta t}{4}-\theta\Delta t,&i=K+2,\\ \overline{v}_{i}^{n,*}-7\theta\frac{\lambda\Delta t}{4},&i=K+3,\\ \overline{v}_{i}^{n,*}+\theta\frac{\lambda\Delta t}{4},&i=K+4,\\ \overline{v}_{i}^{n,*},&i>K+4,\\ \overline{v}_{i}^{n,*}-\theta\frac{\lambda\Delta t}{2},&i=K-1,\\ \overline{v}_{i}^{n,*},&i\leq K-2,\\ \overline{v}_{i}^{n,*}-\theta\Delta t+\theta\frac{\lambda\Delta t}{2},&i=K.\end{cases}\]
Hence, by substituting the value of \(\overline{v}_{i}^{n,*}\), we have

\[\overline{v}_{i}^{n,**}=\begin{cases}\overline{v}_{i},&i\leq K-2,\ i=K+1,\ i\geq K+5,\\ \overline{v}_{i}-2\theta\Delta t+3\theta\frac{\lambda\Delta t}{4},&i=K+2,\\ \overline{v}_{i}-7\theta\frac{\lambda\Delta t}{4},&i=K+3,\\ \overline{v}_{i}+\theta\frac{\lambda\Delta t}{4},&i=K+4,\\ \overline{v}_{i}-\theta\frac{\lambda\Delta t}{2},&i=K-1,\\ \overline{v}_{i}+\theta\frac{\lambda\Delta t}{2},&i=K.\end{cases}\]

Now, we have

\[\overline{v}_{i}^{n+1} = \frac{\overline{v}_{i}+\overline{v}_{i}^{n,**}}{2},\]

and hence

\[\overline{v}_{i}^{n+1}=\begin{cases}\overline{v}_{i},&i\leq K-2,\ i=K+1,\ i\geq K+5,\\ \overline{v}_{i}-\theta\Delta t+3\theta\frac{\lambda\Delta t}{8},&i=K+2,\\ \overline{v}_{i}-7\theta\frac{\lambda\Delta t}{8},&i=K+3,\\ \overline{v}_{i}+\theta\frac{\lambda\Delta t}{8},&i=K+4,\\ \overline{v}_{i}-\theta\frac{\lambda\Delta t}{4},&i=K-1,\\ \overline{v}_{i}+\theta\frac{\lambda\Delta t}{4},&i=K,\end{cases}\]

which shows that the second-order scheme is not well-balanced for \(\theta\neq 0\).

## 5 Second-order adaptive scheme in one-dimension

We now suggest a modification of the second-order scheme (14)-(18), based on an idea introduced in [30, 24, 20], to make this scheme well-balanced. The main principle is to use (14)-(18) far from steady states and recover the first-order scheme (8) near the steady states, so that the modified version of the second-order scheme is well-balanced. The closeness to a steady state is measured by modifying the limitation process as described below. For this purpose, for any fixed \(\epsilon>0\), we define a smooth function \(\Theta\) by

\[\Theta(x) := \frac{x^{2}}{x^{2}+\epsilon^{2}},\quad x\in\mathbb{R}.\]

Note that \(\Theta(0)=0\) and \(\Theta(x)\approx 1\) for \(x\neq 0\), for small \(\epsilon\) (chosen in terms of \(\Delta x\)). For each \((i,n)\in(\mathcal{M}\setminus\{0\})\times\mathcal{N}_{T}\), we define

\[\begin{split}\Theta_{i}^{n}&:=\Theta(\mathcal{E}_{i}^{n}),\\ \mathcal{E}_{i}^{n}&:=\mathcal{E}_{i}^{n}(\alpha_{i}^{n},v_{i}^{n})=\mathcal{E}_{i-1/2}^{n}+\mathcal{E}_{i+1/2}^{n},\\ \mathcal{E}_{i+1/2}^{n}&:=\sqrt{(G_{i+1/2}^{n})^{2}+[H_{i+1/2}^{n}]^{2}},\end{split} \tag{28}\]

where

\[[H_{i+1/2}^{n}] := H_{i+1/2}^{n}-H_{i-1/2}^{n},\]
\[H_{i+1/2}^{n} = H(\alpha_{i}^{n},v_{i}^{n},\alpha_{i+1}^{n},v_{i+1}^{n},B_{i},B_{i+1}),\]
\[G_{i+1/2}^{n} = G(\alpha_{i}^{n},v_{i}^{n},\alpha_{i+1}^{n},v_{i+1}^{n}),\]

with \(G\) and \(H\) given by (9) and (10) respectively. Now we modify the previously defined left and right states (14) and define the linear function \(z\) on \([x_{i-1/2},x_{i+1/2}]\) by

\[z_{i+1/2L}^{n}=z_{i}^{n}+0.5\Theta_{i}^{n}Dz_{i}^{n},\qquad z_{i-1/2R}^{n}=z_{i}^{n}-0.5\Theta_{i}^{n}Dz_{i}^{n}, \tag{29}\]

where \(z=\alpha,v,B\) and \(Dz\) is defined as before in §2. Now, we perform the **RK step-1** with the new linear function \(z\), and obtain \((\alpha_{i}^{n,*},v_{i}^{n,*})_{i\in\mathcal{M}\setminus\{0\}}^{n\in\mathcal{N}_{T}}\) using (16).
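As a rough illustration of the adapted limitation, the following sketch computes the indicator (28) and the modified slope (29). It reuses the scalar helpers `G`, `H` and `slope` from the earlier sketch; `eps` denotes the cut-off parameter \(\epsilon\), and all function names are illustrative.

```python
def theta_fn(x, eps):
    # Smooth cut-off of Section 5: vanishes at a discrete steady state
    # and is close to 1 away from it when eps is small.
    return x * x / (x * x + eps * eps)

def E_face(alpha, v, B, j):
    # E_{j+1/2}^n of (28), with [H]_{j+1/2} = H_{j+1/2}^n - H_{j-1/2}^n;
    # the first-order fluxes are evaluated on the (unlimited) cell averages.
    Gf  = G(alpha[j], v[j], alpha[j + 1], v[j + 1])
    Hf  = H(alpha[j], v[j], alpha[j + 1], v[j + 1], B[j], B[j + 1])
    Hfm = H(alpha[j - 1], v[j - 1], alpha[j], v[j], B[j - 1], B[j])
    return (Gf ** 2 + (Hf - Hfm) ** 2) ** 0.5

def adapted_slope(z, alpha, v, B, i, eps, theta=0.5):
    # Modified limitation (29): the limited slope is scaled by Theta_i^n,
    # so the reconstruction collapses to first order near a steady state
    # and stays close to the usual MUSCL slope away from it.
    Theta_i = theta_fn(E_face(alpha, v, B, i - 1) + E_face(alpha, v, B, i), eps)
    return Theta_i * slope(z, i, theta)
```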
Further, for each \((i,n)\in(\mathcal{M}\setminus\{0\})\times\mathcal{N}_{T}\), we redefine

\[\begin{split}\Theta_{i}^{n}&=\Theta(\mathcal{E}_{i}^{n,*}),\\ \mathcal{E}_{i}^{n,*}&=\mathcal{E}_{i}^{n}(\alpha_{i}^{n,*},v_{i}^{n,*})=\mathcal{E}_{i-1/2}^{n,*}+\mathcal{E}_{i+1/2}^{n,*},\\ \mathcal{E}_{i+1/2}^{n,*}&:=\sqrt{(G_{i+1/2}^{n,*})^{2}+[H_{i+1/2}^{n,*}]^{2}},\end{split} \tag{30}\]

where

\[\begin{split} H_{i+1/2}^{n,*}&=H(\alpha_{i}^{n,*},v_{i}^{n,*},\alpha_{i+1}^{n,*},v_{i+1}^{n,*},B_{i}^{*},B_{i+1}^{*}),\\ G_{i+1/2}^{n,*}&=G(\alpha_{i}^{n,*},v_{i}^{n,*},\alpha_{i+1}^{n,*},v_{i+1}^{n,*}).\end{split}\]

Now \(z_{i+1/2L}^{n}\) and \(z_{i-1/2R}^{n}\), where \(z=\alpha^{*},v^{*},B^{*}\), are computed using (29) with the redefined \(\Theta_{i}^{n}\). Finally, we perform the **RK step-2** and obtain \((\alpha_{i}^{n,**},v_{i}^{n,**})_{i\in\mathcal{M}\setminus\{0\}}^{n\in\mathcal{N}_{T}}\) using (17). The final step of the second-order scheme is performed as usual.

**Lemma 2**: _The adapted scheme described above, (28)-(30), is second-order away from the steady state and well-balanced._

* To show that the scheme is well-balanced, it is enough to show that if we substitute the discrete steady states \((\overline{\alpha},\overline{v})_{i}\) as initial data in the adapted second-order scheme, we get \[(\alpha,v)_{i}^{n+1}=(\overline{\alpha},\overline{v})_{i}.\] By construction, at the steady state, using [1], \(\Theta_{i}^{n}=0\), which implies that the scheme reduces to the first-order scheme. Since the first-order scheme is well-balanced, as established in [1], we have \(\overline{u}_{i}^{n+1}=\overline{u}_{i}\) and \(\overline{v}_{i}^{n+1}=\overline{v}_{i}\). By definition, \(\Theta_{i}^{n}\approx 1\) away from the steady state, which keeps the scheme second-order accurate there; see [30, 24, 20]. \(\Box\)

## 6 Numerical schemes in two-dimensions

In this section, we extend the scheme constructed in the previous section to the two-dimensional case. Let \(\Omega=[0,1]\times[0,1]\). Define the space grid points along the \(x\)-axis as \(x_{i+\frac{1}{2}}=i\Delta x\), \(\Delta x>0\), \(i\in\mathcal{M}\), and along the \(y\)-axis as \(y_{k+\frac{1}{2}}=k\Delta y\), \(\Delta y>0\), \(k\in\mathcal{M}\), with \((x_{\frac{1}{2}},y_{\frac{1}{2}})=(0,0)\) and \((x_{M+\frac{1}{2}},y_{M+\frac{1}{2}})=(1,1)\). For simplicity, \(\Delta x=\Delta y=h\), and for \(\Delta t>0\), define the time discretization points \(t^{n}=n\Delta t\), \(n\in\mathcal{N}_{T}\), with \(\lambda=\Delta t/h\). For \(i,k\in\mathcal{M}\), the solution \(u\) at the point \((x_{i+\frac{1}{2}},y_{k+\frac{1}{2}})\) at time \(t^{n}\) is denoted by \(u^{n}_{i+\frac{1}{2},k+\frac{1}{2}}\). The slopes of \(u\) in the \(x\)- and \(y\)-directions in the cell \(C_{i,k}\) at time \(t^{n}\) are given by

\[\alpha^{n}_{i,k}=\frac{u^{n}_{i+\frac{1}{2},k+\frac{1}{2}}-u^{n}_{i-1/2,k+\frac{1}{2}}}{h}\text{ and }\beta^{n}_{i,k}=\frac{u^{n}_{i+\frac{1}{2},k+\frac{1}{2}}-u^{n}_{i+\frac{1}{2},k-\frac{1}{2}}}{h}\]

respectively. For \(i,k\in\mathcal{M}\setminus\{0\}\), the solution \(v\) in the cell \(C_{i,k}\) at time \(t^{n}\) is given by

\[v^{n}_{i,k}=\frac{1}{h^{2}}\int_{C_{i,k}}v(x,y,t^{n})dxdy.\]

A pictorial illustration of the grid \(C_{i,k}\) is given in Fig. 1, where we suppress the time index \(n\) for simplicity.

### Second-order extension

We now extend the first-order scheme of [3] to higher order. As a first step, we define the following approximations of \(\alpha\) and \(\beta\):
\[\alpha^{n}_{i,k+1/2}=\frac{u^{n}_{i+1/2,k+1/2}-u^{n}_{i-1/2,k+1/2}}{h},\ i\in\mathcal{M}\setminus\{0\},\ k\in\mathcal{M}, \tag{31}\]
\[\beta^{n}_{i+1/2,k}=\frac{u^{n}_{i+1/2,k+1/2}-u^{n}_{i+1/2,k-1/2}}{h},\ k\in\mathcal{M}\setminus\{0\},\ i\in\mathcal{M}. \tag{32}\]

Next, define the slopes at the \(k+1/2\) and \(i+1/2\) levels. For \(i\in\mathcal{M}\setminus\{0\},\ k\in\mathcal{M}\), define

\[S^{x}_{i,k+1/2}=2\theta\ \text{minmod}\{\alpha^{n}_{i+1,k+1/2}-\alpha^{n}_{i,k+1/2},\alpha^{n}_{i,k+1/2}-\alpha^{n}_{i-1,k+1/2}\},\]

and for \(k\in\mathcal{M}\setminus\{0\},\ i\in\mathcal{M}\), define

\[S^{y}_{i+1/2,k}=2\theta\ \text{minmod}\{\beta^{n}_{i+1/2,k+1}-\beta^{n}_{i+1/2,k},\beta^{n}_{i+1/2,k}-\beta^{n}_{i+1/2,k-1}\}.\]

Then define the piecewise linear function with left and right end points by

\[\alpha^{n}_{i+\frac{1}{2}L,k+\frac{1}{2}}=\alpha^{n}_{i,k+\frac{1}{2}}+\frac{S^{x}_{i,k+\frac{1}{2}}}{2},\ \ \ \alpha^{n}_{i-\frac{1}{2}R,k+\frac{1}{2}}=\alpha^{n}_{i,k+\frac{1}{2}}-\frac{S^{x}_{i,k+\frac{1}{2}}}{2},\]
\[\beta^{n}_{i+\frac{1}{2},k+\frac{1}{2}L}=\beta^{n}_{i+\frac{1}{2},k}+\frac{S^{y}_{i+\frac{1}{2},k}}{2},\ \ \ \beta^{n}_{i+\frac{1}{2},k-\frac{1}{2}R}=\beta^{n}_{i+\frac{1}{2},k}-\frac{S^{y}_{i+\frac{1}{2},k}}{2}.\]

Figure 1: Finite volume cell.

Now we detail the slope limiter for \(v\) and the construction of the piecewise linear functions in the \(x\)- and \(y\)-directions. Suppose \(v^{n}_{i,k}\) is known in a cell \((x_{i-1/2},x_{i+1/2})\times(y_{k-1/2},y_{k+1/2})\). Define

\[T^{x}_{i,k}=2\theta\ \text{minmod}\{v^{n}_{i+1,k}-v^{n}_{i,k},v^{n}_{i,k}-v^{n}_{i-1,k}\},\]
\[T^{y}_{i,k}=2\theta\ \text{minmod}\{v^{n}_{i,k+1}-v^{n}_{i,k},v^{n}_{i,k}-v^{n}_{i,k-1}\},\]
\[v^{n}_{i+1/2L,k}=v^{n}_{i,k}+\frac{T^{x}_{i,k}}{2},\quad v^{n}_{i-1/2R,k}=v^{n}_{i,k}-\frac{T^{x}_{i,k}}{2},\]
\[v^{n}_{i,k+1/2L}=v^{n}_{i,k}+\frac{T^{y}_{i,k}}{2},\quad v^{n}_{i,k-1/2R}=v^{n}_{i,k}-\frac{T^{y}_{i,k}}{2}.\]

Next define

\[v^{n}_{i+1/2L,k+1/2}=\frac{v^{n}_{i+1/2L,k}+v^{n}_{i+1/2L,k+1}}{2},\ v^{n}_{i+1/2R,k+1/2}=\frac{v^{n}_{i+1/2R,k}+v^{n}_{i+1/2R,k+1}}{2},\]
\[v^{n}_{i+1/2,k+1/2L}=\frac{v^{n}_{i,k+1/2L}+v^{n}_{i+1,k+1/2L}}{2},\ \ v^{n}_{i+\frac{1}{2},k+1/2R}=\frac{v^{n}_{i,k+1/2R}+v^{n}_{i+1,k+1/2R}}{2}.\]

Further,

\[(u_{x})^{2}\approx G^{n,x}_{i,k+1/2}=[\max(|\max(\alpha^{n}_{i+1/2L,k+1/2},0)|,|\min(\alpha^{n}_{i+1/2R,k+1/2},0)|)]^{2},\]
\[(u_{y})^{2}\approx G^{n,y}_{i+1/2,k}=[\max(|\max(\beta^{n}_{i+1/2,k+1/2L},0)|,|\min(\beta^{n}_{i+1/2,k+1/2R},0)|)]^{2}.\]

Define

\[W^{n,x}_{i,k+1/2}=(\sqrt{|\max(\alpha^{n}_{i+1/2L,k+1/2},0)|^{2}+G^{n,y}_{i,k+1/2}}-1)v^{n}_{i+1/2L,k+1/2},\]
\[W^{n,x}_{i+1,k+1/2}=(\sqrt{|\min(\alpha^{n}_{i+1/2R,k+1/2},0)|^{2}+G^{n,y}_{i+1,k+1/2}}-1)v^{n}_{i+1/2R,k+1/2},\]
\[W^{n,y}_{i+1/2,k}=(\sqrt{|\max(\beta^{n}_{i+1/2,k+1/2L},0)|^{2}+G^{n,x}_{i+1/2,k}}-1)v^{n}_{i+1/2,k+1/2L},\]
\[W^{n,y}_{i+1/2,k+1}=(\sqrt{|\min(\beta^{n}_{i+1/2,k+1/2R},0)|^{2}+G^{n,x}_{i+1/2,k+1}}-1)v^{n}_{i+1/2,k+1/2R}.\]

Define the numerical fluxes in the \(x\)-direction by

\[G^{n,x}_{i+1/2,k+1/2}=\max(W^{n,x}_{i,k+1/2},W^{n,x}_{i+1,k+1/2})\]

and in the \(y\)-direction by

\[G^{n,y}_{i+1/2,k+1/2}=\max(W^{n,y}_{i+1/2,k},W^{n,y}_{i+1/2,k+1})\]

respectively.

Figure 2: Two-dimensional Cartesian grid element.

Now, define

\[G^{n}_{i+1/2,k+1/2}=\max(G^{n,x}_{i+1/2,k+1/2},G^{n,y}_{i+1/2,k+1/2}).\]

The finite difference scheme for (1) is given by

\[u^{n+1}_{i+1/2,k+1/2}=u^{n}_{i+1/2,k+1/2}-\Delta tG^{n}_{i+1/2,k+1/2}, \tag{33}\]

where \(G^{n}_{i+1/2,k+1/2}\) approximates

\[G(u,v)=(\sqrt{u^{2}_{x}+u^{2}_{y}}-1)v\]

at time level \(t^{n}\).
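A minimal NumPy sketch of the vertex update (33) is given below, assuming for simplicity that all limited slopes are set to zero (the first-order case \(\theta=0\)) and that the rolling layer has already been combined into a single value per vertex; in the scheme above, the \(W\)-terms instead carry the reconstructed left/right values of \(v\). The name `update_u_vertex` and the array layout are illustrative.

```python
import numpy as np

def update_u_vertex(u, v, dt, h):
    # u : (M+1, M+1) array of vertex values u^n_{i+1/2, k+1/2}
    # v : (M+1, M+1) array of rolling-layer values at the same vertices
    un = u.copy()
    dxm = (u[1:-1, 1:-1] - u[:-2, 1:-1]) / h   # backward x-difference
    dxp = (u[2:, 1:-1] - u[1:-1, 1:-1]) / h    # forward  x-difference
    dym = (u[1:-1, 1:-1] - u[1:-1, :-2]) / h   # backward y-difference
    dyp = (u[1:-1, 2:] - u[1:-1, 1:-1]) / h    # forward  y-difference
    # monotone upwind estimate of |grad u|, mirroring the
    # max-of-one-sided-slopes construction of G^{n,x} and G^{n,y}
    gx = np.maximum(np.maximum(dxm, 0.0), np.maximum(-dxp, 0.0))
    gy = np.maximum(np.maximum(dym, 0.0), np.maximum(-dyp, 0.0))
    grad = np.sqrt(gx * gx + gy * gy)
    # vertex update (33): u^{n+1} = u^n - dt * (|grad u| - 1) v
    un[1:-1, 1:-1] = u[1:-1, 1:-1] - dt * (grad - 1.0) * v[1:-1, 1:-1]
    return un
```

The monotone, upwind treatment of \(|\nabla u|\) here is a deliberate simplification of the max-of-\(W\) fluxes defined above, retained only to make the structure of (33) explicit.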
We obtain a second-order approximation for \(v\) by replacing the fluxes in (34) with the following flux functions:

\[H^{n,x}_{i+1/2,k} =H_{x}(\alpha^{n}_{i+1/2L,k},\alpha^{n}_{i+1/2R,k},v^{n}_{i+1/2L,k},v^{n}_{i+1/2R,k},B^{x}_{i+1/2L,k},B^{x}_{i+1/2R,k}),\]
\[H^{n,y}_{i,k+1/2} =H_{y}(\beta^{n}_{i,k+1/2L},\beta^{n}_{i,k+1/2R},v^{n}_{i,k+1/2L},v^{n}_{i,k+1/2R},B^{y}_{i,k+1/2L},B^{y}_{i,k+1/2R}),\]

where

\[\alpha^{n}_{i+1/2L,k} =\alpha^{n}_{i,k}+\frac{1}{2}\text{ minmod}\left(\alpha^{n}_{i+1,k}-\alpha^{n}_{i,k},\alpha^{n}_{i,k}-\alpha^{n}_{i-1,k}\right),\]
\[\alpha^{n}_{i+1/2R,k} =\alpha^{n}_{i+1,k}-\frac{1}{2}\text{ minmod}\left(\alpha^{n}_{i+2,k}-\alpha^{n}_{i+1,k},\alpha^{n}_{i+1,k}-\alpha^{n}_{i,k}\right),\]
\[\beta^{n}_{i,k+1/2L} =\beta^{n}_{i,k}+\frac{1}{2}\text{ minmod}\left(\beta^{n}_{i,k+1}-\beta^{n}_{i,k},\beta^{n}_{i,k}-\beta^{n}_{i,k-1}\right),\]
\[\beta^{n}_{i,k+1/2R} =\beta^{n}_{i,k+1}-\frac{1}{2}\text{ minmod}\left(\beta^{n}_{i,k+2}-\beta^{n}_{i,k+1},\beta^{n}_{i,k+1}-\beta^{n}_{i,k}\right),\]

and \(\alpha^{n}_{i,k},\beta^{n}_{i,k}\) are computed as

\[\alpha^{n}_{i,k}=\frac{1}{2}(\alpha^{n}_{i,k+\frac{1}{2}}+\alpha^{n}_{i,k-\frac{1}{2}})\text{ and }\beta^{n}_{i,k}=\frac{1}{2}(\beta^{n}_{i+\frac{1}{2},k}+\beta^{n}_{i-\frac{1}{2},k}).\]

Further, \(v^{n}_{i+1/2L,k},v^{n}_{i+1/2R,k},v^{n}_{i,k+1/2L},v^{n}_{i,k+1/2R}\) are defined as in §6.1.

### Second-order adaptive scheme

In a similar manner to what was accomplished in one dimension, we employ an adaptation procedure for the second-order scheme in two dimensions. This ensures that the resulting scheme possesses the crucial well-balanced property. The numerical algorithm reads as:

\[u^{n+1}_{i+1/2,k+1/2} =u^{n}_{i+1/2,k+1/2}-\Delta tG^{n}_{i+1/2,k+1/2},\]
\[v^{n+1}_{i,k} =v^{n}_{i,k}-\lambda\;(H^{n,x}_{i+1/2,k}-H^{n,x}_{i-1/2,k})-\lambda(H^{n,y}_{i,k+1/2}-H^{n,y}_{i,k-1/2}) \tag{34}\]
\[\quad+\Delta tG^{n}_{i+1/2,k+1/2}.\]

At steady state,

\[G^{n}_{i+1/2,k+1/2}=0,\]
\[(H^{n,x}_{i+1/2,k}-H^{n,x}_{i-1/2,k})+(H^{n,y}_{i,k+1/2}-H^{n,y}_{i,k-1/2})-hG^{n}_{i+1/2,k+1/2}=0,\]

which implies

\[(H^{n,x}_{i+1/2,k}-H^{n,x}_{i-1/2,k})+(H^{n,y}_{i,k+1/2}-H^{n,y}_{i,k-1/2})=0.\]

Now define

\[\mathcal{E}^{n}_{i+1/2,k+1/2} :=\sqrt{\left|[H^{n,x}_{i+1/2,k}]+[H^{n,y}_{i,k+1/2}]\right|^{2}+(G^{n}_{i+1/2,k+1/2})^{2}},\]
\[[H^{n,x}_{i+1/2,k}] :=H^{n,x}_{i+1/2,k}-H^{n,x}_{i-1/2,k}\text{ and }[H^{n,y}_{i,k+1/2}]:=H^{n,y}_{i,k+1/2}-H^{n,y}_{i,k-1/2}.\]

Define the steady state indicator on every cell in the \(x\)-direction by

\[\Theta^{n,x}_{i,k}:=\Theta(\mathcal{E}^{n}_{i,k+1/2}),\;\;\mathcal{E}^{n}_{i,k+1/2}=\mathcal{E}^{n}_{i-1/2,k+1/2}+\mathcal{E}^{n}_{i+1/2,k+1/2},\]

and in the \(y\)-direction by

\[\Theta^{n,y}_{i,k}:=\Theta(\mathcal{E}^{n}_{i+1/2,k}),\;\;\mathcal{E}^{n}_{i+1/2,k}=\mathcal{E}^{n}_{i+1/2,k-1/2}+\mathcal{E}^{n}_{i+1/2,k+1/2},\]

or uniformly in both the \(x\)- and \(y\)-directions by

\[\Theta^{n}_{i,k}:=\Theta^{n,y}_{i,k}+\Theta^{n,x}_{i,k}.\]

### Computation of the term B in two dimensions

The technique of including the source term with the convective term is not straightforward in multiple dimensions, and was dealt with in [3, 1] based on the concept of _transport rays_, which are precisely the line segments along which \(f\) is integrated in a suitable sense to obtain the equilibrium solution \(v\). In this strategy, the source term \(f\) is suitably decomposed and included with the convective fluxes in the \(x\)- and \(y\)-directions according to the structure of the transport rays; hence the source term is distributed along the physical flow of sand.
The scheme was shown to be more efficient than the fractional time stepping method in capturing the crests in the case of discontinuous sources. We refer the reader to those papers for details; here we briefly review the construction of the source terms \(B^{x}\) and \(B^{y}\). Let us define

\[g^{x}(x,y)=\int\limits_{0}^{x}f_{1}(z,y)dz\quad\forall y\in[0,1],\]
\[g^{y}(x,y)=\int\limits_{0}^{y}f_{2}(x,z)dz\quad\forall x\in[0,1],\]

where \(f_{1}\) and \(f_{2}\) have to be appropriately chosen such that the source term \(f=f_{1}+f_{2}\). It was seen in [1] that the steady state solution \(v\) was calculated by suitably integrating \(f\) along a subset of the transport ray \(R_{\mathbf{x}}\). Define

\[f_{1}:=f\cos^{2}(\theta),\quad f_{2}:=f\sin^{2}(\theta),\]

where \(\theta\) is the angle which \(R_{\mathbf{x}}=R_{(x,y)}\) makes with the positive \(x\)-axis. In fact, it can be proved that

\[(\cos(\theta),\sin(\theta))=\Big{(}-\frac{\partial d_{\Gamma}}{\partial x},-\frac{\partial d_{\Gamma}}{\partial y}\Big{)}=-\nabla d_{\Gamma}, \tag{35}\]

where the distance function \(d_{\Gamma}(\mathbf{x})\) is calculated by solving

\[|\nabla d_{\Gamma}(\mathbf{x})|=1,\ \mathbf{x}\in\Omega,\quad d_{\Gamma}(\mathbf{x})=0,\ \mathbf{x}\in\Gamma,\]

for which various efficient numerical algorithms are available in the literature, for example the fast sweeping method [32]. Since \(\sin^{2}(\pi-\theta)=\sin^{2}(\pi+\theta)=\sin^{2}(\theta)\) and \(\cos^{2}(\pi-\theta)=\cos^{2}(\pi+\theta)=\cos^{2}(\theta)\), the outward orientation of the transport ray \(R_{\mathbf{x}}\) does not play any role, and the angle \(\theta\) can alternatively be chosen as the angle with the smallest absolute value which the transport ray \(R_{\mathbf{x}}\) makes with the \(x\)-axis. Now, using the definition of \(f_{1},f_{2}\) and (35), we get

\[f_{1}=\Big{(}\frac{\partial d_{\Gamma}}{\partial x}\Big{)}^{2}f,\quad f_{2}=\Big{(}\frac{\partial d_{\Gamma}}{\partial y}\Big{)}^{2}f.\]

Figure 3 shows the transport rays for the unit square \([0,1]\times[0,1]\) in the open table problem, where \(\Gamma=\partial\Omega\). In the triangle \(\{x<y,x+y>1\}\), the minimal distance of any point \(\mathbf{x}=(x,y)\) from the boundary \(\partial\Omega\) is given by the distance of the point \((x,y)\) from the line \(y=1\). Hence \(d_{\Gamma}(x,y)=1-y\) and

\[\cos(\theta)=-\frac{\partial d_{\Gamma}}{\partial x}=0,\quad\sin(\theta)=-\frac{\partial d_{\Gamma}}{\partial y}=1,\]

which implies that the transport rays in this region make an angle \(\frac{\pi}{2}\) with the positive \(x\)-axis and hence are vertical. In this case,

\[f_{1}=f\Big{(}\frac{\partial d_{\Gamma}}{\partial x}\Big{)}^{2}=0,\quad f_{2}=f\Big{(}\frac{\partial d_{\Gamma}}{\partial y}\Big{)}^{2}=f.\]

Similarly, the rays are vertical in the triangle \(\{x>y,x+y<1\}\), making an angle \(\dfrac{3\pi}{2}\) with the positive \(x\)-axis, and are horizontal in the triangles \(\{x<y,x+y<1\}\) and \(\{x>y,x+y>1\}\), making angles \(\theta=\pi\) and \(0\), respectively, with the positive \(x\)-axis. Therefore, we define

\[f_{1}(x,y)=\left\{\begin{array}{ccc}f(x,y)&\text{if}&\{x\geq y,x+y\geq 1\}\cup\{x\leq y,x+y\leq 1\},\\ 0&\text{otherwise},&\end{array}\right. \tag{36}\]

and

\[f_{2}(x,y)=\left\{\begin{array}{ccc}f(x,y)&\text{if}&\{x\geq y,x+y\leq 1\}\cup\{x\leq y,x+y\geq 1\},\\ 0&\text{otherwise}.&\end{array}\right. \tag{37}\]

Now, we give an example of the partially open table case. Let \(\Gamma=\{0\leq x\leq 0.5,\,y=0\}\). Figure 4 shows the distribution of the transport rays.
For \(x\leq 0.5\), the minimal distance of any point \(\mathbf{x}=(x,y)\) from the boundary \(\Gamma\) is given by the distance of the point \((x,y)\) from the line \(y=0\). Hence \(d_{\Gamma}(x,y)=y\) and

\[\cos(\theta)=-\frac{\partial d_{\Gamma}}{\partial x}=0,\quad\sin(\theta)=-\frac{\partial d_{\Gamma}}{\partial y}=-1.\]

Thus, the transport ray \(R_{(x,y)}\) through any such point \((x,y)\) is vertical, making an angle \(\frac{3\pi}{2}\) with the positive \(x\)-axis. On the right side of the line \(x=0.5\), the point \((x,y)\) attains its minimal distance from the open boundary at the corner point \((0.5,0)\), and hence \(d_{\Gamma}(x,y)=\sqrt{(x-0.5)^{2}+y^{2}}\), which implies that

\[\cos(\theta)=-\frac{\partial d_{\Gamma}}{\partial x}=\frac{0.5-x}{d_{\Gamma}(\mathbf{x})}<0,\ \sin(\theta)=-\frac{\partial d_{\Gamma}}{\partial y}=-\frac{y}{d_{\Gamma}(\mathbf{x})}<0,\ \theta=\tan^{-1}\Big{(}\frac{0-y}{0.5-x}\Big{)}.\]

The exact steady state solutions are given by

\[u_{s}(x,y)=\left\{\begin{array}{ccc}y&\text{if}&x\leq 0.5,\\ \sqrt{(x-0.5)^{2}+y^{2}}&\text{if}&x>0.5,\end{array}\right.\quad v_{s}(x,y)=\left\{\begin{array}{ccc}1-y&\text{if}&x\leq 0.5,\\ \dfrac{1}{d_{\Gamma}(x,y)}\int_{d_{\Gamma}(x,y)}^{l(x,y)}\rho d\rho&\text{if}&x>0.5,\end{array}\right. \tag{38}\]

where

\[l(x,y)=\left\{\begin{array}{ccc}\sqrt{(1-0.5)^{2}+(\frac{0.5y}{x-0.5})^{2}}&\text{if}&\frac{y}{x-0.5}\leq\frac{1}{0.5},\\ \sqrt{1+(\frac{x-0.5}{y})^{2}}&\text{if}&\frac{y}{x-0.5}>\frac{1}{0.5},\end{array}\right.\quad d_{\Gamma}(x,y)=\sqrt{(x-0.5)^{2}+y^{2}}.\]

As earlier, the transport ray is directed towards the point \((0.5,0)\) (see Figure 4); the coordinate system has to be shifted to the point \((0.5,0)\) and the transport ray extended into the third quadrant to find \(\theta\). Now, define

\[f_{1}:=f\cos^{2}\theta,\quad f_{2}:=f\sin^{2}\theta.\]

Since \(\cos^{2}(\theta-\pi)=\cos^{2}(\theta)\) and \(\sin^{2}(\theta-\pi)=\sin^{2}(\theta)\), \(\theta\) can alternatively be taken as \(\theta^{\prime}=\theta-\pi\), which is the angle with the smallest absolute value that the transport ray \(R_{\mathbf{x}}\) makes with the \(x\)-axis, as shown in Figure 4. Thus, on the right-hand side of the line \(\{x=0.5\}\), all the transport rays converge to the extremal point \(P=(0.5,0)\), creating a singularity.

Now we proceed to define \(g^{x},g^{y}\) numerically, as was done in one dimension. We construct two functions \(B^{x}_{i,k}\), \(B^{y}_{i,k}\) such that, for each \(i,k=1,\ldots,M\),

\[B^{x}_{i,k}=\int_{0}^{x_{i}}f_{1}(x,y_{k})dx=g^{x}(x_{i}), \tag{39}\]

and

\[B^{y}_{i,k}=\int_{0}^{y_{k}}f_{2}(x_{i},y)dy=g^{y}(y_{k}); \tag{40}\]

to approximate these integrals, we use the composite trapezoidal rule.

### Boundary conditions

The boundary conditions depend on the problem under consideration. In this article, we consider two types of problems, namely the open table and the partially open table problems. This gives rise to two cases of boundary conditions, which we detail below.

**Case 1** (Open table problem) In this case, for computing the \(u\) variable, we impose the boundary conditions weakly as follows:

\[G^{n}_{\frac{1}{2},k+\frac{1}{2}}=G^{n}_{M+\frac{1}{2},k+\frac{1}{2}}=0,\quad k\in\mathcal{M},\]
\[G^{n}_{i+\frac{1}{2},\frac{1}{2}}=G^{n}_{i+\frac{1}{2},M+\frac{1}{2}}=0,\quad i\in\mathcal{M}.\]

Further, to compute the fluxes at the interior vertices of the boundary cells, we require the slopes in the boundary cells; we set these slopes to zero. The boundary conditions for \(v\) are also imposed weakly, by determining the boundary fluxes \(H_{x}\) and \(H_{y}\).
As we set the slopes in all boundary cells to zero, the corresponding fluxes are given by

\[H^{n,x}_{1/2,k} =-\alpha^{n}_{1,k}v^{n}_{1,k}-B^{x}_{1,k},\quad H^{n,x}_{M+1/2,k}=-\alpha^{n}_{M,k}v^{n}_{M,k}-B^{x}_{M,k},\]
\[H^{n,y}_{i,1/2} =-\beta^{n}_{i,1}v^{n}_{i,1}-B^{y}_{i,1},\quad H^{n,y}_{i,M+1/2}=-\beta^{n}_{i,M}v^{n}_{i,M}-B^{y}_{i,M}.\]

**Case 2** (Partially open table problem) We consider the partially open table problem in 2D, where we choose the domain with wall boundaries as given in Fig. 4 (see [1]). In this case, the boundary conditions for computing \(u\) are given as:

* Bottom horizontal boundary \[u^{n+1}_{i+\frac{1}{2},\frac{1}{2}}=\left\{\begin{array}{ccc}0&\text{if}&x_{i+\frac{1}{2}}\leq 0.5,\\ u^{n}_{i+\frac{1}{2},\frac{1}{2}}+h\max(\beta^{n}_{i+\frac{1}{2},2},0)&\text{if}&x_{i+\frac{1}{2}}>0.5,\end{array}\right.\quad i\in\mathcal{M}\setminus\{0,M\}.\]
* Left vertical boundary \[u^{n+1}_{\frac{1}{2},k+\frac{1}{2}}=u^{n}_{\frac{1}{2},k+\frac{1}{2}}+h\max(\alpha_{2,k+\frac{1}{2}},0),\quad k\in\mathcal{M}\setminus\{0,M\}.\]
* Top horizontal boundary \[u^{n+1}_{i+\frac{1}{2},M+\frac{1}{2}}=u^{n}_{i+\frac{1}{2},M-\frac{1}{2}}+h\max(\beta_{i+\frac{1}{2},M-1},0),\quad i\in\mathcal{M}\setminus\{0,M\}.\]
* Right vertical boundary \[u^{n+1}_{M+\frac{1}{2},k+\frac{1}{2}}=u^{n}_{M-\frac{1}{2},k+\frac{1}{2}}+h\max(\alpha_{M-1,k+\frac{1}{2}},0),\quad k\in\mathcal{M}\setminus\{0,M\}.\]

Finally, at the corner vertices of the rectangular domain, the solution \(u\) is evolved by taking the average of the solutions computed through the horizontal and vertical directions. Next, the boundary conditions for \(v\) are imposed through the numerical flux, and are given by

\[H^{n,x}_{\frac{1}{2},k}=\begin{cases}-\alpha_{1,k}v_{1,k}-B^{x}(x_{1},y_{k}),&\text{if }\alpha_{1,k}\geq 0,\\ -B^{x}(x_{0},y_{k})&\text{otherwise},\end{cases}\quad k\in\mathcal{M}\setminus\{0\}, \tag{41}\]

\[H^{n,x}_{M+\frac{1}{2},k}=\begin{cases}-\alpha_{M,k}v_{M,k}-B^{x}(x_{M},y_{k}),&\text{if }\alpha_{M,k}\leq 0,\\ -B^{x}(x_{M+1},y_{k})&\text{otherwise},\end{cases}\quad k\in\mathcal{M}\setminus\{0\}, \tag{42}\]

\[H^{n,y}_{i,M+\frac{1}{2}}=\begin{cases}-\beta_{i,M}v_{i,M}-B^{y}(x_{i},y_{M}),&\text{if }\beta_{i,M}\leq 0,\\ -B^{y}(x_{i},y_{M+1})&\text{otherwise},\end{cases}\quad i\in\mathcal{M}\setminus\{0\}, \tag{43}\]

\[\text{For }x_{i+\frac{1}{2}}\geq 0.5,\quad H^{n,y}_{i,\frac{1}{2}}=\begin{cases}-\beta_{i,1}v_{i,1}-B^{y}(x_{i},y_{1}),&\text{if }\beta_{i,1}\geq 0,\\ -B^{y}(x_{i},y_{-1})&\text{otherwise},\end{cases}\quad i\in\mathcal{M}\setminus\{0\},\]

and, for \(x_{i+\frac{1}{2}}<0.5\),

\[H^{n,y}_{i,\frac{1}{2}}=-\beta_{i,1}v_{i,1}-B^{y}(x_{i},y_{1}),\quad i\in\mathcal{M}\setminus\{0\}. \tag{44}\]

Note that all the boundary conditions are imposed in each stage of the RK time stepping.

## 7 Numerical Experiments

We denote by FO, SO and SO-\(\Theta\) the first-order scheme, the second-order scheme and the second-order adaptive scheme, respectively. Throughout this section, we denote by \(||\cdot||_{\infty}\) and \(||\cdot||_{1}\) the supremum norm and the \(L^{1}\) norm, respectively.

### Examples in one-dimension (1D)

In this section, we deal with one-dimensional examples, specifically addressing the problem (6)-(7) with the source function given by

\[f(x)=0.5,\text{ for all }x\in[0,1], \tag{45}\]

in the computational domain \([0,1]\).
Further, \(u_{s}\) and \(v_{s}\) denote the exact steady state solutions and are given by

\[u_{s}(x) =\min(x,1-x),\,x\in[0,1], \tag{46}\]
\[v_{s}(x) =\begin{cases}\int_{x}^{\frac{1}{2}}f(\psi)d\psi&\text{if }x\in[0,\frac{1}{2}],\\ \int_{\frac{1}{2}}^{x}f(\psi)d\psi&\text{if }x\in[\frac{1}{2},1],\end{cases}\]

where \(f\) is the source function given in equation (45). We explore a variety of test cases in 1D to understand the significance of the proposed SO-\(\Theta\) scheme. The boundary conditions in each case are employed as detailed in Sections 2.1 and 2.3. If not explicitly specified, the initial conditions are set as \(u=0\) and \(v=0\) in \([0,1]\).

**Example 1 (convergence test case-1D)** In this test case we verify the experimental order of convergence (E.O.C.) of the proposed SO-\(\Theta\) scheme away from the steady state and compare it with that of the FO and SO schemes. The source function is given in (45). We compute the solution at time \(T=1.3\); the numerical solutions are evolved with \(\lambda=\Delta t/\Delta x=0.3\). Since the exact solution of the problem is not available away from the steady state, the E.O.C. is computed using a reference solution, which is obtained by the SO scheme with a fine mesh of size \(\Delta x=1/8000\). We denote by \(u_{r},v_{r}\) the reference solutions corresponding to \(u\) and \(v\), respectively. The results are given in Table 1. From the definition of \(\Theta\) in (28), it becomes apparent that, away from the steady state, the SO and SO-\(\Theta\) schemes approach each other. This observation aligns with the findings from the numerical experiment, where both schemes exhibit nearly identical convergence rates.

Next, we compare the results for a larger \(\lambda=0.45\), a value within the permissible limit of Theorem 1. The solutions corresponding to the FO, SO and SO-\(\Theta\) schemes are compared against the same reference solution mentioned earlier, where we use \(\Delta x=1/100\). The results are given in Fig. 5. It is observed that, even though the FO scheme is stable with this \(\lambda=0.45\), it produces oscillations. This is in contrast to the SO and SO-\(\Theta\) schemes, and indicates the robustness of the proposed SO-\(\Theta\) scheme with larger values of \(\lambda\) away from the steady state. For smaller times, both the FO and SO schemes oscillate; as the SO scheme moves faster towards the steady state, the oscillations reduce in the second-order scheme. The first-order scheme, however, has more diffusion and takes a longer time to recover the smooth steady state solution.

**Example 2 (well-balance test case-1D)** The purpose of this example is to illustrate the well-balance property of the SO-\(\Theta\) scheme for the problem (6)-(7). The simulations are carried out for a single time step, i.e., \(T=\Delta t\), with \(\lambda=0.45\), where the initial condition is set as the exact steady state solution (46). We compute the errors \(||u_{\Delta x}-u_{s}||_{\infty}\) and \(||v_{\Delta x}-v_{s}||_{1}\) for various values of \(\Delta x\) and present the results in Table 2. It is observed that the SO-\(\Theta\) scheme achieves the well-balance property, consistent with the FO scheme. In contrast, the SO scheme fails to capture this well-balance property. This observation agrees with the result in Lemma 1.

\begin{table} \begin{tabular}{|l||l|l|l|l|} \hline \(\Delta x\) & \(||u_{\Delta x}-u_{r}||_{\infty}\) & E.O.C. & \(||v_{\Delta x}-v_{r}||_{1}\) & E.O.C.
\\\\ \hline \multicolumn{5}{|c|}{FO scheme} \\\\ \hline 0.025 & 0.0122 & - & 0.0085 & - \\\\ \hline 0.0125 & 0.0069 & 0.8122 & 0.0052 & 0.7100 \\\\ \hline 0.00625 & 0.0040 & 0.7728 & 0.0032 & 0.7022 \\\\ \hline 0.003125 & 0.0023 & 0.7941 & 0.0019 & 0.7899 \\\\ \hline 0.0015625 & 0.0013 & 0.8098 & 0.0011 & 0.7655 \\\\ \hline \multicolumn{5}{|c|}{SO scheme} \\\\ \hline 0.025 & 0.0058 & - & 0.0049 & - \\\\ \hline 0.0125 & 0.0028 & 1.0311 & 0.0024 & 1.0023 \\\\ \hline 0.00625 & 0.0014 & 1.0422 & 0.0012 & 1.0237 \\\\ \hline 0.003125 & 0.0007 & 1.0483 & 0.0006 & 1.0473 \\\\ \hline 0.0015625 & 0.0003 & 1.0656 & 0.0003 & 0.9933 \\\\ \hline \multicolumn{5}{|c|}{SO-\(\Theta\) scheme} \\\\ \hline 0.025 & 0.0062 & - & 0.0050 & - \\\\ \hline 0.0125 & 0.0030 & 1.0674 & 0.0025 & 1.0395 \\\\ \hline 0.00625 & 0.0014 & 1.0655 & 0.0012 & 1.0031 \\\\ \hline 0.003125 & 0.0007 & 1.0633 & 0.0006 & 1.0423 \\\\ \hline 0.0015625 & 0.0003 & 1.0749 & 0.0003 & 1.0053 \\\\ \hline \end{tabular} \end{table} Table 1: Example 1 (1D). Errors of the numerical solutions produced by the FO, SO and SO-\(\Theta\) schemes, computed up to time \(t=1.3\) with \(\lambda=0.3\).

**Example 3 (steady state solution test case-1D)** In this example, we compute the numerical solutions with the initial condition \(u_{0}=0\) and \(v_{0}=0\), and evolve them up to the steady state level using the FO, SO and SO-\(\Theta\) schemes. For all three schemes, the numerical solutions are computed at \(T=450\) with \(\lambda=0.45\) and \(\Delta x=1/100\). The numerical results are then compared with the exact steady state solution (46), which is plotted through interpolation on the same mesh \(\Delta x=1/100\); the results are given in Fig. 6. The obtained results reveal that the FO and the adapted SO-\(\Theta\) schemes align remarkably well with the steady state solution. On the other hand, the SO scheme successfully reaches the steady state for \(u\), as shown in Fig. 6(a) and (b), but exhibits poor performance for \(v\), as shown in Fig. 6(c) and (d). This emphasizes the significance of the adaptation strategy when employing a second-order scheme to accurately capture the steady state solution.

**Example 4 (error versus number of iterations plots-1D)** In this example, we demonstrate the efficiency of the SO-\(\Theta\) scheme in reaching the steady state solution by showing that it takes a smaller number of iterations than the FO scheme. To assess this, we plot the errors \(||u_{\Delta x}-u_{s}||_{\infty}\) and \(||v_{\Delta x}-v_{s}||_{1}\) at each iteration against the number of iterations for solving the problem (6)-(7). Here also we keep the same \(\lambda=0.45\) with a mesh size of \(\Delta x=1/100\). We compare the outcomes of the FO, SO and SO-\(\Theta\) schemes. The results are depicted in Fig. 7. It is crucial to emphasize that the SO-\(\Theta\) scheme reverts to first order at the steady state. When comparing the solution \(u\), the SO scheme shows better performance than both the FO and SO-\(\Theta\) schemes, but not for \(v\). The key challenge here is in achieving the steady state for \(v\). In this context, the results reveal the efficiency of the SO-\(\Theta\) scheme, which converges to the steady state with fewer iterations than the FO scheme; see Fig. 7(a) and (b). Significantly, the non-adaptive SO scheme encounters difficulties in reaching the steady state for \(v\). In conclusion, the SO-\(\Theta\) scheme performs well compared to the FO scheme.
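The error-versus-iterations study of Example 4 (and of Example 6 below) can be organized by a simple driver of the following form, where `step` is a placeholder for one full time step of the FO, SO or SO-\(\Theta\) scheme; all names and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

def march_to_steady(step, alpha, v, u_exact, v_exact, dx,
                    n_max=200_000, tol=1e-12):
    # `step` performs one time step of the chosen scheme and returns
    # the updated pair (alpha, v).
    err_u, err_v = [], []
    for _ in range(n_max):
        alpha, v = step(alpha, v)
        u = dx * np.cumsum(alpha)                     # u_{i+1/2} = dx * sum_j alpha_j
        err_u.append(np.max(np.abs(u - u_exact)))     # sup-norm error
        err_v.append(dx * np.sum(np.abs(v - v_exact)))  # discrete L1 error
        if err_u[-1] < tol and err_v[-1] < tol:
            break
    return err_u, err_v
```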
\begin{table} \begin{tabular}{|l|l|l|} \hline \(\Delta x\) & \(||u_{\Delta x}-\overline{u}||_{\infty}\) & \(||v_{\Delta x}-\overline{v}||_{1}\) \\\\ \hline \multicolumn{3}{|c|}{FO scheme} \\\\ \hline 0.02 & 6.9388e-18 & 7.0546e-18 \\\\ \hline 0.01 & 1.3878e-17 & 8.7499e-17 \\\\ \hline 0.005 & 1.3878e-17 & 9.4632e-17 \\\\ \hline 0.0025 & 6.9389e-18 & 1.2179e-15 \\\\ \hline \multicolumn{3}{|c|}{SO scheme} \\\\ \hline 0.02 & 6.9389e-18 & 0.0001 \\\\ \hline 0.01 & 6.9389e-18 & 3.4875e-05 \\\\ \hline 0.005 & 1.3878e-17 & 8.7188e-06 \\\\ \hline 0.0025 & 6.9389e-18 & 2.1797e-06 \\\\ \hline \multicolumn{3}{|c|}{SO-\(\Theta\) scheme} \\\\ \hline 0.02 & 6.9389e-18 & 5.3950e-17 \\\\ \hline 0.01 & 6.9389e-18 & 6.9766e-17 \\\\ \hline 0.005 & 1.3878e-17 & 7.2185e-17 \\\\ \hline 0.0025 & 6.9389e-18 & 6.6996e-16 \\\\ \hline \end{tabular} \end{table} Table 2: Example 2 (1D). Well-balance test, full source case, for the FO, SO and SO-\(\Theta\) schemes, computed up to time \(T=\Delta t\) with a time step of \(\Delta t=0.45\Delta x\).

Figure 6: Example 3 (1D): Full source case: \(f(x)=0.5,\forall x\in[0,1]\). Solutions are computed with \(\Delta x=1/100,\Delta t=0.45\Delta x\).

Figure 7: Example 4 (1D): Full source case: \(f(x)=0.5,\forall x\in[0,1]\). Solutions are computed with \(\Delta x=1/100,\lambda=0.45\).

### Examples in two-dimensions (2D)

We perform numerous numerical experiments in a two-dimensional setting, employing the computational domain \([0,1]\times[0,1]\). As mentioned previously, we discretize this domain into uniform Cartesian grids with \(h=\Delta x=\Delta y\). Our focus here centers on investigating the SO-\(\Theta\) scheme for three specific types of problems in the two-dimensional context: those involving a full source, scenarios with a discontinuous source, and problems related to partially open boundaries, detailed in [1, 3]. The boundary conditions for each example in these test cases are described in Section 6.4, and the initial conditions are set as \(u=0\) and \(v=0\) in \((0,1)\times(0,1)\). In all the examples below we take the source function as \(f=0.5\) on \([0,1]\times[0,1]\).

**Example 5 (convergence test case-2D)** Here, for the open table problem, we analyze the E.O.C. of the SO-\(\Theta\) scheme with respect to the steady state solution and compare it with the FO and SO schemes. The numerical solutions are computed near the steady state by running the simulation until the time \(T=26\), as studied in [1, 3]. Considering the \(\lambda=0.7\) used in [3] for the FO scheme, we adopt a natural choice for the SO and SO-\(\Theta\) schemes with \(\lambda=0.35\). Here, for a fair comparison, we use the same \(\lambda=0.35\) for all three schemes FO, SO and SO-\(\Theta\). The E.O.C. is computed as

\[\varphi(h):=\log\Bigl{(}e(h)/e(h/2)\Bigr{)}/\log 2, \tag{47}\]

where \(e(h)\) denotes \(||u_{\Delta x}-u_{s}||_{\infty}\) and \(||v_{\Delta x}-v_{s}||_{1}\) in the respective cases. The outcomes presented in Table 3 reveal that the SO-\(\Theta\) scheme exhibits marginally superior convergence when compared with the FO scheme, despite the fact that the SO-\(\Theta\) scheme reduces to the FO scheme in the vicinity of the steady state. On the other hand, it is evident that the SO scheme displays a diminished rate of convergence in comparison to both the FO and SO-\(\Theta\) schemes for \(v\). This emphasizes the robustness and significance of the proposed SO-\(\Theta\) scheme.
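For reference, the E.O.C. (47) can be evaluated from a list of errors on successively halved meshes as follows; the snippet approximately reproduces the FO \(u\)-column of Table 3 (small differences come from the rounding of the tabulated errors).

```python
import numpy as np

def eoc(errors):
    # Experimental order of convergence (47):
    # phi(h) = log( e(h) / e(h/2) ) / log 2.
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(2.0)

print(eoc([0.0662, 0.0334, 0.0167, 0.0084]))  # approx [0.99, 1.00, 0.99]
```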
Next, to visualize the solution near the steady state, we run the simulation till the time \(T=196\), with the same \(\lambda=0.35\) and \(h=1/50\). We plot the approximate solutions obtained using the FO and SO-\(\Theta\) schemes together with the exact steady state solutions in Fig. 8, where the exact steady state solutions are plotted with linear interpolation on a finer mesh of size \(\Delta x=0.005\). Since the approximate solutions are very close to the steady state solutions, visual distinctions are difficult to discern from the given plots. **Example 6(error versus number of iterations-2D)**: We examine the error versus number of iterations plot obtained using the FO, SO, and SO-\(\Theta\) schemes for the open table problem. To conduct this comparison, we have taken, as in Example 5, \(\lambda=0.35\) with a mesh size of \(h=1/50\). Errors \(||u_{\Delta x}-u_{s}||_{\infty}\) and \(||v_{\Delta x}-v_{s}||_{1}\) are computed at each iteration and plotted against the number of iterations. The results depicted in Fig. 9 clearly indicate that the SO-\(\Theta\) scheme outperforms the FO and SO schemes. In other words, the SO-\(\Theta\) scheme converges to the steady state more rapidly in comparison to the FO and SO schemes, demonstrating its superior efficiency. **Example 7(solutions at finite time-2D)** Here again we consider the open table problem and compare the solution profiles at a finite time, specifically at \(T=3.0\). The main aim here is to show that the SO-\(\Theta\) scheme performs well even for larger \(\lambda\). We compute the approximate solution \(v_{h}\) using both the FO and SO-\(\Theta\) schemes and plot it with contour curves. The computations are performed for different values of \(\lambda\): 0.35, 0.45, and 0.7, all with a mesh size of \(h=1/50\); the results are given in Fig. 11. For \(\lambda=0.35\), we see a clear distinction between the two schemes. In the case of \(v\), the FO scheme exhibits oscillations, particularly noticeable when observing the contour plots along the lines \(\{(x,y):x=0.5,0\leq y\leq 1\}\) and \(\{(x,y):y=0.5,0\leq x\leq 1\}\) in Fig. 11(a) and (b). In contrast, the SO-\(\Theta\) scheme generates fewer oscillations, indicating its robustness. Furthermore, as the value of \(\lambda\) increases to 0.45 and 0.7, the oscillations intensify in the FO scheme. However, the SO-\(\Theta\) scheme remains stable and less oscillatory. This is evident in Fig. 11(b), (d) and (f). In the case of \(u\), we observe the same behaviour at time \(T=1.2\), as shown in Fig. 10. This comparison highlights the robustness of the SO-\(\Theta\) scheme in producing accurate solutions at finite times, emphasizing its significance in numerical simulations. **Example 8(discontinuous source test case-2D)** In this example we consider the open table problem in the computational domain \([0,1]\times[0,1]\) with a discontinuous source function given by (see Example 2 of [4], p. 1110) \[f(x,y)=\begin{cases}0.5&\text{if }(x,y)\in D_{f},\\ 0&\text{elsewhere},\end{cases} \tag{48}\] where \(D_{f}:=[0.1,0.3]\times[0.5,0.7]\cup[0.5,0.7]\times[0.7,0.9]\). The decomposition of the source function \(f\) is as given in [3]. As explained in [3], the discretization of the domain \([0,1]\times[0,1]\) is carefully done in such a way that the corners of the interior squares (see Fig. 21 in [3]) fall on the centers of the computational cells. For this reason, in this test case we choose a mesh size of \(h=1/55\). For more details on this experiment we refer to [3].
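To make the mesh-alignment remark explicit: with cell-centred unknowns and \(h=1/55\), the corner coordinates \(0.1,0.3,0.5,0.7,0.9\) of \(D_{f}\) all coincide with cell centres, since, e.g., \(0.1=(5+0.5)h\). The following is a small sketch (our own notation, not code from [3]) of how the discrete source (48) can be laid out on such a grid:

```python
import numpy as np

h = 1.0 / 55                              # mesh chosen so that the corners of
x_c = (np.arange(55) + 0.5) * h           # D_f fall on cell centres
X, Y = np.meshgrid(x_c, x_c, indexing="ij")

in_square_1 = (0.1 <= X) & (X <= 0.3) & (0.5 <= Y) & (Y <= 0.7)
in_square_2 = (0.5 <= X) & (X <= 0.7) & (0.7 <= Y) & (Y <= 0.9)
f = np.where(in_square_1 | in_square_2, 0.5, 0.0)   # source term of Eq. (48)
```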
We compute the solutions at large time \(T=200\), where they reach the steady state, with \(\lambda=0.35\). The results are presented in Fig. 12. It is evident that the SO-\(\Theta\) scheme produces a sharper resolution near the crest formation compared to the FO scheme. \begin{table} \begin{tabular}{|l||l|l|l|l|} \hline h & \(||u_{\Delta x}-u_{s}||_{\infty}\) & E.O.C. & \(||v_{\Delta x}-v_{s}||_{1}\) & E.O.C. \\ \hline \multicolumn{5}{|c|}{FO scheme} \\ \hline 0.05 & 0.0662 & - & 0.0031 & - \\ \hline 0.025 & 0.0334 & 0.9841 & 0.0009 & 1.7513 \\ \hline 0.0125 & 0.0167 & 1.0037 & 0.0002 & 1.8799 \\ \hline 0.00625 & 0.0084 & 0.9876 & 6.4819e-05 & 1.9368 \\ \hline \multicolumn{5}{|c|}{SO scheme} \\ \hline 0.05 & 0.0599 & - & 0.0038 & - \\ \hline 0.025 & 0.0299 & 0.9992 & 0.0017 & 1.1327 \\ \hline 0.0125 & 0.0150 & 0.9994 & 0.0008 & 1.0709 \\ \hline 0.00625 & 0.0075 & 0.9986 & 0.0004 & 1.0385 \\ \hline \multicolumn{5}{|c|}{SO-\(\Theta\) scheme} \\ \hline 0.05 & 0.0600 & - & 0.0018 & - \\ \hline 0.025 & 0.0300 & 0.9995 & 0.0005 & 1.9026 \\ \hline 0.0125 & 0.0150 & 0.9991 & 0.0001 & 1.9178 \\ \hline 0.00625 & 0.0075 & 0.9984 & 3.3913e-05 & 1.9059 \\ \hline \end{tabular} \end{table} Table 3: Example 5(2D) Errors of numerical solutions produced by FO, SO and SO-\(\Theta\) schemes, computed up to time \(T=26\) with a time step of \(\Delta t=0.35h\). Figure 8: Example 5(2D) Solution near the steady state computed with \(h=1/50\) and \(\Delta t=0.35h\) at time \(T=196\). **Example 9(partially open table problem-2D)** We consider the domain \((0,1)\times(0,1)\) with the boundary walls \(\Gamma_{w}\) and open boundary \(\Gamma_{o}\) as given in Fig. 4. In this scenario, we aim to solve the problem defined by equations (1)-(4), with a given source function \[f(x,y)=0.5\text{ for all }(x,y)\in[0,1]\times[0,1].\] Further details about the source decomposition and problem description can be found in [3]. The computations are performed with a mesh size \(h=1/50\) and \(\lambda=0.1\). It is worth noting that we have opted here for a relatively small value of \(\lambda\) in comparison to the previous examples. This is due to the fact that the solution \(v\) takes larger values, and \(\lambda\) has to satisfy the condition \(\lambda\max_{i}v_{i}^{n}\leq 1/2\) of Theorem 1. The simulations are run using both the FO and SO-\(\Theta\) schemes until a large time \(T=200\), allowing the numerical solutions to approach the steady state. The exact steady state solution is available in this case and is given in expression (38), which is plotted using linear interpolation on a mesh of size \(h=1/200\). The comparison of numerical solutions obtained from the FO and SO-\(\Theta\) schemes with the exact steady state solutions is presented in Figs. 13 and 14. The results are visualized using 40 contour curves in Fig. 14. Notably, in the vicinity of the point \((0.5,0)\) where the solution \(v_{h}\) exhibits a singularity, the SO-\(\Theta\) scheme provides significantly better resolution compared to the FO scheme. This enhancement is particularly evident when comparing the numerical solutions with the exact steady state solution. These results clearly demonstrate the advantage of the SO-\(\Theta\) scheme over the FO scheme in the partially open table test case. ## 8 Conclusion In this study, we address the challenges associated with numerically approximating the Hadeler and Kuttler (HK) model, a complex system of non-linear partial differential equations describing granular matter dynamics.
Focusing on high-order schemes, we address issues such as initial oscillations and delays in reaching steady states. We present a second-order scheme that incorporates a MUSCL-type spatial reconstruction and a strong stability preserving Runge-Kutta time-stepping method, building upon the foundational first-order scheme. Through meticulous adaptation, specifically employing a modified limitation strategy during linear reconstruction, our scheme achieves well-balancedness. We extend our analysis to two dimensions and showcase the effectiveness of our adaptive scheme through numerical examples. Notably, our resulting scheme significantly reduces initial oscillations, reaches the steady state solution faster than the first-order scheme, and provides a sharper resolution of the discrete steady state solution. ## Acknowledgement This work was done while one of the authors, G. D. Veerappa Gowda, was a Raja Ramanna Fellow at TIFR-Centre for Applicable Mathematics, Bangalore. The work of Sudarshan Kumar K. is supported by the Science and Engineering Research Board, Government of India, under MATRICS project no. MTR/2017/000649. Figure 9: Example 6(2D) Full source test case with \(h=1/50\) and \(\Delta t=0.35h\). Error versus number of iterations plots. Figure 10: Example 7(2D) Solutions computed at finite time \(T=1.2\) with a mesh size of \(h=1/50\), using values of \(\lambda\): 0.35, 0.45 and 0.7. The contour plots use 40 contour curves. Figure 11: Example 7(2D) Solutions computed at finite time \(T=3.0\) with a mesh size of \(h=1/50\), using values of \(\lambda\): 0.35, 0.45 and 0.7. The contour plots use 40 contour curves.
2304.11021
Mutually avoiding Eulerian circuits
Two Eulerian circuits, both starting and ending at the same vertex, are avoiding if at every other point of the circuits they are at least distance 2 apart. An Eulerian graph which admits two such avoiding circuits starting from any vertex is said to be doubly Eulerian. The motivation for this definition is that the extremal Eulerian graphs, i.e. the complete graphs on an odd number of vertices and the cycles, are not doubly Eulerian. We prove results about doubly Eulerian graphs and identify those that are the `densest' and `sparsest' in terms of the number of edges.
Grahame Erskine, Terry Griggs, Robert Lewis, James Tuite
2023-04-13T16:05:00Z
http://arxiv.org/abs/2304.11021v1
# Mutually avoiding Eulerian circuits ###### Abstract Two Eulerian circuits, both starting and ending at the same vertex, are _avoiding_ if at every other point of the circuits they are at least distance 2 apart. An Eulerian graph which admits two such avoiding circuits starting from any vertex is said to be _doubly Eulerian_. The motivation for this definition is that the extremal Eulerian graphs, i.e. the complete graphs on an odd number of vertices and the cycles, are not doubly Eulerian. We prove results about doubly Eulerian graphs and identify those that are the "densest" and "sparsest" in terms of the number of edges. + Footnote †: Keywords: Eulerian circuit; avoidance ## 1 Introduction The problem of traversing a network, subject to certain constraints, in an efficient manner is a fundamental concept in graph theory and related disciplines. Examples such as the _travelling salesman_ problem are well known, where the constraint is to visit each node of the network at least once (see for example [3, Ch.15]). A related problem is known as the _Chinese postman_ or _route optimisation_ problem [4] (also see [3, Ch.14]), where the constraint is to traverse each edge in the network at least once. The route optimisation problem is clearly trivial if the graph to be traversed is Eulerian. In that case, we propose here an extension of the problem to the case where the network is to be traversed by _two_ postmen simultaneously, both starting and finishing at the same node on the network, and both traversing each edge exactly once. The constraint we impose is that the two postmen should not be aware of each other's position on the route, except at the start and end points. By this we mean that except at the start and end points, at any given point in time the two postmen should neither be at the same vertex nor at adjacent vertices. Of course, we assume that each postman will traverse exactly one edge in one unit of time. One might ask why we do not allow the postmen to be at adjacent vertices: one motivation is as follows. Consider the simple case of a cycle graph of even order. Clearly the two postmen would have to set off in opposite directions round the cycle, and would therefore meet at the opposite vertex. But in the case of an odd cycle, the two would never be at the same vertex, but would 'meet' in the sense that they would traverse the same edge in opposing directions. Our insistence that the postmen are not allowed to be at adjacent vertices means that the situation for odd and even cycles is consistent. Informally, the two postmen can neither meet at the same vertex nor be able to see one another at adjacent vertices by looking along an edge. We formulate the problem using standard terminology from graph theory; for definitions and notation not noted here see a standard text, for example [1]. Throughout, let \(G=(V,E)\) be a connected graph without loops and multiple edges where \(V\) is the set of vertices and \(E\) is the set of edges. A _trail_ in a graph is a sequence of vertices \(v_{1},v_{2},\ldots v_{m}\) such that \(v_{i}\) is adjacent to \(v_{i+1}\) for \(1\leq i\leq m-1\) and no edge of the graph is traversed more than once. A _circuit_ in a graph is a closed trail, i.e. a trail beginning and ending at the same vertex. An _Eulerian circuit_ in a graph is a circuit in which each edge is traversed exactly once.
We say a graph is _Eulerian_ if it admits an Eulerian circuit; it is of course a foundational result in graph theory [2] that a graph is Eulerian if and only if every vertex has even valency. Let \(G\) be an Eulerian graph with \(m\) edges and let \(u\) be a vertex of \(G\). Let \(C_{1}=u,v_{1},v_{2},\ldots v_{m-1},u\) and \(C_{2}=u,w_{1},w_{2},\ldots w_{m-1},u\) be two Eulerian circuits in \(G\). We say \(C_{1}\) and \(C_{2}\) are _avoiding_ if two criteria are met: (1) \(v_{i}\neq w_{i}\), \(1\leq i\leq m-1\) and (2) \(v_{i}\) is not adjacent to \(w_{i}\), \(1\leq i\leq m-1\). Thus two circuits are avoiding if they are at distance at least 2 at every step apart from the beginning and end points. We say \(G\) is _doubly Eulerian_ if it admits a pair of avoiding Eulerian circuits starting from _any_ vertex. The motivation for this definition is that the extremal Eulerian graphs, i.e. the complete graphs \(K_{2n+1}\), \(n\geq 1\), which have the largest ratio of edges to vertices of \(n\), and the cycles \(C_{n}\), \(n\geq 3\), which have the smallest such ratio of 1, are not doubly Eulerian. This leads naturally to the question of identifying what are the "densest" and the "sparsest" doubly Eulerian graphs, and much of our paper is concerned with these questions. We can extend the concept of a doubly Eulerian graph further. Given an Eulerian graph \(G\), we define the _avoidance index_\(\operatorname{av}(G)\) to be the largest integer \(k\) such that starting from any vertex of \(G\) we may find a set of \(k\) mutually avoiding Eulerian circuits. A graph \(G\) is therefore doubly Eulerian if and only if \(\operatorname{av}(G)\geq 2\). For small orders, we can calculate the avoidance index of Eulerian graphs directly by computer. Table 1 shows the results. We see that no graph of order less than 8 is doubly Eulerian. Of the doubly Eulerian graphs of order 8, five of the seven are regular of valency 4, including the single example of avoidance index 3; we discuss these in Section 2. The remaining two graphs are discussed in Section 4. No graph of order 9 has avoidance index greater than 2. At order 10, the single example at avoidance index 4 is \(K_{5,5}^{*}\) which is the complete bipartite graph \(K_{5,5}\) minus a perfect matching; this graph is discussed in Section 5. The two order 10 graphs of avoidance index 3 are \(K_{4,6}\) and the circulant graph with generating set \(\{1,4\}\). The computations required to calculate the avoidance index are substantial, especially when there are a number of vertices of valency 6. We have therefore not been able to determine the split between avoidance index 1 and 2 for graphs of order 10. The remainder of this paper is organised as follows. Section 2 is concerned with doubly Eulerian graphs having the greatest number of possible edges for a given order, which we will call _edge-maximal_; we give a bound on the possible number of edges in a doubly Eulerian graph and show that this bound can be attained for all orders 8 and above. Bipartite graphs are of particular interest because the second criterion that \(v_{i}\) is not adjacent to \(w_{i}\), \(1\leq i\leq m-1\), automatically follows from the first criterion that \(v_{i}\neq w_{i}\), \(1\leq i\leq m-1\), and so becomes redundant. This is the subject matter of Section 3 where we give bounds for this special case, and show that again these bounds are attained. Section 4 deals with doubly Eulerian graphs having the least number of possible edges which we will call _edge-minimal_. 
The analysis of possible edge-minimal graphs is significantly more complicated than the edge-maximal case, and we restrict ourselves to constructions for certain special cases. \begin{table} \begin{tabular}{|c|c|c c c c|} \hline Order & Eulerian & \multicolumn{3}{c|}{Avoidance index} \\ & graphs & 1 & 2 & 3 & 4 \\ \hline 3 & 1 & 1 & 0 & 0 & 0 \\ 4 & 1 & 1 & 0 & 0 & 0 \\ 5 & 4 & 4 & 0 & 0 & 0 \\ 6 & 8 & 8 & 0 & 0 & 0 \\ 7 & 37 & 37 & 0 & 0 & 0 \\ 8 & 184 & 177 & 6 & 1 & 0 \\ 9 & 1782 & 1692 & 90 & 0 & 0 \\ 10 & 31026 &? &? & 2 & 1 \\ \hline \end{tabular} \end{table} Table 1: Avoidance index of small graphs We study the avoidance index in Section 5, and finally in Section 6, we summarise our results and suggest a number of open questions and possible directions for future research. ## 2 Edge-maximal doubly Eulerian graphs It seems intuitively clear that in some sense, very dense graphs are unlikely to be doubly Eulerian because it is difficult for the two postmen to avoid being adjacent at some point on the circuit. More formally, we may ask for the maximum possible number of edges in a doubly Eulerian graph of given order. We begin with two simple lemmas. **Lemma 2.1**.: _There exists no doubly Eulerian graph of odd order \(n\geq 3\) containing a vertex of valency \(n-1\)._ Proof.: Let \(G\) be an Eulerian graph of odd order \(n\geq 3\) with vertex \(v\) with valency \(n-1\). Then \(v\) is adjacent to every other vertex of \(G\). Hence any two Eulerian circuits, beginning and ending at a vertex other than \(v\), fail to be avoiding at \(v\). **Lemma 2.2**.: _There exists no doubly Eulerian graph of even order \(n\geq 4\) containing a vertex of valency \(n-2\)._ Proof.: Let \(G\) be an Eulerian graph of even order \(n\geq 4\) with vertex \(u\) with valency \(n-2\). Then there is exactly one vertex, \(v\), not adjacent to \(u\). Consider two Eulerian avoiding circuits \(C_{1}\) and \(C_{2}\) starting at a common vertex. At every step, if \(C_{1}\) visits \(u\) then \(C_{2}\) must visit \(v\). Hence, there are at least as many edges incident to \(v\) as to \(u\), and so \(v\) also has valency \(n-2\). Now consider two Eulerian circuits starting at vertex \(u\). Before both return to \(u\) for their final visit to complete the circuits, they will need to visit \(v\) once more than \(u\). But this is impossible as the visits to \(u\) and \(v\) occur as complementary pairs. Given the restrictions imposed by the above lemmas, the maximum possible number of edges in a doubly Eulerian graph of even order \(n\) would be attained by an \((n-4)\)-regular graph; and in such a graph of odd order \(n\) by an \((n-3)\)-regular graph. From Table 1 we know that the smallest doubly Eulerian graph has order \(8\), and indeed there are five examples of \(4\)-regular graphs of that order, see Figure 1. Graph (e) is the complete bipartite graph \(K_{4,4}\) and actually has avoidance index \(3\). It is interesting to note that there are exactly six \(4\)-regular graphs of order \(8\), so only one of these fails to be doubly Eulerian. The exceptional graph turns out to be the complement of the cube, and is shown in Figure 2. Our next results show that these edge-maximal graphs do in fact exist for all orders greater than or equal to \(8\). The graphs that we prove to be edge-maximal in the following theorems are circulant graphs, and therefore vertex-transitive. Thus it suffices to find avoiding Eulerian circuits starting and ending from any vertex. The case where \(n=6s+3,s\geq 1\), is relatively easy. 
**Theorem 2.3**.: _For any \(n\equiv 3\pmod{6}\) with \(n\geq 9\), there exists an edge-maximal doubly Eulerian graph of order \(n\)._ Proof.: By Lemma 2.1, such a graph must be regular of degree \(n-3\). Let \(G\) be the complete graph \(K_{n}\) on vertex set \(\{0,1,\ldots,n-1\}\). Delete from \(G\) all edges \(i\)\((i+n/3)\), \(0\leq i\leq n-1\), with arithmetic mod \(n\). The reduced graph \(G^{\prime}\) is regular of degree \(n-3\). Choose vertices \(x,y\notin\{0,n/3,2n/3\}\), with \(x\neq y\), and remove edges \(0\)\(x\), \(0\)\(y\), \((n/3)\)\(x\) and \((n/3)\)\(y\). The graph remains Eulerian, so choose an Eulerian circuit and separate it into two Eulerian trails \(T_{1}\) and \(T_{2}\), both starting at vertex \(x\) and ending at vertex \(y\). Construct an Eulerian circuit of \(G^{\prime}\) as follows: \(0,x,T_{1},y,(n/3),x,T_{2},y,0\). Further construct a second "parallel" Eulerian circuit: \((2n/3),(x+2n/3),T_{1}^{(+2n/3)},(y+2n/3),0,(x+2n/3),T_{2}^{(+2n/3)},(y+2n/3),(2n/3)\). Now replace \((2n/3)\) with \(0\) at the start and end points of the second circuit, and replace the path \((y+2n/3),0,(x+2n/3)\) with the path \((y+2n/3),(2n/3),(x+2n/3)\). The result is a pair of avoiding Eulerian circuits beginning and ending at \(0\). The remaining odd orders require a more complex construction. **Theorem 2.4**.: _For any odd \(n\geq 9\), there exists an edge-maximal doubly Eulerian graph of order \(n\) which is regular of degree \(n-3\)._ Proof.: Let \(X(V,G)\) be a circulant graph of odd order \(n\geq 9\) and even degree \(d=n-3\) defined as a Cayley graph with vertex set \(V=\mathbb{Z}_{n}\) and generator set \(G=\{b_{1},\ldots,b_{f}\}\) where \(f=d/2\) and each \(b_{i}\leq(n-1)/2\), so that the connection set is \(\{\pm b_{1},\ldots,\pm b_{f}\}\). As \(X\) is vertex-transitive, it suffices to demonstrate the existence of two avoiding Eulerian circuits \(C_{1}\) and \(C_{2}\) from an arbitrary vertex \(u\). For odd order \(n\), where \(9\leq n\leq 21\), we have the following constructions with circuits starting at vertex \(0\). It is easily verified that these circuits are Eulerian, and also that the difference between each pair of vertices is not equal to a connection set element and not zero except at the endpoints. Order \(n=9,G=\{1,2,3\}\): Circuit \(C_{1}\): \(0\ 1\ 2\ 3\ 4\ 5\ 6\ 7\ 8\ 0\ 7\ 5\ 3\ 6\ 8\ 2\ 5\ 8\ 1\ 7\ 4\ 6\ 0\ 2\ 4\ 1\ 3\ 0\) Circuit \(C_{2}\): \(0\ 6\ 7\ 8\ 0\ 1\ 2\ 3\ 4\ 5\ 3\ 1\ 8\ 2\ 4\ 7\ 1\ 4\ 6\ 3\ 0\ 2\ 5\ 6\ 8\ 5\ 7\ 0\). Order \(n=11,G=\{1,2,3,4\}\): Circuit \(C_{1}\): \(0\ 1\ 2\ 3\ 4\ 5\ 6\ 7\ 8\ 9\ 10\ 0\ 2\ 4\ 6\ 8\ 10\ 1\ 3\ 5\ 7\ 9\ 0\ 4\ 1\ 5\ 2\ 10\ 7\ 4\ 8\ 0\ 7\ 3\ 6\ 9\ 1\ 8\ 5\ 9\ 2\ 6\ 10\ 3\ 0\) Circuit \(C_{2}\): \(0\ 7\ 8\ 9\ 10\ 0\ 1\ 2\ 3\ 4\ 5\ 6\ 7\ 9\ 0\ 2\ 4\ 6\ 8\ 10\ 1\ 3\ 5\ 9\ 6\ 10\ 7\ 5\ 2\ 10\ 3\ 6\ 2\ 9\ 1\ 4\ 7\ 3\ 0\ 4\ 8\ 1\ 5\ 8\ 0\). Figure 1: The five 4-regular doubly Eulerian graphs of order 8 Figure 2: The unique 4-regular graph of order 8 which is not doubly Eulerian
Order \(n=15,G=\{1,2,3,4,5,6\}\): Circuit \(C_{1}\): 0 4 5 14 4 1 14 3 4 13 3 0 13 2 3 12 2 14 12 1 1 13 11 0 1 10 0 12 10 14 0 9 14 11 9 13 14 8 13 10 8 12 13 7 12 9 7 11 12 6 11 8 6 10 11 5 10 7 5 9 10 4 9 6 4 8 9 3 8 5 3 7 8 2 7 4 2 6 7 1 6 3 1 5 6 0 5 2 0 Circuit \(C_{2}\): 0 12 13 7 12 9 7 11 12 6 11 8 6 10 11 5 10 7 5 9 10 4 9 6 4 8 9 3 8 5 3 7 8 2 7 4 2 6 7 1 6 3 1 5 6 0 5 2 0 4 5 14 4 1 14 3 4 13 3 0 13 2 3 12 2 14 12 1 1 13 11 0 1 10 14 11 9 14 0 9 13 10 8 13 14 8 12 10 0. Order \(n=17,G=\{1,2,3,4,5,6,7\}\): Circuit \(C_{1}\): 0 11 12 8 3 13 15 1 12 13 9 4 14 16 2 13 14 10 5 15 0 3 14 15 11 6 16 1 4 15 16 12 7 0 2 5 16 0 13 8 1 3 6 0 1 14 9 2 4 7 1 2 15 10 3 5 8 2 3 16 11 4 6 9 3 4 0 12 5 7 10 4 5 1 13 6 8 11 5 6 2 14 7 9 12 6 7 3 15 8 10 13 7 8 4 16 9 11 14 8 9 5 0 10 12 15 9 10 6 1 11 13 16 10 11 7 2 12 14 0 Circuit \(C_{2}\): 0 2 3 16 11 4 6 9 3 4 0 12 5 7 10 4 5 1 13 6 8 11 5 6 2 14 7 9 12 6 7 3 15 8 10 13 7 8 4 16 9 11 14 8 9 5 0 10 12 15 9 10 6 1 11 13 16 10 11 7 2 12 14 0 11 12 8 3 13 15 1 12 13 9 4 14 16 2 13 14 10 5 15 0 3 14 15 11 6 16 1 4 15 16 12 7 1 3 5 16 0 13 8 2 4 7 0 1 14 9 2 5 8 1 2 15 10 3 6 0. Order \(n=19,G=\{1,2,3,4,5,6,7,8\}\): Circuit \(C_{1}\): 0 12 11 15 10 4 15 17 1 13 12 16 11 5 16 18 2 14 13 17 12 6 17 0 3 15 14 18 13 7 18 1 4 16 15 0 14 8 0 2 5 17 16 1 15 9 1 3 6 18 17 2 16 10 2 4 7 0 18 3 17 11 3 5 8 1 0 4 18 12 4 6 9 2 1 5 0 13 5 7 10 3 2 6 1 14 6 8 11 4 3 7 2 15 7 9 12 5 4 8 3 16 8 10 13 6 5 9 4 17 9 11 14 7 6 10 5 18 10 12 15 8 7 11 6 0 11 13 16 9 8 12 7 1 12 14 17 10 9 13 8 2 13 15 18 11 10 14 9 3 14 16 0 Circuit \(C_{2}\): 0 2 1 5 0 13 5 7 10 3 2 6 1 14 6 8 11 4 3 7 2 15 7 9 12 5 4 8 3 16 8 10 13 6 5 9 4 17 9 11 14 7 6 10 5 18 10 12 15 8 7 11 6 6 10 5 18 10 12 15 8 7 11 6 0 11 13 16 9 8 12 7 1 12 14 17 10 9 13 8 2 13 15 18 11 10 14 9 3 14 16 0 11 15 10 4 15 17 1 13 12 16 11 5 16 18 2 14 13 17 12 6 17 0 3 15 14 18 13 7 18 1 4 16 15 0 14 8 1 3 5 17 16 1 15 9 2 4 6 18 17 2 16 10 2 5 8 0 18 3 17 11 3 6 9 1 0 4 18 12 4 7 0 Order \(n=21,G=\{1,2,3,4,5,6,7,8,9\}\): Circuit \(C_{1}\): 0 13 12 8 13 19 5 17 19 1 14 13 9 14 20 6 18 20 2 15 14 10 15 0 7 19 0 3 16 15 11 16 1 8 20 1 4 17 16 12 17 2 9 0 2 5 18 17 13 8 10 1 3 6 19 18 14 9 11 2 4 7 20 19 15 20 5 12 3 5 8 0 20 16 0 6 13 4 6 9 1 0 17 1 7 14 5 7 10 2 1 18 2 8 15 6 8 11 3 2 19 3 9 16 7 9 12 4 3 20 4 10 17 8 10 13 5 4 0 5 11 18 9 11 14 6 5 1 6 12 19 10 12 15 7 6 2 7 13 20 11 13 16 8 7 3 8 14 0 12 14 17 9 8 4 9 15 1 13 15 18 10 9 5 10 16 2 14 16 19 11 10 6 11 17 3 15 17 20 12 11 7 12 18 4 16 18 0 Circuit \(C_{2}\): 0 2 1 18 2 8 15 6 8 11 3 2 19 3 9 16 7 9 12 4 3 20 4 10 17 8 10 13 5 4 0 5 11 6 12 19 10 12 15 7 6 2 7 13 20 11 13 16 8 7 3 8 14 0 12 14 17 9 8 4 9 15 1 13 15 18 10 9 5 10 16 2 14 16 19 11 10 6 117 3 15 17 20 12 11 7 12 8 4 16 18 0 Circuit \(C_{2}\): 0 2 1 18 2 8 15 6 8 11 3 2 19 3 9 16 7 9 12 4 3 20 4 10 17 8 10 13 5 4 0 5 11 6 12 19 10 12 15 7 6 2 7 13 20 11 13 16 8 7 3 8 14 0 12 14 17 9 8 4 9 15 1 13 15 18 10 9 5 10 16 2 14 16 19 11 10 6 11 7 3 15 17 20 12 11 7 12 8 4 16 18 0 Circuit \(C_{2}\): 0 2 1 18 2 8 15 6 8 11 3 2 19 3 9 16 7 9 12 4 3 20 4 10 17 8 10 13 5 4 0 5 11 6 12 19 10 12 15 7 6 2 7 13 20 11 13 16 8 7 3 8 14 0 12 14 17 9 8 4 9 15 1 13 15 18 10 9 5 10 16 2 14 16 19 11 10 6 11 17 3 15 17 20 1 restriction that \(c_{1}=f-1\), \(c_{f-2}=f\), \(c_{f-1}=2\) and \(c_{f}=3\). Such a sequence can be taken to be, for example, \((f-1,1,4,5,\ldots,f-2,f,2,3)\). 
Then from an arbitrary vertex \(v\), define a path \(P\) of length \(f\) by a sequence \(B=(b_{1},b_{2},\ldots,b_{f})\) of connection elements \(b_{i}=\pm c_{i}\) where \(b_{1}=-c_{1}\), \(b_{f-2}=-c_{f-2}\), \(b_{f-1}=c_{f-1}\), \(b_{f}=c_{f}\) and for \(i=2\) to \(f-3\), \(b_{i}=c_{i}\) for the values of \(c_{i}\) specified in Table 2 and \(b_{i}=-c_{i}\) otherwise. This ensures that \(\sum_{i=1}^{f}b_{i}\equiv 1\pmod{n}\). Connection block \(B\) includes one instance of each generator, taken either positive or negative. Starting at an arbitrary vertex \(v\in\mathbb{Z}_{n}\), it defines a path \(P\) ending at vertex \(v+1\) (all arithmetic modulo \(n\)). We construct a path \(C_{1}\) from vertex \(u\) by the concatenation of \(n\) paths \(P_{i}\), \(i=1,\ldots,n\), defined by instances of \(B\), labelled \(B_{i}\). Then \(C_{1}\) is a circuit of length \(nf\). Consider an arbitrary edge \(e\) of \(X\), between vertices \(v_{1}\) and \(v_{2}\). Let \(c=v_{2}-v_{1}\), so that \(c\) is an element of the connection set and there is a generator \(b_{j}\in G\) such that \(b_{j}=\pm c\). Let \(h=\sum_{i=1}^{j-1}b_{i}\). If \(b_{j}=c\), then connection element \(c\) defines an edge of \(B_{1}\) from vertex \(u+h\). Let \(k=v_{1}+1-(u+h)\). Then, as \(\sum_{i=1}^{f}b_{i}\equiv 1\pmod{n}\), we have \(e\in B_{k}\). Similarly, if \(b_{j}=-c\), then \(e\in B_{k}\) where \(k=v_{2}+1-(u+h)\). So \(X\subset C_{1}\) and therefore \(C_{1}\) is an Eulerian circuit of \(X\). Circuit \(C_{2}\) starts from vertex \(u\) with a path defined by connection block \(B\), except that the first connection element is \(2\) instead of \(-(f-1)\) in order to establish a difference of \(f+1\); this is maintained by concatenating a further \(f+5\) paths defined by block \(B\). The rest of \(C_{2}\) is constructed from \(f-3\) paths defined by variants of block \(B\). The \(n\) concatenated paths of both circuits are defined in Table 3. With these paths, the difference between the pair of vertices at each step of circuits \(C_{1}\) and \(C_{2}\) is maintained at \(f+1\) or \(f+2\), as required. This is shown in Table 4 below. In order to prove that circuit \(C_{2}\) is Eulerian, it is sufficient to show that every edge of \(C_{1}\) is contained in \(C_{2}\). At the first step, \(C_{2}\) establishes an offset of \(f+1\) from \(C_{1}\) and maintains this offset for a major part of the circuit. Therefore it is convenient to compare \(C_{2}\) with a version of \(C_{1}\) that is offset by \(f+1\), denoted by \(C_{1}^{*}\). We identify all the steps where the edges in these two circuits differ (a total of \(2f-1\) steps) and confirm that these two sets of edges are identical. A step in each circuit is denoted by its block number, \(1\) to \(2f+3\), and its position within the block, \(1\) to \(f\). An edge in each circuit is denoted by the two vertices that it connects. It is clear from Table 3 that the edges that differ at any step occur at positions \(1\), \(f-2\), \(f-1\), or \(f\) within the blocks and are generated by elements \(2\), \(3\), \(f-1\) or \(f\). These \(2f-1\) edges are listed in Table 5, along with their location within the circuits \(C_{1}^{*}\) and \(C_{2}\). 
\begin{table} \begin{tabular}{|c c|c c c c c|} \hline \begin{tabular}{c} Path \\ sequence \\ \end{tabular} & \begin{tabular}{c} Number \\ of paths \\ \end{tabular} & \multicolumn{5}{c|}{Connection elements 1 to \(f\)} \\ & & 1 & \(2\) to \(f-3\) & \(f-2\) & \(f-1\) & \(f\) \\ \hline \multicolumn{7}{|l|}{Circuit \(C_{1}\)} \\ All (1 to \(n\)) & \(n\) & \(-(f-1)\) & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(2\) & \(3\) \\ \multicolumn{7}{|l|}{Circuit \(C_{2}\)} \\ 1 & 1 & 2 & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(2\) & \(3\) \\ 2 to \(f+6\) & \(f+5\) & \(-(f-1)\) & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(2\) & \(3\) \\ \(f+7\) and \(f+8\) & \(2\) & \(-(f-1)\) & \(b_{2}\) to \(b_{f-3}\) & \(-(f-1)\) & \(2\) & \(2\) \\ \(f+9\) to \(2f\) & \(f-8\) & \(-(f-1)\) & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(3\) & \(2\) \\ \(2f+1\) & 1 & \(-(f-1)\) & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(3\) & \(3\) \\ \(2f+2\) & 1 & \(-f\) & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(3\) & \(3\) \\ \(2f+3\) & 1 & \(-f\) & \(b_{2}\) to \(b_{f-3}\) & \(-f\) & \(3\) & \(-(f-1)\) \\ \hline \end{tabular} \end{table} Table 3: The connection elements for circuits \(C_{1}\) and \(C_{2}\) (\(n\) paths of \(f\) elements) By reference to Table 4, all the other edges occur at the same position within the two circuits. Therefore circuits \(C_{2}\) and \(C_{1}^{*}\) contain exactly the same edges, and since \(C_{1}^{*}\) is Eulerian, \(C_{2}\) is also Eulerian. From Table 4, the circuits \(C_{1}\) and \(C_{2}\) are avoiding. It remains to consider the case of even orders \(n\). We deal separately with the two cases \(n\equiv 0\pmod{4}\) and \(n\equiv 2\pmod{4}\), and in each case we give two different proofs of the result to illustrate the techniques involved. **Theorem 2.5**.: _For any \(n\equiv 0\pmod{4}\) with \(n\geq 8\), there exists an edge-maximal doubly Eulerian circulant graph of order \(n\) which is regular of degree \(n-4\)._ Proof 1.: Let \(n=4k\), \(k\geq 2\). Our first proof is direct; the circulant graph which we prove to be edge-maximal doubly Eulerian is the complete multipartite graph \(K_{4,4,\ldots,4}\) consisting of \(k\) independent sets of size \(4\) which we label \(A,B,C,\ldots\). The vertices are labelled \(a_{0},a_{1},a_{2},a_{3},b_{0},b_{1},\ldots\) in an obvious manner. All edges exist between sets \(X\) and \(Y\), \(X,Y\in\{A,B,C,\ldots\}\), \(X\neq Y\). We adopt a similar strategy to that in the proof of Theorem 2.3. Remove the edges \(a_{0}\)\(b_{0}\), \(a_{0}\)\(b_{1}\), \(a_{3}\)\(b_{0}\) and \(a_{3}\)\(b_{1}\). The reduced graph remains Eulerian. Choose an Eulerian circuit \(E\) beginning and ending at the vertex \(b_{1}\). Extend this to an Eulerian circuit of \(K_{4,4,\ldots,4}\) as follows: \(a_{0},b_{0},a_{3},b_{1},E,b_{1},a_{0}\).
\begin{table} \begin{tabular}{|c c|c c c c c|} \hline Path sequence & Number of paths & \multicolumn{5}{c|}{Vertex position within the path} \\ & & 1 & 2 to \(f-3\) & \(f-2\) & \(f-1\) & \(f\) \\ \hline \multicolumn{7}{|c|}{Difference between the pair of vertices} \\ 1 & 1 & \(f+1\) & \(f+1\) & \(f+1\) & \(f+1\) & \(f+1\) \\ 2 to \(f+6\) & \(f+5\) & \(f+1\) & \(f+1\) & \(f+1\) & \(f+1\) & \(f+1\) \\ \(f+7\) and \(f+8\) & 2 & \(f+1\) & \(f+1\) & \(f+2\) & \(f+2\) & \(f+1\) \\ \(f+9\) to \(2f\) & \(f-8\) & \(f+1\) & \(f+1\) & \(f+1\) & \(f+2\) & \(f+1\) \\ \(2f+1\) & 1 & \(f+1\) & \(f+1\) & \(f+1\) & \(f+2\) & \(f+2\) \\ \(2f+2\) & 1 & \(f+1\) & \(f+1\) & \(f+1\) & \(f+2\) & \(f+2\) \\ \(2f+3\) & 1 & \(f+1\) & \(f+1\) & \(f+1\) & \(f+2\) & 0 \\ \hline \end{tabular} \end{table} Table 4: The difference between the pair of vertices at each step of circuits \(C_{1}\) and \(C_{2}\) \begin{table} \begin{tabular}{|c c c c c c|} \hline Edge & Edge vertices & \multicolumn{3}{c|}{Circuit \(C_{1}^{*}\)} & \multicolumn{3}{c|}{Circuit \(C_{2}\)} \\ generator & from & to & Block & Position & Block & Position \\ \hline 2 & 0 & 2 & \(f+7\) & \(f-1\) & 1 & 1 \\ & 1 & 3 & \(f+8\) & \(f-1\) & \(f+7\) & \(f-1\) \\ & 2 & 4 & \(f+9\) & \(f-1\) & \(f+8\) & \(f-1\) \\ & \multicolumn{3}{c|}{For \(i=3\) to \(f-4\):} \\ & \multicolumn{3}{c|}{\(i\) \(i+2\) \(f+7+i\)} & \(f-1\) & \(f+4+i\) & \(f\) \\ 3 & \(f-3\) & \(f\) & \(2f+2\) & \(f\) & \(2f+1\) & \(f\) \\ & \multicolumn{3}{c|}{\(f-2\) \(f+1\)} & \(2f+3\) & \(f\) & \(2f+2\) & \(f\) \\ & \multicolumn{3}{c|}{For \(i=2\) to \(f-4\):} \\ & \multicolumn{3}{c|}{\(i\) \(i+3\) \(f+5+i\)} & \(f\) & \(f+7+i\) & \(f-1\) \\ \(f-1\) & \(f-1\) & 0 & \(2f+2\) & 1 & \(2f+3\) & \(f\) \\ & \multicolumn{3}{c|}{\(f\) \(1\)} & \(2f+3\) & 1 & \(f+7\) & \(f-2\) \\ & \multicolumn{3}{c|}{\(f+1\)} & \(2\) & 1 & \(f+8\) & \(f-2\) \\ \(f\) & \(f\) & 0 & \(f+7\) & \(f-2\) & \(2f+2\) & 1 \\ & \multicolumn{3}{c|}{\(f+1\)} & \(1\) & \(f+8\) & \(f-2\) & \(2f+2\) & 1 \\ \hline \end{tabular} \end{table} Table 5: The edges that are at different locations within the circuits \(C_{1}^{*}\) and \(C_{2}\) Now construct a second "parallel" Eulerian circuit \[a_{1},b_{1},a_{0},b_{2},E^{(+1)},b_{2},a_{1},\] where if the first circuit visits vertex \(x_{i}\) at a particular step, the second visits \(x_{i+1}\) (arithmetic modulo \(4\)). Now replace \(a_{1}\) with \(a_{0}\) at the start and end points of the second "parallel" circuit and replace the first visit to \(a_{0}\) with \(a_{1}\). The result is a pair of avoiding Eulerian circuits, beginning and ending at \(a_{0}\). Proof 2.: Our second proof proceeds by induction. The result is true for the case \(n=8\) since we have a pair of avoiding Eulerian circuits on the graph \(K_{4,4}\) (see for example Table 1 or Proof 1). Let \(n=4k\), \(k\geq 3\) and suppose we have a pair of avoiding Eulerian circuits on the graph \(K_{4,4,\ldots,4}\) of order \(4(k-1)\), with vertices labelled as in Proof 1. We now extend to the larger graph of order \(4k\) by adjoining a new independent set \(Z\) with vertices \(z_{0},z_{1},z_{2},z_{3}\). In the first circuit choose a vertex \(a_{0}\), _other than at the beginning or end of the circuit_. Replace \(a_{0}\) by an Eulerian circuit on the \(A\) and \(Z\) partitions beginning and ending at \(a_{0}\). In the second circuit, _in the same place_, there will be vertex \(a_{i}\) where \(i\neq 0\). Replace \(a_{i}\) by the same Eulerian circuit on the \(A\) and \(Z\) partitions as above, but to every vertex add \(i\) to its index (arithmetic modulo \(4\)).
Now do the same for the vertex \(b_{0}\) on the \(B\) and \(Z\) partitions, the vertex \(c_{0}\) on the \(C\) and \(Z\) partitions and so on. The result is a pair of avoiding circuits beginning and ending at \(a_{0}\). **Observation**.: We observed earlier that the complete bipartite graph \(K_{4,4}\) has avoidance index \(3\). The second proof of the above theorem can be extended to show that the complete multipartite graph \(K_{4,4,\ldots,4}\) has avoidance index at least \(3\). In fact it is exactly \(3\), which follows from a more general result which we prove in Section 5. We now turn our attention to the remaining case where \(n\equiv 2\pmod{4}\). **Theorem 2.6**.: _For any \(n\equiv 2\pmod{4}\) with \(n\geq 10\), there exists an edge-maximal doubly Eulerian graph of order \(n\) which is regular of degree \(n-4\)._ Again we offer two proofs but, unlike in the previous theorem, here the graphs which we prove to be edge-maximal doubly Eulerian are different. In the first proof the graph is not a circulant; in the second it is. Proof 1.: We follow the strategy of the first proof of Theorem 2.5. First consider the case \(n=10\). Let \(G\) be the graph with vertex set \(\{i:0\leq i\leq 9\}\). For all \(i\), \(0\leq i\leq 3\) and all \(j\), \(4\leq j\leq 9\), vertex \(i\) is connected to vertex \(j\). Further edges are \(4\) \(5\), \(5\) \(6\), \(6\) \(7\), \(7\) \(8\), \(8\) \(9\) and \(9\) \(4\). We need to exhibit two pairs of mutually avoiding Eulerian circuits, one starting and ending at the vertex \(0\) and the other at the vertex \(4\). Vertex \(0\) circuit \(C_{1}\): \(0\) \(4\) \(1\) \(5\) \(2\) \(6\) \(3\) \(7\) \(0\) \(8\) \(1\) \(9\) \(2\) \(4\) \(5\) \(6\) \(7\) \(8\) \(9\) \(4\) \(3\) \(5\) \(0\) \(6\) \(1\) \(7\) \(2\) \(8\) \(3\) \(9\) \(0\) Vertex \(0\) circuit \(C_{2}\): \(0\) \(6\) \(2\) \(7\) \(0\) \(8\) \(1\) \(9\) \(3\) \(4\) \(2\) \(5\) \(3\) \(6\) \(7\) \(8\) \(9\) \(4\) \(5\) \(6\) \(1\) \(7\) \(3\) \(8\) \(2\) \(9\) \(0\) \(4\) \(1\) \(5\) \(0\) Vertex \(4\) circuit \(C_{1}\): \(4\) \(1\) \(5\) \(2\) \(6\) \(3\) \(7\) \(0\) \(8\) \(1\) \(9\) \(2\) \(4\) \(5\) \(6\) \(7\) \(8\) \(9\) \(4\) \(3\) \(5\) \(0\) \(6\) \(1\) \(7\) \(2\) \(8\) \(3\) \(9\) \(0\) \(4\) Vertex \(4\) circuit \(C_{2}\): \(4\) \(2\) \(7\) \(1\) \(8\) \(0\) \(9\) \(3\) \(4\) \(0\) \(5\) \(3\) \(6\) \(7\) \(8\) \(9\) \(4\) \(5\) \(6\) \(0\) \(7\) \(3\) \(8\) \(2\) \(9\) \(1\) \(6\) \(2\) \(5\) \(1\) \(4\) Now let \(n=4k+2\), \(k\geq 3\) and consider the graph \(K_{4,4,\ldots,4,6}\) of order \(4k+2\). This graph consists of \(k-1\) independent sets of size \(4\), which we label \(A,B,C,\ldots\), together with one further set \(Z\) of size \(6\). The vertices are labelled \(a_{0},a_{1},a_{2},a_{3},b_{0},b_{1},\ldots\) and \(z_{0},z_{1},\ldots,z_{5}\) in an obvious manner. All edges exist between sets \(X\) and \(Y\), \(X,Y\in\{A,B,C,\ldots,Z\}\), \(X\neq Y\). In addition, to achieve an \((n-4)\)-regular graph we include a \(6\)-cycle \(z_{0},z_{1},\ldots,z_{5},z_{0}\). As before, we construct one Eulerian circuit on this graph starting from vertex \(a_{0}\) with initial vertices \(a_{0},b_{0},a_{3},b_{1},\ldots\) and ending with \(\ldots,b_{1},a_{0}\). Now construct a second 'parallel' Eulerian circuit beginning \(a_{1},b_{1},a_{0},b_{2},\ldots\) and ending \(\ldots,b_{2},a_{1}\). This time if the first circuit visits vertex \(x_{i}\) at a particular step, where \(x_{i}\) is in one of the sets \(A,B,C,\ldots\) of size \(4\), the second visits \(x_{i+1}\) (arithmetic modulo \(4\)).
But if the first circuit visits \(z_{i}\), then the second visits \(z_{i+2}\) (arithmetic modulo \(6\)). As before, replace \(a_{1}\) with \(a_{0}\) as the start and end point of this new circuit, and replace the first visit to \(a_{0}\) with \(a_{1}\); the result is a pair of avoiding Eulerian circuits, starting and ending at \(a_{0}\). It remains to find a pair of such circuits starting and ending at \(z_{0}\). For the first circuit, we choose one starting \(z_{0},a_{0},z_{4},a_{1},\ldots\) and ending \(\ldots,a_{1},z_{0}\). The parallel circuit, using the same rules as above, begins \(z_{2},a_{1},z_{0},a_{2},\ldots\) and ends \(\ldots,a_{2},z_{2}\). Now replace \(z_{2}\) with \(z_{0}\) as the start and end point of this new circuit, and replace the first visit to \(z_{0}\) with \(z_{2}\); the result is a pair of avoiding Eulerian circuits, starting and ending at \(z_{0}\). For our second proof we present a doubling construction. Proof 2.: Let \(G\) be the graph obtained from the complete graph \(K_{2m+1}\) on vertex set \(\{0,1,2,\ldots,2m\}\) by removing the Hamiltonian cycle given by the edges \(i\)\((i+1)\) (arithmetic modulo \(2m+1\)). By Theorem 2.4, for \(m\geq 4\) the graph admits a pair of mutually avoiding Eulerian circuits \(C_{1}\) and \(C_{2}\), without loss of generality starting and ending at vertex \(0\). Represent \(C_{1}\) as \(0\ C_{1}^{-}\ 0\ C_{1}^{+}\ 0\), where the intermediate occurrence of \(0\) is the first time that the circuit returns to this vertex. Represent \(C_{2}\) as \(0\ C_{2}^{-}\ z\ C_{2}^{+}\ 0\), where \(z\) is the vertex in \(C_{2}\) when \(C_{1}\) is at the first occurrence of \(0\). In fact, \(z=\pm 1\). Now let \(G^{\prime}\) be an isomorphic copy of \(G\) on the vertex set \(\{0^{\prime},1^{\prime},2^{\prime},\ldots,(2m)^{\prime}\}\). Introduce connections between the graphs \(G\) and \(G^{\prime}\) by adjoining all edges \(x\ y^{\prime}\), \(x\neq y\). The result is a circulant graph of order \(4m+2\) and degree \(4m-2\). Let \(\hat{C}\) be the circuit (not Eulerian) obtained by replacing every edge \(x\ y\) in \(C_{1}\) by the path \(x\ y^{\prime}\ x^{\prime}\ y\), and further let \(\hat{C}^{(z)}\) be the circuit offset by \(z\), i.e. \(x\ y\) in \(C_{1}\) is replaced by \((x+z)\ (y+z)^{\prime}\ (x+z)^{\prime}\ (y+z)\). The following are a pair of mutually avoiding Eulerian circuits. \[0,C_{1}^{-},0,\hat{C},0,1^{\prime},2,3^{\prime},\ldots,(2m),0^{ \prime},1,2^{\prime},\ldots,(2m)^{\prime},0,C_{1}^{+},0;\] and \[0,C_{2}^{-},z,\hat{C}^{(z)},z,(z+1)^{\prime},(z+2),(z+3)^{\prime}, \ldots,(z+2m),z^{\prime},(z+1),(z+2)^{\prime},\ldots,(z+2m)^{\prime},z,C_{2}^ {+},0.\] It remains to deal separately with the two cases where \(m=2\) or \(m=3\). Pairs of mutually avoiding Eulerian circuits for these two cases are as follows. \(4m+2=10\): Circuit \(C_{1}\): 0 4' 3 2' 1 0' 4 3' 2 1' 0 3 1' 3' 0' 3 1 4' 2 4 2' 4' 1' 4 1 3' 0 2 0' 2' 0 Circuit \(C_{2}\): 0 3' 2 1' 0 4' 3 2' 1 0' 4 3' 0 3 1' 4 2 0' 2' 4' 1' 3' 1 3 0 2' 4 1 4' 2 0 \(4m+2=14\): Circuit \(C_{1}\): 0 6' 5 4' 3 2' 1 0' 6 5' 4 3' 2 1' 0 5 3 1 6 4 2 0 4' 2' 0 5' 3' 1' 6' 4' 1 5'2 6' 3 0' 4 1' 5 2' 6 3' \(0\) 2' 5' 1'4' 0' 5 1 6' 2' 4 0 5' 3 6 2 5 3' 6' 4 1 3' 0 2' 4 6 1' 3 0 Circuit \(C_{2}\): 0 5' 4 3' 2' 1 0' 6' 5' 4' 3 2' 1 0' 6 5' 3' 1' 6' 4' 2' 0' 5' 2 0 5 3 1 6 4 2 6' 3 0' 4 1' 5 2' 6 3' 0 4' \(1\) 3' 6' 2' 5' 1' 6 2 0' 3' 5 0' 4' 2 5 1 5' 3 6 4' 1' 3 0 2' 4 6' 1 4 0 **Example.** We illustrate Proof 2 of the preceding theorem by an example. 
Let \(m=4\) and let \(G\) be the complete graph \(K_{9}\) on vertex set \(\{0,1,\ldots,8\}\), with the Hamiltonian cycle given by the edges \(i\)\((i+1)\) (arithmetic modulo \(9\)) removed. A pair of mutually avoiding circuits on \(G\) beginning and ending at \(0\) are the following. Circuit \(C_{1}\): 0 7 5 3 1 8 6 4 2 0 4 8 3 6 2 5 8 2 7 4 1 6 0 5 1 7 3 0 Circuit \(C_{2}\): 0 6 4 2 0 7 5 3 1 8 3 7 2 5 1 4 7 1 6 3 0 5 8 6 2 8 4 0 Applying the doubling construction employed in the proof yields a pair of mutually avoiding Eulerian circuits on a 14-regular graph of order 18 as follows. Circuit \(C_{1}\): 0 7 5 3 1 8 6 4 2 0 7' 0' 7 5' 7' 5' 3' 5 1 3' 1 8' 1' 8 6' 8' 6 4' 6' 4 2' 4' 2 0' 2' 0 4' 0' 4 8' 4' 8 3' 8' 3 6' 3' 6 2' 6' 2 5' 2' 5 8' 5' 8 2' 7' 2' 7' 4' 7' 4 1' 6' 1' 6 0' 6' 0 5 0' 5 1' 5' 1 7' 1' 7 3' 7 3 0' 3 0' 1 2' 3' 4' 5' 6' 7' 8' 0' 1' 2' 3' 4' 5' 6' 7' 8' 0 4 8 3 6 2 5 8 2 7 4 1 6 0 5 1 7 3 0 Circuit \(C_{2}\): 0 6 4 2 0 7 5 3 1 8 6' 8' 6' 4 6' 4 2' 4 2' 0' 2' 0 7' 7 5' 7' 5' 3' 5 3' 1' 3' 1' 8' 1' 8 3' 8' 3' 7' 3' 7 2' 7' 2' 5' 2' 5' 1' 5' 1' 4' 7' 4' 7' 7' 1' 7 1 6' 1' 6 3' 0 3' 0' 5' 0' 5' 8' 5' 8' 4' 0 4' 0' 6' 0' 6 2' 6' 2 8' 2' 8' 0' 1 2' 3' 4' 5' 6' 7' 8' 0 1' 2' 3' 4' 5' 6' 7' 8 3 7 2 5 1 4 7 1 6 3 0 5 8 6 2 8 4 0 ## 3 Bipartite edge-maximal doubly Eulerian graphs In Section 2, it was shown that there exist doubly Eulerian graphs which attain the upper density bound. In this section we restrict our attention to edge-maximal bipartite graphs. We have the following result. **Lemma 3.1**.: _Let \(G\) be a bipartite Eulerian graph, and suppose \(G\) admits a subgraph \(K\cong K_{3,2}\) such that removing the edges of \(K\) does not disconnect \(G\). Let \(v\) be a vertex of \(G\) which is in the larger partition of \(K\). Then \(G\) admits a pair of avoiding Eulerian circuits starting and ending at \(v\)._ Proof.: Let the vertices of \(K\) be \(t,u,v\) in one partition and \(w,x\) in the other. Remove the edges of \(K\) from \(G\). The result is a connected graph with every vertex having even valency except \(w\) and \(x\). So there exists an Eulerian trail \(T\) in the graph from \(w\) to \(x\). Two avoiding Eulerian circuits in \(G\) can now be constructed as follows. \(v,x,u,w,T,x,t,w,v\); \(v,w,t,x,u,w,T,x,v\). **Theorem 3.2**.: _The complete bipartite graph \(K_{2r,2s}\), \(r,s\geq 2\), is doubly Eulerian._ Proof.: Every vertex of \(K_{2r,2s}\) satisfies the conditions of Lemma 3.1, so we can construct two avoiding Eulerian circuits starting and ending at any vertex. We remark that complete bipartite graphs \(K_{2,2s}\), \(s\geq 1\), are not doubly Eulerian. Two circuits both starting at either vertex of the \(2\)-partition cannot be avoiding. We have one further result. Denote by \(K^{*}_{2r+1,2r+1}\), \(r\geq 1\), the complete bipartite graph minus a perfect matching. **Theorem 3.3**.: _The graph \(K^{*}_{2r+1,2r+1}\), \(r\geq 2\), is doubly Eulerian._ Proof.: Every vertex of the graph satisfies the conditions of Lemma 3.1. Finally, again we remark that the graph \(K^{*}_{3,3}\) is the cycle \(C_{6}\) and is not doubly Eulerian. ## 4 Edge-minimal doubly Eulerian graphs Define the _edge excess_ of a graph \(G=(V,E)\) to be \(\xi=|E|-|V|\). If \(\xi=0\), then we have the cycles \(C_{n}\), \(n\geq 3\), which are not doubly Eulerian. If \(\xi=1\) then for an Eulerian graph \(G\), one vertex has valency \(4\) and all others have valency \(2\). An example is illustrated in Figure 3(a).
It is immediate that such graphs cannot be doubly Eulerian; consider a starting vertex at maximum distance from the valency \(4\) vertex on either of the cycles. If \(\xi=2\) then for an Eulerian graph there are two possibilities: (1) one vertex has valency \(6\) and all others have valency \(2\), or (2) two vertices have valency \(4\) and all others have valency \(2\). Again graphs satisfying possibility (1) cannot be doubly Eulerian and an example is given in Figure 3(b). For possibility (2), the vertices of valency \(2\) form four paths between the two vertices of valency \(4\) and so these graphs are uniquely determined by a set of four parameters \(0\leq a\leq b\leq c\leq d\) for the number of valency \(2\) vertices in each path A, B, C, D respectively. It will also be convenient to regard these paths as ordered sets. Denote the valency \(4\) vertices by \(u\) and \(v\) and the graphs by the notation \(\Gamma=\Gamma(V,E)=\Gamma(a,b,c,d)\). An example is shown in Figure 4. Let \(\rho(x,y)\) be the distance between two vertices \(x,y\in V\). These graphs have an avoidance index of either \(1\) or \(2\). The smallest interesting case is graphs of order \(8\). From Table 1 we know that there are \(7\) doubly Eulerian graphs of this order; the \(5\) graphs which are \(4\)-regular are edge-maximal and were discussed in Section 2. The remaining graphs are shown in Figure 5. Graph (a) has edge excess \(\xi=2\) and is easily seen to be \(\Gamma(0,2,2,2)\) as described above; graph (b) has edge excess \(\xi=4\). Our investigations suggest that a complete analysis of the values of \(a,b,c,d\) for which the graph \(\Gamma(a,b,c,d)\) has either index is extremely complex and tedious and breaks into many different cases and subcases. Figure 4: The graph \(\Gamma(1,1,2,3)\) Figure 5: The graphs of order \(8\) with avoidance index \(2\) which are not regular Figure 3: Graphs which cannot be doubly Eulerian We therefore restrict our attention to the case where \(a=0\), i.e. the class of graphs for which the two vertices of valency 4 are adjacent. Two results are easily established. **Lemma 4.1**.: _If \(d\geq a+b+c+3\) then the graph \(\Gamma\) has avoidance index 1._ Proof.: Suppose that \(d\geq a+b+c+3\). The four paths A, B, C, D contain \(a+1\), \(b+1\), \(c+1\), \(d+1\) edges respectively. So the graph \(\Gamma\) contains a total number of edges \(|E|=a+b+c+d+4\leq 2d+1\). Now consider two Eulerian circuits starting on path D at the 2-valent vertex adjacent to vertex \(u\). The circuit starting along the path D in the direction of the vertex \(v\) includes \(d\) edges to reach \(v\) and \(d+1\) edges to clear it. Conversely, the final \(d\) edges of the other circuit start at vertex \(v\) and are along path D, with the preceding edge incident to vertex \(v\), also \(d+1\) in total. So the circuits can only be avoiding if the length of a circuit is at least \(2(d+1)\). As \(|E|\leq 2d+1\), these circuits are not avoiding and so graph \(\Gamma\) has avoidance index 1. **Lemma 4.2**.: _If \(c\leq a+1\) then the graph \(\Gamma\) has avoidance index 1._ Proof.: If \(d=a\) or \(d=a+1\), then any two Eulerian circuits starting at vertex \(u\) first reach vertex \(v\) within one step of each other and so are not avoiding. Now suppose that \(d>a+1\). Let \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) be two Eulerian circuits starting at the vertex on path D distant \(\lfloor(d-a)/2\rfloor\) from vertex \(u\), with \(\mathcal{C}_{1}\) in the direction of \(u\) and \(\mathcal{C}_{2}\) in the direction of \(v\).
Then \(\mathcal{C}_{2}\) reaches vertex \(v\) after \(d+1-\lfloor(d-a)/2\rfloor=\lceil(d+a)/2\rceil+1\) steps. Now \(\mathcal{C}_{1}\) reaches vertex \(u\) after \(\lfloor(d-a)/2\rfloor\) steps. It continues from \(u\) to \(v\) along a path of length \(a+1\) or \(a+2\) reaching \(v\) after a total number of steps either \(\lfloor(d+a)/2\rfloor+1\) or \(\lfloor(d+a)/2\rfloor+2\). This is always within one step of \(\mathcal{C}_{2}\). Hence the two circuits are not avoiding. The next theorem, together with the above Lemma 4.1, provides the complete analysis of the case where \(a=0\). **Theorem 4.3**.: _The graph \(\Gamma(a,b,c,d)\) where \(a=0\) and \(d<a+b+c+3\) has avoidance index 2 if and only if it is bipartite._ Proof.: First suppose that \(\Gamma\) is not bipartite. Then as \(a=0\), at least one of the paths B, C, D between vertices \(u\) and \(v\) must have an odd number of valency 2 vertices. Denote such a path by X. Now consider two circuits starting from the middle vertex of X in opposite directions. They will arrive at vertices \(u\) and \(v\) after the same number of steps. But \(\rho(u,v)=1\), and so these circuits are not avoiding. Hence \(\Gamma\) has avoidance index 1. Now suppose that \(\Gamma\) is bipartite. Then as \(a=0\), the paths B, C, D between vertices \(u\) and \(v\) also have an even number of valency 2 vertices and as \(\Gamma\) is simple, \(b,c,d\geq 2\). Consider two Eulerian circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) starting at a common vertex. For \(j=1,2\), define mappings \(T_{j}\) from the set \(\{0,1,\ldots,|E|\}\) to \(V\) such that for \(0\leq i\leq|E|\), \(T_{j}(i)\) is the vertex in the circuit \(j\) reached after \(i\) steps. Since \(\Gamma\) is bipartite, \(T_{1}(i)\) and \(T_{2}(i)\) will lie in the same partition, so that \(\rho(T_{1}(i),T_{2}(i))\) is even. Hence if \(T_{1}(i)\) and \(T_{2}(i)\) are distinct then \(\rho(T_{1}(i),T_{2}(i))>1\). Thus it suffices to show, for every vertex, the existence of a pair of Eulerian circuits \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), starting at that vertex such that \(T_{1}(i)\) and \(T_{2}(i)\) are distinct for every \(i:0<i<|E|\). First consider Eulerian circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) starting at vertex \(u\). Suppose that \(\mathcal{C}_{1}\) follows path D to vertex \(v\), then path C to vertex \(u\), path B to vertex \(v\) and finally path A to return to vertex \(u\). Meanwhile \(\mathcal{C}_{2}\) follows path A to vertex \(v\), then path B to vertex \(u\), path D to vertex \(v\) and finally path C to return to vertex \(u\). Schematically these circuits can be represented by \(u\)D\(v\)C\(u\)B\(v\)\(u\) and \(u\)\(v\)B\(u\)D\(v\)C\(u\). First consider the vertices \(u\) and \(v\). Now \(T_{1}(i)=u\) for \(i=d+c+2\) and \(T_{2}(i)=u\) for \(i=b+2\). As \(d+c>b\), there is no conflict at vertex \(u\). For vertex \(v\), we have \(T_{1}(i)=v\) for \(i=d+1\) and \(i=d+c+b+3\), and \(T_{2}(i)=v\) for \(i=1\) and \(i=b+d+3\). As \(1<d+1<b+d+3<d+c+b+3\), there is also no conflict at vertex \(v\). Both circuits traverse path C from vertex \(v\) and path D from vertex \(u\) in the same direction. Since \(b\leq c\leq d\), \(T_{1}(i)\in\mathrm{B}\implies T_{2}(i)\in\mathrm{C}\cup\{v\}\) and \(T_{2}(i)\in\mathrm{B}\implies T_{1}(i)\in\mathrm{D}\cup\{v\}\). Thus there are no conflicts on any of the paths and hence the two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are avoiding.
Now consider Eulerian circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) starting at vertex \(s\), an arbitrary valency 2 vertex on path X, where X is one of the paths B, C or D. Let the other two paths of B, C or D be denoted by Y and Z, and let the number of valency 2 vertices in X, Y and Z be \(x\), \(y\) and \(z\) respectively with \(y\leq z\). Let X\({}_{u}\) and X\({}_{v}\) be the subpaths from \(s\) to \(u\) and \(v\) respectively, with \(x_{u}\) and \(x_{v}\) valency 2 vertices apart from vertex \(s\), so that \(x_{u}+x_{v}+1=x\). Recall that \(x\) is even. Without loss of generality, assume that \(x_{u}<x_{v}\). There are two cases to consider. First, the case \(x_{u}+1<x_{v}\). Suppose that \(\mathcal{C}_{1}\) follows path X\({}_{u}\) to vertex \(u\), then path A to vertex \(v\), path Z to vertex \(u\), path Y to vertex \(v\) and finally path X\({}_{v}\) to return to vertex \(s\). Meanwhile \(\mathcal{C}_{2}\) follows path X\({}_{v}\) to vertex \(v\), then path Z to vertex \(u\), path Y to vertex \(v\), path A to vertex \(u\) and finally path X\({}_{u}\) to return to vertex \(s\). Schematically these circuits can be represented by \(s\)X\({}_{u}\)\(u\)\(v\)Z\(u\)Y\(v\)X\({}_{v}\)\(s\) and \(s\)X\({}_{v}\)\(v\)Z\(u\)Y\(v\)\(u\)X\({}_{u}\)\(s\). Consider the vertices \(u\) and \(v\). Now \(T_{1}(i)=u\) for \(i=x_{u}+1\) and \(i=x_{u}+z+3\) and \(T_{2}(i)=u\) for \(i=x_{v}+z+2\) and \(i=x_{v}+z+y+4\). As \(x_{u}+1<x_{u}+z+3<x_{v}+z+2<x_{v}+z+y+4\), there is no conflict at vertex \(u\). For vertex \(v\), we have \(T_{1}(i)=v\) for \(i=x_{u}+2\) and \(i=x_{u}+z+y+4\), and \(T_{2}(i)=v\) for \(i=x_{v}+1\) and \(i=x_{v}+z+y+3\). Now since \(d<a+b+c+3\), it follows that \(x_{v}<d<b+c+3\leq y+z+3\). Therefore \(x_{u}+2<x_{v}+1<x_{u}+z+y+4<x_{v}+z+y+3\), and so there is also no conflict at vertex \(v\). Both circuits traverse path Y from vertex \(u\) and path Z from vertex \(v\) in the same direction. For the path X\({}_{u}\), we have that \(T_{1}(i)\in\) X\({}_{u}\) for \(1\leq i\leq x_{u}\) and \(T_{2}(i)\in\) X\({}_{u}\) for \(|E|-x_{u}\leq i\leq|E|-1\). Since \(2x_{u}<x<|E|\), these two intervals are disjoint. A similar argument applies for path X\({}_{v}\). Since \(2x_{v}<2d-2<|E|\), again the two intervals are disjoint. Thus there are no conflicts on any of the paths and hence the two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are avoiding. The second case is where \(x_{u}+1=x_{v}\). In this case the two paths X\({}_{u}\) and X\({}_{v}\) are interchanged in the circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) defined above to give circuits \(s\)X\({}_{v}\)\(v\)\(u\)Z\(v\)Y\(u\)X\({}_{u}\)\(s\) and \(s\)X\({}_{u}\)\(u\)Z\(v\)Y\(u\)\(v\)X\({}_{v}\)\(s\). Again consider the vertices \(u\) and \(v\). Now \(T_{1}(i)=u\) for \(i=x_{v}+2\) and \(i=x_{v}+z+y+4\) and \(T_{2}(i)=u\) for \(i=x_{u}+1\) and \(i=x_{u}+z+y+3\). As \(x_{u}+1<x_{v}+2<x_{u}+z+y+3<x_{v}+z+y+4\), there is no conflict at vertex \(u\). For vertex \(v\), we have \(T_{1}(i)=v\) for \(i=x_{v}+1\) and \(i=x_{v}+z+3\), and \(T_{2}(i)=v\) for \(i=x_{u}+z+2\) and \(i=x_{u}+z+y+4\). Since \(x_{v}+1<x_{u}+z+2<x_{v}+z+3<x_{u}+z+y+4\), there is also no conflict at vertex \(v\). The remainder of the proof is as in the case where \(x_{u}+1<x_{v}\). Hence the two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are avoiding. The above theorem gives a construction for edge-minimal doubly Eulerian graphs for all even orders \(n\geq 8\). The graphs have edge excess equal to 2.
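Since the case analysis is intricate, a brute-force check is a useful companion for small parameter values. The following sketch (our own code, not from the paper) builds \(\Gamma(a,b,c,d)\), enumerates all Eulerian circuits from each vertex by backtracking, and tests whether some pair of them is avoiding; the enumeration is exponential in the number of edges, so it is only practical for small graphs.

```python
from itertools import combinations

def gamma_graph(a, b, c, d):
    """Adjacency sets of Gamma(a,b,c,d): vertices u=0 and v=1 joined by four
    internally disjoint paths with a, b, c, d valency-2 vertices each
    (a = 0 makes path A the single edge uv)."""
    adj = {0: set(), 1: set()}
    nxt = 2
    for length in (a, b, c, d):
        prev = 0
        for _ in range(length):
            adj[nxt] = set()
            adj[prev].add(nxt)
            adj[nxt].add(prev)
            prev, nxt = nxt, nxt + 1
        adj[prev].add(1)
        adj[1].add(prev)
    return adj

def eulerian_circuits(adj, start):
    """Yield every Eulerian circuit from `start` as a vertex sequence."""
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    used, path = set(), [start]
    def extend(x):
        if len(path) == m + 1:        # all m edges traversed
            if x == start:
                yield path[:]
            return
        for y in sorted(adj[x]):
            e = frozenset((x, y))
            if e not in used:
                used.add(e)
                path.append(y)
                yield from extend(y)
                path.pop()
                used.discard(e)
    yield from extend(start)

def avoiding(c1, c2, adj):
    """Criteria (1) and (2): distance at least 2 at every interior step."""
    return all(x != y and y not in adj[x]
               for x, y in zip(c1[1:-1], c2[1:-1]))

def doubly_eulerian(adj):
    """True iff an avoiding pair of Eulerian circuits exists from every vertex."""
    for s in adj:
        circuits = list(eulerian_circuits(adj, s))
        if not any(avoiding(c1, c2, adj)
                   for c1, c2 in combinations(circuits, 2)):
            return False
    return True

print(doubly_eulerian(gamma_graph(0, 2, 2, 2)))  # bipartite case of Theorem 4.3
print(doubly_eulerian(gamma_graph(0, 2, 3, 3)))  # non-bipartite case
```

By Theorem 4.3 the first call should report that the graph of Figure 5(a) is doubly Eulerian, while the second, non-bipartite graph is not.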
For \(n=10\), in addition to the graph \(\Gamma(0,2,2,4)\) given by the above theorem, we find that there is a further doubly Eulerian graph with edge excess equal to 2, namely \(\Gamma(1,1,3,3)\), also bipartite. There are three edge-minimal doubly Eulerian graphs on 12 vertices, \(\Gamma(0,2,2,6)\), \(\Gamma(0,2,4,4)\) and \(\Gamma(1,3,3,3)\), again all bipartite. The smallest non-bipartite doubly Eulerian graph of even order and with edge excess equal to 2 occurs when \(n=14\). There are two such graphs, \(\Gamma(1,2,4,5)\) and \(\Gamma(1,3,4,4)\). Bipartite graphs are \(\Gamma(0,2,4,6)\), \(\Gamma(0,4,4,4)\), \(\Gamma(1,3,3,5)\) and \(\Gamma(2,2,4,4)\). It remains to consider graphs of odd order. **Theorem 4.4**.: _The graph \(\Gamma(1,4,c,d)\) where \(d=c\geq 4\) has avoidance index 2._ Proof.: Counting from vertex \(u\) to vertex \(v\), let the valency 2 vertices on paths A, B, C, D be \(a\), \(b_{1}\), \(b_{2}\), \(b_{3}\), \(b_{4}\), \(c_{1},\ldots,c_{m}\), \(d_{1},\ldots,d_{m}\), where \(m=c=d\). By symmetry, we only need to consider the cases with Eulerian circuits starting at \(u\), \(a\), \(b_{i}\), \(i=1,2\) and \(c_{i}\), \(1\leq i\leq\lfloor m/2\rfloor\). Below in tabular form are avoiding Eulerian circuits together with the number of steps taken to reach the vertices \(u\) and \(v\). From this information it is straightforward to verify, using arguments similar to those employed in the proof of the previous theorem, that the circuits always remain at distance at least 2 apart and so are avoiding. \begin{tabular}{llccccccccc} Vertex \(u\) & Circuit \(\mathcal{C}_{1}\) & \(u\) & \(a\) & \(v\) & C & \(u\) & D & \(v\) & B & \(u\) \\ & \# steps & 0 & & 2 & & \(m+3\) & & \(2m+4\) & & \(2m+9\) \\ & Circuit \(\mathcal{C}_{2}\) & \(u\) & B & \(v\) & C & \(u\) & D & \(v\) & \(a\) & \(u\) \\ & \# steps & 0 & & 5 & & \(m+6\) & & \(2m+7\) & & \(2m+9\) \\ \end{tabular} \begin{tabular}{llccccccccc} Vertex \(a\) & Circuit \(\mathcal{C}_{1}\) & \(a\) & \(u\) & C & \(v\) & D & \(u\) & B & \(v\) & \(a\) \\ & \# steps & 0 & 1 & & \(m+2\) & & \(2m+3\) & & \(2m+8\) & \(2m+9\) \\ & Circuit \(\mathcal{C}_{2}\) & \(a\) & \(v\) & B & \(u\) & C & \(v\) & D & \(u\) & \(a\) \\ & \# steps & 0 & 1 & & 6 & & \(m+7\) & & \(2m+8\) & \(2m+9\) \\ \end{tabular} \begin{tabular}{llccccccccc} Vertex \(b_{i}\) & Circuit \(\mathcal{C}_{1}\) & \(b_{i}\) & \(u\) & C & \(v\) & D & \(u\) & \(a\) & \(v\) & \(b_{i}\) \\ & \# steps & 0 & \(i\) & & \(m+1+i\) & & \(2m+2+i\) & & \(2m+4+i\) & \(2m+9\) \\ & Circuit \(\mathcal{C}_{2}\) & \(b_{i}\) & \(v\) & \(a\) & \(u\) & C & \(v\) & D & \(u\) & \(b_{i}\) \\ & \# steps & 0 & \(5-i\) & & \(7-i\) & & \(m+8-i\) & & \(2m+9-i\) & \(2m+9\) \\ \end{tabular} For \(1\leq i\leq\lfloor(m-3)/2\rfloor\), \begin{tabular}{llccccccccc} Vertex \(c_{i}\) & Circuit \(\mathcal{C}_{1}\) & \(c_{i}\) & \(u\) & D & \(v\) & B & \(u\) & \(a\) & \(v\) & \(c_{i}\) \\ & \# steps & 0 & \(i\) & & \(m+1+i\) & & \(m+6+i\) & & \(m+8+i\) & \(2m+9\) \\ & Circuit \(\mathcal{C}_{2}\) & \(c_{i}\) & \(v\) & B & \(u\) & D & \(v\) & \(a\) & \(u\) & \(c_{i}\) \\ & \# steps & 0 & \(m+1-i\) & & \(m+6-i\) & & \(2m+7-i\) & & \(2m+9-i\) & \(2m+9\) \\ \end{tabular} For \(\lfloor(m-1)/2\rfloor\leq i\leq\lfloor m/2\rfloor\), \begin{tabular}{llccccccccc} Vertex \(c_{i}\) & Circuit \(\mathcal{C}_{1}\) & \(c_{i}\) & \(u\) & D & \(v\) & B & \(u\) & \(a\) & \(v\) & \(c_{i}\) \\ & \# steps & 0 & \(i\) & & \(m+1+i\) & & \(m+6+i\) & & \(m+8+i\) & \(2m+9\) \\ & Circuit \(\mathcal{C}_{2}\) & \(c_{i}\) & \(v\) & \(a\) & \(u\) & D & \(v\) & B & \(u\) & \(c_{i}\) \\ & \# steps & 0 & \(m+1-i\) & & \(m+3-i\) & & \(2m+4-i\) & & \(2m+9-i\) & \(2m+9\) \\ \end{tabular}
The above theorem gives a construction for edge-minimal doubly Eulerian graphs for all odd orders \(n\geq 15\). Again the graphs have edge excess equal to 2. We have already noted that there are no doubly Eulerian graphs of order 3, 5 or 7. The next case \(n=9\) is of particular interest. There are no doubly Eulerian graphs with edge excess equal to 2. Edge-minimal graphs of this order have edge excess equal to 3; there are two such graphs, illustrated in Figure 6. For the cases of \(n=11\) and \(n=13\), the graphs \(\Gamma(1,2,3,3)\) and \(\Gamma(1,2,4,4)\) respectively are the unique edge-minimal doubly Eulerian graphs of these orders. Theorem 4.4 is a specific case of the more general theorem below, which we give without proof. **Theorem 4.5**.: _The graph \(\Gamma(a,b,c,d)\) where \(a\geq 1\), \(b\geq a+3\) and \(d<a+b+c+3\) has avoidance index 2._ The proof follows arguments similar to those employed in the proof of that theorem, but is both lengthy and tedious. This leaves the case where \(a\geq 1\) and \(a\leq b\leq a+2\), but as stated above we find that this is very complex and divides into a number of different cases and subcases. Figure 6: The two doubly Eulerian graphs of order 9 and edge excess 3 ## 5 Avoidance index We have concentrated so far on graphs which admit a pair of avoiding Eulerian circuits starting at any vertex. A natural extension of this idea is to ask for more than two such circuits at every vertex; as we defined earlier, the maximum number \(k\) such that a graph \(G\) admits a set of \(k\) mutually avoiding Eulerian circuits from any vertex is called the _avoidance index_ \(\operatorname{av}(G)\). Some simple bounds on the avoidance index are provided by the following lemma. **Lemma 5.1**.: _Let \(G\) be an Eulerian graph of order \(n\), minimum degree \(\delta\) and maximum degree \(\Delta\). Then:_ (i) \(\operatorname{av}(G)\leq\delta\)_;_ (ii) \(\operatorname{av}(G)\leq\min_{v\in V(G)}\alpha(N(v))\)_, where_ \(\alpha(N(v))\) _is the independence number of the subgraph induced by the neighbours of_ \(v\)_;_ (iii) _If_ \(\Delta=n-1\) _then_ \(\operatorname{av}(G)=1\)_, else_ \(\operatorname{av}(G)\leq\min_{v\in V(G)}\frac{S(v)-2}{\deg(v)}+1\)_, where_ \(S(v)\) _is the sum of the degrees of the non-neighbours of_ \(v\) _(other than itself) in_ \(G\)_. In particular, if_ \(\Delta\leq n-2\) _then_ \(\operatorname{av}(G)\leq n-\Delta-1\)_._ Proof.: Parts (i) and (ii) are immediate since at the first step after the starting vertex, all circuits must be at distinct mutually nonadjacent vertices. The first part of (iii) follows from Lemma 2.1; note that \(n\) must be odd in this case. For the second part, assume \(\Delta\leq n-2\); consider a vertex \(v\) and suppose we have \(k\) mutually avoiding Eulerian circuits beginning and ending at a vertex \(u\) which is not adjacent to \(v\). Each of these circuits visits \(v\) exactly \(\deg(v)/2\) times, for a total of \(k\deg(v)/2\) such visits across all the circuits. On each of these visits, at the same step the other \(k-1\) circuits must all be at some non-neighbour of \(v\). In any circuit, the number of visits to a non-neighbour \(w\) of \(v\), \(w\neq u\), is \(\deg(w)/2\), and the number of visits to \(u\) (excluding the start and end steps) is \(\deg(u)/2-1\). So the largest possible number of visits to a non-neighbour of \(v\) across all \(k\) circuits, excluding at the start and end points, is \(k(S(v)/2-1)\).
Thus \((k-1)k\deg(v)/2\leq k(S(v)/2-1)\) and the result follows. The final part of (iii) follows by considering a vertex \(v\) of degree \(\Delta\); in this case \(v\) has \(n-\Delta-1\) non-neighbours and so \(S(v)\leq(n-\Delta-1)\Delta\). Lemma 5.1(iii) shows that in the case of a \(d\)-regular Eulerian graph of order \(2d\), the maximum possible value of the avoidance index is \(d-1\). Our next result shows that this upper bound can be attained for all even \(d\geq 2\). **Theorem 5.2**.: _Let \(G\) be the complete bipartite graph \(K_{2s,2s}\), \(s\geq 1\). Then \(\operatorname{av}(G)=2s-1\)._ Proof.: In the notation of Lemma 5.1, \(\delta=\Delta=\alpha=2s\). Therefore \(\operatorname{av}(G)\leq 2s-1\). We prove that \(\operatorname{av}(G)=2s-1\). The result is trivially true for \(s=1\). So assume that \(s\geq 2\). Represent the two parts of the bipartition of the vertices of \(G\) as the rows and columns respectively of a square \(Q\) of side \(2s\), both indexed by the set \(\{0,1,2,\ldots,2s-1\}\). The entries in the square then represent the edges of the graph. We number these as follows. \(Q(0,0)=1\), \(Q(0,1)=4s\), \(Q(1,0)=2\), \(Q(1,1)=3\). \(Q(2i,1)=4i\), \(Q(2i,2)=4i+1\), \(Q(2i+1,2)=4i+2\), \(Q(2i+1,1)=4i+3\), \(1\leq i\leq s-1\). \(Q(i,j)=Q(i,j-2)+4s\), \(0\leq i\leq 1\), \(2\leq j\leq 2s-1\) and \(2\leq i\leq 2s-1\), \(3\leq j\leq 2s\). Arithmetic in the indices of the square is performed modulo \(2s\). The numbers form a "zig-zag" pattern in the square, with each number being alternately in the same column or same row as its predecessor. (The ordering for the case \(s=3\) is shown in the following example.) They therefore give an Eulerian circuit beginning and ending at the vertex row \(0\), with the numbers indicating the order in which the edges are traversed. This may be seen more easily if the above scheme is described algorithmically as follows, starting with \(Q(0,0)=1\). The operations \(D\), \(R\) and \(L\) mean moving down, right and left respectively and inserting the next number, the square \(Q\) being regarded cyclically. The algorithm is: \((D,R,(D,R,D,L)\) performed \((s-1)\) times, \(D,R)\) performed \(s\) times. Any numbering of the square \(Q\) which has the "zig-zag" pattern as described above will represent an Eulerian circuit. In order to construct mutually avoiding Eulerian circuits, all beginning and ending at the vertex row \(0\), the numbers \(1\) and \(4s^{2}\) must remain in row \(0\) but be in different columns in each square. Every other number must be in a different row and different column in each square. This is achieved by defining squares \(E_{i}\) and \(E^{\prime}_{i}\), \(1\leq i\leq 2s-1\), as follows. First let \(E_{1}=E^{\prime}_{1}=Q\). Construct square \(E^{\prime}_{i}\) from square \(E^{\prime}_{i-1}\), \(2\leq i\leq 2s-1\), by moving row \(0\) one column to the right (cyclically) and every other row one column to the right (cyclically) and one row upwards, with row \(1\) becoming row \(2s-1\). Finally, construct square \(E_{i}\) from square \(E^{\prime}_{i}\), \(2\leq i\leq 2s-1\), by interchanging the entries in row \(0\), apart from \(1\) and \(4s^{2}\), of \(E^{\prime}_{i}\) with the entries in the same column of row \(i\). The squares \(E_{i}\), \(1\leq i\leq 2s-1\), give a set of \(2s-1\) mutually avoiding Eulerian circuits of the graph \(K_{2s,2s}\). The construction is illustrated by the following example for the case \(s=3\).
**Example**.: Five mutually avoiding Eulerian circuits for the complete bipartite graph \(K_{6,6}\) beginning and ending at the same vertex, and presented as squares as defined in Theorem 5.2. \[\begin{array}{c|cccccc}&0&1&2&3&4&5\\ \hline 0&1&12&13&24&25&36\\ 1&2&3&14&15&26&27\\ 2&29&4&5&16&17&28\\ 3&30&7&6&19&18&31\\ 4&33&8&9&20&21&32\\ 5&34&11&10&23&22&35\\ \end{array}\qquad\begin{array}{c|cccccc}&0&1&2&3&4&5\\ \hline 0&36&1&7&6&19&18\\ 1&28&29&4&5&16&17\\ 2&31&30&12&13&24&25\\ 3&32&33&8&9&20&21\\ 4&35&34&11&10&23&22\\ 5&27&2&3&14&15&26\\ \end{array}\qquad\begin{array}{c|cccccc}&0&1&2&3&4&5\\ \hline 0&22&36&1&11&10&23\\ 1&18&31&30&7&6&19\\ 2&21&32&33&8&9&20\\ 3&25&35&34&12&13&24\\ 4&26&27&2&3&14&15\\ 5&17&28&29&4&5&16\\ \end{array}\] \[\begin{array}{c|cccccc}&0&1&2&3&4&5\\ \hline 0&16&17&36&1&4&5\\ 1&20&21&32&33&8&9\\ 2&23&22&35&34&11&10\\ 3&15&26&27&2&3&14\\ 4&24&25&28&29&12&13\\ 5&19&18&31&30&7&6\\ \end{array}\qquad\begin{array}{c|cccccc}&0&1&2&3&4&5\\ \hline 0&9&20&21&36&1&8\\ 1&10&23&22&35&34&11\\ 2&14&15&26&27&2&3\\ 3&5&16&17&28&29&4\\ 4&6&19&18&31&30&7\\ 5&13&24&25&32&33&12\\ \end{array}\] The above theorem can easily be extended to complete multipartite graphs. **Corollary 5.3**.: _Let \(G\) be the complete multipartite graph \(K_{2s,2s,\ldots,2s}\) of order \(n=2ks\), \(s\geq 2\), \(k\geq 2\). Then \(\operatorname{av}(G)=2s-1\)._ Proof.: From Lemma 5.1, \(\operatorname{av}(G)\leq n-\Delta-1=2s-1\). The proof now proceeds by induction. It is true for \(k=2\) by Theorem 5.2. Let \(k\geq 3\) and assume that the result is true for the graph \(K_{2s,2s,\ldots,2s}\) of order \(2(k-1)s\). Now apply the procedure in Proof 2 of Theorem 2.5 to the \(2s-1\) avoiding Eulerian circuits. We then have \(2s-1\) avoiding circuits of the graph \(K_{2s,2s,\ldots,2s}\) of order \(2ks\). Thus \(\operatorname{av}(G)=2s-1\). We now briefly turn our attention to complete bipartite graphs \(K_{2r,2s}\), \(1\leq r<s\). When \(r=1\), such graphs have avoidance index \(1\), as can readily be seen by considering Eulerian circuits starting at either vertex of the \(2\)-element part. However, it is easy to find two mutually avoiding Eulerian circuits beginning and ending at any vertex of the larger part. This simple observation prompts the following definitions. The _avoidance index_ \(\operatorname{av}(v)\) of a vertex \(v\in V(G)\) of an Eulerian graph \(G\) is the maximum number of mutually avoiding Eulerian circuits beginning and ending at \(v\). This is related to the avoidance index of the graph in the obvious way: \[\operatorname{av}(G)=\min_{v\in V(G)}\operatorname{av}(v).\] We may now define the _mean avoidance index_ \(\operatorname{\overline{av}}(G)\) to be the average avoidance index across all vertices: \[\operatorname{\overline{av}}(G)=\frac{\sum_{v\in V(G)}\operatorname{av}(v)}{|V(G)|}.\] Thus for the complete bipartite graph \(K_{2,2s}\), \(\overline{\mathrm{av}}(K_{2,2s})=(4s+2)/(2s+2)=2-1/(s+1)\), which asymptotically tends to \(2\) as \(s\) tends to infinity. It is not the intention in this paper to investigate a number of interesting questions which arise with the mean avoidance index, but the observations immediately above can easily be extended to graphs \(K_{2r,2s}\), \(r<s\), where \(r\geq 2\). Here \(\delta=2r\) and \(\Delta=\alpha=2s\). Therefore, by Lemma 5.1(iii), \(\mathrm{av}(K_{2r,2s})\leq 2r-1\). The number of mutually avoiding Eulerian circuits beginning and ending at any vertex can be determined either by constructing a \(2r\times 2s\) rectangle or a \(2s\times 2r\) rectangle, and numbering as described in Theorem 5.2.
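Since the numbering scheme and the passage from \(E^{\prime}_{i-1}\) to \(E_{i}\) are entirely algorithmic, the construction can be verified mechanically. The sketch below is our own illustration (all function names are ours): it generates the squares for any \(s\), checks that each square is a valid zig-zag numbering, and confirms pairwise avoidance; for \(s=3\) it reproduces the five squares displayed above.

```python
import numpy as np

def base_square(s):
    """Zig-zag numbering Q of Theorem 5.2: from Q(0,0) = 1, repeat
    (D, R, (D, R, D, L) x (s-1), D, R) s times, indices cyclic mod 2s."""
    n = 2 * s
    Q = np.zeros((n, n), dtype=int)
    r = c = 0
    Q[0, 0] = k = 1
    moves = ("D", "R") + ("D", "R", "D", "L") * (s - 1) + ("D", "R")
    for _ in range(s):
        for mv in moves:
            r = (r + 1) % n if mv == "D" else r
            c = (c + 1) % n if mv == "R" else (c - 1) % n if mv == "L" else c
            k += 1
            if k > n * n:                   # the final move closes the circuit
                return Q
            Q[r, c] = k
    return Q

def avoiding_squares(s):
    """The squares E_1, ..., E_{2s-1} encoding 2s-1 mutually avoiding
    Eulerian circuits of K_{2s,2s} starting and ending at vertex 'row 0'."""
    n = 2 * s
    Ep = base_square(s)                     # E'_1 = E_1 = Q
    squares = [Ep.copy()]
    for i in range(2, n):
        new = np.empty_like(Ep)
        new[0] = np.roll(Ep[0], 1)          # row 0: one column to the right
        for r in range(1, n):               # other rows: right and one row up
            new[r - 1 if r > 1 else n - 1] = np.roll(Ep[r], 1)
        Ep = new
        Ei = Ep.copy()                      # swap row 0 with row i, fixing 1, 4s^2
        for col in range(n):
            if Ei[0, col] not in (1, n * n):
                Ei[0, col], Ei[i, col] = Ei[i, col], Ei[0, col]
        squares.append(Ei)
    return squares

def track(sq):
    """Vertex occupied after each step t, encoded ('row', r) or ('col', c):
    the shared endpoint of the edges numbered t and t+1."""
    n = sq.shape[0]
    at = {int(sq[r, c]): (r, c) for r in range(n) for c in range(n)}
    out = []
    for t in range(1, n * n):
        (r1, c1), (r2, c2) = at[t], at[t + 1]
        assert (r1 == r2) != (c1 == c2)     # valid zig-zag numbering
        out.append(("row", r1) if r1 == r2 else ("col", c1))
    return out

tracks = [track(sq) for sq in avoiding_squares(3)]      # K_{6,6}
for i, ti in enumerate(tracks):
    for tj in tracks[i + 1:]:
        # same side of the bipartition at every step, but never the same vertex
        assert all(p[0] == q[0] and p[1] != q[1] for p, q in zip(ti, tj))
print(len(tracks), "mutually avoiding circuits verified")
```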
From a vertex \(v\) in the \(2r\)-part we have \(\mathrm{av}(v)=2r-1\), and in the \(2s\)-part \(\mathrm{av}(v)=2r\). We therefore have the following result. **Theorem 5.4**.: _Let \(G\) be the complete bipartite graph \(K_{2r,2s}\), \(1\leq r<s\). Then \(\mathrm{av}(G)=2r-1\) and \(\overline{\mathrm{av}}(G)=2r-r/(r+s)\)._ **Example.** In order to illustrate the above argument, relevant rectangles for the complete bipartite graph \(K_{4,6}\) are given below. [The numbered rectangles for \(K_{4,6}\), together with the start of the subsequent discussion of 4-regular graphs (cf. Table 6), are garbled beyond recovery in the source.] Indeed for order \(12\) only one is not. This exceptional graph is illustrated in Figure 8. Clearly this graph has vertex connectivity \(1\); this is no coincidence as our next result shows. **Theorem 5.5**.: _For every \(n\geq 5\), there exists a 4-regular graph of order \(n\) which is not doubly Eulerian._ Proof.: The result is true for \(5\leq n\leq 10\) by Table 6. Let \(n\geq 11\) and suppose first that \(n=2s+1\) is odd. Let \(H\) be any 4-regular graph of order \(s\) and delete one edge from \(H\). Create a graph \(G\) by taking two disjoint copies of \(H\) and adding a new vertex \(v\) which is joined to both vertices of degree \(3\) in both copies of \(H\). Any Eulerian circuit starting and ending at \(v\) must traverse all the edges in one of the copies of \(H\) before returning to \(v\), since \(v\) is a cut vertex and is revisited only once on the circuit. Thus any two Eulerian circuits starting and ending at \(v\) must collide at this point. Now suppose \(n=2s\) is even, \(s\geq 6\). Choose any 4-regular graph \(H\) of order \(s-1\) and delete one edge from it. Let \(K\) be a graph of order \(s\) which has \(s-2\) vertices of degree \(4\) and exactly two vertices of degree \(3\), which must be adjacent. (Such a graph can always be constructed: we start with a circulant graph of order \(s\) and connection set \(\{-2,-1,1,2\}\), delete the edges from \(0\) to \(-1\) and \(1\) to \(2\) and add a new edge from \(-1\) to \(2\).) As before, construct a graph \(G\) by taking a copy of \(H\) and \(K\) plus a new vertex \(v\) which is joined to the degree \(3\) vertices in \(H\) and \(K\).
Suppose we have a pair of mutually avoiding Eulerian circuits starting and ending at \(v\). By the argument in the case of odd \(n\), one circuit must visit all the edges of \(H\) before returning to \(v\); the other must have been in \(K\) and, at the point when the first circuit is back at \(v\), the second is two steps away from \(v\). After one more step, the circuits are on the pair of vertices which have degree \(3\) in \(K\), and these are adjacent. (This situation is clearly seen by referring to Figure 8.) It is interesting to note that there are exactly three ways to construct a graph \(G\) of order \(13\) in the manner of the proof of Theorem 5.5, and from Table 6 these account for all the 4-regular graphs of order \(13\) which are not doubly Eulerian. Turning now to 6-regular graphs, by Lemma 5.1 the smallest possible order of a doubly Eulerian graph is \(9\). There are four 6-regular Eulerian graphs of order \(9\), each of which is a complete graph \(K_{9}\) from which a 2-factor has been removed. The two which have respectively either a 9-cycle or three 3-cycles removed are circulants, and therefore vertex-transitive. These both have avoidance index \(2\). The other two, which have respectively either a 6-cycle and a 3-cycle or a 5-cycle and a 4-cycle removed, only have avoidance index \(1\). There are 21 6-regular graphs of order \(10\). By Lemma 5.1, the maximum possible avoidance index is \(3\). In fact, no 6-regular graph of order \(10\) has avoidance index \(3\). The computations required to determine the split of these 21 graphs between avoidance index \(1\) and \(2\) are substantial, and we have not attempted this. Finally in this section we present a doubling construction for Eulerian circulant graphs. **Theorem 5.6**.: _Let \(G\) be an Eulerian circulant graph of order \(n\) and degree \(d\). Suppose that \(G\) admits a set of \(k\) mutually avoiding Eulerian circuits. Then there exists a graph of order \(2n\) and degree \(2d\) which also admits \(k\) mutually avoiding Eulerian circuits._ Figure 8: The unique 4-regular graph of order \(12\) which is not doubly Eulerian Proof.: We follow an amended version of Proof 2 of Theorem 2.6. Let the vertices of \(G\) be \(\{0,1,\ldots,n-1\}\) and suppose the connection set of \(G\) is \(S=\{s_{1},s_{2},\ldots,s_{d}\}\). Create a new graph \(G^{\prime}\) by adding new vertices \(\{0^{\prime},1^{\prime},\ldots,(n-1)^{\prime}\}\). In \(G\) the existing edges are of the form \(i\)\((i+s)\) for every \(s\in S\); in \(G^{\prime}\) we add the edges \(i\)\((i+s)^{\prime}\) and \(i^{\prime}\)\((i+s)^{\prime}\) for every \(s\in S\) (with arithmetic modulo \(n\)). Let \(H^{\prime}\) be the subgraph of \(G^{\prime}\) consisting of all the edges of the forms \(i\)\((i+s)^{\prime}\) and \(i^{\prime}\)\((i+s)^{\prime}\). Then \(H^{\prime}\) is not regular, but has degrees \(d\) and \(2d\) and hence is Eulerian. Consider a set \(\{C_{1},C_{2},\ldots,C_{k}\}\) of mutually avoiding Eulerian circuits in \(G\) starting and ending at vertex \(0\). Construct an Eulerian circuit \(C_{1}^{\prime}\) in \(G^{\prime}\) as follows. Follow \(C_{1}\) until the first return to vertex \(0\) after the start point; then follow any Eulerian circuit in \(H^{\prime}\) starting and ending at \(0\); then follow the remainder of \(C_{1}\). Now construct a circuit \(C_{2}^{\prime}\) by following \(C_{2}\) until the point that \(C_{1}\) has returned to \(0\); at this point \(C_{2}\) will be at some vertex \(z\).
Now continue \(C_{2}^{\prime}\) by following a 'parallel' circuit to that followed by \(C_{1}^{\prime}\), i.e. if \(C_{1}^{\prime}\) visits vertex \(x\) or \(x^{\prime}\) then \(C_{2}^{\prime}\) visits \(x+z\) or \((x+z)^{\prime}\) respectively (arithmetic modulo \(n\)). The parallel circuit ends when all the edges of \(H^{\prime}\) have been visited, so \(C_{2}^{\prime}\) is back at vertex \(z\) (and \(C_{1}^{\prime}\) is back at \(0\)). Now continue with the remaining edges of \(C_{2}\). Continue in a similar way to construct circuits \(C_{3}^{\prime},\ldots,C_{k}^{\prime}\). The set \(\{C_{1}^{\prime},C_{2}^{\prime},\ldots,C_{k}^{\prime}\}\) is a set of \(k\) mutually avoiding Eulerian circuits in \(G^{\prime}\). ## 6 Summary and future research In this paper we have introduced the concepts of a doubly Eulerian graph and more generally, the avoidance index of an Eulerian graph. Not all Eulerian graphs are doubly Eulerian; indeed as we observed in the Introduction the extremal ones, i.e. the complete graphs on an odd number of vertices and the cycles, are not. It was natural therefore to study the question of what could be said about extremal doubly Eulerian graphs and construct examples. In Section 2, we proved that for a graph of odd order \(n\), an edge-maximal doubly Eulerian graph must be a complete graph minus a \(2\)-factor. We proved in Theorem 2.4 that there exist infinite classes with a complete cycle removed; and in Theorem 2.3 that, when \(n\) is divisible by \(3\), there exist classes with a set of \(n/3\) disjoint triangles removed. From Lemma 5.1, the avoidance index of regular graphs of odd order \(n\) and degree \(n-3\) is either \(1\) or \(2\). When \(n=9\), removing cycles of the same (resp. different) lengths gives a graph with avoidance index \(2\) (resp. \(1\)). Is this the case more generally? The situation for \(n=11\), \(13\) and \(15\) may repay further investigation. We present this as the first problem. **Problem 1**.: Classify which regular graphs of odd order \(n\) and degree \(n-3\) have avoidance index either \(1\) or \(2\). For graphs of even order \(n\), an edge-maximal doubly Eulerian graph must be a complete graph from which a regular cubic graph, not necessarily connected, has been removed. Again from Lemma 5.1, the avoidance index of regular graphs of even order \(n\) and degree \(n-4\) is \(1\), \(2\) or \(3\). From Corollary 5.3, when \(n\) is doubly even removing a set of \(n/4\) disjoint complete graphs \(K_{4}\) gives a graph with avoidance index \(3\). The six regular graphs of order \(8\) and degree \(4\) were discussed in Section 2. Of these, \(K_{4,4}\) has avoidance index \(3\), four graphs have avoidance index \(2\) and there is a unique graph with avoidance index \(1\). We have not attempted an investigation of the 94 8-regular graphs of order \(12\), but for someone with access to powerful computer facilities we give this as the second problem. **Problem 2**.: Determine the avoidance index of the 94 8-regular graphs of order \(12\). Turning now to complete bipartite graphs, we have proved (Theorems 5.2 and 5.4) that the avoidance index of the graph \(K_{2s,2s}\), \(s\geq 1\), is \(2s-1\) and of the graph \(K_{2r,2s}\), \(1\leq r<s\), is \(2r-1\). For the graph \(K_{2r+1,2r+1}^{*}\), which is the complete bipartite graph \(K_{2r+1,2r+1}\) minus a perfect matching, we have shown that for \(r=2,3\) the avoidance index is \(2r\); we conjecture that this holds for all \(r\geq 2\). The third problem is to prove this.
**Problem 3**.: Prove that \(\operatorname{av}(K_{2r+1,2r+1}^{*})=2r\), \(r\geq 2\). If the conjecture of Problem 3 is true then the graphs \(K_{2r+1,2r+1}^{*}\), \(r\geq 2\) are examples of saturated graphs, i.e. those whose avoidance index is equal to the degree of the regular graph. As we observed in Section 5, there can be no saturated graphs of degree 2 and we have examined the situation for degree 4 for graphs of order 13 or less. But we know of no infinite class. Based on our computer calculations outlined in Section 5, we conjecture that circulant graphs with various generating sets may provide such classes and this is the next problem. The conjecture also applies to regular graphs of degree 6 (and indeed of higher degree). The circulant graphs on \(\mathbb{Z}_{14}\), \(\mathbb{Z}_{16}\) and \(\mathbb{Z}_{18}\) with generating set \(\{1,3,5\}\) are also saturated. The first of these is of course the graph \(K^{*}_{7,7}\). **Problem 4**.: Prove that the circulant graphs on the group \(\mathbb{Z}_{n}\) with various generating sets are saturated for large enough \(n\). The smallest saturated non-Cayley graph is the graph \(K_{6,6}\) from which an 8-cycle and a 4-cycle have been removed. It is tempting to speculate that this may be the smallest example of an infinite class, but notwithstanding this, it leads to the next investigation. **Problem 5**.: Find more examples of non-Cayley regular graphs which are saturated. With the exception of the case where \(n=9\), edge-minimal doubly Eulerian graphs have edge excess equal to 2. The structure of such graphs is that they have two vertices of degree 4 connected by four paths containing \(a\), \(b\), \(c\) and \(d\) valency 2 vertices respectively, \(0\leq a\leq b\leq c\leq d\), and are thus characterised by these four parameters. In the important case where the two valency 4 vertices are adjacent, i.e. \(a=0\), Lemma 4.1 and Theorem 4.3 provide a complete analysis of which graphs have avoidance index 1 and which have avoidance index 2. This enabled us to find edge-minimal doubly Eulerian graphs for all even orders \(n\geq 8\). The graphs are bipartite. Our investigations suggest that a complete analysis of which edge excess 2 graphs have avoidance index 1 or 2 would be extraordinarily long and tedious. Non-bipartite edge-minimal doubly Eulerian graphs of even order and with edge excess equal to 2 do exist; the smallest is of order 14. This motivates the next problem. **Problem 6**.: Prove that there exists a non-bipartite edge-minimal doubly Eulerian graph of order \(n\) with edge excess equal to 2 for all even \(n\geq 14\). We also proved that there exist (edge-minimal) doubly Eulerian graphs with edge excess equal to 2 for all odd \(n\geq 11\). These are not bipartite; indeed it is easy to see that no bipartite graph of odd order and with edge excess equal to 2 can exist. Either one of the parameters \(a,b,c,d\) is odd and the other three are even or vice-versa. Thus edge-minimal bipartite graphs of odd order must have edge excess at least 3. The two examples for \(n=9\) are shown in Figure 6. **Problem 7**.: Prove that there exists a bipartite edge-minimal doubly Eulerian graph of order \(n\) with edge excess equal to 3 for all odd \(n\geq 9\). This leads naturally to the next investigation. An Eulerian graph with edge excess equal to 2 cannot be Hamiltonian. So if we wish to find edge-minimal Hamiltonian doubly Eulerian graphs, they must have edge excess at least 3.
The structure of these graphs with edge excess 3 is immediate; such a graph on \(n\) vertices must consist of an \(n\)-cycle (the Hamiltonian cycle), with \(n-3\) vertices of valency 2 and the other three vertices, say \(u,v,w\), of valency 4 forming a triangle. Let the number of vertices along the Hamiltonian cycle between vertices \(u\) and \(v\), \(v\) and \(w\) and \(w\) and \(u\) be \(a,b,c\) respectively, where \(a+b+c=n-3\). Without loss of generality we may assume that \(a\leq b\leq c\). Denote this graph by \(\Lambda=\Lambda(V,E)=\Lambda(a,b,c)\). An example is shown in Figure 9. It is immediate that if any of the parameters \(a,b,c\) are odd then the graph is not doubly Eulerian, because two circuits starting from the midpoint of that division of the Hamiltonian cycle will be at two of the valency 4 vertices at the same time, and these are adjacent. **Problem 8**.: Prove necessary and sufficient conditions on the parameters \(a,b,c\) for the graph \(\Lambda(a,b,c)\) to be doubly Eulerian. In particular, prove that there exists a Hamiltonian edge-minimal doubly Eulerian graph of order \(n\) and edge excess 3 for all odd \(n\geq 15\). The next problem relates to Eulerian graphs which are not doubly Eulerian. From Table 6, 4-regular graphs with avoidance index 1 seem to be rare. But in Theorem 5.5 we gave a simple proof that there exists such a graph for every order \(n\geq 5\). For \(n\geq 11\), these graphs have a cut vertex; indeed this feature is a critical part of the proof. However it would be interesting to discover whether there are other graphs. **Problem 9**.: Investigate whether for every \(n\geq 14\), there exists a \(4\)-regular graph of order \(n\) which is not doubly Eulerian and does not contain a cut vertex. Finally, in Section 5 we extended the definition of the avoidance index of an Eulerian graph to that of a vertex, and thus to the mean avoidance index of a graph. Apart from Theorem 5.4 we did not prove any results about the mean avoidance index and we have not investigated this concept in any detail. There are undoubtedly many questions that could be asked, and the whole field is completely open. So the final problem is more general. **Problem 10**.: Investigate the concepts of the avoidance index of a vertex and the mean avoidance index.
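Several of the open questions above (Problems 1, 2 and 9 in particular) invite computer search. As a starting point, here is a brute-force sketch of ours, not taken from the paper: it decides whether \(k\) mutually avoiding Eulerian circuits exist from a given start vertex by extending all \(k\) circuits in lockstep and pruning as soon as two of them come within distance 1, and computes the avoidance index as the largest \(k\) attainable from every vertex. It assumes a simple connected Eulerian graph and is practical only for small cases.

```python
from collections import deque

def distances(adj):
    """All-pairs graph distances by BFS."""
    table = {}
    for s in adj:
        d, todo = {s: 0}, deque([s])
        while todo:
            x = todo.popleft()
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    todo.append(y)
        table[s] = d
    return table

def k_avoiding(adj, start, k, dist=None):
    """True iff k mutually avoiding Eulerian circuits exist from `start`.
    Lockstep backtracking; no symmetry pruning, for clarity."""
    dist = dist or distances(adj)
    m = sum(len(ns) for ns in adj.values()) // 2     # number of edges
    used = [set() for _ in range(k)]                 # edges used per circuit
    pos = [start] * k

    def extend(t, i):
        # circuits < i have taken t+1 steps, circuits >= i only t steps
        if i == k:
            return t + 1 == m or extend(t + 1, 0)
        for nxt in adj[pos[i]]:
            e = frozenset((pos[i], nxt))
            if e in used[i]:
                continue
            if t + 1 == m:
                if nxt != start:          # final step must close the circuit
                    continue
            elif any(dist[nxt][pos[j]] < 2 for j in range(i)):
                continue                  # two circuits within distance 1
            prev, pos[i] = pos[i], nxt
            used[i].add(e)
            if extend(t, i + 1):
                return True
            pos[i] = prev
            used[i].discard(e)
        return False

    return extend(0, 0)

def avoidance_index(adj):
    dist = distances(adj)
    k = 1
    while all(k_avoiding(adj, v, k + 1, dist) for v in adj):
        k += 1
    return k

C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}    # the 4-cycle
print(avoidance_index(C4))    # cycles are not doubly Eulerian: prints 1
```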
2306.04533
Generalized skyrmion crystals with applications to neutron stars
In this article we study properties of isospin asymmetric nuclear matter in the generalized Skyrme model. This is achieved by canonically quantizing the isospin collective degrees of freedom of the recently found multi-wall skyrmion crystal. We obtain, for the first time, an equation of state from the Skyrme model which interpolates between infinite isospin asymmetric nuclear matter and finite isospin symmetric atomic nuclei. This enables us to describe neutron stars with crusts within the Skyrme framework. Furthermore, we observe that the symmetry energy tends to a constant value at zero density, which can be identified with the asymmetry coefficient in the semi-empirical mass formula for atomic nuclei. The symmetry energy also reveals a cusp in its structure below the nuclear saturation point $n_0$ at $n_*\sim 3n_0/4$. This cusp density point $n_*$ can be interpreted as the nuclear density whereby the infinite crystalline multi-wall configuration undergoes a phase transition to a finite isolated multi-wall configuration. Both of these are observed to be generic features of skyrmion crystals that tend asymptotically to somewhat isolated skyrmion configurations in the zero density limit. We find that the resulting neutron stars from our study agree quite well with recent NICER/LIGO observational data.
Paul Leask, Miguel Huidobro, Andrzej Wereszczynski
2023-06-07T15:41:10Z
http://arxiv.org/abs/2306.04533v3
# Quantized and gravitating multi-wall skyrmion crystals with applications to neutron stars ###### Abstract In this article we study properties of isospin asymmetric nuclear matter in the generalized Skyrme model. This is achieved by canonically quantizing the isospin collective degrees of freedom of the recently found multi-wall skyrmion crystal. We obtain, for the first time, an equation of state from the Skyrme model which interpolates between infinite isospin asymmetric nuclear matter and finite isospin symmetric atomic nuclei. This enables us to describe neutron stars with crusts within the Skyrme framework. Furthermore, we observe that the symmetry energy tends to a constant value at zero density, which can be identified with the asymmetry coefficient in the semi-empirical mass formula for atomic nuclei. The symmetry energy also reveals a cusp in its structure below the nuclear saturation point \(n_{0}\) at \(n_{*}\sim 3n_{0}/4\). This cusp density point \(n_{*}\) can be interpreted as the nuclear density whereby the infinite crystalline multi-wall configuration undergoes a phase transition to a finite isolated multi-wall configuration. Both of these are observed to be generic features of skyrmion crystals that tend asymptotically to somewhat isolated skyrmion configurations in the zero density limit. We find that the resulting neutron stars from our study agree quite well with recent NICER/LIGO observational data. ## 1 Introduction The Skyrme model [1] offers a unique, unified framework in which one can study baryonic matter at all scales - from single baryons and atomic nuclei to infinite nuclear matter which, after coupling the model to gravity, gives rise to neutron stars [2]. All of this emerges from an elegantly simple Lagrangian containing a limited number of terms and, in consequence, a few free coupling constants, where the fundamental degrees of freedom (d.o.f.) are the lightest mesons disguised into a matrix-valued field. In the minimal version, which is used in this work, they are pions forming an SU(2) field. The attractiveness of this approach originates not only in a very small number of parameters but also in the manifestation of baryons. Namely, they are realized as non-perturbative excitations of the mesonic field, that is, as topological solitons, called skyrmions. Importantly, the topological degree of skyrmions has been identified with the baryon charge in a rigorous way. The Skyrme model has been very extensively studied in the context of nucleons [3, 4], and light atomic nuclei [5, 6, 7, 8, 9, 10, 11, 12, 13, 14] with many spectacular results. In particular, let us mention the description of the ground and Hoyle states in \({}^{12}C\) [15] and excitation bands of \({}^{16}O\) [16, 17] as well as the emergence of \(\alpha\)-cluster structure [18] which is expected for not too heavy atomic nuclei. This recent progress to a large extent relies on an improved quantization procedure where, contrary to the usual rigid-rotor approach, one takes into account both the zero modes and the softest massive vibrations [19]. Also the long standing problem of binding energies has found a resolution by the inclusion of additional terms [20, 21, 22, 23, 24] or additional mesonic degrees of freedom [25, 26, 27, 28], both physically well motivated. Finally, it is now clear how to extract nuclear forces from the Skyrme model [29, 30], which ultimately may provide much closer contact with more traditional nuclear models.
Obviously, a natural field of application of the Skyrme framework is nuclear matter and neutron stars. The problem of infinite nuclear matter can be approached if one considers the model on a finite volume unit cell with periodic boundary conditions [31], which results in an infinite but periodic Skyrme crystal. Varying the volume of the unit cell (while keeping the baryon number fixed) allows one to study skyrmionic matter at finite densities and, inter alia, to obtain an equation of state (EoS). Taking advantage of the Tolman-Oppenheimer-Volkoff construction, one obtains neutron stars. In the traditional approach to determining the EoS, the geometry of the unit cell was fixed to be cubic and the unit cell volume was varied by homothety about the cell center. This forced the energy minimizing crystalline solutions to inherit the symmetry group of their corresponding initial configuration. Nevertheless, various crystal solutions were constructed [31, 32, 33, 34, 35]. This led to a consensus that, at moderate and large densities, the global energy minimizer should be very well approximated by the simple cubic (SC) crystal of half-skyrmions [33, 35]. At even larger densities a transition to the body centered cubic (BCC) crystal of half-skyrmions is observed [32, 36]; however, at lower densities the situation was much less clear. This is due to the fact that for each fixed classical crystal solution (which in a natural way is identified with symmetric nuclear matter), the energy \(E\) per unit cell possesses a minimum for a certain volume \(V_{*}\), which may be consistently identified with the saturation point. Obviously, for \(V>V_{*}\), the solution is thermodynamically unstable as it formally corresponds to negative pressure. However, taking into account the isospin quantum corrections and some further contributions, the classical minimum should disappear, thereby providing a thermodynamically stable description even in the low density regime. This periodic crystal was then expected to be replaced by non-homogeneous solutions in this regime [37, 38, 39, 40]. Here, for example, crystals of \(\alpha\)-particles and of \(B=32\) skyrmions have been considered [41]. Although these configurations lowered the classical energy per cell, they did not cure the instability issue [42]. In conclusion, the Skyrme model provided an EoS, but only above the saturation point. In particular, in the lower density regime which is typically identified with the crust of a neutron star, no such EoS has yet been captured by the Skyrme model. Despite this limitation, one can still use the Skyrme model EoS valid in the high density regime above the saturation point. This simply provides a skyrmionic neutron star without a crust, which is still applicable to computations of the maximal masses allowed within the model; these are in fact only slightly affected by the presence of a crust. In fact, one can include a crust by smoothly joining such an EoS with a low density EoS obtained from another approach. In both cases the results are very encouraging, especially in the generalized Skyrme model where the so-called sextic term has been included.
Indeed, this component of the generalized Skyrme Lagrangian was essential not only to significantly increase the value of the maximal mass of a neutron star (from \(1.7M_{\odot}\) [43] to above \(2M_{\odot}\) [44]), but also to render nuclear matter more like a perfect fluid, especially at higher densities, which corresponds very well to the standard picture of a (super)fluid core of a neutron star. These results are deeply anchored in the mathematical properties of the sextic term. Namely, if treated together with the (pion mass) potential term, the corresponding energy-momentum tensor has a perfect fluid form [45]. In addition it enjoys a volume preserving diffeomorphism symmetry, which means that the energy of a solution is degenerate under all deformations which do not change its volume [20]. On the contrary, deformations that change the volume are strongly penalized as the corresponding EoS has a maximally stiff form [46, 47]. This agrees with a physical interpretation of the sextic term as a part of the action which effectively arises after integrating out the \(\omega\) mesons. Indeed, EoSs obtained in the Walecka model at large densities tend to the maximally stiff form due to the \(\omega\)-meson repulsion. This description has been recently further refined by taking into account semi-classical quantization of the isospin d.o.f. [48], which enables computation of the symmetry energy and particle fractions [42]. They then took this further by considering strange matter and allowing the presence of kaons [49]. However, it should be stressed that these results were obtained for the SC crystal of half-skyrmions, which, as has recently been shown [50], is _not_ the global minimizer at any density (although, at higher densities, it approximates the true minimizer quite well). Fortunately, the Leeds group [50] have recently developed a method to obtain crystalline solutions by not only considering the variation of the Skyrme field \(\varphi:\mathbb{R}^{3}/\Lambda\to\mathrm{SU}(2)\) but also by allowing non-cubic variations of the period lattice \(\Lambda\). The main idea is the identification between all 3-tori \((\mathbb{R}^{3}/\Lambda,d)\), with \(d\) the Euclidean metric, and the unit 3-torus \(\mathbb{T}^{3}=\mathbb{R}^{3}/\mathbb{Z}^{3}\), where \(\mathbb{T}^{3}\) is equipped with the flat pullback metric \(g=F^{*}d\) via a diffeomorphism \(F:\mathbb{T}^{3}\to\mathbb{R}^{3}/\Lambda\). Varying the metric \(g\) on \(\mathbb{T}^{3}\) is equivalent to considering variations of the period lattice \(\Lambda\). It is convenient to think of the metric \(g\) as a constant symmetric-positive-definite matrix \((g_{ij})\). Then one can address this variational problem by identifying the gradient of the energy with respect to the metric \((g_{ij})\) with the stress-energy tensor \((S_{ij})\) of the field \(\varphi\). Auckly and Kapitanski [51] showed that, for fixed metric \(g\), the energy functional \(E(\varphi,g)\) attains a minimum. In [50] they proved that, for fixed field configuration \(\varphi\), any critical metric \(g\) of the energy functional \(E(\varphi,g)\) is in fact a unique local minimum. Hence the period lattice \(\Lambda\), for which the Skyrme field \(\varphi\) has minimum energy, is unique (up to automorphism). This means that the resulting periodic crystalline configuration is indeed a true energy minimizer with respect to both variations of the period lattice \(\Lambda\) and the Skyrme field \(\varphi\).
This slightly improves on the known crystalline solutions at medium and large densities, i.e. above the nuclear saturation point, but it has a tremendous impact on the low density regime where non-homogeneous solutions are expected to exist. In the current paper we apply this method to the generalized Skyrme model and obtain its lattice ground state at all densities, that is, above and below the nuclear saturation point \(n_{0}\). As we have already pointed out, the low density regime \(n_{B}<n_{0}\) within the Skyrme framework has hitherto been poorly understood. Our main aim is to derive an EoS from purely skyrmionic matter that describes matter over all density regimes. Next, we include the isospin quantum contribution and find, under the assumption of \(\beta\)-equilibrium, the particle fractions. We also obtain the symmetry energy, now with a reliable low density solution, which reveals two novel and physically very interesting features. It has _a cusp structure_ close to the nuclear saturation point and tends to a _finite value_ at zero density. Importantly, this value corresponds quite well with the asymmetry energy constant from the Bethe-Weizsacker semi-empirical mass formula (SEMF). Therefore, we may conclude that _within the solitonic Skyrme framework we smoothly interpolate between infinite nuclear matter and finite atomic nuclei_. Finally, we apply our EoS to study neutron stars. ## 2 Skyrme crystals and phases of skyrmion matter ### The Skyrme model The generalized Skyrme model consists of a single scalar field \(\varphi:\Sigma\to\mathrm{SU}(2)\) where spacetime is given by the \((3+1)\)-dimensional Lorentzian manifold \(\Sigma=\mathbb{R}\times M\) with the product metric \(g=-\mathrm{d}t^{2}+h\), and \((M,h)\) is an oriented 3-dimensional Riemannian manifold with Riemannian metric \(h\). Let us introduce oriented local coordinates \((x^{0},x^{1},x^{2},x^{3})\) on the domain \(\Sigma\) and let \(\{\partial_{0},\partial_{1},\partial_{2},\partial_{3}\}\) be a local basis for the tangent space \(T_{x}\Sigma\) at \(x\in\Sigma\), where we have denoted \(\partial/\partial x^{\mu}\equiv\partial_{\mu}\). We equip \(\mathfrak{su}(2)\) with the \(\mathrm{Ad}(\mathrm{SU}(2))\) invariant inner product \((X,Y)_{\mathfrak{su}(2)}=\frac{1}{2}\operatorname{Tr}(X^{\dagger}Y)\). Let \(\Omega\in\Omega^{2}(\mathrm{SU}(2))\otimes\mathfrak{su}(2)\) be an \(\mathfrak{su}(2)\)-valued two-form on \(\mathrm{SU}(2)\) and \(\omega\in\Omega^{1}(\mathrm{SU}(2))\otimes\mathfrak{su}(2)\) be the left Maurer-Cartan form. Then, for any left invariant vector fields \(X,Y\in T_{\varphi(x)}\,\mathrm{SU}(2)\), where \(x\in\Sigma\), we define \[\Omega(X,Y)=\left[\omega(X),\omega(Y)\right], \tag{1}\] where \([\cdot,\cdot]:\mathfrak{su}(2)\times\mathfrak{su}(2)\to\mathfrak{su}(2)\) is the usual Lie bracket. The left Maurer-Cartan form \(\omega\) defines the \(\mathfrak{su}(2)\)-valued _left current_ \[L_{\mu}:=\omega_{\varphi}(\partial_{\mu}\varphi)=\varphi^{\dagger}\partial_{ \mu}\varphi. \tag{2}\] Let us write the pullback as \(\Omega_{\mu\nu}=\varphi^{*}\Omega(\partial_{\mu},\partial_{\nu})\). Then the curvature can be expressed in terms of the \(\mathfrak{su}(2)\)-valued left current as \[\Omega_{\mu\nu}=\Omega\left(\mathrm{d}\varphi(\partial_{\mu}),\mathrm{d} \varphi(\partial_{\nu})\right)=\left[\omega_{\varphi}(\partial_{\mu}\varphi), \omega_{\varphi}(\partial_{\nu}\varphi)\right]=\left[L_{\mu},L_{\nu}\right].
\tag{3}\] Consider the trivial foliation of spacetime \(\Sigma=\mathbb{R}\times M\) into spacelike hypersurfaces \(M\) and let \(M\) be compact and without boundary. (This is the case if, for example, \(M\) is a 3-torus or the usual vacuum boundary condition \(\varphi(x\to\infty)=\mathbb{I}_{2}\) is imposed on \(M=\mathbb{R}^{3}\).) Hopf's degree theorem ensures that mappings \(\varphi:M\to\mathrm{SU}(2)\cong S^{3}\) are characterized by a homotopy invariant: the topological degree \(B\in[M,S^{3}]\cong\mathbb{Z}\). This topological degree is identified with the physical baryon number upon quantization, so we often refer to \(B\) as the baryon number, which may be computed using \[B=\int_{M}\mathrm{d}^{3}x\sqrt{-g}\,\mathcal{B}^{0},\quad\mathcal{B}^{\mu}= \frac{1}{24\pi^{2}\sqrt{-g}}\epsilon^{\mu\nu\rho\sigma}\operatorname{Tr}(L_{ \nu}L_{\rho}L_{\sigma}). \tag{4}\] We consider the generalization of the massive Skyrme Lagrangian which yields an \(\omega\)-meson-like repulsion at short distances, while also allowing the quartic Skyrme term to describe scalar meson effects. This generalized Skyrme Lagrangian is composed of four terms and is given by \[\mathcal{L}_{0246}=\mathcal{L}_{0}+\mathcal{L}_{2}+\mathcal{L}_{4}+\mathcal{L }_{6}, \tag{5}\] where the index \(i\) denotes the degree of each term as a polynomial in spatial derivatives. The four terms appearing in the energy functional are the potential, Dirichlet, Skyrme and sextic terms, respectively. It is conventional to label the models by terms used in the energy functional, e.g. the generalized model is labelled \(\mathcal{L}_{0246}\), the standard massive model is denoted \(\mathcal{L}_{024}\), the massless Skyrme model \(\mathcal{L}_{24}\) and the BPS model \(\mathcal{L}_{06}\). The first term is the potential which provides a mass for the pionic fields, \[\mathcal{L}_{0}=-\frac{c_{0}}{8\hbar^{3}}F_{\pi}^{2}m_{\pi}^{2}\operatorname{ Tr}\left(\mathbb{I}_{2}-\varphi\right). \tag{6}\] The Dirichlet, or kinetic, term is given by \[\mathcal{L}_{2}=c_{2}\frac{F_{\pi}^{2}}{16\hbar}g^{\mu\nu}\operatorname{Tr}(L _{\mu}L_{\nu}) \tag{7}\] and the Skyrme term, corresponding to the four pion interaction, is \[\mathcal{L}_{4}=\frac{c_{4}\hbar}{8e^{2}}g^{\mu\alpha}g^{\nu\beta}\operatorname {Tr}\left(\left[L_{\mu},L_{\nu}\right]\left[L_{\alpha},L_{\beta}\right]\right). \tag{8}\] Finally, we include the sextic term, defined by [52] \[\mathcal{L}_{6}=-\pi^{4}\lambda^{2}g^{\mu\nu}\mathcal{B}_{\mu}\mathcal{B}_{\nu}, \tag{9}\] where \(\mathcal{B}^{\mu}\) is the topological Chern-Simons current defined in (4). The \(c_{i}\) are coupling constants and, for the usual generalized Skyrme model, take the values \(c_{0}=c_{2}=1\) and \(c_{4}=1/4\). The free parameters of the model are the pion decay constant \(F_{\pi}\), the pion mass \(m_{\pi}\), the dimensionless Skyrme parameter \(e\), and \(\lambda\) which is related to the mass \(m_{\omega}\) and coupling constant \(g_{\omega}\) of the \(\omega\) meson via \(\lambda^{2}=g_{\omega}^{2}/(2\pi^{4}m_{\omega}^{2})\) [53]. The reduced Planck constant is \(\hbar=197.3\) MeV fm. Throughout we will use the values \[F_{\pi}=122\,{\rm MeV},\quad e=4.54,\quad m_{\pi}=140\,{\rm MeV},\quad\lambda ^{2}=1\,{\rm MeV}\,{\rm fm}^{3}. \tag{10}\] We are interested in static solutions and adopt the usual Skyrme units of length and energy. The classical energy scale is \(\tilde{E}=F_{\pi}/(4e)\) (MeV) and the length scale is \(\tilde{L}=2\hbar/(eF_{\pi})\) (fm). Thus the quantum energy scale is defined by \(\tilde{\hbar}=2e^{2}\).
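For orientation, the numerical values of these units for the parameter set (10), together with the rescaled mass and sextic coupling defined in Eqs. (12)-(13) just below, are easily evaluated; the snippet is ours and purely illustrative (all variable names are ours):

```python
import numpy as np

# Parameter set (10) and the derived Skyrme units
hbar = 197.3                                     # MeV fm
F_pi, e, m_pi, lam2 = 122.0, 4.54, 140.0, 1.0    # MeV, -, MeV, MeV fm^3

E_unit = F_pi / (4 * e)          # classical energy scale, ~6.72 MeV
L_unit = 2 * hbar / (e * F_pi)   # length scale, ~0.712 fm
hbar_eff = 2 * e**2              # quantum energy scale, ~41.2

m = 2 * m_pi / (F_pi * e)                               # Eq. (12), ~0.506
c6 = np.pi**4 * lam2 * e**4 * F_pi**2 / (2 * hbar**3)   # Eq. (13), ~40.1

print(f"E~={E_unit:.2f} MeV  L~={L_unit:.3f} fm  m={m:.3f}  c6={c6:.1f}")
```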
In these dimensionless Skyrme units, the Lagrangian is given by \[{\cal L} = \frac{\tilde{L}^{3}}{\tilde{E}}{\cal L}_{0}+\frac{\tilde{L}}{ \tilde{E}}{\cal L}_{2}+\frac{1}{\tilde{L}\tilde{E}}{\cal L}_{4}+\frac{1}{ \tilde{L}^{3}\tilde{E}}{\cal L}_{6}\] \[= -c_{0}m^{2}\,{\rm Tr}\left(\mathbb{I}_{2}-\varphi\right)+\frac{c _{2}}{2}g^{\mu\nu}\,{\rm Tr}(L_{\mu}L_{\nu})+\frac{c_{4}}{4}g^{\mu\alpha}g^{ \nu\beta}\,{\rm Tr}\left(\left[L_{\mu},L_{\nu}\right]\left[L_{\alpha},L_{\beta }\right]\right)-c_{6}g^{\mu\nu}{\cal B}_{\mu}{\cal B}_{\nu}, \tag{11}\] where the rescaled pion mass for our studies is \[m=\frac{2m_{\pi}}{F_{\pi}e} \tag{12}\] and the dimensionless sextic coupling constant is \[c_{6}=\frac{\pi^{4}\lambda^{2}e^{4}F_{\pi}^{2}}{2\hbar^{3}}. \tag{13}\] It will prove useful throughout to introduce the Hilbert energy-momentum tensor (in dimensionless Skyrme units): \[T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\partial(\sqrt{-g}{\cal L}_{0246})}{ \partial g^{\mu\nu}}=-2\frac{\partial{\cal L}_{0246}}{\partial g^{\mu\nu}}+g_ {\mu\nu}{\cal L}_{0246} \tag{14}\] \[= -c_{2}\,{\rm Tr}(L_{\mu}L_{\nu})-c_{4}g^{\alpha\beta}\,{\rm Tr}( [L_{\mu},L_{\alpha}][L_{\nu},L_{\beta}])+2c_{6}{\cal B}_{\mu}{\cal B}_{\nu}+g_ {\mu\nu}{\cal L}_{0246}.\] The static energy functional can be obtained from the timelike part of the energy-momentum tensor, \(T_{00}={\cal E}_{\rm stat}+{\cal E}_{\rm kin}\), and is given by \[M_{B}(\varphi,g) = \int_{M}{\rm d}^{3}x\sqrt{-g}\,{\cal E}_{\rm stat} \tag{15}\] \[= \int_{M}{\rm d}^{3}x\sqrt{-g}\left\{c_{0}m^{2}\,{\rm Tr}\left( \mathbb{I}_{2}-\varphi\right)-\frac{c_{2}}{2}g^{ij}\,{\rm Tr}(L_{i}L_{j})- \frac{c_{4}}{4}g^{ia}g^{jb}\,{\rm Tr}\left([L_{i},L_{j}][L_{a},L_{b}]\right)\right.\] \[\left.+c_{6}\frac{\epsilon^{ijk}\epsilon^{abc}}{(24\pi^{2}\sqrt{-g })^{2}}\,{\rm Tr}(L_{i}L_{j}L_{k})\,{\rm Tr}(L_{a}L_{b}L_{c})\right\}.\] A field configuration \(\varphi\) which minimizes the static energy functional (15), for some choice of domain metric \(g\), is referred to as a skyrmion and the static energy \(M_{B}\) is often interpreted as the classical mass of the skyrmion. The associated Euler-Lagrange field equations can be approximately solved by discretizing the static energy (15) and employing a 4th order central finite-difference method. This is carried out using the quaternionic formulation detailed below. We can then regard the static energy as a function \(M_{B}:\mathcal{C}\to\mathbb{R}\), where the discretised configuration space is the manifold \(\mathcal{C}=(S^{3})^{N_{1}\,N_{2}\,N_{3}}\subset\mathbb{R}^{4\,N_{1}\,N_{2}\,N_ {3}}\). To solve the Euler-Lagrange field equations we use arrested Newton flow: an accelerated gradient descent method with flow arresting, with some appropriate initial configuration. That is, we are solving the system of 2nd order ODEs \[\ddot{\varphi}=-\frac{\delta\mathcal{E}_{\text{stat}}}{\delta\varphi},\quad \varphi(0)=\varphi_{0}, \tag{16}\] with initial velocity \(\dot{\varphi}(0)=0\). Setting \(\psi:=\dot{\varphi}\) as the velocity with \(\psi(0)=\dot{\varphi}(0)=0\) reduces the problem to a coupled system of 1st order ODEs. We implement a 4th order Runge-Kutta method to solve this coupled system. In general, the initial configuration \(\varphi_{0}\) is not a minimizer and so it swaps its potential energy for kinetic energy as it evolves. During the evolution we check to see if the energy is increasing.
If the energy is indeed increasing, we take out all the kinetic energy in the system by setting \(\psi(t)=\dot{\varphi}(t)=0\) and restart the flow (this is the arresting criterion). Naturally the field will relax to a local, or global, minimum in some potential well. The evolution then terminates when every component of the energy gradient \(\frac{\delta M_{B}}{\delta\varphi}\) is zero within some specified tolerance, e.g. \(\text{tol}=10^{-5}\). ### Metric independent integral formulation For numerical purposes, it is convenient to utilize the quaternionic representation of the target group \(\text{SU}(2)\), which is topologically isomorphic to \(S^{3}\). Let us parameterize the unit quaternion \(\varphi\in\mathbb{H}\) by the mesonic fields \((\varphi^{0},\varphi^{1},\varphi^{2},\varphi^{3})\): \[\text{SU}(2)\ni\begin{pmatrix}\varphi^{0}+i\varphi^{3}&i\varphi^{1}+\varphi^{ 2}\\ i\varphi^{1}-\varphi^{2}&\varphi^{0}-i\varphi^{3}\end{pmatrix}\leftrightarrow( \varphi^{0},\varphi^{1},\varphi^{2},\varphi^{3})\in S^{3}, \tag{17}\] with the unitarity condition \(\sigma^{2}+\boldsymbol{\pi}\cdot\boldsymbol{\pi}=1\), where \(\boldsymbol{\pi}=(\varphi^{1},\varphi^{2},\varphi^{3})\) is normally identified with the triplet of pion fields and \(\sigma=\varphi^{0}\) with the \(\sigma\)-field. Then the Maurer-Cartan left current can be expressed as the vector quaternion: \[L_{i}=-iL_{i}^{a}\tau^{a},\quad L_{i}^{a}=\epsilon^{abc}\partial_{i}\varphi^{ b}\varphi^{c}+\partial_{i}\varphi^{0}\varphi^{a}-\partial_{i}\varphi^{a} \varphi^{0} \tag{18}\] where \(\tau^{a}\) are the isospin Pauli matrices and, similarly, the curvature in the quaternionic representation is given by \[\Omega_{ij}=[L_{i},L_{j}]=-2i\Omega_{ij}^{a}\tau^{a},\quad\Omega_{ij}^{a}= \epsilon^{abc}\partial_{i}\varphi^{b}\partial_{j}\varphi^{c}+\partial_{i} \varphi^{0}\partial_{j}\varphi^{a}-\partial_{i}\varphi^{a}\partial_{j}\varphi ^{0}. \tag{19}\] From this we get the following contractions, \[L_{i}^{a}L_{j}^{a}= \,\partial_{i}\varphi^{\mu}\partial_{j}\varphi^{\mu}, \tag{20a}\] \[\Omega_{ij}^{a}\Omega_{kl}^{a}= \,\partial_{i}\varphi^{\mu}\partial_{k}\varphi^{\mu}\partial_{j} \varphi^{\nu}\partial_{l}\varphi^{\nu}-\partial_{i}\varphi^{\mu}\partial_{l} \varphi^{\mu}\partial_{j}\varphi^{\nu}\partial_{k}\varphi^{\nu},\] (20b) \[L_{i}^{a}\Omega_{jk}^{a}= \,-\epsilon_{\mu\nu\alpha\beta}\varphi^{\mu}\partial_{i}\varphi^{ \nu}\partial_{j}\varphi^{\alpha}\partial_{k}\varphi^{\beta}. \tag{20c}\] The baryon number density in contraction form is \[\mathcal{B}^{0}=\frac{1}{12\pi^{2}\sqrt{-g}}\epsilon^{ijk}L_{i}^{a}\Omega_{jk}^{a}. \tag{21}\]
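As a purely illustrative aside of ours (not part of the original text): on a grid, carrying out the \(\epsilon\)-contractions of (20c) reduces \(\epsilon^{ijk}L_{i}^{a}\Omega_{jk}^{a}\) to \(-6\) times a \(4\times 4\) determinant per lattice site, so (21) integrates to \(B=-(1/2\pi^{2})\int\det[\varphi,\partial_{1}\varphi,\partial_{2}\varphi,\partial_{3}\varphi]\,\mathrm{d}^{3}x\), up to orientation conventions. A minimal numpy sketch (2nd-order differences rather than the 4th-order stencil used in practice; all names are ours):

```python
import numpy as np

def baryon_number(phi, dx):
    """B = -(1/2 pi^2) \int det[phi, d1 phi, d2 phi, d3 phi] d^3x for a
    unit-norm field phi of shape (4, N1, N2, N3) on a periodic grid."""
    derivs = [(np.roll(phi, -1, axis=i) - np.roll(phi, 1, axis=i)) / (2 * dx[i - 1])
              for i in (1, 2, 3)]
    cols = np.stack([phi] + derivs, axis=-1)   # (4, N1, N2, N3, 4)
    cols = np.moveaxis(cols, 0, -2)            # (N1, N2, N3, 4, 4) per-site matrices
    return -np.linalg.det(cols).sum() * np.prod(dx) / (2 * np.pi**2)

# sanity check on the vacuum phi = (1, 0, 0, 0): degree zero
N, dx = 16, (0.5, 0.5, 0.5)
vac = np.zeros((4, N, N, N)); vac[0] = 1.0
print(round(baryon_number(vac, dx)))           # -> 0
```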
For numerical simulations involving the minimization of the energy functional with respect to variations of the metric, it will be convenient to define the metric independent integrals: \[W(\varphi)= \,c_{0}m^{2}\int_{M}\mathrm{d}^{3}x\,\operatorname{Tr}(\mathbb{I} _{2}-\varphi)=2c_{0}m^{2}\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\,(1-\varphi^{0}), \tag{22a}\] \[L_{ij}(\varphi)= \,-\frac{c_{2}}{2}\int_{M}\mathrm{d}^{3}x\,\operatorname{Tr}(L_{i} L_{j})=c_{2}\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\,L_{i}^{a}L_{j}^{a},\] (22b) \[\Omega_{ijkl}(\varphi)= \,-\frac{c_{4}}{4}\int_{M}\mathrm{d}^{3}x\,\operatorname{Tr}([L_ {i},L_{j}][L_{k},L_{l}])=2c_{4}\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\,\Omega_{ij}^{a}\Omega_ {kl}^{a},\] (22c) \[C(\varphi)= \,c_{6}\frac{\epsilon^{ijk}\epsilon^{lmn}}{(24\pi^{2})^{2}}\int_ {M}\mathrm{d}^{3}x\,\operatorname{Tr}(L_{i}L_{j}L_{k})\operatorname{Tr}(L_{l} L_{m}L_{n})=c_{6}\frac{\epsilon^{ijk}\epsilon^{lmn}}{(12\pi^{2})^{2}}\int_{\mathbb{T}^{3}} \mathrm{d}^{3}x\,L_{i}^{a}\Omega_{jk}^{a}L_{l}^{b}\Omega_{mn}^{b}. \tag{22d}\] In terms of these metric independent integrals, the static energy can be compactly written as \[M_{B}(\varphi,g)=\sqrt{-g}W(\varphi)+\sqrt{-g}g^{ij}L_{ij}(\varphi)+\sqrt{-g} g^{ik}g^{jl}\Omega_{ijkl}(\varphi)+\frac{C(\varphi)}{\sqrt{-g}}. \tag{23}\] ### Skyrme crystals Our aim is to study Skyrme fields \(\varphi:\mathbb{R}^{3}\to\operatorname{SU}(2)\) that are periodic with respect to some 3-dimensional period lattice \(\Lambda\), i.e. we impose the condition \(\varphi(x+X)=\varphi(x)\) for all \(x\in\mathbb{R}^{3}\) and \(X\in\Lambda\). We can equivalently interpret the field as a map \(\varphi:\mathbb{R}^{3}/\Lambda\to\operatorname{SU}(2)\), where \((\mathbb{R}^{3}/\Lambda,d)\) is a 3-torus equipped with the standard Euclidean metric \(d\). In particular, we define a Skyrme crystal to be an energy minimizing map \[\varphi:\mathbb{R}^{3}/\Lambda_{\diamond}\to\operatorname{SU}(2),\quad\Lambda _{\diamond}=\left\{n_{1}\mathbf{X}_{1}+n_{2}\mathbf{X}_{2}+n_{3}\mathbf{X}_{3 }:n_{i}\in\mathbb{Z}\right\}, \tag{24}\] where \(\mathbb{R}^{3}/\Lambda_{\diamond}\) is some fixed 3-torus such that the field \(\varphi\) is also critical and stable with respect to variations of the lattice \(\Lambda\) about \(\Lambda_{\diamond}\). The problem of determining Skyrme crystals was addressed by the Leeds group [50]. They prove that, for a fixed field configuration \(\varphi\), there is a unique period lattice \(\Lambda_{\diamond}\) (up to automorphism) that minimizes the static energy \(M_{B}\). Therefore, the problem of determining Skyrme crystals is one of finding critical points of the static energy functional (15) with respect to variations of both the field \(\varphi\) and the period lattice \(\Lambda_{\diamond}\). For massless \(\mathcal{L}_{24}\)-skyrmions, the period lattice can be determined explicitly. However, only a numerical approach seems possible for generalized \(\mathcal{L}_{0246}\)-skyrmions. For some initial period lattice \(\Lambda_{0}\), the static energy can be minimized with respect to variations of the period lattice using the method detailed in §2.5. In tandem, with some appropriate initial field configuration \(\varphi_{0}\), the static energy functional can also be minimized with respect to variations of the field by using arrested Newton flow (ANF), which is detailed in §2.1.
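Purely schematically, and as our sketch rather than the authors' code, the logic of the field relaxation of §2.1 can be organised as below; in the crystal computations `grad` would be the discretized gradient of (15) with respect to the field (with the metric \(g_{ij}\) updated in tandem, driven by the stress tensor of the next subsection), while here a toy quadratic energy stands in so that the sketch runs as-is.

```python
import numpy as np

def arrested_newton_flow(x0, grad, energy, dt=0.1, tol=1e-5, max_steps=200000):
    """Arrested Newton flow: integrate x'' = -grad E(x) with RK4, zero the
    velocity whenever E increases (arresting), stop when the gradient is
    small in every component."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    E_prev = energy(x)

    def f(state):                 # first-order system (x, v)' = (v, -grad E)
        y, w = state
        return np.array([w, -grad(y)])

    for _ in range(max_steps):
        s = np.array([x, v])      # one RK4 step
        k1 = f(s); k2 = f(s + dt * k1 / 2); k3 = f(s + dt * k2 / 2); k4 = f(s + dt * k3)
        x, v = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        E = energy(x)
        if E > E_prev:            # arresting criterion: drop all kinetic energy
            v = np.zeros_like(v)
        E_prev = E
        if np.max(np.abs(grad(x))) < tol:
            break
    return x

# toy stand-in: quadratic bowl E(x) = |x|^2 with gradient 2x
xmin = arrested_newton_flow(np.array([3.0, -2.0]),
                            grad=lambda y: 2 * y,
                            energy=lambda y: float(y @ y))
print(np.round(xmin, 4))          # -> approximately [0, 0]
```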
[35] is the minimum energy Skyrme crystal. However, in the massless \(\mathcal{L}_{24}\)-Skyrme model, this \(1/2\)-crystal is just one point on an SO(4) orbit of solutions, i.e. it is not an isolated critical point and all of these solutions are energy degenerate. When the pion mass is turned on, there is no reason to expect these degenerate \(\mathcal{L}_{24}\) critical points to extend to \(\mathcal{L}_{0246}\) critical points upon perturbation. However, there are four critical points which survive perturbation, as argued in [50]. These are the \(1/2\), \(\alpha\), chain and sheet crystals. Each crystal has baryon number \(B_{\rm cell}=4\) per unit cell, with three of the crystals having lower energy classically than the \(1/2\)-crystal for non-zero pion mass and non-cubic (trigonal) lattice geometry. The \(1/2\)-crystal, which we will label \(\varphi_{1/2}\), can be obtained from the Fourier series-like expansion of the fields as an initial configuration [35], \[\varphi^{0} =\sum_{i,j,k}^{\infty}\beta_{ijk}\cos\left(\frac{2i\pi x^{1}}{L}\right)\cos\left(\frac{2j\pi x^{2}}{L}\right)\cos\left(\frac{2k\pi x^{3}}{L}\right), \tag{25a}\] \[\varphi^{1} =\sum_{i,j,k}^{\infty}\alpha_{ijk}\sin\left(\frac{2i\pi x^{1}}{L}\right)\cos\left(\frac{2j\pi x^{2}}{L}\right)\cos\left(\frac{2k\pi x^{3}}{L}\right),\] (25b) \[\varphi^{2} =\sum_{i,j,k}^{\infty}\alpha_{ijk}\cos\left(\frac{2k\pi x^{1}}{L}\right)\sin\left(\frac{2i\pi x^{2}}{L}\right)\cos\left(\frac{2j\pi x^{3}}{L}\right),\] (25c) \[\varphi^{3} =\sum_{i,j,k}^{\infty}\alpha_{ijk}\cos\left(\frac{2j\pi x^{1}}{L}\right)\cos\left(\frac{2k\pi x^{2}}{L}\right)\sin\left(\frac{2i\pi x^{3}}{L}\right). \tag{25d}\] From the \(1/2\)-crystal, the other three crystals can be constructed by applying a chiral SO(4) transformation \(Q\in\text{SO}(4)\), such that \(\varphi=Q\varphi_{1/2}\), and minimizing the energy with respect to variations of the field and the lattice. These chiral transformations \(Q\in\text{SO}(4)\) can be determined by considering a deformed energy functional on the moduli space of critical points of the Skyrme energy functional, and are found to be [50] \[Q\in\left\{\mathbb{I}_{4},\underbrace{\begin{pmatrix}(0,-1,1,1)/\sqrt{3}\\ \ast\end{pmatrix}}_{Q_{\alpha}},\underbrace{\begin{pmatrix}(0,0,0,1)\\ \ast\end{pmatrix}}_{Q_{\text{sheet}}},\underbrace{\begin{pmatrix}(0,0,1,1)/\sqrt{2}\\ \ast\end{pmatrix}}_{Q_{\text{chain}}}\right\}, \tag{26}\] where only the first row of each matrix is displayed. The other three rows of the chiral transformations \(Q_{\alpha}\), \(Q_{\text{sheet}}\) and \(Q_{\text{chain}}\), labeled by the asterisk, can be obtained by using the Gram-Schmidt process. Out of the four crystal configurations, those of most interest to astrophysicists are the \(\alpha\)-crystal, the chain-crystal and the sheet-crystal; these resemble non-uniform phases of nuclear matter, known as nuclear "pasta". The iron-rich crust of a neutron star could be modeled by \(B=56\) chunks of \(\alpha\)-particle crystals, such as those modeled by Feist _et al_. [54], describing the "gnocchi" phase. As we descend deeper towards the outer core, the pressure due to gravity increases and nuclei are squeezed together into long thin tubes of "spaghetti". This spaghetti phase can be modeled using the chain-crystal. Deeper still, the spaghetti flattens into parallel sheets, resembling "lasagna", of which the sheet-crystal is reminiscent. Of course, for realistic applications the Coulomb interaction must be added. This is because different crust phases arise due to a balance between the strong and electrostatic forces.
Nevertheless, the Skyrme model has a built-in ability to model such phases. The sheet-crystal is the lowest energy solution at all baryon densities and also yields a lower compression modulus than the other three crystals. This makes it an ideal candidate for modeling nuclear matter and extracting an equation of state (EoS) at high and low densities. With \(\varphi_{0}=Q_{\rm sheet}\varphi_{1/2}\) as an initial configuration and by considering fixed baryon density variations, as laid out in §2.6, the energy-volume curve can be computed and an EoS obtained.

### The stress-energy tensor

To determine Skyrme crystal solutions, we identify every 3-torus (\(\mathbb{R}^{3}/\Lambda,d\)), equipped with the standard Euclidean metric \(d\), with the unit 3-torus (\(\mathbb{T}^{3},h\)) where \(h\) is a Riemannian metric and \(\mathbb{T}^{3}=\mathbb{R}^{3}/\mathbb{Z}^{3}\). This metric \(h\) on \(\mathbb{T}^{3}\) is the pullback \(h=F^{*}d\), with \(h_{ij}=\mathbf{X}_{i}\cdot\mathbf{X}_{j}\), via the diffeomorphism \(F:\mathbb{T}^{3}\to\mathbb{R}^{3}/\Lambda\) where \(F(\boldsymbol{x})=\mathcal{A}\boldsymbol{x},\mathcal{A}=[\mathbf{X}_{1}\,\mathbf{X}_{2}\,\mathbf{X}_{3}]\). Let the Skyrme field be the map \(\varphi\circ F:\mathbb{T}^{3}\to\mathrm{SU}(2)\). We vary the metric \(h_{s}\) on \(\mathbb{T}^{3}\) with \(h_{0}=F^{*}d\), which is equivalent to varying the lattice \(\Lambda_{s}\) with \(\Lambda_{0}=\Lambda\). The energy minimized over variations \(h_{s}\) of the domain metric is equivalent to determining the energy minimizing period lattice \(\Lambda_{\diamond}\). Now let the static Skyrme field be the smooth map \(\varphi:\mathbb{T}^{3}\to\mathrm{SU}(2)\). Let \((x^{1},x^{2},x^{3})\) be oriented local coordinates on \(\mathbb{T}^{3}\) and \(\{\partial_{1},\partial_{2},\partial_{3}\}\) be a local frame for the tangent space \(T_{x}\mathbb{T}^{3}\) at \(x\in\mathbb{T}^{3}\). Let \(h_{s}\) be a smooth one-parameter family of metrics on \(\mathbb{T}^{3}\) with \(h_{0}=F^{*}d\). Set \(\delta h=\partial_{s}h_{s}|_{s=0}\in\Gamma(\odot^{2}T^{*}\mathbb{T}^{3})\), a symmetric 2-covariant tensor field on \(\mathbb{T}^{3}\). Denote the inner product on the space of 2-covariant tensor fields of the tangent space \(T_{x}\mathbb{T}^{3}\) to \(\mathbb{T}^{3}\) at \(x\in\mathbb{T}^{3}\) by \(\left\langle\cdot,\cdot\right\rangle\). Then for any pair of symmetric bilinear forms \(A,B\) we have \[\left\langle A,B\right\rangle_{h}=A_{ij}h^{jk}B_{kl}h^{li}. \tag{27}\] In particular, we have the following result: \[\mathrm{Tr}_{h}(A)=\left\langle A,h\right\rangle_{h}=h^{ij}A_{ij}. \tag{28}\] Let us consider the rate of change of the energy of the Skyrme field \(\varphi\) with respect to varying the domain metric \(h\). The first variation of the energy with respect to the variation \(h(s)\) of the metric on \(\mathbb{T}^{3}\) is given by \[\left.\frac{\mathrm{d}M_{B}(\varphi,h_{s})}{\mathrm{d}s}\right|_{s=0}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left\langle S(\varphi,h),\delta h\right\rangle_{h}, \tag{29}\] where \(S(\varphi,h)\in\Gamma(\odot^{2}T^{*}\mathbb{T}^{3})\) is a symmetric 2-covariant tensor field on \(\mathbb{T}^{3}\), known as the _stress-energy tensor_, defined by \[S_{ij}= \,\frac{1}{2}\left[c_{0}m^{2}\,\mathrm{Tr}(\mathbb{I}_{2}-\varphi)-\frac{c_{2}}{2}h^{kl}\,\mathrm{Tr}(L_{k}L_{l})-\frac{c_{4}}{4}h^{km}h^{ln}\,\mathrm{Tr}([L_{k},L_{l}][L_{m},L_{n}])-c_{6}(B_{0})^{2}\right]h_{ij}\] \[+\frac{c_{2}}{2}\,\mathrm{Tr}(L_{i}L_{j})+\frac{c_{4}}{2}h^{kl}\,\mathrm{Tr}([L_{i},L_{k}][L_{j},L_{l}]).
\tag{30}\] This stress-energy tensor is related to the spatial part of the (static) energy-momentum tensor, \[S_{ij}=\frac{1}{\sqrt{h}}\frac{\delta(\sqrt{h}\mathcal{L}_{0246})}{\delta h^{ij}}=-\frac{1}{2}\,T_{ij}. \tag{31}\] The space of allowed variations \(\mathscr{E}\) is a 6-dimensional subspace of the space of sections of the rank 6 vector bundle \(\odot^{2}T^{*}\mathbb{T}^{3}\), \[\mathscr{E}=\left\{\delta h_{ij}\mathrm{d}x^{i}\mathrm{d}x^{j}\in\Gamma(\odot^{2}T^{*}\mathbb{T}^{3}):\delta h_{ij}\,\mathrm{constant},\delta h_{ji}=\delta h_{ij}\right\}. \tag{32}\] By definition, the energy \(M_{B}\) is critical with respect to variations \(h_{s}\) of the metric if and only if \[\left.\frac{\mathrm{d}M_{B}(\varphi,h_{s})}{\mathrm{d}s}\right|_{s=0}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left\langle S(\varphi,h),\delta h\right\rangle_{h}=0, \tag{33}\] that is, if and only if \(S\perp_{L^{2}}\mathscr{E}\). Now let \(\mathscr{E}_{0}\) denote the orthogonal complement of \(h\) in \(\mathscr{E}\), the space of traceless parallel symmetric bilinear forms, given by \[\mathscr{E}_{0}=\left\{\theta\in\Gamma(\odot^{2}T^{*}\mathbb{T}^{3}):\mathrm{Tr}_{h}(\theta)=\left\langle\theta,h\right\rangle_{h}=0\right\}. \tag{34}\] Then the criticality condition \(S\perp_{L^{2}}\mathscr{E}\) can be reformulated as [55] \[\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left\langle S(\varphi,h),h\right\rangle_{h}=0\quad\text{and}\quad S\perp_{L^{2}}\mathscr{E}_{0}. \tag{35}\] The first condition \(S\perp_{L^{2}}h\) is analogous to a virial constraint and the second condition \(S\perp_{L^{2}}\mathscr{E}_{0}\) coincides with the extended virial constraints derived by Manton [56]. We can determine the virial constraint by using the trace (28) and evaluating \[\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left\langle S(\varphi,h),h\right\rangle_{h}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\,\mathrm{Tr}_{h}(S)=\frac{1}{2}\left(E_{2}-E_{4}+3E_{0}-3E_{6}\right). \tag{36}\] Hence, the condition \(S\perp_{L^{2}}h\) establishes the familiar virial constraint \[E_{2}-E_{4}+3(E_{0}-E_{6})=0. \tag{37}\] To determine the extended virial constraint corresponding to the condition \(S\perp_{L^{2}}\mathscr{E}_{0}\), we define a symmetric bilinear form \[\Delta:T_{x}\mathbb{T}^{3}\times T_{x}\mathbb{T}^{3}\to\mathbb{R},\quad\Delta_{ij}=-\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left(\frac{c_{2}}{2}\,\mathrm{Tr}(L_{i}L_{j})+\frac{c_{4}}{2}h^{kl}\,\mathrm{Tr}([L_{i},L_{k}][L_{j},L_{l}])\right). \tag{38}\] In the metric independent integral formulation, this symmetric bilinear form \(\Delta\) reads \[\Delta_{ij}=\sqrt{h}\,L_{ij}(\varphi)+2\sqrt{h}\,h^{kl}\Omega_{ikjl}(\varphi). \tag{39}\] Then \(S\perp_{L^{2}}\mathscr{E}_{0}\) if and only if \(\Delta\) is orthogonal to \(\mathscr{E}_{0}\) with respect to the inner product \(\left\langle\cdot,\cdot\right\rangle_{\mathscr{E}}\). Therefore, for some \(\lambda\in\mathbb{R}\) we must have \[\Delta=\lambda h. \tag{40}\] Taking the trace of both sides yields \[3\lambda=\sqrt{h}\,h^{ij}L_{ij}(\varphi)+2\sqrt{h}\,h^{ij}h^{kl}\Omega_{ikjl}(\varphi)=E_{2}+2E_{4}. \tag{41}\] Thus, the condition \(S\perp_{L^{2}}\mathscr{E}_{0}\) produces the extended virial constraint \[\Delta=\frac{1}{3}\left(E_{2}+2E_{4}\right)h. \tag{42}\] So we see that \(\varphi:\mathbb{T}^{3}\to\mathrm{SU}(2)\) is a skyrmion crystal if and only if it satisfies the extended virial constraints: \[E_{2}-E_{4}=3(E_{6}-E_{0}), \tag{43a}\] \[\Delta=\frac{1}{3}\left(E_{2}+2E_{4}\right)h.
\tag{43b}\] We will verify numerically that the extended virial constraints are being satisfied within some tolerance, e.g. \(\mathrm{tol}=10^{-5}\). This is done by checking that \[\left|\frac{E_{4}}{E_{2}+3(E_{0}-E_{6})}-1\right|<\mathrm{tol}\quad\text{and}\quad\left|\frac{\Delta_{ij}}{(E_{4}+E_{6}-E_{0})h_{ij}}-1\right|<\mathrm{tol}. \tag{44}\]

### Numerical optimization of the lattice geometry

Let us fix the field \(\varphi:\mathbb{T}^{3}\to\mathrm{SU}(2)\) and think of the energy \(M_{B}\) as a function of the metric \(h\) on \(\mathbb{T}^{3}\). That is, we define a map \(E_{\varphi}:\mathrm{SPD}_{3}\to\mathbb{R}\) such that \(E_{\varphi}:=M_{B}(\left.\varphi\right|_{\mathrm{fixed}},h)\), where \(\mathrm{SPD}_{3}\) is the space of symmetric positive-definite \(3\times 3\)-matrices. To minimize the energy functional \(E_{\varphi}\) with respect to variations of the metric \(h_{s}\), we use arrested Newton flow on \(\mathrm{SPD}_{3}\). The essence of the algorithm is as follows: we solve Newton's equations of motion for a particle on \(\mathrm{SPD}_{3}\) with potential energy \(E_{\varphi}\). Now let \(h_{s}\) be a smooth one-parameter curve in \(\mathrm{SPD}_{3}\) with \(h_{0}=F^{*}d\). Explicitly, we are solving the system of 2nd order ODEs \[\left.\frac{\mathrm{d}^{2}}{\mathrm{d}s^{2}}\right|_{s=0}(h_{ij})_{s}=-\frac{\partial E_{\varphi}}{\partial h_{ij}}=-\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\,S_{\varphi}^{ij},\quad(h_{ij})_{0}=\mathbf{X}_{i}\cdot\mathbf{X}_{j}, \tag{45}\] where \(S_{\varphi}=S(h)\) is the stress-energy tensor for fixed field configuration \(\varphi\). Setting \(\delta h_{s}=\partial_{s}h_{s}\) as the velocity with initial velocity \(\delta h_{0}=\left.\partial_{s}h_{s}\right|_{s=0}=0\) reduces the problem to a coupled system of 1st order ODEs. We implement a 4th order Runge-Kutta method to solve this coupled system. The components of the stress-energy tensor for fixed field \(\varphi\), given in the metric independent integral formulation, read \[\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\,S_{\varphi}^{ij}= \frac{1}{2}h^{ij}\left(\sqrt{h}\,W-\frac{C}{\sqrt{h}}\right)+\sqrt{h}\left(\frac{1}{2}h^{mn}h^{ij}-h^{im}h^{jn}\right)L_{mn}\] \[+\sqrt{h}\left(\frac{1}{2}h^{ij}h^{ln}-2h^{il}h^{jn}\right)h^{km}\Omega_{klmn}. \tag{46}\] In general, the dimension of \(\mathrm{SPD}_{n}\) is \(\mathrm{dim}(\mathrm{SPD}_{n})=n(n+1)/2\). In our case, we are working with \(\mathrm{SPD}_{3}\) and consider the energy as a function \(E_{\varphi}:\mathrm{SPD}_{3}\to\mathbb{R}\). So we are implementing arrested Newton flow on a 6-dimensional manifold. After each flow step \(s\mapsto s+\delta s\), we check to see if the energy is increasing. If \(E_{\varphi}(h_{s+\delta s})>E_{\varphi}(h_{s})\), we take out all the kinetic energy in the system by setting \(\delta h_{s+\delta s}=0\) and restart the flow. The flow then terminates when every component of the stress-energy tensor \(S_{\varphi}\) is zero to within a given tolerance (we have used \(10^{-6}\)). As the metric \(h_{s}\) on \(\mathbb{T}^{3}\) varies so too does the lattice \(\Lambda_{s}\), which we have labeled \(\Lambda_{s}=\Lambda(h_{s})\) where \(\Lambda_{0}=\Lambda\). As before, let \(\Lambda_{\diamond}\) be the energy minimizing lattice and denote the corresponding energy minimizing metric on \(\mathbb{T}^{3}\) by \(h_{\diamond}\). Let \(\mathbf{X}_{1}=(x_{1},y_{1},z_{1}),\mathbf{X}_{2}=(x_{2},y_{2},z_{2})\) and \(\mathbf{X}_{3}=(x_{3},y_{3},z_{3})\) be the period lattice vectors for \(\Lambda_{\diamond}\).
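As an illustration of the flow just described, a minimal sketch of the arrested Newton flow on \(\mathrm{SPD}_{3}\) is given below. The callbacks `energy` and `stress` are hypothetical stand-ins for the field-dependent integrals in (23) and (46), and a simple explicit step is shown in place of the 4th order Runge-Kutta step used in practice.

```python
import numpy as np

def arrested_newton_flow(h0, energy, stress, ds=1e-2, tol=1e-6, max_steps=100_000):
    """Minimize energy(h) over 3x3 metrics h in SPD_3 by arrested Newton flow.

    `energy(h)` evaluates E_phi(h) and `stress(h)` returns the integrated
    stress tensor of eq. (45), i.e. dE_phi/dh_ij, as a symmetric 3x3 array.
    Both callbacks are hypothetical stand-ins for the field integrals.
    """
    h = np.array(h0, dtype=float)
    v = np.zeros_like(h)                # flow velocity dh/ds
    E = energy(h)
    for _ in range(max_steps):
        a = -stress(h)                  # Newtonian acceleration, eq. (45)
        if np.max(np.abs(a)) < tol:     # stress vanishes: critical metric found
            break
        v += ds * a                     # simple explicit step; a 4th order
        h += ds * v                     # Runge-Kutta step is used in practice
        h = 0.5 * (h + h.T)             # keep h exactly symmetric
        E_new = energy(h)
        if E_new > E:                   # energy increased: arrest the flow
            v[:] = 0.0
        E = E_new
    return h
```

In practice one should also check that \(h\) remains positive-definite along the flow.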
In order to plot isosurfaces of the baryon density of the resulting skyrmion on \((\mathbb{R}^{3}/\Lambda_{\diamond},d)\), we need to reconstruct the lattice \(\Lambda_{\diamond}\) from the metric \(h_{\diamond}\). To do this we need to solve the following under-determined system of equations \[\begin{array}{c}\mathbf{X}_{1}\cdot\mathbf{X}_{1}=x_{1}^{2}+y_{1}^{2}+z_{1}^ {2}=h_{11}\\ \mathbf{X}_{1}\cdot\mathbf{X}_{2}=x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}=h_{12}\\ \mathbf{X}_{1}\cdot\mathbf{X}_{3}=x_{1}x_{3}+y_{1}y_{3}+z_{1}z_{3}=h_{13}\\ \mathbf{X}_{2}\cdot\mathbf{X}_{2}=x_{2}^{2}+y_{2}^{2}+z_{2}^{2}=h_{22}\\ \mathbf{X}_{2}\cdot\mathbf{X}_{3}=x_{2}x_{3}+y_{2}y_{3}+z_{2}z_{3}=h_{23}\\ \mathbf{X}_{3}\cdot\mathbf{X}_{3}=x_{3}^{2}+y_{3}^{2}+z_{3}^{2}=h_{33}\end{array} \tag{47}\] where we have written \(h_{ij}=(h_{\diamond})_{ij}\) for notational convenience. This has infinitely many solutions which we can solve for by fixing a particular lattice vector, or by setting \(y_{1}=z_{1}=z_{2}=0\), i.e. \(\mathbf{X}_{1}=(x_{1},0,0),\mathbf{X}_{2}=(x_{2},y_{2},0)\) and \(\mathbf{X}_{3}=(x_{3},y_{3},z_{3})\). Then, for the latter choice of period lattice vectors, the system of equations (47) has a unique solution given by \[\begin{array}{c}\mathbf{X}_{1}=\left(\sqrt{h_{11}},0,0\right)\\ \mathbf{X}_{2}=\left(\frac{h_{12}}{\sqrt{h_{11}}},\sqrt{h_{22}-\frac{h_{12}^ {2}}{h_{11}}},0\right)\\ \mathbf{X}_{3}=\left(\frac{h_{13}}{\sqrt{h_{11}}},\frac{1}{\sqrt{h_{22}-\frac{ h_{12}^{2}}{h_{11}}}}\left(h_{23}-\frac{h_{12}h_{13}}{h_{11}}\right),\sqrt{h_{33}- \frac{h_{13}^{2}}{h_{11}}-\frac{1}{\left(h_{22}-\frac{h_{12}^{2}}{h_{11}} \right)}\left(h_{23}-\frac{h_{12}h_{13}}{h_{11}}\right)^{2}}\right).\end{array}\] ### Phases of skyrmion matter Determining phases of nuclear matter and phase transitions in the Skyrme model is a difficult task, and is important if one wants to understand symmetric and asymmetric nuclear matter in high/low density regimes. To study phases of matter at various densities, we consider fixed density variations of the energy functional, i.e. we allow the lattice to vary but keep its volume fixed. Then the volume form \(\mathrm{vol}_{h}\) is required to be invariant under variations \(h_{s}\) of the metric, viz. \[\left.\frac{\mathrm{d}}{\mathrm{d}s}\right|_{s=0}\int_{\mathbb{T}^{3}}\mathrm{ d}^{3}x\sqrt{h_{s}}=\frac{1}{2}\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}h^{ij} \delta h_{ij}=0. \tag{48}\] That is, \(\delta h\) has to be an element of the space of traceless parallel symmetric bilinear forms \(\mathscr{E}_{0}\). In terms of the energy, we are dealing with a constrained minimization problem: minimize the energy functional for fixed field configuration \(\varphi=\varphi|_{\text{fixed}}\) subject to the constraint that \(\det(h)=\text{constant}\). We can approach this using the method of Lagrange multipliers. Let us define the Lagrangian \[L(h)=E_{\varphi}(h)+\lambda\left(\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}-c \right), \tag{49}\] where \(\lambda\in\mathbb{R}\) is a Lagrange multiplier and \(c\in\mathbb{R}_{>0}\) is a constant. Consider a one-parameter curve \(h_{s}\) in the space of metrics on \(\mathbb{T}^{3}\). Then, for any variation \(h_{s}\) of the metric, \[\left.\frac{\mathrm{d}L}{\mathrm{d}s}\right|_{s=0}=\left.\frac{\mathrm{d}E_{ \varphi}}{\mathrm{d}s}\right|_{s=0}+\lambda\left.\frac{\mathrm{d}}{\mathrm{d} s}\right|_{s=0}\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h_{s}}=0. 
\tag{50}\] We note that \[\left.\frac{\mathrm{d}E_{\varphi}(h_{s})}{\mathrm{d}s}\right|_{s=0}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left\langle S(h),\delta h\right\rangle_{h}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\,S_{\varphi}^{ij}\delta h_{ij}, \tag{51}\] where \(S_{\varphi}\) is the (fixed field) stress tensor appearing in (46). Thus, we can determine the Lagrange multiplier by considering \[\left.\frac{\mathrm{d}L}{\mathrm{d}s}\right|_{s=0}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left(S_{\varphi}^{ij}+\frac{\lambda}{2}h^{ij}\right)\delta h_{ij}=0\quad\Rightarrow\quad\lambda=-\frac{2}{3}h_{ij}S_{\varphi}^{ij}. \tag{52}\]

Figure 1: \(\mathcal{L}_{0246}\)-Skyrme sheet-crystal at a fixed baryon density. The isobaryon density is depicted in (a) and isosurface plots of the \(\sigma\) field, where the vacuum (\(\sigma=+0.9\)) is colored red and the anti-vacuum (\(\sigma=-0.9\)) blue, are shown in (b).

Hence, we find that \[\left.\frac{\mathrm{d}L}{\mathrm{d}s}\right|_{s=0}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{h}\left\langle S(h)-\frac{1}{3}\operatorname{Tr}_{h}(S_{\varphi})\,h,\delta h\right\rangle_{h}=0. \tag{53}\] Therefore, we modify the stress-energy tensor in (45) via the mapping \[S_{\varphi}\mapsto\tilde{S}_{\varphi}=S_{\varphi}-\frac{1}{3}\operatorname{Tr}_{h}(S_{\varphi})\,h \tag{54}\] and our convergence criterion becomes \(\max(\tilde{S}_{\varphi})<\mathrm{tol}\). Likewise, to ensure numerically that \(\delta h\) is traceless, we need to project out the component of the variation vector \(\delta h\) parallel to the metric \(h\) via the mapping \[\delta h\mapsto\delta h-\frac{1}{3}(h^{ij}\delta h_{ij})\,h. \tag{55}\] Employing this process at various volumes enables us to determine an energy-volume curve or, equivalently, an energy-density curve. This is key to obtaining an EoS within our framework, as the EoS is directly related to the \(E-V\) curve. The first main result of this section is the observation that, as it is for the massive \(\mathcal{L}_{024}\)-Skyrme model, the multi-wall crystal is also the ground state crystalline solution for the generalized \(\mathcal{L}_{0246}\)-Skyrme model at all densities. In the low density regime the solution clearly exhibits a two-layer structure, extending parallel to the \(xy\)-plane with the vacuum occupying the regions above and below the sheet. As the density increases, the regions occupied by the vacuum shrink and the non-cubic period lattice becomes more cubic, tending asymptotically to the \(1/2\)-crystal in the zero volume limit. In Fig. 2 we plot the classical static energy per baryon \(M_{B}/B\) of the multi-wall crystal as a function of the unit cell volume \(V_{\mathrm{cell}}\). As always we find a local minimum which is identified with the nuclear saturation point. For our choice of the coupling constants (10) the saturation energy per baryon and saturation density are respectively \(M_{B}/B=912\) MeV and \(n_{0}=0.160\) fm\({}^{-3}\), which almost perfectly corresponds to the physical values of the saturation energy and density. An important observation is that the difference between the energy at nuclear saturation and the classical energy at zero density is much smaller than in previous works. Indeed, the difference is now roughly \(\Delta E\approx 7\) MeV, which is a \(0.8\%\) difference with respect to the total energy. By contrast, for a \(B=32\) or \(B=108\) \(\alpha\)-crystal the difference is found to be approximately \(3\%\) and \(1.7\%\), respectively.
This small difference in energy between the nuclear saturation and low-density asymptotic solutions is crucial for the existence of a purely skyrmion-generated EoS at all densities.

## 3 Quantum skyrmion crystals and the symmetry energy

In general, the full symmetry group of the generalized \(\mathcal{L}_{0246}\)-Lagrangian (5) is the direct product of the Poincaré group and the chiral group: \(\tilde{G}=\mathrm{O}(3)\ltimes\mathbb{R}^{3}\times\mathrm{SO}(4)_{\mathrm{chiral}}\). However, static energy minimizers break the Poincaré symmetry group \(\mathrm{O}(3)\ltimes\mathbb{R}^{3}\) to the Euclidean subgroup \(E_{3}=\mathrm{SO}(3)\ltimes\mathbb{R}^{3}\), corresponding to spatial translations and rotations. The resulting symmetry group of the static energy functional (15) is thus \(G=E_{3}\times\mathrm{SO}(4)_{\mathrm{chiral}}\cong E_{3}\times\mathrm{SU}(2)_{L}\times\mathrm{SU}(2)_{R}\). The action of this group on the Skyrme field is given by \[\varphi(x)\mapsto A_{L}\varphi(Rx+X)A_{R}^{\dagger},\quad A_{L/R}\in\mathrm{SU}(2)_{L/R},\quad R\in\mathrm{SO}(3),\quad X\in\mathbb{R}^{3}. \tag{56}\] For skyrmions on \(M=\mathbb{R}^{3}\), one must impose finite boundary conditions \(\varphi(x\to\infty)=\mathbb{I}_{2}\). This allows for the compactification of the domain \(\mathbb{R}^{3}\cup\{\infty\}\cong S^{3}\) and further reduces the symmetry group \(G\) to the subgroup \(H=E_{3}\times\mathrm{diag}[\mathrm{SU}(2)_{L}\times\mathrm{SU}(2)_{R}]\cong E_{3}\times\mathrm{SU}(2)_{I}\), where \(\mathrm{SU}(2)_{I}\) is the isospin internal symmetry group. The corresponding action of the subgroup \(H\) on the Skyrme field is given by the transformation (56) with \(A_{L}=A_{R}=A\in\mathrm{SU}(2)_{I}\). When considering crystals on \(M=\mathbb{R}^{3}/\Lambda\), one must be careful when defining the isospin subgroup \(\mathrm{SU}(2)_{I}\); the vacuum boundary condition is no longer imposed and there is no natural way to select the diagonal isospin subgroup \(\mathrm{SU}(2)_{I}\). This problem was addressed by Baskerville [48] in the context of the \(1/2\)-crystal in the \(\mathcal{L}_{24}\)-model, wherein she considered full \(\mathrm{SO}(4)_{\mathrm{chiral}}\) rotations. She deduced that there are two cubic point groups that can define the \(1/2\)-crystal, one of which is related to the centres of the half-skyrmions. The cubic point group symmetry corresponding to the half-skyrmion centres is reducible into the trivial \(1\)-dimensional irreducible representation and a \(3\)-dimensional irrep. We choose the \(\sigma=\varphi^{0}\) field to transform in the \(1\)-dimensional irrep. Then the isospin group \(\mathrm{SU}(2)_{I}\) can be defined as the group of isorotations of the pion fields \(\boldsymbol{\pi}=(\varphi^{1},\varphi^{2},\varphi^{3})\), corresponding to transformations in the \(3\)-dimensional irrep. If the pion mass potential term \(\mathcal{L}_{0}\) is included then this is a natural choice of isospin group \(\mathrm{SU}(2)_{I}\).

Figure 2: The classical static energy per baryon \(M_{B}/B\) as a function of the nuclear density \(n_{B}\). The nuclear density at which the cusp in the symmetry energy appears is labeled by \(n_{*}\). This corresponds to the density at which the infinite crystalline multi-wall solution begins transitioning to an isolated multi-wall configuration.

### Isospin quantization

As a field theory, the Skyrme model is non-renormalizable. One must quantize it semi-classically.
It is well-known that the classical dynamics of slowly moving solitons corresponds to geodesic motion on the moduli space of static configurations [57]. Minimal energy configurations in the Skyrme model are unique, for a given baryon number \(B\), up to actions of the symmetry group \(H=E_{3}\times\mathrm{SU}(2)_{I}\). The classical configuration space \(Q\) of skyrmions is split into connected components, labeled by the baryon number \(B\), \(Q=\bigcup_{B\in\mathbb{Z}}Q_{B}\). The covering space \(\tilde{Q}_{B}\) of each component is a double-cover with a two-to-one map \(\pi_{Q}:\tilde{Q}_{B}\to Q_{B}\) [58]. It was argued by Finkelstein and Rubinstein [59] that the wave functions \(\Psi\in\mathcal{H}\) must be defined on the covering space of the configuration space \(\tilde{Q}\), where \(\mathcal{H}\) is a formal Hilbert space such that \(\Psi\) is normalizable and square integrable. That is, the wave functions are defined by the map \(\Psi:\tilde{Q}\to\mathbb{C}\). We make a simple approximation and require the wave function \(\Psi\) to be non-vanishing only on minimal energy configurations and their symmetry orbits. This quantization procedure is known as rigid-body, or zero mode, quantization. In the zero mode quantization method, a skyrmion is treated as a rigid body that is free to translate and rotate in physical space and also rotate in isospace, with the action defined by (56). These solutions are all degenerate in energy and this classical degeneracy is removed when one quantizes the theory. We wish to quantize the isorotational degrees of freedom and work in the zero-momentum frame, ignoring the translational and spin degrees of freedom. The action of the group of isorotations \(\mathrm{SU}(2)_{I}\) on the Skyrme field \(\varphi\) is defined by the mapping \(\varphi(x)\mapsto A\varphi(x)A^{\dagger}\). Semi-classical quantization is performed by promoting the collective coordinate \(A\in\mathrm{SU}(2)\) to a dynamical degree of freedom \(A(t)\) [6]. The dynamical ansatz for the Skyrme field is then given by the transformation \[\varphi(x)\mapsto\hat{\varphi}(x,t)=A(t)\varphi(x)A^{\dagger}(t). \tag{57}\] Define the isorotational angular velocity \(\boldsymbol{\omega}\) to be \(A^{\dagger}\dot{A}=\frac{i}{2}\omega_{j}\tau^{j}\) such that \(\omega_{j}=-i\operatorname{Tr}(\tau^{j}A^{\dagger}\dot{A})\). Then, under the dynamical ansatz (57), the Maurer-Cartan left current transforms as \[\hat{L}_{\mu}=\hat{\varphi}^{\dagger}\partial_{\mu}\hat{\varphi}=\begin{cases}A\omega_{i}T_{i}A^{\dagger},&\mu=0\\ AL_{i}A^{\dagger},&\mu=i=1,2,3,\end{cases} \tag{58}\] where \(T_{i}=\frac{i}{2}\varphi^{\dagger}[\tau^{i},\varphi]\) is an \(\mathfrak{su}(2)\)-valued current. In the quaternionic formulation, the \(\mathfrak{su}(2)\) current \(T_{i}\) is expressed by the vector quaternion \[T_{i}=-iT_{i}^{a}\tau^{a},\quad T_{i}^{j}=\delta^{ij}\varphi^{k}\varphi^{k}-\varphi^{i}\varphi^{j}-\epsilon^{ijk}\varphi^{0}\varphi^{k}. \tag{59}\] The corresponding contractions are given by \[T_{i}^{k}T_{j}^{k}= \,\delta^{ij}\varphi^{k}\varphi^{k}-\varphi^{i}\varphi^{j}, \tag{60a}\] \[T_{i}^{k}L_{j}^{k}= \,-\epsilon^{ikl}\varphi^{k}\partial_{j}\varphi^{l}.
\tag{60b}\] The dynamical ansatz (57) induces a rotational kinetic term in the energy functional, which is given by \[E_{\mathrm{rot}}=\int_{\mathbb{T}^{3}}\mathrm{d}^{3}x\sqrt{-g}\left\{-\frac{c_{2}}{2}\operatorname{Tr}\left(\hat{L}_{0}\hat{L}_{0}\right)-\frac{c_{4}}{2}g^{ij}\operatorname{Tr}\left([\hat{L}_{0},\hat{L}_{i}][\hat{L}_{0},\hat{L}_{j}]\right)+\frac{c_{6}}{g}g_{ij}\hat{\mathcal{B}}^{i}\hat{\mathcal{B}}^{j}\right\}, \tag{61}\] where the Chern-Simons current transforms as \[\hat{\cal B}^{i}=\frac{3}{24\pi^{2}}\epsilon^{ijk}\,{\rm Tr}(\hat{L}_{0}\hat{L}_{j}\hat{L}_{k})=\frac{1}{8\pi^{2}}\epsilon^{ijk}\,{\rm Tr}(T_{l}L_{j}L_{k})\omega_{l}. \tag{62}\] The restriction of the kinetic energy functional of the model to the isospin orbit of a given static solution defines a left invariant metric on SO(3) called the isospin inertia tensor, which is the symmetric \(3\times 3\)-matrix given by \[U_{ij}= -\int_{\mathbb{T}^{3}}{\rm d}^{3}x\sqrt{-g}\left\{c_{2}\,{\rm Tr}(T_{i}T_{j})+c_{4}g^{kl}\,{\rm Tr}([L_{k},T_{i}][L_{l},T_{j}])\right.\] \[\left.-\frac{c_{6}}{2(4\pi^{2}\sqrt{-g})^{2}}g_{kl}\epsilon^{kmn}\epsilon^{lab}\,{\rm Tr}(T_{i}L_{m}L_{n})\,{\rm Tr}(T_{j}L_{a}L_{b})\right\}. \tag{63}\] Using the quaternion representation (17), this isospin inertia tensor takes the form \[U_{ij}= \,2\int_{\mathbb{T}^{3}}{\rm d}^{3}x\sqrt{-g}\left\{c_{2}\left(\delta^{ij}\pi^{k}\pi^{k}-\pi^{i}\pi^{j}\right)+4c_{4}g^{kl}\left((\delta^{ij}-\pi^{i}\pi^{j})\partial_{k}\pi^{0}\partial_{l}\pi^{0}+(\pi^{m}\pi^{m})\partial_{k}\pi^{i}\partial_{l}\pi^{j}\right.\right.\] \[\left.\left.+\pi^{0}\pi^{i}\partial_{k}\pi^{0}\partial_{l}\pi^{j}+\pi^{0}\pi^{j}\partial_{l}\pi^{0}\partial_{k}\pi^{i}\right)\right.\] \[\left.+\frac{c_{6}}{(4\pi^{2}\sqrt{-g})^{2}}g_{pq}\epsilon^{pmn}\left[(\delta^{ij}\pi^{a}\pi^{a}-\pi^{i}\pi^{j})\left(\partial_{m}\pi^{\mu}\partial_{k}\pi^{\mu}\partial_{n}\pi^{\nu}\partial_{l}\pi^{\nu}-\partial_{n}\pi^{\mu}\partial_{k}\pi^{\mu}\partial_{m}\pi^{\nu}\partial_{l}\pi^{\nu}\right)\right.\right.\] \[\left.+\epsilon^{jac}\pi^{a}\partial_{m}\pi^{c}\left(\epsilon^{ibd}\pi^{b}\partial_{l}\pi^{d}\partial_{n}\pi^{\mu}\partial_{k}\pi^{\mu}-\epsilon^{ibd}\pi^{b}\partial_{k}\pi^{d}\partial_{n}\pi^{\mu}\partial_{l}\pi^{\mu}\right)\right.\] \[\left.\left.+\epsilon^{jac}\pi^{a}\partial_{n}\pi^{c}\left(\epsilon^{ibd}\pi^{b}\partial_{k}\pi^{d}\partial_{m}\pi^{\mu}\partial_{l}\pi^{\mu}-\epsilon^{ibd}\pi^{b}\partial_{l}\pi^{d}\partial_{m}\pi^{\mu}\partial_{k}\pi^{\mu}\right)\right]\right\}. \tag{64}\] Therefore, the effective Lagrangian on this restricted space of configurations is \(L_{\rm eff}=L_{\rm rot}-M_{B}\), where \(M_{B}\) is the static mass of the skyrmion defined by (15) and \(L_{\rm rot}\) is the induced isorotational part of the Lagrangian given by \[L_{\rm rot}=\frac{1}{2}\omega_{i}U_{ij}\omega_{j}. \tag{65}\] The rigid-body wavefunctions will be on SU(2) with isospin half-integer if \(B\) is odd and integer if \(B\) is even. The isorotational angular momentum operator canonically conjugate to \(\omega\) is the body-fixed isospin angular momentum operator \({\bf K}\), defined by \[K_{i}=\partial L_{\rm rot}/\partial\omega_{i}=U_{ij}\omega_{j}. \tag{66}\] This is related to the usual space-fixed isospin angular momentum \({\bf I}\) by the relation \[I_{i}=-D(A)_{ij}K_{j}, \tag{67}\] where \(A\in{\rm SU(2)}\) has been recast in the SO(3) form via the map \[D:{\rm SU(2)}\rightarrow{\rm SO(3)},\quad D(A)_{ij}=\frac{1}{2}\,{\rm Tr}\left(\tau^{i}A\tau^{j}A^{\dagger}\right).
\tag{68}\] These two classical momenta are promoted to quantum operators \({\bf\hat{K}}\) and \({\bf\hat{I}}\), both satisfying the \(\mathfrak{su}(2)\) commutation relations, and the Casimir invariants satisfy \({\bf\hat{I}}^{2}={\bf\hat{K}}^{2}\). On the double cover of the group of isorotations \(\mathrm{SU}(2)_{I}\), there is a basis of rigid-body wavefunctions \(\left|I,I_{3},K_{3}\right\rangle\) with \(-I\leq K_{3}\leq I\), where \(I\) is the total isospin quantum number, \(K_{3}\) is the third component of \(\mathbf{\hat{K}}\) and \(I_{3}\) is the third component of isospin relative to the space-fixed axes (in units of \(\hbar\)) as defined in nuclear physics. The operator \(\mathbf{\hat{I}}^{2}\) has eigenvalue \(I(I+1)\) and \(I_{3}\) is the eigenvalue of the operator \(\hat{I}_{3}\). The rigid-body Hamiltonian takes the general form \[\mathscr{H}=\frac{\hbar^{2}}{2}\mathbf{\hat{K}}U^{-1}\mathbf{\hat{K}}^{T}+M_{B}. \tag{69}\] For Skyrme crystals, we can set the principal axes of inertia to be the usual orthogonal axes such that \(U_{ij}=0\) for \(i\neq j\). Let us label the eigenvalues \(U_{i}=U_{ii}\); then the quantum Hamiltonian takes the form \[\mathscr{H}=\frac{\hbar^{2}}{2}\left(\frac{1}{U_{1}}+\frac{1}{U_{2}}\right)\mathbf{\hat{K}}^{2}+\frac{\hbar^{2}}{2}\left(\frac{1}{U_{3}}-\frac{1}{U_{1}}-\frac{1}{U_{2}}\right)\hat{K}_{3}^{2}-\frac{\hbar^{2}}{2U_{2}}\hat{K}_{1}^{2}-\frac{\hbar^{2}}{2U_{1}}\hat{K}_{2}^{2}+M_{B}. \tag{70}\] The energy eigenstates of the Hamiltonian (70) can be classified by \(I\) and \(I_{3}\). To determine bound states with definite energy one must solve the static Schrödinger equation corresponding to this Hamiltonian, \(\mathscr{H}\left|\Psi\right\rangle=E\left|\Psi\right\rangle\). The Schrödinger equation can be expressed more explicitly within a particular \((I,I_{3})\) sector by expanding the quantum state \(\left|\Psi\right\rangle\) in terms of the total wavefunctions \(\Psi\) as \[\left|\Psi\right\rangle=\sum_{K_{3}=-I}^{+I}\Psi_{K_{3}}(q)\left|I,I_{3},K_{3}\right\rangle,\quad\mathbf{\Psi}(q)=\begin{pmatrix}\Psi_{-I}(q)\\ \vdots\\ \Psi_{+I}(q)\end{pmatrix},\quad q\in\tilde{Q}, \tag{71}\] and substituting this into the Hamiltonian (70).

### Symmetry energy and the cusp structure

So far we have only considered symmetric nuclear matter, which we have described by using the classical multi-wall skyrmion crystal. In order to study nuclear matter in neutron stars we must consider isospin asymmetric nuclear matter, whereby a small fraction of protons is permitted. Now let us consider asymmetric nuclear matter with baryon number \(B=N+Z\), where \(N\) is the number of neutrons and \(Z\) the number of protons. The asymmetry of such matter is determined by the isospin asymmetry parameter \(\delta=(N-Z)/(N+Z)=1-2\gamma\), where \(\gamma\) is the proton fraction. We define the nuclear density to be \(n_{B}=B/V\), with the nuclear saturation density \(n_{0}\) defined to be the nuclear density such that \((\partial M_{B})/(\partial n_{B})|_{n_{B}=n_{0}}=0\). Then the binding energy per baryon number of asymmetric nuclear matter is given by \[\frac{E}{B}(n_{B},\delta)=E_{N}(n_{B})+S_{N}(n_{B})\delta^{2}+\mathrm{O}(\delta^{3}). \tag{72}\] The two terms appearing in the asymmetric binding energy (72) are the binding energy of isospin-symmetric matter \(E_{N}\) and the symmetry energy \(S_{N}\). In terms of our model, the symmetric binding energy is defined by \(E_{N}=(M_{B}-BM_{1})/B\).
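For orientation, the expansion (72) also provides a simple numerical cross-check of the symmetry energy: compute the binding energy per baryon at zero and at small non-zero asymmetry and extract the quadratic coefficient. The sketch below is purely illustrative; the callback `binding_energy_per_baryon` is a hypothetical stand-in for the full quantized crystal computation.

```python
def symmetry_energy_estimate(binding_energy_per_baryon, n_B, delta=0.05):
    """Estimate S_N(n_B) from E/B = E_N + S_N * delta^2 + O(delta^3), eq. (72).

    `binding_energy_per_baryon(n_B, delta)` is a placeholder for the crystal
    energy per baryon at density n_B and isospin asymmetry delta.
    """
    E_sym = binding_energy_per_baryon(n_B, 0.0)     # E_N(n_B), i.e. delta = 0
    E_asym = binding_energy_per_baryon(n_B, delta)  # small asymmetry
    return (E_asym - E_sym) / delta**2              # S_N(n_B), up to O(delta)
```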
The symmetry energy \(S_{N}\) dictates how the binding energy changes when going from symmetric (\(\delta=0\)) to asymmetric (\(\delta\neq 0\)) nuclear matter. We can expand the isospin symmetric binding energy \(E_{N}\) and the symmetry energy \(S_{N}\) around the saturation density \(n_{0}\) for symmetric matter [60], \[E_{N}(n_{B})= \,E(n_{0})+\frac{1}{18}K_{0}\epsilon^{2}, \tag{73}\] \[S_{N}(n_{B})= \,S_{0}+\frac{1}{3}L_{\rm sym}\epsilon+\frac{1}{18}K_{\rm sym}\epsilon^{2}+{\rm O}(\epsilon^{3}), \tag{74}\] where \(\epsilon=(n_{B}-n_{0})/n_{0}\), \(K_{0}\) is the incompressibility at the saturation point and \(S_{0}=S_{N}(n_{0})\) is the symmetry energy coefficient at saturation. We remind ourselves that, for our choice of coupling constants (10), the nuclear saturation point is characterized by the density \(n_{0}=0.160\,{\rm fm}^{-3}\) and energy (per baryon) \(M_{B}/B=912\) MeV. The compression modulus \(K_{0}\) gives the curvature of the symmetric binding energy \(E_{N}\) at the saturation density \(n_{0}\) and determines the increase in symmetric binding energy due to compression. In comparison with studies involving the 1/2-crystal, we do observe an improvement in the (in)compressibility by approximately 200 MeV. However, as in all the previous Skyrme model studies, its value is still a few times larger than expected. The higher-order coefficients, \(L_{\rm sym}\) and \(K_{\rm sym}\), appearing in the symmetry energy \(S_{N}\) are defined as \[L_{\rm sym}=3n_{0}\left.\frac{\partial S_{N}}{\partial n_{B}}\right|_{n_{B}=n_{0}},\quad K_{\rm sym}=9n_{0}^{2}\left.\frac{\partial^{2}S_{N}}{\partial n_{B}^{2}}\right|_{n_{B}=n_{0}}. \tag{75}\] The precise values of these coefficients are not known, but are predicted to be \(L_{\rm sym}=57.7\pm 19\) MeV and \(K_{\rm sym}=-107\pm 88\) MeV [61]. Consider an infinitely extended and rigidly iso-spinning Skyrme crystal with each unit cell containing baryon number \(B_{\rm cell}\). In order to calculate the isospin correction to the energy of the crystal we would need to know the quantum state of the whole crystal. This is obviously a very difficult computation since the crystal is infinitely extended and is therefore composed of an infinite number of baryons. However, we may impose the following restrictions to solve this problem:

* The total isospin quantum state of the crystal \(|\Psi\rangle\) is written as the tensor product of the individual unit cell states \(|\psi\rangle\). That is, \(|\Psi\rangle=\otimes_{N_{\rm cell}}|\psi\rangle\), where \(N_{\rm cell}\rightarrow\infty\) in the thermodynamic limit.
* The symmetry of the classical configuration in each unit cell is extended to the whole crystal, so both wavefunctions share the same point symmetry group.

Under these assumptions, and since we have \(B_{\rm cell}=4\) within our unit cell, there are a finite number of possible quantum states with allowed quantum numbers \(I=0,1,2\) [42]. The charge neutral case \(I_{3}=-2\), which corresponds to a pure neutron crystal, would be the one with the lowest energy since it has a negligible Coulomb contribution compared to the other cases. This is obviously the most asymmetric state possible. It is known that inside neutron stars there is a huge asymmetry between protons and neutrons. However, a realistic description of neutron stars would require the presence of protons. Although the concrete value is still unknown, simulations yield values around \(\gamma\sim 10^{-2}-10^{-1}\) [62, 63].
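Returning briefly to the expansion coefficients (75): once \(S_{N}(n_{B})\) has been tabulated, its slope and curvature at saturation can be estimated by finite differences. The sketch below is illustrative only, assuming uniformly spaced density samples.

```python
import numpy as np

def slope_and_curvature(n, S, n0):
    """Estimate L_sym and K_sym of eq. (75) from a sampled curve S_N(n_B).

    n and S are 1-D arrays of densities and symmetry energies on a uniform
    grid; n0 is the saturation density. Uses central finite differences.
    """
    dn = n[1] - n[0]
    dS = np.gradient(S, dn)              # dS_N/dn_B
    d2S = np.gradient(dS, dn)            # d^2 S_N/dn_B^2
    k = int(np.argmin(np.abs(n - n0)))   # sample closest to saturation
    L_sym = 3.0 * n0 * dS[k]
    K_sym = 9.0 * n0**2 * d2S[k]
    return L_sym, K_sym
```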
Following the arguments in [42], we perform a mean-field approximation considering a larger chunk of crystal, enclosing an arbitrary number of unit cells \(N_{\rm cell}\), which is in a generic quantum state with fixed eigenvalue, \[I_{3}=\frac{(Z-N)}{2}=-\frac{(1-2\gamma)}{2}N_{\rm cell}B_{\rm cell}. \tag{76}\] Note that in this case the nuclear density of the crystal chunk can be directly interpreted as the nuclear density of the unit cell, since \[n_{B}=\frac{B_{\rm crystal}}{V_{\rm crystal}}=\frac{N_{\rm cell}B_{\rm cell}}{N_{\rm cell}V_{\rm cell}}=\frac{B_{\rm cell}}{V_{\rm cell}}. \tag{77}\] In previous applications of skyrmion crystals to model neutron stars (see, for example, [41, 42, 49, 64, 65]), the 1/2-crystal was considered. This crystal has an isotropic inertia tensor with eigenvalue \(U_{i}=\lambda\), with \(\lambda\) some constant. However, the multi-wall crystal considered in this paper is not isotropic and the isospin inertia tensor generically has the eigenvalues \(U_{1}=U_{2}\neq U_{3}\). The Schrödinger equation corresponding to such a rigidly iso-spinning crystal with \(N_{\rm cell}\) unit cells can be written as \[\mathscr{H}\left|\Psi\right\rangle=\left(N_{\rm cell}M_{B}+E_{I,I_{3}}\right)\left|\Psi\right\rangle, \tag{78}\] where the isospin correction to the energy of the crystal is given by \[E_{I,I_{3}}=\frac{\hbar^{2}I(I+1)}{N_{\rm cell}U_{1}}+\frac{\hbar^{2}I_{3}^{2}}{2}\left(\frac{1}{U_{3}}-\frac{2}{U_{1}}\right). \tag{79}\] The eigenvalue \(I_{3}\) is already fixed from the mean-field approximation (76), and the value \(I=|I_{3}|\) is the one which minimizes the isospin energy, since by definition \(I^{2}\geq I_{3}^{2}\). In the thermodynamic limit \(N_{\rm cell}\rightarrow\infty\) we obtain a final expression for the quantum correction (per unit cell) to the energy due to the isospin degrees of freedom, \[E_{\rm iso}=\frac{\hbar^{2}}{8U_{3}}B_{\rm cell}^{2}\delta^{2}. \tag{80}\] This quantum isospin energy is explicitly related to the proton fraction \(\gamma\), and so we will need to include leptons if we are to allow the crystal to have a non-zero proton fraction. This is required in order for the system to remain electrically neutral. Thus the proton fraction, and hence the quantum state of the crystal, will be obtained by imposing \(\beta\)-equilibrium for each value of the density. From the quantum isospin energy (80), we can determine the nuclear symmetry energy of the multi-wall crystal, which in general plays a crucial role in the structure of neutron-rich nuclei and, of more interest to us, in neutron stars. For general skyrmion crystals the symmetry energy is given by \[S_{N}(n_{B})=\frac{\hbar^{2}}{8U_{3}}V_{\rm cell}n_{B}, \tag{81}\] where the eigenvalue \(U_{3}\) of the isospin inertia tensor (63) is implicitly dependent on the nuclear density \(n_{B}\). We determine the symmetry energy at saturation to be \(S_{0}=28.1\) MeV, which is in good agreement with the experimentally observed value \(S_{0}\sim 30\) MeV [66]. The resulting symmetry energy curve \(S_{N}(n_{B})\) for the multi-wall crystal is plotted in Fig. 3. Having obtained the symmetry energy curve we can determine its slope and curvature, which are computed at the nuclear saturation point. We find that they are, respectively, \(L_{\rm sym}=36.6\) MeV and \(K_{\rm sym}=-15.1\) MeV. Let us now summarize the results obtained for the multi-wall crystal.
First of all, we find that at lower densities the isospin moment of inertia, and specifically its eigenvalue \(U_{3}\), tends to a constant value. This is an obvious consequence of the inhomogeneous nature of the solution which, in the limit \(V_{\rm cell}\to\infty\), tends to an "isolated" multi-wall configuration on \(M=S^{1}\times S^{1}\times\mathbb{R}\). This simple fact has an important consequence. Namely, it leads to a non-zero value of the symmetry energy at zero density, \(S_{N}(0)=29.5\) MeV. At first glance, this seems to be in contradiction with the standard description of nuclear matter, where the symmetry energy vanishes at zero density. However, we want to argue that this is a desirable property of the Skyrme model as it indicates a smooth transition between infinite nuclear matter and finite atomic nuclei. Indeed, the asymmetry energy in the Bethe-Weizsäcker semi-empirical mass formula (SEMF) reads \[E_{\rm asym}=a_{A}\frac{(N-Z)^{2}}{B}=a_{A}\delta^{2}B, \tag{82}\] where \(a_{A}\approx 23\) MeV. Thus, our symmetry energy at zero density can be directly identified with \(a_{A}\), in reasonable agreement. Moving away from zero nuclear density towards \(n_{*}\sim 3n_{0}/4\), the isospin energy and consequently the symmetry energy slowly decrease, as can be seen in Fig. 3. This again is not an unexpected result in the Skyrme model. It was noticed by Kopeliovich _et al._ [67] that a careful analysis of mass splittings of nuclear isotopes leads to the symmetry energy decreasing with increasing baryon number \(B\). Here we reproduce this result using a completely different setup, namely the collective coordinate quantization of the crystal ground state.

Figure 3: The nuclear symmetry energy \(S_{N}\) as a function of the baryon density \(n_{B}\), exhibiting the cusp structure detailed in the text at \(n_{*}\sim 3n_{0}/4\).

Below the nuclear saturation point \(n_{0}\), at the density \(n_{*}\sim 3n_{0}/4\), the symmetry energy exhibits a _cusp_ structure. This cusp also seems to be a generic feature of the Skyrme model, independent of the choice of values for the coupling parameters (10); rather, it can be interpreted as the point where the multi-wall crystal begins transitioning to an "isolated" multi-wall sheet. On the other hand, its position with respect to the saturation point may certainly be affected by the choice of model parameters. One can also expect such a cusp to be present where a crystalline configuration transitions to an isolated configuration at zero nuclear density, e.g. for the \(\alpha\) and chain crystals. It is interesting to remark that such a cusp, albeit above the saturation density \(n_{B}>n_{0}\), has been advocated in [68, 69] as an effect of an assumed transition from the FCC crystal of \(B=1\) hedgehogs to the 1/2-crystal as the nuclear density grows. In reality, however, such a transition does not occur in the Skyrme model, as it is found to lie in the thermodynamically unstable regime \(n_{B}<n_{0}\) [40]. To conclude our findings on the symmetry energy cusp, we propose that the origin of the cusp can be associated with a phase transition between an _infinite crystalline_ state and a somewhat _isolated_ state that is _non-homogeneous_ and _nucleated_.

## 4 Particle fractions of \(npe\mu\) matter in \(\beta\)-equilibrium

For a more realistic description of cold nuclear matter inside neutron stars we need to consider nuclear matter that is not completely isospin asymmetric.
As was shown in the previous section, this can be achieved by allowing a small fraction of protons over neutrons. The presence of protons gives the crystal positive electric charge, so we need to include a background of negatively charged leptons to neutralize the system. To determine the proton fraction \(\gamma\) at a prescribed nuclear density \(n_{B}\) we impose charge neutrality and \(\beta\)-equilibrium conditions, and then we solve the underlying equilibrium equation. Additionally, the presence of protons would require the inclusion of the Coulomb interaction within the unit cell and between neighbouring cells. It has been argued [31] that the contribution of this energy diverges in the crystal due to infinitely many interactions between the cells. However, including a background of negatively charged particles in the system suppresses the Coulomb interaction between neighbouring cells and hence has a negligible contribution to the energy [42]. In the neutron star interior, the interaction between leptons and nuclear matter is mediated by the weak force. We can describe the exchange of leptons and nucleons by electron capture and \(\beta\)-decay processes, respectively, \[p+l \to n+\nu_{l} \tag{83a}\] \[n \to p+l+\bar{\nu}_{l}. \tag{83b}\] These processes take place simultaneously at the same rate, assuming that the charge neutrality, \[n_{p}=\frac{Z}{V}=n_{e}+n_{\mu}, \tag{84}\] and the \(\beta\)-equilibrium conditions [70], \[\mu_{p}=\mu_{n}-\mu_{I}\quad\Rightarrow\quad\mu_{I}=\mu_{l},\quad l=e,\mu, \tag{85}\] are satisfied. Here \(\mu_{I}\) is the isospin chemical potential given by \[\mu_{I}=\frac{\delta B\hbar^{2}}{2U_{3}}=\frac{(1-2\gamma)B\hbar^{2}}{2U_{3}}. \tag{86}\] Leptons inside a neutron star are treated as a non-interacting, relativistic, highly degenerate Fermi gas. The corresponding chemical potential for such a type of lepton is given by [49] \[\mu_{l}=\sqrt{(\hbar k_{F})^{2}+m_{l}^{2}}, \tag{87}\] where \(k_{F}=(3\pi^{2}n_{l})^{1/3}\) is the associated Fermi momentum and \(m_{l}\) the lepton mass. For electrons we take the ultra-relativistic approximation \(\mu_{e}\approx\hbar k_{F,e}\). From the charge neutrality condition (84), the electron number density is \[n_{e}=\frac{\gamma B}{V}-n_{\mu}. \tag{88}\] The \(\beta\)-equilibrium condition (85) for electrons yields the following relation \[\mu_{I}=\mu_{e}\quad\Rightarrow\quad\frac{\hbar B(1-2\gamma)}{2U_{3}}=\left[3\pi^{2}\left(\frac{\gamma B}{V}-n_{\mu}\right)\right]^{1/3}, \tag{89}\] and for muons gives \[\mu_{I}=\mu_{\mu}\quad\Rightarrow\quad n_{\mu}=\frac{1}{3\pi^{2}}\left[\left(\frac{\hbar B(1-2\gamma)}{2U_{3}}\right)^{2}-\left(\frac{m_{\mu}}{\hbar}\right)^{2}\right]^{3/2}. \tag{90}\] In the low density regime the electron chemical potential will be smaller than the muon mass, \(\mu_{e}<m_{\mu}\). In this regime we can therefore solve (89) considering only electrons, setting \(n_{\mu}=0\), until \(\mu_{e}\geq m_{\mu}\). Once the electron chemical potential \(\mu_{e}\) reaches the muon mass \(m_{\mu}=105.66\) MeV at high densities, it becomes energetically favourable for muons to appear. Then we solve (89) and (90) simultaneously [49], and construct the proton fraction curve \(\gamma=\gamma(n_{B})\). In Fig. 4 we plot the particle fractions of \(npe\mu\) matter in \(\beta\)-equilibrium for the multi-wall crystal. Note that the cusp structure present in the symmetry energy, or equivalently in the isospin energy, results in the appearance of a similar structure in the particle fractions.
Figure 4: Plot of the particle number densities \(n_{i}\) as functions of the baryon density \(n_{B}\). The particle number densities are normalized such that the total number density is \(\sum_{i}n_{i}=1\). The transition between isospin asymmetric infinite matter and symmetric finite matter at the cusp density \(n_{*}\) is now manifest.

This reinforces the proposition that the cusp density point \(n_{*}\) is the density at which a phase transition between isospin asymmetric infinite nuclear matter and symmetric finite matter begins. Furthermore, the fact that the symmetry energy \(S_{N}\) tends to a constant value at zero density leads to a similar behavior for the proton, neutron and electron particle fractions. Namely, they take their minimal/maximal value at \(n_{*}\), and then increase/decrease as zero density is approached. This is once again a direct consequence of a non-zero value of the isospin moment of inertia at this limit and, therefore, a generic feature of the Skyrme model. We remark that at zero density \(n_{B}=0\), which, in the Skyrme model framework, can be interpreted as a limit where we find nuclei in the vacuum, the nuclear matter becomes totally isospin symmetric with \(\gamma(0)=0.5\). This corresponds quite well to the proton fraction in \({}^{56}\)Fe, \(\gamma=0.46\), which is the element expected to be present in the crust of neutron stars [71]. We now summarize our findings and compute the total energy per unit cell in a \(\beta\)-equilibrated multi-wall skyrmion crystal, that is \[E_{\rm cell}(\gamma)=M_{B}(\gamma)+E_{\rm iso}(\gamma)+E_{e}(\gamma)+E_{\mu}(\gamma), \tag{91}\] where the isospin energy for a \(\beta\)-equilibrated crystal is given by \[E_{\rm iso}(\gamma)=\frac{\hbar^{2}B_{\rm cell}^{2}}{8U_{3}}(1-2\gamma)^{2}. \tag{92}\] The lepton energies are the energies of a relativistic Fermi gas at zero temperature, \[E_{l} =\frac{V}{\hbar^{3}\pi^{2}}\int_{0}^{\hbar k_{F}}k^{2}\sqrt{k^{2}+m_{l}^{2}}\,{\rm d}k\] \[=\frac{Vm_{l}^{4}}{8\hbar^{3}\pi^{2}}\left[\frac{\hbar k_{F}}{m_{l}}\left(1+2\left(\frac{\hbar k_{F}}{m_{l}}\right)^{2}\right)\sqrt{\left(\frac{\hbar k_{F}}{m_{l}}\right)^{2}+1}-\sinh^{-1}\left(\frac{\hbar k_{F}}{m_{l}}\right)\right]. \tag{93}\] The crucial observation is that, in the case of the multi-wall skyrmion crystal, the inclusion of the \(\beta\)-equilibrated isospin energy and lepton energies does not completely erase the small minimum in the classical energy \(M_{B}\). Strictly speaking there is still a very shallow minimum at a density smaller than the saturation density, \(n_{B}=0.146\,{\rm fm}^{-3}\). For smaller densities the total energy grows, until a small maximum is reached. After that the total energy decreases as the nuclear density approaches the zero density limit, \(n_{B}\to 0\). Importantly, the asymptotic value of the total energy per unit cell is smaller than the energy at the minimum. This means that, although the total energy per unit cell still possesses a thermodynamically unstable region, we can take advantage of the Maxwell construction and derive an EoS which is valid at all densities. This is a valid construction and has a minute effect on the EoS since the difference in energy between the asymptotic solution and the minimum is \(\Delta E\sim 0.1\) MeV. The formulation of the Maxwell construction is detailed below and the corresponding total energy \(E(n_{B})\) curve is plotted in Fig. 5, alongside the classical \(M_{B}(n_{B})\) curve.
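Numerically, the Maxwell construction described below amounts to replacing the non-convex part of the sampled \(E_{\rm cell}(V_{\rm cell})\) curve by its lower convex envelope. A minimal sketch is given here, with hypothetical sampled arrays `V` and `E` standing in for the computed energy-volume data.

```python
import numpy as np

def maxwell_construction(V, E):
    """Convexify a sampled E_cell(V_cell) curve via the equal area rule.

    V and E are 1-D arrays sampling the total energy per unit cell on an
    increasing grid of unit-cell volumes (both hypothetical inputs).
    Returns the convexified energies, the Maxwell pressure p_MX and the
    endpoints (V_1, V_2) of the inserted straight segment, cf. eq. (94).
    """
    # lower convex hull of the sampled points (monotone chain)
    hull = [0]
    for k in range(1, len(V)):
        while len(hull) >= 2:
            i, j = hull[-2], hull[-1]
            # drop j if it lies on or above the chord from i to k
            if (E[k] - E[i]) * (V[j] - V[i]) <= (E[j] - E[i]) * (V[k] - V[i]):
                hull.pop()
            else:
                break
        hull.append(k)
    E_mc = np.interp(V, V[hull], E[hull])   # piecewise-linear lower envelope
    # the Maxwell segment is the hull edge skipping the most sample points
    gaps = np.diff(hull)
    m = int(np.argmax(gaps))
    i, j = hull[m], hull[m + 1]
    p_mx = -(E[j] - E[i]) / (V[j] - V[i])   # p = -dE/dV along the segment
    return E_mc, p_mx, (V[i], V[j])
```

The Maxwell pressure follows as minus the slope of the inserted straight segment.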
We remark that for the \(\alpha\)-crystal the total energy in the zero density limit is greater than the energy at the minimum, so the Maxwell construction is not possible. On the other hand, for \(B=32\) and \(B=108\) crystals constructed from \(\alpha\)-particles, such a construction is possible but it extends over a non-physical range of densities and occurs for relatively high values of the pressure. For example, the neutron stars obtained from these crystals would almost be entirely made from the Maxwell construction phase. The Maxwell construction, or equal area rule, is implemented as follows. We find three points \(V_{1}\), \(V_{2}\) and \(V_{\rm int}\) on the \(E_{\rm cell}(V_{\rm cell})\) curve, with \(V_{1}<V_{\rm int}<V_{2}\), that have the same gradient/pressure, i.e. \(p(V_{i})=:p_{\rm MX}\). These three points are chosen such that the area enclosed between \(p([V_{1},V_{\rm int}])\) and \(p_{\rm MX}\) is equal to the area enclosed between \(p([V_{\rm int},V_{2}])\) and \(p_{\rm MX}\), where \(p([V_{1},V_{\rm int}])\leq p_{\rm MX}\) and \(p([V_{\rm int},V_{2}])\geq p_{\rm MX}\). This ensures that the total energy of the thermodynamic system remains the same while implementing this construction. Then, in the corresponding MC density regime \(V_{1}<V_{\rm cell}<V_{2}\), the total energy function is replaced by a straight line connecting \(E(V_{1})\) and \(E(V_{2})\).

Figure 5: Comparison between the isospin symmetric crystal (blue curve) and the \(\beta\)-equilibrated asymmetric crystal with the MC applied (red curve).

The resulting total energy per unit cell function can be summarized as \[E_{\rm cell}^{\rm MC}(V_{\rm cell})=\left\{\begin{array}{ll}E_{\rm cell}(V_{\rm cell})&V_{\rm cell}\leq V_{1}\\ E_{\rm cell}(V_{1})-p_{\rm MX}(V_{\rm cell}-V_{1})&V_{1}\leq V_{\rm cell}\leq V_{2}\\ E_{\rm cell}(V_{\rm cell})&V_{\rm cell}\geq V_{2}\end{array}\right.. \tag{94}\] Now we are in a position to determine the EoS for the multi-wall configuration. The multi-wall crystal EoS for isospin asymmetric nuclear matter can be obtained by defining the energy density \(\rho\) and pressure \(p\) as, respectively, \[\rho= \,\frac{E}{V}=\frac{E_{\rm cell}}{V_{\rm cell}}=\frac{n_{B}}{B}E_{\rm cell}, \tag{95}\] \[p= \,-\frac{\partial E}{\partial V}=-\frac{\partial E_{\rm cell}}{\partial V_{\rm cell}}=\frac{n_{B}^{2}}{B}\frac{\partial E_{\rm cell}}{\partial n_{B}}. \tag{96}\] This EoS \(\rho=\rho(p)\), generated purely from the generalized multi-wall skyrmion crystal, is valid at all densities. In our case, the pressure at which the Maxwell construction is applied is quite small, \(p_{\rm MX}=0.023\) MeV fm\({}^{-3}\). The resulting EoS is shown in Fig. 7, alongside the EoS without the Maxwell construction applied. Although the obtained equation of state covers the full range of densities, one has to be aware that the multi-wall crystal does not describe the low density regime in its entirety. As we have already mentioned, to get a more realistic description of the crust the electrostatic interaction should be included. This can have an impact on the structure and symmetry of the skyrmions, which could potentially lead to the appearance of other non-homogeneous solutions with different baryon numbers per unit cell.

## 5 Neutron stars and quantum Skyrme crystals coupled to gravity

In order to describe neutron stars within the Skyrme framework, we need to couple the generalized Skyrme model to gravity.
We do this by introducing the Einstein-Hilbert-Skyrme action [72] \[S=\frac{1}{16\pi G}\int_{\Sigma}{\rm d}^{4}x\sqrt{-g}R+S_{\rm matter}, \tag{97}\] where \(G=1.3238094\times 10^{-42}\,{\rm fm}\,{\rm MeV}^{-1}\) is the gravitational constant and \(R\) the Ricci scalar. The matter part of the Einstein-Skyrme action, \(S_{\rm matter}\), describes matter in the interior of the neutron star. It is well known that the interior of a neutron star is accurately described as a perfect fluid of nearly free neutrons and a very degenerate gas of electrons. We exploit this and use a perfect fluid model such that the energy-momentum tensor takes the form \[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S_{\rm matter}}{\delta g^{\mu\nu}}=\left(\rho+p\right)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{98}\] where the energy density \(\rho\) and the pressure \(p\) are related by the multi-wall crystal EoS \(\rho=\rho(p)\).

### The Tolman-Oppenheimer-Volkoff system

Our aim is to calculate the maximum permitted mass and radius for a neutron star described by our system, and obtain the mass-radius curve. Therefore we have to solve the resulting Einstein equations for some particular choice of metric ansatz. The simplest case is that of a static non-rotating neutron star. We use a spherically symmetric ansatz for the spacetime metric, which in Schwarzschild coordinates reads [46] \[\mathrm{d}s^{2}=-A(r)\mathrm{d}t^{2}+B(r)\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\right)=g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}, \tag{99}\] where \(x=(t,r,\theta,\phi)\in\Sigma\). The mass and radius of the neutron star can be calculated by inserting this spherical metric ansatz into the Einstein equations \[G_{\mu\nu}=8\pi GT_{\mu\nu}, \tag{100}\] where \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\) is the Einstein tensor, and solving the standard Tolman-Oppenheimer-Volkoff (TOV) equations. From the metric ansatz (99), we can determine the Christoffel symbols \[\Gamma^{\lambda}_{\mu\nu}=\frac{1}{2}g^{\lambda\sigma}\left(\partial_{\mu}g_{\nu\sigma}+\partial_{\nu}g_{\mu\sigma}-\partial_{\sigma}g_{\mu\nu}\right), \tag{101}\] of which the non-zero components are found to be \[\begin{split}\Gamma^{t}_{tr}=\Gamma^{t}_{rt}=\frac{1}{2A}\frac{\mathrm{d}A}{\mathrm{d}r},\quad\Gamma^{r}_{tt}=\frac{1}{2B}\frac{\mathrm{d}A}{\mathrm{d}r},\quad\Gamma^{r}_{rr}=\frac{1}{2B}\frac{\mathrm{d}B}{\mathrm{d}r},\quad\Gamma^{\phi}_{\phi\theta}=\Gamma^{\phi}_{\theta\phi}=\cot\theta,\\ \Gamma^{r}_{\theta\theta}=-\frac{r}{B},\quad\Gamma^{\theta}_{r\theta}=\Gamma^{\theta}_{\theta r}=\Gamma^{\phi}_{r\phi}=\Gamma^{\phi}_{\phi r}=\frac{1}{r},\quad\Gamma^{r}_{\phi\phi}=-\frac{r\sin^{2}\theta}{B},\quad\Gamma^{\theta}_{\phi\phi}=-\sin\theta\cos\theta.\end{split} \tag{102}\] Thus the Riemann curvature tensor can be obtained using the non-zero Christoffel symbols (102), \[R^{\sigma}_{\rho\mu\nu}=\partial_{\mu}\Gamma^{\sigma}_{\nu\rho}-\partial_{\nu}\Gamma^{\sigma}_{\mu\rho}+\Gamma^{\lambda}_{\nu\rho}\Gamma^{\sigma}_{\mu\lambda}-\Gamma^{\lambda}_{\mu\rho}\Gamma^{\sigma}_{\nu\lambda}.
The Ricci tensor is given by \(R_{\mu\nu}=g^{\rho\sigma}R_{\rho\mu\sigma\nu}\) and the relevant components are found to be \[R_{tt}= \,-\frac{1}{4B^{2}}\left[\frac{\mathrm{d}A}{\mathrm{d}r}\frac{\mathrm{d}B}{\mathrm{d}r}+B\left(-\frac{4}{r}\frac{\mathrm{d}A}{\mathrm{d}r}+\frac{1}{A}\left(\frac{\mathrm{d}A}{\mathrm{d}r}\right)^{2}-2\frac{\mathrm{d}^{2}A}{\mathrm{d}r^{2}}\right)\right], \tag{104a}\] \[R_{rr}= \,\frac{1}{4A^{2}Br}\left[A\frac{\mathrm{d}B}{\mathrm{d}r}\left(4A+r\frac{\mathrm{d}A}{\mathrm{d}r}\right)+Br\left(\left(\frac{\mathrm{d}A}{\mathrm{d}r}\right)^{2}-2A\frac{\mathrm{d}^{2}A}{\mathrm{d}r^{2}}\right)\right]. \tag{104b}\] Now we can compute the Ricci scalar \(R=g^{\mu\nu}R_{\mu\nu}\), that is \[R=\frac{1}{2A^{2}B^{2}r^{2}}\left[Br^{2}\left(\frac{\mathrm{d}A}{\mathrm{d}r}\right)^{2}+4A^{2}\left(r\frac{\mathrm{d}B}{\mathrm{d}r}+B^{2}-B\right)+Ar\left(r\frac{\mathrm{d}A}{\mathrm{d}r}\frac{\mathrm{d}B}{\mathrm{d}r}-2B\left(r\frac{\mathrm{d}^{2}A}{\mathrm{d}r^{2}}+2\frac{\mathrm{d}A}{\mathrm{d}r}\right)\right)\right]. \tag{105}\] Now we have all the ingredients required to compute the Einstein tensor, \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\). The relevant components of the Einstein tensor are found to be \[G_{tt} = \frac{A(r)}{B(r)^{2}r^{2}}\left[r\frac{\mathrm{d}B(r)}{\mathrm{d}r}+B(r)\left(B(r)-1\right)\right], \tag{106a}\] \[G_{rr} = \frac{1}{A(r)r^{2}}\left[r\frac{\mathrm{d}A(r)}{\mathrm{d}r}-A(r)\left(B(r)-1\right)\right]. \tag{106b}\] In the static case, and for a diagonal metric (which applies in our case), we have \(u_{\mu}=(\sqrt{-g_{00}},0,0,0)\) and the non-zero components of the energy-momentum tensor are given by \[T_{00}=-\rho(p(r))g_{00},\quad T_{ij}=p(r)g_{ij}. \tag{107}\] In particular, for the spherical metric ansatz (99), the energy-momentum tensor reduces to the four non-zero components: \[T_{tt} = \rho(p(r))A(r), \tag{108a}\] \[T_{rr} = B(r)p(r), \tag{108b}\] \[T_{\theta\theta} = r^{2}p(r), \tag{108c}\] \[T_{\phi\phi} = r^{2}p(r)\sin^{2}\theta. \tag{108d}\] We are now in a position to calculate the Einstein equations (100) by using the energy-momentum tensor (108) and the Einstein tensor (106). From this, and the Bianchi identity \[0=\nabla_{\nu}T^{r\nu}=\frac{\partial T^{r\nu}}{\partial x^{\nu}}+T^{\sigma\nu}\Gamma^{r}_{\sigma\nu}+T^{r\sigma}\Gamma^{\nu}_{\sigma\nu}, \tag{109}\] we get the following TOV system of ODEs: \[\frac{\mathrm{d}A}{\mathrm{d}r}= A(r)r\left(8\pi GB(r)p(r)-\frac{1-B(r)}{r^{2}}\right), \tag{110a}\] \[\frac{\mathrm{d}B}{\mathrm{d}r}= B(r)r\left(8\pi GB(r)\rho(p(r))+\frac{1-B(r)}{r^{2}}\right), \tag{110b}\] \[\frac{\mathrm{d}p}{\mathrm{d}r}= -\frac{p(r)+\rho(p(r))}{2A(r)}\frac{\mathrm{d}A}{\mathrm{d}r}. \tag{110c}\] The resulting TOV system involves three differential equations for \(A\), \(B\) and \(p\), which must be solved for a given value of the pressure in the centre of the neutron star (\(p(0)=p_{0}\)) until the condition \(p(R_{\mathrm{NS}})=0\) is achieved. The radial point \(R_{\mathrm{NS}}\) at which the pressure vanishes defines the radius of the neutron star, and the mass \(M\) is obtained from the Schwarzschild metric definition outside the star, \[B(R_{\mathrm{NS}})=\frac{1}{1-\frac{2MG}{R_{\mathrm{NS}}}}. \tag{111}\] In order for the metric function \(B(r)\) to be non-singular at \(r=R_{\rm NS}\), the pressure \(p(r)\) must obey \(p^{\prime}(R_{\rm NS})=0\).
The TOV system (110) is solved via a shooting method from the centre, starting from some initial central pressure \(p_{0}\) at \(r=0\) and integrating outwards until the edge of the star is reached (corresponding to \(p(R_{\rm NS})=0\)). The amount of matter contained at \(r=0\) should be zero, which gives the boundary conditions \(B(0)=A(0)=1\); that is, the spacetime metric should approach the Minkowski metric towards the neutron star core. We can simultaneously apply a \(4^{\rm th}\)-order Runge-Kutta method to the system of IVPs (110b), (110c), with the initial conditions \(B(0)=1\) and \(p(0)=p_{0}\), until the condition \(p(R_{\rm NS})=0\) is achieved. This yields the metric function \(B(r)\) and the pressure profile \(p(r)\) satisfying the necessary boundary conditions. Then the metric function \(A(r)\) can easily be obtained by numerically integrating (110a). The corresponding radius \(R_{\rm NS}\) and the stellar mass \(M=M(R_{\rm NS})\) can be extracted from the Schwarzschild definition (111). Increasing the central pressure \(p_{0}\) in succession corresponds to determining a sequence of neutron stars of increasing mass, until the mass limit is reached [70]. The mass limit is approximately \(2.5M_{\odot}\), where the solar mass is \(M_{\odot}=1.116\times 10^{60}\,{\rm MeV}\).
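A minimal sketch of this shooting procedure is given below, assuming an EoS callable `rho_of_p` in the same fm-MeV unit system as the text (so a stellar radius is of order \(10^{19}\) fm); the step size, cut-off radius and function names are illustrative choices.

```python
import numpy as np

G = 1.3238094e-42                        # gravitational constant in fm MeV^-1

def solve_tov(p0, rho_of_p, dr=1.0e15, r_max=3.0e19):
    """Integrate the TOV subsystem (110b)-(110c) with 4th-order Runge-Kutta,
    shooting from the centre (B(0) = 1, p(0) = p0) until p(R_NS) = 0.
    Returns (R_NS, M) with M in MeV (divide by M_sun = 1.116e60 MeV to
    convert to solar masses)."""
    def rhs(r, y):
        B, p = y
        if r == 0.0:                     # regular centre: dB/dr = dp/dr = 0
            return np.zeros(2)
        rho = rho_of_p(max(p, 0.0))
        dB = B * r * (8*np.pi*G*B*rho + (1 - B)/r**2)
        # dA/dr is eliminated via (110a): (1/A) dA/dr = r (8 pi G B p - (1-B)/r^2)
        dlogA = r * (8*np.pi*G*B*p - (1 - B)/r**2)
        dp = -0.5 * (p + rho) * dlogA
        return np.array([dB, dp])

    r, y = 0.0, np.array([1.0, p0])
    while y[1] > 0.0 and r < r_max:
        k1 = rhs(r, y)
        k2 = rhs(r + dr/2, y + dr/2 * k1)
        k3 = rhs(r + dr/2, y + dr/2 * k2)
        k4 = rhs(r + dr, y + dr * k3)
        y = y + dr/6 * (k1 + 2*k2 + 2*k3 + k4)
        r += dr
    R_ns = r
    M = R_ns * (1.0 - 1.0/y[0]) / (2.0*G)   # from B(R_NS) = 1/(1 - 2MG/R_NS), Eq. (111)
    return R_ns, M
```

Sweeping \(p_{0}\) upwards then traces out the mass-radius curve discussed next.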
### Neutron star properties and the mass-radius curve

Now we solve the TOV equations using the EoS obtained from the isospin asymmetric multi-wall crystal solution in the generalized Skyrme model. In Fig. 6 we present the mass-radius curve for the MC crystal (blue line) together with recent astrophysical observations. It can be seen clearly that the obtained mass-radius curve passes through many observational constraints. For our choice of coupling constants (10), the Skyrme model generates an EoS which supports rather heavy neutron stars, \(M>2M_{\odot}\). Indeed, the maximum mass is predicted to be \(M_{\rm max}=2.0971M_{\odot}\), occurring for a neutron star of radius \(R=13.12\,{\rm km}\). For this solution the central energy density is \(\rho(0)=784\,{\rm MeV\ fm^{-3}}\), while the central pressure is \(p(0)=155.7\,{\rm MeV\ fm^{-3}}\). We find that the speed of sound in the core is approximately half the speed of light, \(c_{s}=0.491c\). The maximal mass can be further increased if we assume a higher value of the sextic term coupling constant \(\lambda\), at the cost of increasing the corresponding radius.

Figure 6: Mass-radius curves for neutron stars obtained from the multi-wall crystal EoS with (blue curve) and without (red curve) the Maxwell construction. The maximal mass \(M_{\rm max}\) obtained from the MC multi-wall crystal EoS is also shown.

The main improvement presented by the generalized multi-wall crystal, in comparison to previous studies involving the 1/2-crystal, is in the low density regime. In previous attempts, except for the pure BPS Skyrme case, neutron stars obtained from Skyrme models did not have crusts, i.e. the EoS was only defined above the nuclear saturation point, \(n_{B}\geq n_{0}\), and not in the low density region \(n_{B}<n_{0}\). In order to obtain a crust, the 1/2-crystal EoS can be smoothly joined with an EoS that describes the low density regime well, e.g. the BCPM EoS, as in [44]. In the resulting hybrid EoS, the high density region is still described by the 1/2-crystal. This typically increases the radius of the neutron star by 1-2 km, depending on the mass of the neutron star. However, such a construction is not required here as the EoS from the multi-wall crystal with the Maxwell construction is valid at both high and low densities, naturally giving the neutron star a crust.

Figure 7: Plots at \(M_{\rm max}\) of the pressure \(p\), energy density \(\rho\), metric function \(B(r)\) and equations of state \(\rho=\rho(p)\). The blue curve is for the crystal EoS with the Maxwell construction applied, removing any negative pressure from the system, whereas the red curve is for the “true” crystal EoS.

## 6 Conclusion

In the present paper we have obtained a ground state crystalline configuration for the generalized Skyrme model for densities \(n_{B}\) extending from \(n_{B}\sim 0\) to a few saturation densities, \(n_{B}\sim 5n_{0}\). The only limiting assumption is the amount of baryon charge hosted by the unit cell, which is \(B_{\rm cell}=4\). For our choice of the values of the coupling constants (10), we determine the ground state solution in the \({\cal L}_{0246}\)-model to be the multi-wall crystal, as was recently observed in [50] in the context of the \({\cal L}_{024}\)-model. At low densities this solution takes the form of an isolated and planar two-sheet layer of skyrmionic matter. As the nuclear density grows beyond \(n_{B}>n_{0}\), there appears to be a restoration of chiral symmetry, and the solution tends to the cubic 1/2-crystal. However, the main improvement, in comparison with the 1/2-crystal or non-homogeneous crystals (e.g. \(B=32\) or \(B=108\) crystals composed of \(\alpha\)-particles), is observed in the low density regime. Namely, the classical energy per baryon (of the unit cell) again reveals a minimum identified with the nuclear saturation point, but now the difference between the energy at this point and at zero density is less than one percent. Since this minimum exists for totally isospin symmetric nuclear matter, we have also considered isospin asymmetric matter. After inclusion of the quantum corrections to the total energy, due to the isospin d.o.f., and the lepton energy contributions for a \(\beta\)-equilibrated crystal, the total energy \(E_{\rm cell}\) of the isospin asymmetric multi-wall crystal as a function of the nuclear density \(n_{B}\) was obtained. The minimum persisted, although significantly reduced and practically negligible. Nonetheless, its presence meant we had to use the Maxwell construction, which allowed us to obtain an EoS valid at all densities within the Skyrme model. It covers the high density regime (identified as the core of a neutron star) as well as the medium and low density regimes (identified as the crust). Using this EoS we were able to compute the mass-radius curve for the resulting neutron stars. The maximal mass was found to be \(M_{\rm max}=2.0971M_{\odot}\), which is sufficiently large, and the mass-radius curve fits known astrophysical data very well. We remark that the Maxwell construction was required to avoid a thermodynamically unstable region which formally has negative pressure. Similar regions were found in previous studies of the \(\alpha\), \(B=32\) and \(B=108\) crystals. However, it is worth underlining that in those cases the Maxwell construction was either impossible (cf. the \(\alpha\)-crystal) or extended to unacceptably large pressure/density regions (e.g. the corresponding neutron stars would possess cores mainly filled by such regions).
In the current work, the pressure at which the Maxwell construction is applied is only \(p_{\rm MX}=0.022\) MeV fm\({}^{-3}\) and it extends to densities below the saturation point. Consequently, our neutron stars are mainly governed by the part of the EoS above \(p_{\rm MX}\), which is described by the multi-wall crystal EoS. The most important results, however, are related to the density regime below saturation, \(n_{B}<n_{0}\), where the ground state is quite well approximated by the multi-wall crystal. There are two novel findings in our study. The first is a cusp structure in the symmetry energy below the nuclear saturation density, at \(n_{*}\sim 3n_{0}/4<n_{0}\); the second is the finite value of the symmetry energy in the zero density limit, \(n_{B}\to 0\). A cusp in the symmetry energy has previously been advocated in [68], wherein its presence was attributed to a change in topology due to a transition between the FCC crystal of skyrmions and the \(1/2\)-crystal. A key component of their argument relies on this transition occurring in the high density regime \(n_{B}>n_{0}\); however, this transition is believed to take place in the low density regime \(n_{B}<n_{0}\) [40]. We have argued instead that these two features are generic to the Skyrme model and should occur for any infinite nuclear matter that undergoes a phase transition to somewhat isolated and finite matter in the zero density limit. This asymptotic transition to finite matter in the zero density limit is essential, as the isolated solution will have a finite isospin moment of inertia tensor. A prime example of a crystalline solution in which such a transition occurs is that of the \(\alpha\)-crystal, which tends to the isolated \(\alpha\)-particle solution as \(n_{B}\to 0\). Therefore, both the presence of the cusp and the non-zero value of the symmetry energy at the vacuum can be regarded as generic properties of the Skyrme model. In fact, we have observed a further key feature of the symmetry energy: a direct correspondence between its value at the vacuum and the asymmetry energy in the Bethe-Weizsäcker SEMF for nuclear binding energies. This strengthens our suggestion that the Skyrme model can be interpreted as a natural interpolation between infinite isospin asymmetric nuclear matter and finite (almost) symmetric atomic nuclei. This is further supported by the observation that the proton fraction \(\gamma_{p}\to 0.5\) in the zero density limit \(n_{B}\to 0\), which describes almost totally isospin symmetric nuclear matter, and then decreases for small densities, yielding asymmetric matter. In this pattern one may again recognize finite nuclei. Indeed, the proton number and neutron number are approximately equal (\(\delta\sim 0\)) for smaller atomic nuclei, while for larger nuclei there is an asymmetry (\(\delta\neq 0\)) caused by a surplus of neutrons. It should be underlined that the generalized Skyrme model has only four coupling constants \(\{F_{\pi},m_{\pi},e,\lambda\}\), of which the pion mass \(m_{\pi}\) and the pion decay constant \(F_{\pi}\) are, from the outset, fixed to their physical values, or as close to them as possible. The two other parameters, \(e\) and \(\lambda\), which respectively multiply the quartic (Skyrme) and sextic terms, can be treated as free parameters of the model. They can be constrained by fitting the multi-wall crystal to nuclear observables, i.e.
they can be chosen such that the energy per baryon \(M_{B}(n_{B})\) of symmetric matter and the nuclear saturation density \(n_{0}\) are close to the experimentally determined values. There are several directions in which our study can be continued. First of all, it is widely known that the lower density phases of nuclear matter are governed by a balance between nuclear and Coulomb forces, which leads to a plethora of geometrically different structures. The fact that the generalized Skyrme model, even without the inclusion of electrostatic interactions, gives rise to the multi-wall crystal (a lasagna-like structure) can be viewed as an intrinsic ability of the model to provide such solutions. Other non-homogeneous configurations have been observed in the Skyrme model [73, 74]; however, their applications to nuclear physics remain to be clarified. Undoubtedly, inclusion of the Coulomb interaction seems mandatory, see e.g. [75]. It seems likely that including Coulomb interactions will not only give insight into such geometric phases but could also allow one to avoid the use of the Maxwell construction. Thus it could possibly provide a complete description of the neutron star crust within the Skyrme model framework. Finally, we find an improvement in the compressibility at nuclear saturation if one uses the multi-wall crystal. However, the observed improvement is too small, which suggests that non-homogeneous solutions alone cannot address this problem. This may indicate that the inclusion of other d.o.f. is inevitable to resolve this issue. ## Acknowledgments PL is supported by a PhD studentship from UKRI, Grant No. EP/V520081/1. MHG thanks the Xunta de Galicia (Conselleria de Cultura, Educacion y Universidad) for the funding of his predoctoral activity through Programa de ayudas a la etapa predoctoral 2021. AW was supported by the Polish National Science Centre (NCN 2020/39/B/ST2/01553). The authors thank Christoph Adam and Alberto Martin Garcia-Caro for discussions and comments.
2308.12825
Requirements Quality Assurance in Industry: Why, What and How?
Context and Motivation: Natural language is the most common form to specify requirements in industry. The quality of the specification depends on the capability of the writer to formulate requirements aimed at different stakeholders: they are an expression of the customer's needs that are used by analysts, designers and testers. Given this central role of requirements as a means to communicate intention, assuring their quality is essential to reduce misunderstandings that lead to potential waste. Problem: Quality assurance of requirement specifications is largely a manual effort that requires expertise and domain knowledge. However, this demanding cognitive process is also congested by trivial quality issues that should not occur in the first place. Principal ideas: We propose a taxonomy of requirements quality assurance complexity that characterizes the cognitive load of verifying a quality aspect from the human perspective, and automation complexity and accuracy from the machine perspective. Contribution: Once this taxonomy is realized and validated, it can serve as the basis for a decision framework of automated requirements quality assurance support.
Michael Unterkalmsteiner, Tony Gorschek
2023-08-24T14:31:52Z
http://arxiv.org/abs/2308.12825v1
# Requirements quality assurance in industry: why, what and how? ###### Abstract [Context & Motivation] Natural language is the most common form to specify requirements in industry. The quality of the specification depends on the capability of the writer to formulate requirements aimed at different stakeholders: they are an expression of the customer's needs that are used by analysts, designers and testers. Given this central role of requirements as a means to communicate intention, assuring their quality is essential to reduce misunderstandings that lead to potential waste. [Problem] Quality assurance of requirement specifications is largely a manual effort that requires expertise and domain knowledge. However, this demanding cognitive process is also congested by trivial quality issues that should not occur in the first place. [Principal ideas] We propose a taxonomy of requirements quality assurance complexity that characterizes the cognitive load of verifying a quality aspect from the human perspective, and automation complexity and accuracy from the machine perspective. [Contribution] Once this taxonomy is realized and validated, it can serve as the basis for a decision framework of automated requirements quality assurance support. Keywords: Requirements Engineering, Requirements Quality, Natural Language Processing, Decision Support ## 1 Introduction The requirements engineering process and the artefacts used in coordination and communication activities influence the performance of downstream development activities [6]. While research has proposed a myriad of formal, semi-formal and informal methods to convey requirements, plain natural language (NL) is the _lingua franca_ for specifying requirements in industry [17, 14]. One potential reason is that NL specifications are easy to comprehend without particular training [3]. However, NL is also inherently imprecise and ambiguous, posing challenges in objectively validating that requirements expressed in NL represent the customers' needs [1]. Therefore, it is common practice to perform some sort of review or inspection [14] to quality assure NL requirements specifications. While there exists a plethora of methods to improve requirements specifications [15], there are no guidelines that would support practitioners in deciding which method(s) to adopt for their particular need. We think that a first step towards such a decision framework is to characterize the means by which quality attributes in requirements specifications can be affected. Therefore, we initiated an applied research collaboration with the Swedish Transport Administration (STA), the government agency responsible for the rail, road, shipping and aviation infrastructure in Sweden. STA's overall goal is to improve the communication and coordination with their suppliers, mostly handled through NL requirements specifications. Infrastructure projects vary in duration (months to decades) and budget (up to 4 billion USD), requiring an adaptive quality assurance strategy that is backed by methods adapted to the needs of the particular project. The large number of requirements (several thousands) and the need to communicate them to various suppliers make specifications in NL the only viable choice. Still, STA needs to quality assure the requirements and decide what level of quality is acceptable. In this paper we present the basic components for a taxonomy that will drive, once the research is completed, a requirements quality assurance decision support framework.
To this end, we illustrate a research outline aimed at answering our overall research question: **How can we support practitioners in achieving "good-enough" requirements specification quality?** ## 2 Related Work Davis et al. [7] proposed a comprehensive set of 24 attributes that contribute to software requirements specification (SRS) quality. Saavedra et al. [16] compared this set with later contributions that studied means to evaluate these attributes. Similarly, Pekar et al. [15] reviewed the literature and identified 36 studies proposing techniques to improve SRS quality. While Agile software development is notorious for promoting as little documentation as possible [10], Heck and Zaidman [13] identified 28 quality criteria used for Agile requirements, six of which are novel and specifically defined for Agile requirements. All these reviews point to relevant related work potentially contributing to the components of a decision support framework for requirements quality assurance. The importance of providing decision support to practitioners is growing hand-in-hand with the complexity of today's software products and the number of technologies available to realize them [12]. To the best of our knowledge, no framework exists to support the selection of requirements quality assurance techniques. ## 3 Characterizing Requirements Quality Assurance The purpose of this taxonomy is to characterize the components that are involved in the process of achieving a particular requirements quality (RQ) level (Figure 1). This systematization then allows informed decisions to be taken about the effort and potential impact of RQ improvement.

Figure 1: Requirements quality assurance taxonomy

A _goal_ determines what the improvement of RQ should achieve. Typical goals could be to improve the communication between stakeholders, to improve the ability to verify the product, or to achieve better cost estimates. Different goals can also contradict each other. Goals are important as they provide a scope that limits the potential actions on the operational level to a set that is economically acceptable - this focuses quality assurance efforts on certain quality aspects within the available resources. _Quality attributes_ describe the favourable properties of a requirement. For example, unambiguity is commonly defined as the quality of a statement being interpretable in a unique way. Quality attributes for requirements have been described in numerous quality models, reviewed by Saavedra et al. [16]. Quality attributes are not independent, i.e. one attribute can positively or negatively influence another. Figure 2 provides an overview of RQ attributes and their relationships to each other. For example, atomicity positively influences design independence, traceability and precision of a requirement, as indicated by the (+) in Figure 2. On the other hand, unambiguous requirements, often achieved by higher formality, are generally also less understandable. Goals and quality attributes form the _conceptual level_ of the taxonomy. They can help to answer questions pertaining to why an improvement of RQ is necessary, and what quality attributes are associated with that goal. Taking the example from earlier, improving the ability to verify the product based on the stated requirements, one can see in Figure 2 that many quality attributes influence requirements verifiability.
Depending on constraints at the operational level, discussed next, one can decide how to reach the stated goal by choosing a set of quality attributes, which in turn are associated with operators. _Operator_ is the generic term we use for instruments that tangibly characterize quality attributes. An operator provides a definition of how a requirement is analysed w.r.t. the associated quality attribute. Examples of operators are metrics [8, 11], requirement smells [9], or rules and constraints on how to formulate requirements. An operator can be implemented by either a person or a computer program (or both). In either case, we want to characterize the operator by some notion of cost and accuracy, providing input for the decision on whether and how to realize the operator. We borrow the concept of _cognitive load_ from the field of instruction design, where cognitive load theory [18] is used to describe and improve learning efficiency. Each operator is associated with a level of intrinsic cognitive load, describing the complexity of applying the operator to a single requirement or a complete specification. For example, if the operator is the ambiguous adverbs requirements smell [9], then the intrinsic cognitive load is determined by the number of ambiguous terms one has to remember in order to detect these terms in the requirements text. Since cognitive load is additive [18], there are (individual) limits to the efficiency of applying operators; cognitive load is therefore one determinant of the effective cost of RQ assurance. If an operator is realized through machine-based processing of information, we characterize this realization by its _automation complexity_. Continuing with the example of ambiguous adverbs, the automation complexity of this operator is low as it can be implemented with a dictionary [9]. On the other hand, some of the requirements writing rules found at STA are rather complex. For example, one rule states that repetition of requirements shall be avoided and a reference to a general requirement shall be made (addressing redundancy). The detection of rule violations requires the analysis of the complete specification, identifying similarly phrased statements. While this is certainly possible (e.g. with code clone and plagiarism detection [5]), the analytical complexity is higher than for a dictionary lookup.
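To illustrate why the dictionary-based operator is cheap to automate, a sketch of such a lexical check follows; the word list is a small illustrative sample of our own, not the dictionary used in [9].

```python
# Illustrative dictionary of ambiguous adverbs; the actual list in [9] is larger.
AMBIGUOUS_ADVERBS = {"usually", "normally", "often", "quickly", "easily",
                     "approximately", "significantly"}

def ambiguous_adverbs_smell(requirement: str) -> list[str]:
    """Lexical operator: flag ambiguous adverbs in a single requirement."""
    tokens = requirement.lower().replace(",", " ").replace(".", " ").split()
    return [tok for tok in tokens if tok in AMBIGUOUS_ADVERBS]

print(ambiguous_adverbs_smell("The system shall usually respond quickly."))
# -> ['usually', 'quickly']
```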
## 4 Research Outline The taxonomy serves three main purposes which are outlined in this section, together with six research questions and our planned approaches to answer them. ### Prioritize quality attributes We have asked six requirements experts at STA to rank RQ attributes (definitions were extracted from the review by Saavedra et al. [16]) by their importance, using cumulative voting [2]. Figure 2 shows the five top and bottom attributes in green and orange respectively.

Figure 2: Quality attributes and their relationships (adapted from Saavedra et al. [16]; color coding and numbers are our addition, and used and explained in Section 4)

Individual quality attributes have been researched earlier, focusing on ambiguity, completeness, consistency and correctness [15]. While the perceived importance of completeness and correctness is matched by research on these attributes, ambiguity and consistency were ranked by the experts only at positions 13 and 16 respectively. At first sight, this might indicate that the research focus needs adjustment. However, taking into consideration the relationships between quality attributes, we see a moderate overlap between the needs at STA and past research. Nevertheless, there are certain quality attributes whose evaluation has seen little research, like traceability [15], while being important for STA since they affect verifiability and correctness. The relationships between quality attributes also inform us about potential inconsistencies among the goals of quality improvement. For example, design independence was ranked by STA's experts at position 21 while it affects verifiability, ranked at position 3. This could indicate that, while verifiability is important for STA, design independence as a related aspect has been overlooked as a means to achieve it. These examples show how the relationships between quality attributes can be used to analyse the goals of the company. However, since Saavedra et al. [16] deduced the relationships shown in Figure 2 by interpreting the quality models they reviewed, these dependencies need further empirical validation, leading to _RQ1: To what extent do requirements quality attributes affect each other?_ One approach to address this question, dependent on the answers to the questions in Section 4.2, would be to analyse the correlation between operators for different quality attributes. We plan to perform this analysis at STA, which in turn partially answers _RQ2: To what extent can quality attribute rankings be used for planning quality assurance activities?_ Further inquiries at STA are needed to identify factors that affect planning, such as timing (does quality attribute importance depend on the project phase?) and implementation cost. ### Determine operators and their accuracy At STA we have identified 110 _operators_ in the form of requirements writing rules. These rules describe how requirements shall be formulated and provide review guidelines. Table 1 shows five examples of writing rules. We have mapped, where the description allowed it, which quality attribute is primarily targeted by each rule. The numbers in Figure 2 indicate how many operators we identified for each quality attribute. Several quality attributes have no or very few associated operators, leading to the question _RQ3: Which quality attributes can be characterized by an operator?_ We plan to answer this question by systematically reviewing the literature, extending the work by Saavedra et al. [16], Pekar et al. [15], and Heck and Zaidman [13]. On the other hand, we have identified 110 operators at STA, leading to the questions _RQ4: How can NL processing be used to implement operators?_ and _RQ5: What is the accuracy of these operators in relation to state-of-practice?_ We estimate that 40-50% of the writing rules at STA can be implemented with current techniques, e.g. as proposed by Femmer et al. [9]. However, as indicated in the last column of Table 1, techniques to implement rules 4 and 5 still need to be determined. In addition, we plan to evaluate the practical benefits of machine-supported RQ assurance compared to the state-of-practice, i.e. manual quality assurance, at STA. ### Estimate cognitive load and automation complexity Applying all 110 operators to a specification consisting of thousands of requirements is a cognitively demanding task.
For deciding how to implement an operator, it would be useful to be able to estimate the cognitive load each operator will cause and the complexity of implementing the operator in a computer-based support system, leading to _RQ6: How can the cognitive load and automation complexity of an operator be estimated?_ Cognitive load could be approximated by a heuristic that describes whether the application of the operator requires domain knowledge or not, and to what extent context needs to be considered. Context could be defined as "local", referring to a single requirement, "regional", referring to a section or chapter in the specification, or "global", referring to the whole specification and beyond, e.g. regulations and standards. There also exist multiple approaches to measure cognitive load directly [4]. Automation complexity could be estimated by categorizing operators by the linguistic aspect they address. Operators that require semantic understanding are more complex than operators that require syntactic or lexical analyses of a requirement. The least complex operators are statistical, i.e. analyses that work with letter, word or sentence counts. Since, to the best of our knowledge, no such characterization of operators exists, we plan to collaborate with experts from both neuropsychology and linguistics to perform literature reviews and design experiments.

Table 1: Examples of requirements writing rules at STA

| Rule | Quality Attribute | Implementation |
| --- | --- | --- |
| 1. No time should be specified in the technical documents. Instead, refer to the Schedule document. | Non-redundant | Named entity extraction |
| 2. Figures, illustrations and tables should be consecutively numbered throughout the document, starting from ... | Organized | Document meta-data analysis |
| 3. Numbers "1-12" shall be written as shown in the following example, "to be at least two (2)." | Unambiguous | POS Tagging |
| 4. Terms such as "user", "dispatcher", "operator" should be used consistently. | Unambiguous | TBD |
| 5. If a functional requirement is supplemented by additional requirements to clarify fulfilment, these must be written as separate requirements. | Atomic | TBD |

## 5 Conclusion In this paper, we have proposed a requirements quality assurance taxonomy that, once the stated research questions are answered, forms the engine for a decision framework that allows companies to initiate or improve their requirements quality assurance program through (a) realizing the consequences of dependencies between quality attributes in their current manual quality assurance activities, (b) mapping cognitive load to the prioritized actions for quality assurance, and (c) enabling the decision on the trade-off between manual and machine-supported quality assurance, given the cost and accuracy of the choices.
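As a compact illustration of the two estimation heuristics sketched in Section 4.3, the categories could be encoded as follows; the scoring itself is our illustrative assumption, not a validated model.

```python
from enum import IntEnum

class Context(IntEnum):          # scope an operator must consider
    LOCAL = 1      # a single requirement
    REGIONAL = 2   # a section or chapter of the specification
    GLOBAL = 3     # the whole specification and beyond (regulations, standards)

class LinguisticLevel(IntEnum):  # proxy for automation complexity
    STATISTICAL = 1  # letter/word/sentence counts
    LEXICAL = 2      # dictionary lookups
    SYNTACTIC = 3    # e.g. POS tagging, parsing
    SEMANTIC = 4     # meaning-level analysis

def cognitive_load_score(needs_domain_knowledge: bool, context: Context) -> int:
    """Toy heuristic: wider context and required domain knowledge increase load."""
    return int(context) + (2 if needs_domain_knowledge else 0)
```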
2303.01295
Iterative Assessment and Improvement of DNN Operational Accuracy
Deep Neural Networks (DNN) are nowadays largely adopted in many application domains thanks to their human-like, or even superhuman, performance in specific tasks. However, due to unpredictable/unconsidered operating conditions, unexpected failures show up in the field, making the performance of a DNN in operation very different from the one estimated prior to release. In the life cycle of DNN systems, the assessment of accuracy is typically addressed in two ways: offline, via sampling of operational inputs, or online, via pseudo-oracles. The former is considered more expensive due to the need for manual labeling of the sampled inputs. The latter is automatic but less accurate. We believe that emerging iterative industrial-strength life cycle models for Machine Learning systems, like MLOps, offer the possibility to leverage inputs observed in operation not only to provide faithful estimates of a DNN's accuracy, but also to improve it through remodeling/retraining actions. We propose DAIC (DNN Assessment and Improvement Cycle), an approach which combines "low-cost" online pseudo-oracles and "high-cost" offline sampling techniques to estimate and improve the operational accuracy of a DNN in the iterations of its life cycle. Preliminary results show the benefits of combining the two approaches and integrating them in the DNN life cycle.
Antonio Guerriero, Roberto Pietrantuono, Stefano Russo
2023-03-02T14:21:54Z
http://arxiv.org/abs/2303.01295v1
# Iterative Assessment and Improvement of DNN Operational Accuracy ###### Abstract Deep Neural Networks (DNN) are nowadays largely adopted in many application domains thanks to their human-like, or even superhuman, performance in specific tasks. However, due to unpredictable/unconsidered operating conditions, unexpected failures show up in the field, making the performance of a DNN in operation very different from the one estimated prior to release. In the life cycle of DNN systems, the assessment of accuracy is typically addressed in two ways: offline, via sampling of operational inputs, or online, via pseudo-oracles. The former is considered more expensive due to the need for manual labeling of the sampled inputs. The latter is automatic but less accurate. We believe that emerging iterative industrial-strength life cycle models for Machine Learning systems, like MLOps, offer the possibility to leverage inputs observed in operation not only to provide faithful estimates of a DNN's accuracy, but also to improve it through remodeling/retraining actions. We propose DAIC (DNN Assessment and Improvement Cycle), an approach which combines "low-cost" online pseudo-oracles and "high-cost" offline sampling techniques to estimate and improve the operational accuracy of a DNN in the iterations of its life cycle. Preliminary results show the benefits of combining the two approaches and integrating them in the DNN life cycle. Deep Neural Networks, Accuracy assessment, Accuracy improvement ## I Introduction Nowadays, Machine Learning (ML) finds large adoption in various application domains. This trend is due to the ability of ML, in particular of Deep Neural Networks (DNN), to reach human-level effectiveness in many tasks [1, 2, 3]. The reliability of ML systems is usually measured in terms of _accuracy_. In the case of classification, the accuracy is computed as the number of correctly classified examples out of the total. The difficulty of automating the assessment of DNN accuracy still represents a threat to their application in critical domains. The main activities related to evaluating the accuracy and consequently improving the DNN are typically executed before its release into the execution environment. Metamorphic testing [4] and mutation testing [5, 6] represent the most common strategies to evaluate the robustness of the DNN and to forecast the reliability of these systems in the operational environment. However, the accuracy estimated before release can substantially diverge from the one obtained during operation (_operational accuracy_). Recht _et al._ demonstrated how the accuracy scores of classifiers can significantly drop when completely new data are submitted [7]. This problem worsens when unexpected phenomena occur in operation, such as _distribution shift_ or _label shift_ [8]. Iterative life cycles specific to DNNs - such as MLOps [9, 10] - have been envisaged by companies like Google. In these DNN life cycle models, development and operational stages are linked in a loop [11], aiming to assess and improve the accuracy of the DNN according to the operating conditions. In particular, they may exploit operational data for remodeling and/or retraining the DNN before the new deployment (_experimental stage_), and for both the automatic evaluation of the accuracy and the automatic re-training of the models in the field (_deployment stage_). The true labels of operational data collected by monitoring the DNN are generally unknown. This is a general issue in software testing, known as the _oracle problem_ [12].
For ML systems, according to Murphy _et al._, _"there is no reliable test oracle to indicate what the correct output should be for arbitrary input"_ [13]. The problem clearly also affects DNN accuracy estimation [14]. Two approaches to assess DNN operational accuracy are: 1. to automatically evaluate the correct classification of operational inputs by means of _pseudo-oracles_ (often in turn based on ML models), which may detect mispredictions based on various sources of knowledge; 2. to reduce the size of the operational dataset to be labelled, by proper statistical sampling of a few representative inputs. Pseudo-oracles do not need human intervention; however, they typically suffer from a high number of false positives [15], due to the probabilistic nature of the knowledge used to evaluate the output of the DNN under assessment. Sampling techniques may reduce but not avoid costly and time-consuming feedback from a human oracle; however, they avoid false positives, and provide more faithful estimates of the DNN operational accuracy. This paper proposes the DNN Assessment and Improvement Cycle (DAIC), integrating automatic assessment via pseudo-oracles and assessment via sampling. Its objectives are to provide faithful estimates of the operational accuracy of a DNN while reducing the cost of manual intervention, and to exploit the new labeled examples to take remodeling/retraining actions to improve the DNN accuracy. The preliminary results of experiments with the MNIST handwritten digits dataset [16] show that DAIC is effective in providing DNN accuracy estimates, by leveraging automatic pseudo-oracles to follow the accuracy of the DNN with unlabeled samples, and triggering the high-cost sampling-based assessment only when necessary to update estimates. Collected operational samples are then further exploited to improve the accuracy of the DNN in an iteration cycle. DAIC is robust to phenomena like label shift. The paper is structured as follows. Section II describes the techniques for the operational accuracy assessment of DNN. Section III introduces the DNN Assessment and Improvement Cycle; Section IV presents the preliminary results. Section V describes future plans; Section VI presents the conclusions. ## II Operational accuracy assessment of DNN ### _Assessment via pseudo-oracles_ Automatic pseudo-oracles are typically built using _cross referencing_ [17, 18, 19] based on the knowledge encoded into the training set. This knowledge is extracted through multiple implementations - diverse from each other (e.g., different ML models, or the same ML model but different architectures) - to perform a majority voting. These techniques are strongly affected by biases in the training set. When training data are not representative of the operational environment, the performance of such oracles degrades significantly. Other techniques have been proposed to extract knowledge from the training data to build automatic oracles, for instance, by using dedicated networks (ConfidNet [20] and autoencoders [21]) or exploiting features of the system under assessment itself (e.g. the output of internal layers [22]). Techniques considering only the training dataset and the DNN as knowledge to build a pseudo-oracle are particularly sensitive to deviations of the operational context from the pre-deployment one. Therefore, they are expected to perform poorly in the presence of phenomena like label shift [8, 23].
Supervised DNN algorithms face a label shift when the distribution of the labels of inputs changes with respect to training, even though everything else remains unchanged: in practice, unlabeled operational inputs are similar to training examples, and are thus classified by the DNN as per training, yet their actual class differs from the one learnt during training. For image classification problems, the ICOS oracle surrogate has been proposed to assess the accuracy of Convolutional Neural Networks after their release in operation [24]. ICOS extracts invariants from different sources of knowledge to evaluate unlabeled operational examples. Similarly to ICOS, we consider a pseudo-oracle - hereafter called DNN-OS - which exploits three different sources of knowledge (the operational domain, training data, and the DNN) to define three sets of invariants (_domain_, _data_, _model_), used to automatically evaluate the output of the DNN under assessment. An example of _domain invariant_ for an autonomous driving vehicle, assuming a street with a speed limit of \(50\)\(km/h\), is: \(fail\) :- \(speed\_limit=50\)\(km/h,accelerate=true,current\_speed=50\)\(km/h\). Such domain invariants allow the oracle to detect failures by looking at the output of the DNN and at its effect on the whole system. The usage of domain invariants makes DNN-OS robust against unexpected phenomena in operation, compared with state-of-the-art techniques. _Data_ and _model invariants_ can be automatically extracted from the training and validation datasets with an ML algorithm. These invariants look for patterns in the input data and the DNN, respectively, such as a subset of pixels (for _data_) or neurons (for the _model_) that always assume specific values when a failure occurs. Owing to these characteristics, the assessment via pseudo-oracle can be performed _online_, namely when the system is in operation. The automated oracle computes the accuracy on actual inputs. This estimate can be used to suggest to the testers whether corrective/improving actions are needed. The online assessment is characterized by a fixed cost for the "knowledge extraction" and parameter tuning of the pseudo-oracle algorithm, which occur only once. ### _Assessment via sampling_ The usage of sampling to reduce the cost of the manual labeling of operational examples has been explored in the recent literature [25, 26, 27, 28]. Some techniques are used to select a small data sample that accurately represents the population [25, 26, 27] to obtain a faithful estimate of the accuracy provided during operation. A representative sample would roughly contain the same proportion of examples causing misprediction as the operational dataset. However, the mere imitation of the expected input can be inefficient, especially with very accurate DNNs, because of the great effort of manually labeling correctly classified examples to get an acceptable estimate of the operational accuracy. This cost makes it evident that maximizing the sampling of examples related to wrong outputs, while still getting an unbiased estimate of the operational accuracy, is preferable. With DeepEST, Guerriero _et al._ [28] aim both to provide faithful estimates of the operational accuracy and to sample more failing examples (e.g. misclassifications), balancing the unequal sampling during the estimation process. This assessment strategy can be performed _offline_, namely when the monitored operational data are available together with the outputs of the DNN.
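To illustrate how unequal sampling can be balanced in the estimate, a standard Horvitz-Thompson-style estimator is sketched below. This is one textbook way to obtain an unbiased accuracy estimate under weighted sampling, not necessarily the exact estimator used by DeepEST.

```python
import numpy as np

def weighted_accuracy_estimate(correct, inclusion_prob, n_operational):
    """Unbiased accuracy estimate when labeled example i was sampled with
    probability inclusion_prob[i] (Horvitz-Thompson weighting)."""
    correct = np.asarray(correct, dtype=float)          # 1 = pass, 0 = fail
    weights = 1.0 / np.asarray(inclusion_prob, dtype=float)
    return float(np.sum(weights * correct) / n_operational)
```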
With these data, an estimate of the accuracy can be computed via sampling and manual labeling. The high cost of manually labeling the operational inputs is balanced by the possibility of using the labeled examples to take improving actions on the DNN under assessment. ## III DNN Assessment and Improvement Cycle A way to reduce the costs and maximize the benefits of the two approaches is to combine the online and offline assessments in a cycle, called the DNN Assessment and Improvement Cycle (DAIC). The idea is to have at each cycle a "low-cost" estimate of the accuracy provided through a pseudo-oracle, and to trigger a "high-cost" (but more faithful) offline sampling-based estimate only when the operational accuracy estimated by the automatic pseudo-oracle drops below a given threshold. Like MLOps, DAIC entails an _experimental_ and a _deployment stage_, with the following phases (Figure 1): 1. **Data Preprocessing**: in the starting phase of a cycle, the training and the verification datasets are updated considering new labeled examples (if available) and based on the accuracy estimate computed in the previous iteration. This phase updates the training set so as to better represent the operating conditions actually observed. 2. **Remodeling and Retraining**: the model is trained from scratch (in the first iteration or in case of re-modeling), or re-trained with the training set output by the Data Preprocessing phase. 3. **Model verification**: the accuracy of the trained DNN is computed prior to release (_verification accuracy_) on the verification dataset generated by the Data Preprocessing phase. 4. **Deploy**: the DNN is deployed into the execution environment, and put in operation. 5. **Monitoring**: (unlabeled) inputs to the DNN and the corresponding DNN outcomes are collected; additional information on operating conditions (input sources, user typologies, operational profile, etc.) may be collected, if available, to build domain invariants. 6. **Assessment via pseudo-oracle**: an automatic pseudo-oracle is used to classify each output of the DNN as _Pass_ or _Fail_. The oracle predictions are used to compute an estimate of the DNN accuracy in operation, called _predicted accuracy_. 7. **Evaluation**: when the predicted and the verification accuracy diverge, an offline assessment session is triggered (8); otherwise, the sampling-based assessment is skipped. 8. **Assessment via sampling**: a set of inputs is sampled and (manually) labeled, and an estimate of the operational accuracy is computed. The idea is to use the pseudo-oracle for a continuous evaluation of the operational accuracy provided by the DNN, so as to reduce the cost of manual labeling, retraining, and remodeling, performing them only when required.

Fig. 1: The proposed MLOps-like DNN Assessment and Improvement Cycle (DAIC)
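The Evaluation phase (7) reduces, in code, to a one-line predicate; the default thresholds below anticipate the values used in the experiments of Sec. IV and are otherwise arbitrary.

```python
def should_trigger_sampling(predicted_acc: float, verification_acc: float,
                            delta: float = 0.05, min_acc: float = 0.80) -> bool:
    """Phase 7 (Evaluation): trigger the offline sampling-based assessment when
    the pseudo-oracle estimate diverges from the verification accuracy, or
    drops below the minimum accuracy required for the DNN."""
    return (predicted_acc < verification_acc - delta) or (predicted_acc < min_acc)
```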
## IV Experiments ### _Accuracy assessment algorithms, datasets, open artifacts_ DAIC experiments have been conducted with two pseudo-oracles and one sampling-based accuracy assessment algorithm. The two pseudo-oracles are SelfChecker [22] (the automatic oracle that exploits the features of the DNN under test itself to evaluate its predictions), and DNN-OS. The sampling-based assessment algorithm is DeepEST [28]; it considers auxiliary variables, such as the _confidence_ of DNN predictions, to guide the sampling toward as many failing examples as possible and to balance the unequal sampling in the estimation. The dataset considered for the preliminary experiments is MNIST [16], a famous dataset for handwritten digit classification. In particular, \(1,000\) examples are considered for training, \(500\) for the verification set, and \(1,000\) unlabeled inputs (for each cycle) as the operational dataset. DNN-OS invariants are obtained as follows: * _Domain_ invariants: defined by domain experts about the input sources of the DNN. In particular, for MNIST, we assume that users insert input into three different forms requiring, respectively, digits without straight lines {0, 3, 6, 8, 9}, digits with straight lines {1, 4, 7} only, and the remaining digits {2, 5}. We define an invariant for each form. The output provided by the DNN for each operational input is checked against the set of possible digits expected for the source form. * _Data_ invariants: automatically extracted from the training data in the form of decision rules (_C4.5_ algorithm [29]) and filtered based on their confidence (\(C\geq 0.99\)) and support (\(S\geq 10\)). * _Model_ invariants: extracted from validation data with _Random Forest_, using the output of the neurons of the last layer as features. The sample size considered for DeepEST is \(500\), and the proportion of examples sampled randomly with respect to those with weighted sampling is set to \(0.5\). For independent verification or replication, the experimental code is available on GitHub at: [https://github.com/dessertlab/DAIC.git](https://github.com/dessertlab/DAIC.git). ### _Results_ DAIC is evaluated by running eight cycles, with five repetitions. The pseudo-oracle assessment is executed at each iteration. In the experiments, the triggering condition for the sampling-based assessment is: \(\{\)_predicted accuracy_\(<\) (_verification accuracy_ - \(0.05\))\(\}\) OR \(\{\)_predicted accuracy_\(<\) _minimum accuracy_\(\}\); that is, the _offline_ assessment is triggered when the difference between the accuracy estimated _online_ (_predicted accuracy_) and the accuracy estimated prior to release (_verification accuracy_) drops below a given threshold (here set to \(0.05\)), or when the _predicted accuracy_ falls below a _minimum accuracy_ required for the DNN (set to \(0.80\) in the experiments). When DeepEST is triggered, the set of newly labeled samples is sent to the Data Preprocessing phase, where they are integrated into the training and verification sets. The proportion between new and old samples in the training dataset may be varied according to the accuracy estimates of the last cycle(s). By default, both new and old samples are considered. Figures 2 and 3 show the average results and the confidence intervals over 5 repetitions of 8 DAIC iterations. Tables I and II provide the details for each cycle. The first three cycles represent the nominal conditions, namely when the training and validation sets faithfully represent the operational dataset. As expected, the operational accuracy computed with SelfChecker (Figure 2, and first three rows of Table I) and with DNN-OS (Figure 3, and first three rows of Table II) does not trigger sampling in the first three cycles. [...] high actual accuracy. In both cycles 7 and 8 the accuracy is correctly estimated by DNN-OS as greater than 0.8, avoiding the triggering of DeepEST. DNN-OS exhibits a single unnecessary trigger in cycle 6 (repetition 2), where it does not catch that the actual accuracy was already higher than the minimum.
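As an illustration of the MNIST domain invariants used in the setup above, each invariant amounts to a membership check per input form; a minimal sketch (our own encoding) follows.

```python
# Digits admissible for each input form (from the domain-invariant setup above).
FORM_DIGITS = {
    "no_straight_lines": {0, 3, 6, 8, 9},
    "straight_lines_only": {1, 4, 7},
    "remaining": {2, 5},
}

def domain_invariant_fail(form: str, predicted_digit: int) -> bool:
    """Flag a potential misprediction: the DNN output is a digit that the
    source form cannot contain."""
    return predicted_digit not in FORM_DIGITS[form]

print(domain_invariant_fail("straight_lines_only", 8))  # -> True (a failure)
```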
The proposed integration of pseudo-oracles and sampling techniques supports both the assessment and the improvement of the DNN accuracy. It helps engineers to leverage collected features in the operational environment to more faithfully evaluate and then specialize the DNN performing the task they need in the way they need. We plan to refine DAIC defining more sophisticated strategies for the automatic improvement of the DNN in the loop. Techniques like DeepEST can spot a high number of failing examples, which, along with operational features, can help improve the performance of DNN also in corner cases. An advancement is to integrate the automatic improvement both at the experimental and deployment stage. As shown in the preliminary results, it is rarely required to change a well-performing model in case of unexpected phenomena like label shift. Often, additional training or training from scratch, by incorporating the operational examples in the training set, may suffice to improve the operational accuracy. For this reason, in line with MLOps perspectives, strategies for the online aut-improvement of DNN can be based on the "probabilistic" output of the pseudo-oracles. Moreover, by automating data preprocessing, the offline re-training step can be run without human intervention. To a second extent, we plan to apply inferential engines on operational features to automatically extract operational constraints aiming to improve pseudo-oracle effectiveness in estimating the accuracy during the operation. A recent work from Google stresses the importance of incorporating domain knowledge as a set of rules to improve ML components accuracy [30]. The iterative assessment and the improvement of the accuracy of the DNN can be of interest beyond the experimented image classification domain. We plan to apply DAIC in industry-relevant domains like Autonomous Driving: for instance, to the throttle/braking/steering angle prediction, which are regression problems. ## VI Conclusions Preliminary results confirm that the accuracy computed before the release can be very different from the one achieved in operation by the DNN in presence of unexpected phenomena like label shift. However, the accuracy predicted by DNN-OS follows the actual accuracy with the operational data thanks the _domain_ invariants, triggering the assessment via sampling only when required. The estimates provided by DeepEST can be used to faithfully evaluate the accuracy provided in operation. The experimental results also showed that the performance of the DNN in operation can be sensibly increased thanks to the availability of the new labeled examples. ## Acknowledgment This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 871342 "uDEVOPS". It is also supported by the DIETI COSMIC project.
2304.05054
Lower- versus higher-order nonclassicalities for a coherent superposed quantum state
A coherent state is defined conventionally in different ways such as a displaced vacuum state, an eigenket of annihilation operator or as an infinite dimensional Poissonian superposition of Fock states. In this work, we describe a superposition $(ta+ra^\dagger)$ of field annihilation and creation operators acting on a continuous variable coherent state $|{\alpha}\rangle$ and specify it by $|\psi\rangle$. We analyze the lower- as well as the higher-order nonclassical properties of $|\psi\rangle$. The comparison is performed by using a set of nonclassicality witnesses (e.g., higher-order photon-statistics, higher-order antibunching, higher-order sub-Poissonian statistics, higher-order squeezing, Agarwal-Tara parameter, Klyshko's condition and a relatively new concept, matrix of phase-space distribution). It is found that higher-order criteria are much more efficient to detect the presence of nonclassicality as compared to lower-order conditions.
Deepak, Arpita Chatterjee
2023-04-11T08:21:57Z
http://arxiv.org/abs/2304.05054v1
# Lower- versus Higher-order nonclassicalities for a coherent superposed quantum state ###### Abstract A coherent state is defined conventionally in different ways, such as a displaced vacuum state, an eigenket of the annihilation operator, or an infinite dimensional Poissonian superposition of Fock states. In this work, we describe a superposition \((ta+ra^{\dagger})\) of field annihilation and creation operators acting on a continuous variable coherent state \(\ket{\alpha}\) and specify it by \(\ket{\psi}\). We analyze the lower- as well as the higher-order nonclassical properties of \(\ket{\psi}\). The comparison is performed by using a set of nonclassicality witnesses (e.g., higher-order photon statistics, higher-order antibunching, higher-order sub-Poissonian statistics, higher-order squeezing, the Agarwal-Tara parameter, Klyshko's condition and a relatively new concept, the matrix of phase-space distributions). It is found that higher-order criteria are much more efficient in detecting the presence of nonclassicality than lower-order conditions. pacs: 42.50.-p, 42.50.Ct, 42.50.Pq ## I Introduction The coherent state, a specific quantum state introduced by Glauber [1] using the harmonic oscillator algebra, has been a leading field of interest in the quantum optics and atom optics community for a number of reasons. For example, a coherent state can be used to solve the quantum mechanical problem of a harmonic oscillator acted on by a time-dependent force. In quantum theory, a coherent state can be employed to describe a wide range of physical systems, like the oscillating motion of a particle confined in a quadratic potential well, a state in a system for which the ground-state wave-packet is displaced from the origin of the system, etc. Another application of coherent states can be found in the context of the sensitivity limit imposed by quantum mechanics on detectors for gravitational radiation [2; 3]. There are many other fields of application of canonical coherent states, ranging from quantization to signal processing and image processing. In chemistry, linear superpositions of coherent states are used in order to construct multidimensional wavefunctions. In the field of biology, these can be used to describe the long-range forces between human blood cells and the long-range phase coherence in the bacteriorhodopsin macromolecules [4]. With the advent of quantum state engineering [5; 6; 7], quantum computing and communication ([8; 9] and references therein), a large number of theoretical as well as experimental strategies have been proposed for manufacturing and controlling various types of coherent states [10; 11]. Manipulation of a light field at the single-photon level provides a promising area for many important applications in quantum information science [12; 13]: for example, non-Gaussian two-mode entangled states are used for nonlocality tests [14] and entanglement distillation [15], and photon-added squeezed states have been suggested to improve the fidelity of continuous variable (CV) teleportation [16]. In particular, two elementary operations on a single-mode field (i.e., photon subtraction and addition, represented by the bosonic annihilation and creation operators \(a\) and \(a^{\dagger}\), respectively) can be employed to transform a field state into a desired one [17].
For example, Agarwal and Tara [18] proposed theoretically a non-Gaussian, non-classical state, which is intermediate between the coherent state \(\ket{\alpha}\) (most classical-like quantum state) and the number state \(\ket{n}\) (purely quantum state), by repeated application of the photon creation operator on the coherent state basis. The nonlinear coherent state or \(f\)-coherent state \(\ket{\alpha}_{f}\) was introduced by [19; 20] as an eigenstate of a deformed annihilation operator, \(A\ket{\alpha}_{f}=\alpha\ket{\alpha}_{f}\), where \(A=af(N)\), \(f(N)\) being a deformation function of the number operator \(N=a^{\dagger}a\), and also by the application of a deformed displacement operator upon the vacuum state, such as \(\ket{\alpha}_{D}=D_{D}(\alpha)\ket{0}\)[21]. Another idea was developed by Kim et al. [22] to implement a coherent superposition \(aa^{\dagger}+a^{\dagger}a\) of the two product operations \(aa^{\dagger}\) and \(a^{\dagger}a\). Later Lee and Nha considered a coherent superposition of photonic operations at a more elementary level; that is, the superposition of photon subtraction and addition, \(ta+ra^{\dagger}\), and investigated how it transforms a classical coherent state to a nonclassical one [23]. Furthermore, they introduced an interference set-up to realize this coherent operation in an optical experiment and employed it together with displacement operators to generate an arbitrary superposition of number states involving up to two photons. The superposition state \(c_{0}\ket{0}+c_{1}\ket{1}+c_{2}\ket{2}\) can be used for quantum information processing; for example, the nonlinear sign-shift (NS) gate (a basic element of the CNOT gate) [24], and the optimal estimation of the loss parameter of a bosonic channel [25]. We extend the concept of Lee and Nha by studying the higher-order nonclassical properties of a state generated by applying \(ta+ra^{\dagger}\) over the input \(\ket{\alpha}\). A quantum state is defined as nonclassical (i.e. a state having no classical analogue) if its Glauber-Sudarshan \(P\)-function has negative values. Unfortunately, except for a single proposal for the measurement of the \(P\)-function in a special case [26], there is no method for experimental determination of the \(P\)-function. Thus a number of feasible criteria for witnessing nonclassicality have been developed ([27; 28] and references therein). These nonclassicality witnesses can be expressed in terms of moments of annihilation and creation operators. If the moments include terms up to fourth order in \(a\) and \(a^{\dagger}\) (i.e. second-order correlations), the corresponding nonclassical feature is referred to as lower-order nonclassicality. As a consequence, higher-order nonclassicality is related to the conditions observed via higher-order correlations. The most frequently studied higher-order nonclassical features are higher-order antibunching (HOA) [29], higher-order sub-Poissonian photon statistics (HOSPS) [30], higher-order squeezing (HOS) of Hillery type [31] and Hong-Mandel type [32], etc. The experimental success in detecting higher-order nonclassicality and the fact that weaker nonclassicality not detected by lower-order criteria can be spotted by their higher-order counterparts have led to a large number of theoretical works in this direction [33; 34].
In fact, HOA has been reported in optomechanical and optomechanical-like systems [35], an optical coupler [36], the hyper Raman process [37] etc., HOSPS has been reported in the finite dimensional coherent state [38], photon added and subtracted squeezed coherent states [39] etc., and HOS has been reported in the finite dimensional coherent state [38] and a pair of anharmonic oscillators [40]. However, the fact that no effort (to the best of our knowledge) has been made so far to investigate the higher-order nonclassical properties of a superposed coherent state \(\left(ta+ra^{\dagger}\right)\left|\alpha\right\rangle\) has motivated us to work on it. We have also employed a very recent approach for certifying the nonclassical features of \(\left|\psi\right\rangle\) via correlations of phase-space distributions [41]. The paper is structured as follows: we describe the general theory for the superposed coherent state \(\left|\psi\right\rangle\) in Section II. The next section illustrates different higher-order nonclassical criteria for \(\left|\psi\right\rangle\) and the matrix of phase-space distributions of it. Section IV ends with a summary of the main results of this article. ## II General theory for our quantum state of interest In this section, we focus on a coherent superposition of elementary photonic operations, that is, the superposition of photon subtraction and addition \(ta+ra^{\dagger}\), where \(t\) and \(r\) are scalars with \(t^{2}+r^{2}=1\). Given a coherent state \(\left|\alpha\right\rangle\) as an input field, the superposed state can be described as [23] \[\left|\psi\right\rangle=N^{-1/2}(ta+ra^{\dagger})\left|\alpha\right\rangle, \tag{1}\] where \(N=\left[r^{2}+\left|\alpha\right|^{2}+rt(\alpha^{2}+\alpha^{*2})\right]\) is the normalization constant. The generation of the desired quantum operation \(ta+ra^{\dagger}\) involves proper sequencing of photon subtraction and photon addition operators, and then a coherent superposition of them by removing the which-path information [42]. An experimental scheme for generating this quantum operation is shown in Fig. 1. A high-transmissivity beam-splitter BS\({}_{1}\) is used here for photon subtraction. When an arbitrary input field \(\left|\psi\right\rangle\) is injected into a high-transmissivity beam-splitter with the other input in a vacuum mode, the detection of a photon in the photodetector implies that a single photon is subtracted from the initial state, due to the conservation of photon number. This corresponds to the action of \(a\left|\psi\right\rangle\), which holds well particularly when the transmissivity \(t_{1}\) of BS\({}_{1}\) is large enough [43]. A parametric down-converter (PDC) is used to add a photon. If the initial state is injected into the signal mode of a PDC with the idler mode in a vacuum state, the detection of a single photon at the output idler mode heralds that one photon has been added to the input state, due to the pairwise photon-creation and destruction mechanism of the PDC. This corresponds to the action \(a^{\dagger}\left|\psi\right\rangle\), which holds well particularly when the interaction strength in the PDC is small [44]. An additional beam-splitter BS\({}_{2}\) with transmissivity \(t_{2}\) and reflectivity \(r_{2}\) is used to erase the which-path information on the detected single photon. M is a highly reflective mirror and PD\({}_{1}\), PD\({}_{2}\) are the photodetectors, which detect the success of the addition or subtraction process in an optical path.
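For completeness, the normalization constant \(N\) quoted after Eq. (1) can be verified directly from \(\langle\psi|\psi\rangle=1\); a short check (our own rederivation), using \(aa^{\dagger}=1+a^{\dagger}a\), \(\langle\alpha|a^{\dagger}a|\alpha\rangle=|\alpha|^{2}\), \(\langle\alpha|a^{2}|\alpha\rangle=\alpha^{2}\) and \(t^{2}+r^{2}=1\): \[\begin{array}{lcl}\langle\psi|\psi\rangle&=&N^{-1}\langle\alpha|(ta^{\dagger}+ra)(ta+ra^{\dagger})|\alpha\rangle\\ &=&N^{-1}\left[t^{2}|\alpha|^{2}+r^{2}\left(1+|\alpha|^{2}\right)+rt\left(\alpha^{2}+\alpha^{*2}\right)\right]\\ &=&N^{-1}\left[r^{2}+|\alpha|^{2}+rt\left(\alpha^{2}+\alpha^{*2}\right)\right]=1.\end{array}\]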
The generation of a coherent superposed state can be described mathematically using standard operators for the various paths involved in the scheme. In Fig. 1, an arbitrary state \(\left|\psi\right\rangle\) is injected into the parametric down converter with small coupling strength \(\eta\ll 1\), which acts as \[e^{\left(-\eta a^{\dagger}c^{\dagger}+\eta ac\right)}\left|\psi\right\rangle_{a}\left|0\right\rangle_{c}\approx\left(1-\eta a^{\dagger}c^{\dagger}\right)\left|\psi\right\rangle_{a}\left|0\right\rangle_{c}\] Next, the state is incident upon a beam-splitter BS\({}_{1}\) (transmissivity \(t_{1}\approx 1\)). The resulting operation can be written as \[B_{1ab}(1-\eta a^{\dagger}c^{\dagger})\left|\psi\right\rangle_{a}\left|0\right\rangle_{b}\left|0\right\rangle_{c}\approx\left(1-\frac{r_{1}^{*}}{t_{1}}ab^{\dagger}\right)(1-\eta a^{\dagger}c^{\dagger})\left|\psi\right\rangle_{a}\left|0\right\rangle_{b}\left|0\right\rangle_{c}\] The second beam-splitter BS\({}_{2}\) with the transformations \(b^{\prime}=t_{2}b+r_{2}c\) and \(c^{\prime}=t_{2}^{*}c-r_{2}^{*}b\) is used to remove the path information and produce the superposition state. Here \(b\) and \(c\) (\(b^{\prime}\) and \(c^{\prime}\)) are the input (output) modes of the beam-splitter. Using the above relations, BS\({}_{2}\) yields \[B_{2bc}B_{1ab}(1-\eta a^{\dagger}c^{\dagger})\left|\psi\right\rangle_{a}\left|0\right\rangle_{b}\left|0\right\rangle_{c}\equiv\left\{1-\frac{r_{1}^{*}}{t_{1}}a(t_{2}^{*}b^{\dagger}+r_{2}^{*}c^{\dagger})\right\}\left\{1-\eta a^{\dagger}(t_{2}c^{\dagger}-r_{2}b^{\dagger})\right\}\left|\psi\right\rangle_{a}\left|0\right\rangle_{b}\left|0\right\rangle_{c}\] The detection of a single photon at PD\({}_{1}\) (PD\({}_{2}\)) and no photon at PD\({}_{2}\) (PD\({}_{1}\)) leads to the state \(\left(ta+ra^{\dagger}\right)\left|\psi\right\rangle\) with \(t\approx-\frac{r_{1}^{*}t_{2}^{*}}{t_{1}}\) \(\left(-\frac{r_{1}^{*}r_{2}^{*}}{t_{1}}\right)\) and \(r\approx\eta r_{2}\) \(\left(-\eta t_{2}\right)\). ## III Nonclassical features of the superposed state An arbitrary quantum state is termed nonclassical if its Glauber-Sudarshan \(P\)-function fails to be a classical probability distribution [1; 45]. That means a negative value of the \(P\)-function suggests that the state does not enjoy a classical status and can be considered a nonclassical one. Since there is no direct measurement for the \(P\)-function, many operational criteria, such as negative values of the Wigner function [46; 47], zeros of the \(Q\) function [48; 49], and several moment-based measures [27; 28], have been proposed for the identification of nonclassicality. Most of these conditions are one-sided only, in the sense that if a criterion is satisfied then the state is definitely nonclassical, but when the condition is not satisfied, one cannot conclude about the nature of the state. In this section, we discuss the nonclassicality behaviour of the state by using different criteria like the higher-order Mandel \(Q_{M}\) parameter, higher-order antibunching (HOA), higher-order sub-Poissonian photon statistics (HOSPS), higher-order squeezing (HOS) of Hong-Mandel type, and the Agarwal-Tara and Klyshko conditions. Since most of these experimentally measurable nonclassicality witnesses can be expressed in terms of the moments of annihilation and creation operators [50], it is beneficial to find an analytic expression for the most general moment \(\langle a^{\dagger m}a^{n}\rangle\), \(m\), \(n\) being non-negative integers.
For calculating \(\langle a^{\dagger m}a^{n}\rangle\), we proceed as follows: \[\begin{array}{lcl}aa^{\dagger p}&=&aa^{\dagger}a^{\dagger p-1}\\ &=&a^{\dagger p-1}+a^{\dagger}aa^{\dagger p-1}\\ &=&a^{\dagger p-1}+a^{\dagger}aa^{\dagger}a^{\dagger p-2}\\ &=&2a^{\dagger p-1}+a^{\dagger 2}aa^{\dagger p-2}\\ &=&\ldots\\ &=&pa^{\dagger p-1}+a^{\dagger p}a\;\;\mbox{(proceeding similarly $p$ times)}\end{array} \tag{2}\] Similarly we have, \[a^{p+1}a^{\dagger}=(p+1)a^{p}+a^{\dagger}a^{p+1} \tag{3}\] Using (2) and (3), we have obtained \[\begin{array}{lcl}&&aa^{\dagger p}a^{p}a^{\dagger}\\ &=&aa^{\dagger p}(pa^{p-1}+a^{\dagger}a^{p})\\ &=&paa^{\dagger p}a^{p-1}+aa^{\dagger p+1}a^{p}\\ &=&p(pa^{\dagger p-1}+a^{\dagger p}a)a^{p-1}+\left((p+1)a^{\dagger p}+a^{\dagger p+1}a\right)a^{p}\\ &=&p^{2}a^{\dagger p-1}a^{p-1}+(2p+1)a^{\dagger p}a^{p}+a^{\dagger p+1}a^{p+1}\end{array} \tag{4}\] Again using (2), (3) and (4), \(\langle a^{\dagger m}a^{n}\rangle\) can be derived as \[\begin{array}{lcl}\langle a^{\dagger m}a^{n}\rangle&=&\langle\psi|a^{\dagger m}a^{n}|\psi\rangle\\ &=&N^{-1}\left\langle\alpha\right|\left\{t^{2}a^{\dagger m+1}a^{n+1}+r^{2}aa^{\dagger m}a^{n}a^{\dagger}+rt\,a^{\dagger m+1}a^{n}a^{\dagger}+rt\,aa^{\dagger m}a^{n+1}\right\}\left|\alpha\right\rangle\\ &=&N^{-1}\alpha^{*m-1}\alpha^{n-1}\Big{[}|\alpha|^{4}+rt\Big{\{}(m+|\alpha|^{2})\alpha^{2}+(n+|\alpha|^{2})\alpha^{*2}\Big{\}}\\ &&+r^{2}\Big{\{}mn+(m+n+1)|\alpha|^{2}\Big{\}}\Big{]}\end{array} \tag{5}\] This analytic expression of \(\langle a^{\dagger m}a^{n}\rangle\), where \(m\) and \(n\) are non-negative integers, is of great help when we are calculating different moment-based witnesses of nonclassicality. Many other moments can be obtained from (5) as particular cases, e.g. 1. If \(\alpha\) is real then \(\langle a^{\dagger m}a^{n}\rangle\) reduces to a polynomial in \(\alpha\) given by \[\langle a^{\dagger m}a^{n}\rangle=N^{-1}\Big{[}(1+2rt)\alpha^{m+n+2}+\Big{\{}r^{2}(m+n+1)+rt(m+n)\Big{\}}\alpha^{m+n}+r^{2}mn\,\alpha^{m+n-2}\Big{]}\] 2. If \(m=n=l\) (say) and \(\alpha\) is complex then \(\langle a^{\dagger l}a^{l}\rangle=N^{-1}|\alpha|^{2(l-1)}\Big{[}|\alpha|^{4}+r^{2}\Big{\{}l^{2}+(2l+1)|\alpha|^{2}\Big{\}}+rt(l+|\alpha|^{2})(\alpha^{2}+\alpha^{*2})\Big{]}\) 3. If \(m=n=1\) and \(\alpha\) is real then \(\langle a^{\dagger m}a^{n}\rangle\) reduces to a polynomial given by \(\langle a^{\dagger}a\rangle=N^{-1}\Big{[}(2rt+1)\alpha^{4}+(3r^{2}+2rt)\alpha^{2}+r^{2}\Big{]}\)

Figure 1: (Color online) An illustration for the experimental proposal of \(ta+ra^{\dagger}\).

### Higher-order photon statistics The Mandel parameter \(Q_{M}\)[51] illustrates the nonclassicality of a quantum state through its photon number distribution. The introductory definition of \(Q_{M}\) can be generalized to an arbitrary order \(l\) as [52] \[Q_{M}^{(l)}=\frac{\langle(\Delta\mathcal{N})^{l}\rangle}{\langle a^{\dagger}a\rangle}-1, \tag{6}\] where \(\Delta\mathcal{N}=a^{\dagger}a-\langle a^{\dagger}a\rangle\) is the dispersion in the number operator \(\mathcal{N}=a^{\dagger}a\). Using the identity [52] \[\langle(\Delta\mathcal{N})^{l}\rangle=\sum_{k=0}^{l}\binom{l}{k}(-1)^{k}\langle(a^{\dagger}a)^{l-k}\rangle\langle a^{\dagger}a\rangle^{k}\] and [53] \[(a^{\dagger}a)^{r}=\sum_{n=0}^{r}S_{r}^{(n)}a^{\dagger n}a^{n}, \tag{7}\] where \(S_{r}^{(n)}\) is the Stirling number of second kind [54] \[S_{r}^{(n)}=\frac{1}{n!}\sum_{j=0}^{n}(-1)^{n-j}\binom{n}{j}j^{r}, \tag{8}\] the higher-order Mandel parameter \(Q_{M}^{(l)}\) can be evaluated explicitly up to order \(l\).
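Since every nonclassicality witness used below reduces to the moments in (5), the comparison is straightforward to reproduce numerically. The following is a minimal sketch (our own illustration; the helper names are ours, \(t=\sqrt{1-r^{2}}\) is taken real, and \(\alpha\neq 0\) is assumed) implementing Eqs. (5)-(8):

```python
import numpy as np
from math import comb, factorial

def moment(m, n, alpha, r):
    """Normal-ordered moment <a^{†m} a^{n}> of |psi>, Eq. (5); alpha != 0 assumed."""
    t = np.sqrt(1.0 - r**2)
    N = r**2 + abs(alpha)**2 + r*t*(alpha**2 + np.conj(alpha)**2)   # normalization
    bracket = (abs(alpha)**4
               + r*t*((m + abs(alpha)**2)*alpha**2 + (n + abs(alpha)**2)*np.conj(alpha)**2)
               + r**2*(m*n + (m + n + 1)*abs(alpha)**2))
    return np.conj(alpha)**(m - 1) * alpha**(n - 1) * bracket / N

def stirling2(r_, n_):
    """Stirling number of the second kind S_r^{(n)}, Eq. (8)."""
    return sum((-1)**(n_ - j) * comb(n_, j) * j**r_ for j in range(n_ + 1)) // factorial(n_)

def mandel_Q(l, alpha, r):
    """Higher-order Mandel parameter Q_M^{(l)}, Eqs. (6)-(8)."""
    navg = moment(1, 1, alpha, r).real
    photon_moment = lambda p: sum(stirling2(p, k) * moment(k, k, alpha, r).real
                                  for k in range(p + 1))            # <(a†a)^p>, Eq. (7)
    dNl = sum(comb(l, k) * (-1)**k * photon_moment(l - k) * navg**k for k in range(l + 1))
    return dNl / navg - 1.0                                         # <(ΔN)^l>/<a†a> - 1

# Example: mandel_Q(2, 0.25, 0.2) < 0 signals sub-Poissonian statistics.
```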
The negativity of \(Q_{M}^{(2)}\) signifies the negativity of the conventional Mandel \(Q_{M}\). All expectations in (6) have been calculated with the help of (5). Negative values of the \(Q_{M}^{(l)}\) parameter essentially indicate the negativity of the \(P\) function and hence it gives a witness for nonclassicality. For all \(l\geq 2\), the photon number distribution is Poissonian if \(Q_{M}^{(l)}=0\), whereas \(Q_{M}^{(l)}>0\) and \(Q_{M}^{(l)}<0\) correspond to the super-Poissonian and sub-Poissonian cases, respectively. In Fig. 2, a comparison between lower- (\(l=2\)) and higher-order (\(l=3,\,5\)) Mandel \(Q_{M}^{(l)}\) is shown with respect to the state parameter \(\alpha\) and for different values of \(r\). When \(l=2\) and \(r=0.2,\,(0.38,\,0.94)\), the state \(|\psi\rangle\) has a \(Q_{M}^{(l)}\) parameter value of -1 corresponding to \(\alpha\approx 0.25,\,(0.6,\,0.8)\), respectively, which indicates that the state becomes most nonclassical for those values. With the increase in \(r\) values, the superposed state \(|\psi\rangle\) exceeds the Poissonian limit (\(Q_{M}^{(l)}=0\)) for larger \(\alpha\). The lower-order \(Q_{M}^{(l)}\) eventually becomes super-Poissonian if \(\alpha\) increases further. As \(l\) changes from 2 to 3 and then to 5 [cf. Figs. 2(b), (c)], keeping \(\alpha\) small (\(\leq 0.4\)), \(|\psi\rangle\) initially demonstrates nonclassicality for a short range of \(\alpha\). Then if \(\alpha\) crosses 1, the higher-order plot has a sudden fall and \(Q_{M}^{(l)}\) remains negative. That means the higher-order \(Q_{M}^{(l)}\) performs better in detecting the nonclassicality and provides an enhanced sub-Poissonian characteristic for a specific choice of \(\alpha\). In Fig. 3, \(Q_{M}^{(l)}\) is plotted as a function of \(r\) and for \(\alpha=0.25\) and \(1.1\). This figure also supports that the higher-order Mandel \(Q\) can identify the nonclassicality when \(\alpha\geq 1\) but the lower-order one cannot. We have observed that \(Q_{M}^{(l)}\) behaves similarly even if \(\alpha\) is a complex quantity [cf. 3(c)]. The presence of higher-order nonclassicality while its lower-order counterpart is absent confirms the relevance of the present study. ### Higher-order antibunching Different well-known criteria for detecting higher-order nonclassicality can be expressed in compact forms for the superposed state described in (1). In this subsection, we focus on higher-order antibunching. The concept of HOA, by using the theory of majorization, was introduced by Lee [55]. Later it was modified by Pathak and Garcia [29] to provide a clear physical meaning and a simpler expression. The \((l-1)\)-th order antibunching is observed in a quantum state if it satisfies the following condition: \[d(l-1)=\langle a^{\dagger l}a^{l}\rangle-\langle a^{\dagger}a\rangle^{l}\ <\ 0 \tag{9}\] Since the negativity of \(d(l-1)\) indicates that the probability of photons coming bunched is less than that of them coming independently, the nonclassicality feature (9) typifies how suitable the state \(|\psi\rangle\) is as a single photon resource.
Now \[\begin{array}{lcl}d(l-1)&=&\langle a^{\dagger l}a^{l}\rangle-\langle a^{\dagger}a\rangle^{l}\\ &=&N^{-1}|\alpha|^{2(l-1)}\Big{[}|\alpha|^{4}+r^{2}\Big{\{}l^{2}+(2l+1)|\alpha|^{2}\Big{\}}+rt(l+|\alpha|^{2})(\alpha^{2}+\alpha^{*2})\Big{]}\\ &&-\Big{\{}N^{-1}\Big{[}|\alpha|^{4}+r^{2}(1+3|\alpha|^{2})+rt(1+|\alpha|^{2})(\alpha^{2}+\alpha^{*2})\Big{]}\Big{\}}^{l}\end{array} \tag{10}\] The signature of lower-order antibunching can be obtained as a special case of (10) for \(l=2\), while for \(l\geq 3\), the negative values of \(d(l-1)\) correspond to higher-order antibunching of order \((l-1)\). In Fig. 4, the variation of lower- as well as higher-order antibunching is shown with respect to \(\alpha\). All the plots exhibit that the state is antibunched for the specific parametric values chosen here. Also Figs. 4(b) and 4(c) show that the depth of nonclassicality of the superposed state \(|\psi\rangle\) increases with the order of antibunching. This fact is consistent with the earlier observations [36; 39] that the higher-order criteria are more effective in detecting weaker nonclassicality. It is also observed that the state is more antibunched for a relatively large value of \(r\). ### Higher-order sub-Poissonian photon statistics Higher-order sub-Poissonian photon statistics is an important feature that affirms the existence of higher-order nonclassicality of a radiation field. The lower-order antibunching and sub-Poissonian photon statistics are closely connected, as the presence of the latter ensures the possibility of observing the former. But recently these two phenomena have been proved to be independent of each other [36; 39]. It is also reported that higher-order antibunching and sub-Poissonian photon statistics can exist irrespective of whether their lower-order counterparts exist or not [38]. The generalized criterion for observing the \((l-1)\)-th order sub-Poissonian photon statistics (for which \(\langle(\Delta\mathcal{N})^{l}\rangle<\langle(\Delta\mathcal{N})^{l}\rangle|_{\text{Poissonian}}\)) is given by [56] \[\mathcal{D}_{h}(l-1)=\sum_{e=0}^{l}\sum_{f=1}^{e}S_{2}(e,f)\,{}^{l}C_{e}\,(-1)^{e}\,d(f-1)\,\langle a^{\dagger}a\rangle^{l-e}\ <\ 0 \tag{11}\] where \(S_{2}(e,f)=\frac{1}{f!}\sum_{r=0}^{f}(-1)^{f-r}\,{}^{f}C_{r}\,r^{e}\) is the Stirling number of the second kind, and \({}^{l}C_{e}\) is the usual binomial coefficient. The analytic expression of HOSPS for the superposed state can be obtained by substituting (5) in (11). \(\mathcal{D}_{h}(l-1)\) is plotted in Fig. 5 with respect to \(\alpha\) and for different values of \(l\) and \(r\). The figure confirms the presence of sub-Poissonian photon statistics for \(l=2\) and higher-order sub-Poissonian photon statistics for \(l>2\). With changing \(l\), the behavior of HOSPS is analogous to that of HOA; that is, the depth of the nonclassicality witness increases as its order increases. Further, it can be seen that as \(r\) grows from \(0.2\) to \(0.94\), the lower- as well as higher-order sub-Poissonian characteristics of the superposed state \(|\psi\rangle\) are always decreasing.

Figure 4: (Color online) Comparison of \(d(l-1)\) for different values of \(r\) and (a) \(l=2\), (b) \(l=3\) and (c) \(l=5\), respectively.
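Both witnesses of this and the preceding subsection reduce to the same moments; continuing the numerical sketch given after Eq. (8) (it reuses `moment`, `stirling2` and the `comb` import defined there), Eqs. (9)-(11) can be evaluated as:

```python
def d_antibunch(l, alpha, r):
    """HOA witness d(l-1) = <a^{†l} a^{l}> - <a†a>^l, Eq. (9); negative => antibunched."""
    return moment(l, l, alpha, r).real - moment(1, 1, alpha, r).real**l

def hosps(l, alpha, r):
    """HOSPS witness D_h(l-1), Eq. (11); negative => (l-1)-th order sub-Poissonian."""
    navg = moment(1, 1, alpha, r).real
    return sum(stirling2(e, f) * comb(l, e) * (-1)**e * d_antibunch(f, alpha, r)
               * navg**(l - e)
               for e in range(l + 1) for f in range(1, e + 1))
```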
### Higher-order squeezing For a coherent state, being the minimum uncertainty state, the product of the fluctuations in the two field quadratures is minimum and the fluctuations in each quadrature are equal. For lower-order squeezing, the variance in one of the field quadratures (defined by a linear combination of annihilation and creation operators) reduces below the coherent state limit at the cost of enhanced fluctuation in the other quadrature. The idea of higher-order squeezing originated in the pioneering work of Hong and Mandel [57]. According to them, \(l\)-th order squeezing (\(l>2\)) is obtained when the \(l\)-th order moment of a field quadrature operator is less than the corresponding coherent state value. The Hong-Mandel criterion for higher-order squeezing can be described by the following inequality \[S(l)=\frac{\langle(\Delta X)^{l}\rangle-\left(\frac{1}{2}\right)_{\left(\frac{l}{2}\right)}}{\left(\frac{1}{2}\right)_{\left(\frac{l}{2}\right)}}\ <\ 0, \tag{12}\] where \((x)_{l}\) is the conventional Pochhammer symbol and the quadrature variable is defined as \(X=\frac{1}{\sqrt{2}}(a+a^{\dagger})\). The inequality in (12) can also be rewritten as \[\langle(\Delta X)^{l}\rangle\ <\ \left(\frac{1}{2}\right)_{\left(\frac{l}{2}\right)}=\frac{1}{2^{\frac{l}{2}}}(l-1)!!, \tag{13}\] with \[\langle(\Delta X)^{l}\rangle=\sum_{r=0}^{l}\sum_{i=0}^{\lfloor r/2\rfloor}\sum_{k=0}^{r-2i}(-1)^{r}\frac{(2i-1)!!}{2^{\frac{l}{2}}}\,{}^{l}C_{r}\,{}^{r}C_{2i}\,{}^{r-2i}C_{k}\,\langle a^{\dagger}+a\rangle^{l-r}\langle a^{\dagger k}a^{r-2i-k}\rangle, \tag{14}\] where \(l\) is an even number and \[n!!=\left\{\begin{array}{ll}n(n-2)(n-4)\ldots 4\cdot 2&\mbox{if $n$ is even},\\ n(n-2)(n-4)\ldots 3\cdot 1&\mbox{if $n$ is odd}.\end{array}\right.\] The analytic expression for the Hong-Mandel type HOS can be obtained by using (1) in (12)-(14). Fig. 6 illustrates the existence of Hong-Mandel type HOS in the superposed state \((ta+ra^{\dagger})\left|\alpha\right\rangle\), assuming \(\alpha\) to be real, for different orders of squeezing (\(l=\) 2, 4, 6). Unlike the other nonclassical features discussed so far, lower-order squeezing provides a better result than the higher orders. For different values of \(r\), lower-order squeezing (\(S(l)\) for \(l=2\)) is detected throughout the range of \(\alpha\). But as far as HOS is concerned, nonclassical behavior can be noticed only for higher values of \(\alpha\). With an increase in the order of squeezing, the state displays nonclassicality only for still larger \(\alpha\). The dependence of lower-order squeezing on the phase \(\phi\) of the coherent state parameter \(\alpha=\left|\alpha\right|e^{i\phi}\), taking \(\left|\alpha\right|=1\), is also described here [cf. Fig. 6(d)].

Figure 6: (Color online) Hong-Mandel type higher-order squeezing \(S(l)\) as a function of coherent state amplitude \(\alpha\), \(\alpha\) real, for different values of \(r\) and (a) \(l=2\), (b) \(l=4\) and (c) \(l=6\), respectively, (d) lower-order squeezing as a function of phase \(\phi\) of the displacement parameter \(\alpha=|\alpha|e^{i\phi}\), \(|\alpha|=1\).
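The Hong-Mandel witness is evaluated the same way; the sketch below (again reusing `moment` and `comb` from the earlier snippet; `dfact` is our own double-factorial helper, and the form of the sum (14) as reconstructed above is assumed) implements Eqs. (12)-(14) for even \(l\):

```python
def dfact(n):
    """Double factorial n!!, with the convention (-1)!! = 0!! = 1."""
    return 1 if n <= 0 else n * dfact(n - 2)

def hos_S(l, alpha, r):
    """Hong-Mandel squeezing witness S(l), Eq. (12), for even l, via the sum (14)."""
    a_plus_adag = 2.0 * moment(0, 1, alpha, r).real          # <a† + a> = 2 Re <a>
    dXl = 0j
    for rr in range(l + 1):
        for i in range(rr // 2 + 1):
            for k in range(rr - 2 * i + 1):
                dXl += ((-1) ** rr * dfact(2 * i - 1) / 2 ** (l / 2)
                        * comb(l, rr) * comb(rr, 2 * i) * comb(rr - 2 * i, k)
                        * a_plus_adag ** (l - rr)
                        * moment(k, rr - 2 * i - k, alpha, r))
    coherent = dfact(l - 1) / 2 ** (l / 2)                   # (1/2)_{l/2}, Eq. (13)
    return (dXl.real - coherent) / coherent
```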
### \(Q\) function A direct phase space description of a quantum mechanical system is not possible due to the uncertainty principle. This fact leads to the construction of quasiprobability distributions, which are very useful in quantum mechanics as they provide a quantum classical correspondence and facilitate the calculation of quantum mechanical averages in close analogy to classical phase space averages [58]. One such quasiprobability distribution is the \(Q\) function, and zeros of this function are a signature of nonclassicality [48]. The \(Q\) function is defined as \[Q=\frac{1}{\pi}\left\langle\beta\right|\rho\left|\beta\right\rangle, \tag{15}\] where \(\left|\beta\right\rangle\) is the usual coherent state. This can be calculated as \[\begin{array}{lll}Q&=&\frac{1}{\pi}\left\langle\beta|\rho|\beta\right\rangle\\ &=&\frac{1}{\pi}N^{-1}|\left\langle\beta|(ta+ra^{\dagger})|\alpha\right\rangle|^{2}\\ &=&\frac{1}{\pi}N^{-1}|t\alpha+r\beta^{*}|^{2}\,e^{-\left|\alpha\right|^{2}-\left|\beta\right|^{2}+\alpha\beta^{*}+\alpha^{*}\beta}\end{array} \tag{16}\] The zeros of the Husimi \(Q\) function in (16) can be found when \(t\alpha+r\beta^{*}=0\), that means \(r=\frac{\alpha}{\sqrt{\alpha^{2}+\beta^{*2}}}\). Incidentally, the quasiprobability distribution fails to capture the nonclassical features of the superposed state which are already exhibited by different moment based criteria. From Fig. 7, it is understood that the values of \(r\) as well as the coherent state parameter \(\alpha\) have only a marginal effect on the Husimi \(Q\) function.

Figure 7: (Color online) \(Q\) as a function of \(\beta\) for different values of \(\alpha\) and \(r\) such as (a) \(\alpha=0.02\), \(r=0.2\), (b) \(\alpha=0.72\), \(r=0.38\) and (c) \(\alpha=1.32\), \(r=0.94\), respectively. Contour plots of the \(Q\) function with the same parametric values are given in (d), (e), (f).

### Matrix of phase-space distributions Testing the nonclassical features of a physical system is a key challenge in quantum physics. Besides its fundamental importance, the notion of nonclassicality provides the basis for many applications in photonic quantum technology and quantum information [59]. Nonclassicality is, for example, a resource in quantum networks [60], quantum metrology [61], boson sampling [62], or distributed quantum computing [63]. A very recent way of revealing nonclassical effects is by using the matrix of phase-space distributions. The condition [41] \[\det(M)=Q(\beta_{1})Q(\beta_{2})-e^{-|\beta_{2}-\beta_{1}|^{2}/2}\,Q\left(\frac{\beta_{1}+\beta_{2}}{2}\right)^{2}\,<\,0 \tag{17}\] certifies nonclassical light, when the correlations from \(Q\) functions at different points in phase space fall below the classical limit zero. For the superposed state \(|\psi\rangle\), \(Q(\beta)\) is zero when \(\beta^{*}=-\alpha\frac{t}{r}\). Assuming \(\beta_{1}=-\alpha^{*}\frac{t}{r}\), the inequality (17) yields \[\begin{array}{lll}\det(M)&=&-e^{-\frac{1}{2}\left|\beta_{2}+\alpha^{*}\frac{t}{r}\right|^{2}}\,Q\left(\frac{\beta_{2}-\alpha^{*}\frac{t}{r}}{2}\right)^{2}\\ &=&-\frac{1}{16\pi^{2}N^{2}}\,e^{-\frac{1}{2}\left|\beta_{2}+\alpha^{*}\frac{t}{r}\right|^{2}}\left|t\alpha+r\beta_{2}^{*}\right|^{4}e^{-2\left|\frac{2\alpha-\beta_{2}+\alpha^{*}\frac{t}{r}}{2}\right|^{2}}\end{array} \tag{18}\] Thus \(\det(M)\) is always less than zero, and equals zero iff \(\frac{\beta_{2}-\alpha^{*}\frac{t}{r}}{2}=-\alpha^{*}\frac{t}{r}\), which gives \(\beta_{2}=-\alpha^{*}\frac{t}{r}=\beta_{1}\). Thus this special case of the phase-space matrix approach confirms the nonclassicality of \(|\psi\rangle\).
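Both phase-space tests are direct to evaluate numerically; a small self-contained sketch (our own illustration) of Eqs. (16)-(17), using the simplification \(e^{-|\alpha|^{2}-|\beta|^{2}+\alpha\beta^{*}+\alpha^{*}\beta}=e^{-|\alpha-\beta|^{2}}\):

```python
import numpy as np

def Q_husimi(beta, alpha, r):
    """Husimi Q of Eq. (16); the Gaussian factor is rewritten as exp(-|alpha-beta|^2)."""
    t = np.sqrt(1.0 - r**2)
    N = r**2 + abs(alpha)**2 + r*t*(alpha**2 + np.conj(alpha)**2)
    return (abs(t*alpha + r*np.conj(beta))**2
            * np.exp(-abs(alpha - beta)**2) / (np.pi * N)).real

def detM(b1, b2, alpha, r):
    """Phase-space-matrix witness det(M), Eq. (17); negative values certify nonclassicality."""
    return (Q_husimi(b1, alpha, r) * Q_husimi(b2, alpha, r)
            - np.exp(-abs(b2 - b1)**2 / 2) * Q_husimi((b1 + b2) / 2, alpha, r)**2)

# Example: with beta1 at the zero of Q, det(M) < 0 for any b2 != b1,
# e.g. alpha, r = 0.72, 0.38; b1 = -np.conj(alpha)*np.sqrt(1 - r**2)/r.
```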
### Agarwal-Tara criterion Agarwal and Tara introduced a moment based criterion to witness the nonclassical characteristics of a given quantum state [64]. They defined \(A_{3}\), which consists of the moments of the number distribution \(\mu_{j}\) and the normal ordered moments \(m_{j}\). The analytic expression of \(A_{3}\) in terms of these higher ordered moments is [65] \[A_{3}=\frac{\det\,m^{(3)}}{\det\,\mu^{(3)}-\det\,m^{(3)}}\,\,<\,0, \tag{19}\] where \[m^{(3)}=\left(\begin{array}{ccc}1&m_{1}&m_{2}\\ m_{1}&m_{2}&m_{3}\\ m_{2}&m_{3}&m_{4}\end{array}\right)\] and \[\mu^{(3)}=\left(\begin{array}{ccc}1&\mu_{1}&\mu_{2}\\ \mu_{1}&\mu_{2}&\mu_{3}\\ \mu_{2}&\mu_{3}&\mu_{4}\end{array}\right)\] The matrix elements are defined by \(m_{j}=\langle a^{\dagger j}a^{j}\rangle\) and \(\mu_{j}=\langle(a^{\dagger}a)^{j}\rangle\). The parameter \(A_{3}\) is zero for a coherent state (classical state) and -1 for a Fock state (most nonclassical state), respectively. Thus for a nonclassical state, \(A_{3}\) is negative and bounded by the value -1 when the state becomes maximally nonclassical. In order to investigate the nonclassicality of the superposed state in terms of \(A_{3}\), we plot the corresponding results in Fig. 8. Here \(A_{3}\) varies between 0 and -0.008 with respect to \(\alpha\) and thus depicts the presence of nonclassicality. Also, for higher values of \(r\), the depth of the nonclassicality increases, which is consistent with the results obtained by the different moment based criteria. ### Klyshko's criterion Klyshko introduced a criterion to witness the nonclassicality property of a quantum state by using only three successive photon-number probabilities [66]. If \(p_{m}=\langle m|\rho|m\rangle\) is the photon-number probability of a state having density matrix \(\rho\), then Klyshko's inequality can be written as \[B(m)=(m+2)p_{m}p_{m+2}-(m+1)(p_{m+1})^{2}\ <\ 0 \tag{20}\] Using \(p_{m}=N^{-1}\frac{e^{-|\alpha|^{2}}}{m!}|\alpha|^{2(m-1)}|t\,\alpha^{2}+r\,m|^{2}\), the detailed expression for \(B(m)\) is \[B(m)=-\frac{e^{-2|\alpha|^{2}}|\alpha|^{4m}}{N^{2}m!(m+1)!}r^{2}\Big{[}r^{2}(2m^{2}+4m+1)+2rt(m+1)(\alpha^{2}+\alpha^{*2})+t^{2}(\alpha^{4}+\alpha^{*4})\Big{]} \tag{21}\] The advantage of Klyshko's criterion over any other existing moment based criterion is that a very small amount of information is required. In this criterion, we need only the photon number distribution \(p_{n}\) for three successive values of \(n\). The negative values of \(B(m)\) serve as the witness of nonclassicality here. For fixed values of \(\alpha\) and \(r\), we observe that \(B(m)\) is always negative, which signifies the existence of nonclassical photon statistics. It can be visualized that \(B(m)\) becomes most negative around \(m=3\). Fig. 10 describes a comparative plot for all the seven criteria studied so far, as a function of \(r\) and \(\alpha\). The first figure presents the domain of nonclassicality for the lower-order (\(l=2\)) criteria while the next one is for the higher-order (\(l=4\)) conditions.
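These last two witnesses also reduce to the same moments; a sketch (reusing `moment`, `stirling2`, `np` and `factorial` from the first snippet; \(\alpha\neq 0\) assumed) of Eqs. (19)-(21):

```python
def agarwal_tara_A3(alpha, r):
    """A_3 of Eq. (19) from m_j = <a^{†j} a^{j}> and mu_j = <(a†a)^j>."""
    m = [1.0] + [moment(j, j, alpha, r).real for j in (1, 2, 3, 4)]
    mu = [1.0] + [sum(stirling2(j, k) * moment(k, k, alpha, r).real for k in range(j + 1))
                  for j in (1, 2, 3, 4)]
    hankel = lambda v: np.array([[v[0], v[1], v[2]],
                                 [v[1], v[2], v[3]],
                                 [v[2], v[3], v[4]]])
    det_m, det_mu = np.linalg.det(hankel(m)), np.linalg.det(hankel(mu))
    return det_m / (det_mu - det_m)

def klyshko_B(m, alpha, r):
    """Klyshko witness B(m) of Eq. (20), built from p_m of the superposed state."""
    t = np.sqrt(1.0 - r**2)
    N = r**2 + abs(alpha)**2 + r*t*(alpha**2 + np.conj(alpha)**2)
    p = lambda n: (np.exp(-abs(alpha)**2) / factorial(n) * abs(alpha)**(2*(n - 1))
                   * abs(t*alpha**2 + r*n)**2 / N).real
    return (m + 2) * p(m) * p(m + 2) - (m + 1) * p(m + 1)**2   # always < 0 by Eq. (21)
```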
## IV Conclusion In conclusion, we have introduced a quantum state by applying a combination of the two operators \(a\) and \(a^{\dagger}\) to a coherent state \(|\alpha\rangle\). The scalars \(t\) and \(r\) act as control parameters for manipulating the nonclassical character of the output state. We have focused on the higher-order nonclassical features of the state. In the present work, a schematic diagram is presented to realize the superposed operation \(ta+ra^{\dagger}\). Then the quantum state \(|\psi\rangle\) is formed by operating \(ta+ra^{\dagger}\) on a coherent state \(|\alpha\rangle\). A set of various measurement techniques is used here to check the existence of higher-order nonclassicality in the superposed state. It is found that the higher-order Mandel \(Q_{M}\) parameter can identify the nonclassicality in a certain range of the state parameter \(\alpha\) but the lower-order one cannot. The same is true for HOA. Further, it is observed that the probability of getting a bunch of photons decreases as \(r\) increases. Another higher-order nonclassicality phenomenon, HOSPS, is found in accordance with HOA. But in the case of squeezing, the superposed state depicts the lower-order property while the corresponding HOS is absent. The dependence of lower-order squeezing on the phase parameter is also displayed. In addition, the nonclassical nature is also investigated through the quasiprobability \(Q\) function, the Agarwal-Tara \(A_{3}\) parameter and Klyshko's criterion. All these measures (except the Husimi \(Q\)) can detect nonclassicality. The phase-space-matrix approach, which incorporates nonclassicality tests based on negativities of the phase-space distributions, is also applied to show the nonclassical nature of the superposed state. It is also clarified from the figures that the amount of nonclassicality increases with the control parameter \(r\). **ACKNOWLEDGEMENT** Deepak's work is supported by the Council of Scientific and Industrial Research (CSIR), Govt. of India (Award no. 09/1256(0006)/2019-EMR-1). **DISCLOSURES** The authors declare no conflicts of interest.
2305.12601
Simply typed convertibility is TOWER-complete even for safe lambda-terms
We consider the following decision problem: given two simply typed $\lambda$-terms, are they $\beta$-convertible? Equivalently, do they have the same normal form? It is famously non-elementary, but the precise complexity - namely TOWER-complete - is lesser known. One goal of this short paper is to popularize this fact. Our original contribution is to show that the problem stays TOWER-complete when the two input terms belong to Blum and Ong's safe $\lambda$-calculus, a fragment of the simply typed $\lambda$-calculus arising from the study of higher-order recursion schemes. Previously, the best known lower bound for this safe $\beta$-convertibility problem was PSPACE-hardness. Our proof proceeds by reduction from the star-free expression equivalence problem, taking inspiration from the author's work with Pradic on "implicit automata in typed $\lambda$-calculi". These results also hold for $\beta\eta$-convertibility.
Lê Thành Dũng Nguyên
2023-05-21T23:24:22Z
http://arxiv.org/abs/2305.12601v4
# Simply typed convertibility is Tower-complete even for safe Lambda-terms ###### Abstract. We consider the following decision problem: given two simply typed \(\lambda\)-terms, are they \(\beta\)-convertible? Equivalently, do they have the same normal form? It is famously non-elementary, but the precise complexity - namely Tower-complete - is lesser known. One goal of this short paper is to popularize this fact. Our original contribution is to show that the problem stays Tower-complete when the two input terms belong to Blum and Ong's safe \(\lambda\)-calculus, a fragment of the simply typed \(\lambda\)-calculus arising from the study of higher-order recursion schemes. Previously, the best known lower bound for this safe \(\beta\)-convertibility problem was PSpace-hardness. Our proof proceeds by reduction from the star-free expression equivalence problem, taking inspiration from the author's work with Pradic on "implicit automata in typed \(\lambda\)-calculi". These results also hold for \(\beta\eta\)-convertibility. Key words and phrases: non-elementary complexity, safe \(\lambda\)-calculus. Thanks to Anupam Das, Damiano Mazza and Noam Zeilberger for many instructive discussions on the topic of complexity of normalization for typed \(\lambda\)-calculi. The author was supported by the LambdaComb project (ANR-21-CE48-0017) while working at École Polytechnique and by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "Investissements d'Avenir" operated by the French National Research Agency (ANR).
2310.07904
From Realizability Modulo Theories to Synthesis Modulo Theories Part 1: Dynamic approach
Reactive synthesis is the process of using temporal logic specifications in LTL to generate correct controllers, but its use has been restricted to Boolean specifications. Recently, a Boolean abstraction technique allows translating LTL_T specifications that contain literals in theories into equi-realizable LTL specifications. However, no synthesis procedure exists yet. In synthesis modulo theories, the system to synthesize receives valuations of environment variables in a first-order theory T and outputs valuations of system variables from T. In this paper, we address how to synthesize a full controller using a combination of the static Boolean controller obtained from the Booleanized LTL specification together with dynamic queries to a solver that produces models of satisfiable existential formulae from T. This is the first method that realizes reactive synthesis modulo theories.
Andoni Rodríguez, Cesar Sanchez
2023-10-11T21:21:47Z
http://arxiv.org/abs/2310.07904v1
# From Realizability Modulo Theories to Synthesis Modulo Theories ###### Abstract Reactive synthesis is the process of using temporal logic specifications in _LTL_ to generate correct controllers, but its use has been restricted to Boolean specifications. Recently, a Boolean abstraction technique allows translating \(\mathit{LTL}_{\mathcal{T}}\) specifications that contain literals in theories into equi-realizable _LTL_ specifications. However, no synthesis procedure exists yet. In synthesis modulo theories, the system to synthesize receives valuations of environment variables in a first-order theory \(\mathcal{T}\) and outputs valuations of system variables from \(\mathcal{T}\). In this paper, we address how to synthesize a full controller using a combination of the static Boolean controller obtained from the _Booleanized LTL_ specification together with dynamic queries to a solver that produces models of satisfiable existential formulae from \(\mathcal{T}\). This is the first method that realizes reactive synthesis modulo theories. ## Introduction Reactive synthesis is the problem of automatically producing a system that models a given temporal specification, where the Boolean variables (i.e., atomic propositions) are split into variables controlled by the environment and variables controlled by the system. Realizability is the related decision problem of deciding whether such a system exists. These problems have been widely studied [10], especially in the domain of Linear Temporal Logic (_LTL_) [11]. Realizability corresponds to an infinite game where players alternately choose the valuations of the Boolean variables they control. A specification is realizable if and only if the system has a strategy such that the specification is satisfied in all plays played according to the strategy. The synthesis process is produced from a winning system strategy. Both reactive synthesis and realizability are decidable for _LTL_[10]. _LTL_ modulo theories (\(\mathit{LTL}_{\mathcal{T}}\)) is the extension of _LTL_ where Boolean atomic propositions can be literals from a (multi-sorted) first-order theory \(\mathcal{T}\). Realizability of \(\mathit{LTL}_{\mathcal{T}}\) specifications is decidable under certain conditions over \(\mathcal{T}\), shown in [1] using a Boolean abstraction or _Booleanization_ method that translates specifications in \(\mathit{LTL}_{\mathcal{T}}\) into equi-realizable _LTL_ formulae, which means that the original specification in \(\mathit{LTL}_{\mathcal{T}}\) is realizable if and only if the produced Boolean LTL specification is realizable, and vice versa. Note that an \(\mathit{LTL}_{\mathcal{T}}\) reactive specification splits the theory variables into environment controlled and system controlled variables that can appear in a single literal, while _LTL_ Boolean atoms belong fully to either player. In this paper, we propose a general method that uses procedures to dynamically produce outputs as the results of computing models of existential \(\mathcal{T}\) formulae. Concretely, the method we propose statically receives an \(\mathit{LTL}_{\mathcal{T}}\) specification \(\varphi\), Booleanizes \(\varphi\) using [1] and synthesizes a controller \(S\) using standard methods. Then, dynamically, \(S\) is combined with a tool that can produce models of satisfiable \(\mathcal{T}\) formulae (e.g., an SMT solver); the two collaborate in tandem at each step of the execution. To guarantee that the reaction is produced at every step, we require that \(\mathcal{T}\) has an efficient procedure to provide models of existential fragments of \(\mathcal{T}\).
Our approach does not guarantee termination using semi-decidable \(\mathcal{T}\). We also use an additional component, called the partitioner, which discretizes the environment \(\mathcal{T}\)-input providing a suitable input for the Boolean controller (but this is computed statically). To the best of our knowledge, this is the first decidable reactive synthesis procedure for \(\mathit{LTL}_{\mathcal{T}}\) specifications. ## Preliminaries Boolean abstraction. For this paper, we assume the reader is familiar with LTL [10], \(\mathit{LTL}_{\mathcal{T}}\)[1] and reactive synthesis [12]. The Boolean abstraction procedure takes an input formula \(\varphi_{\mathcal{T}}\) with literals \(l_{i}\) and produces a new specification \(\varphi_{\mathcal{B}}=\varphi[l_{i}\gets s_{i}]\wedge\square\varphi^{\mathit{extra}}\), where \(s_{i}\) are fresh Boolean variables and \(\varphi^{\mathit{extra}}\in\mathbb{B}\). The core of the algorithm is the additional subformula \(\varphi^{\mathit{extra}}\), which uses the freshly introduced variables \(s_{i}\)--controlled by the system--as well as additional Boolean variables \(\overline{e}_{k}\) controlled by the environment, and captures that, for each possible \(\overline{e}_{k}\), the system has the power to choose a response among a specific \(s_{i}\). The extra requirement captures precisely the finite collection of input decisions of the environment (partitions of the environment space of valuations) and the resulting (finite) choices of the system to respond (partitions of the system choices that result in the same Boolean valuations of the literals). Motivating running example. As an example of reactive specifications in \(\text{LTL}_{\mathcal{T}}\), let \(\square\) be the usual _globally_ operator in LTL and \(\bigcirc\) the _next_ operator. Consider \(\varphi_{\mathcal{T}}=\square(R_{0}\wedge R_{1})\) as the running example for the paper, where \[R_{0}:(x<2)\rightarrow\bigcirc(y>1)\qquad\qquad R_{1}:(x\geq 2)\rightarrow(y<x)\] In \(\varphi_{\mathcal{T}}\), \(x\in\mathcal{T}\) belongs to the environment and \(y\in\mathcal{T}\) belongs to the system. Note that \(\varphi_{\mathcal{T}}\) is not realizable for \(\mathcal{T}=\mathcal{T}_{\mathbb{Z}}\), since, if at a given time instant \(t\), the environment plays \(x=0\), and hence \((x<2)\) holds, then \(y\) must be greater than \(1\) at time \(t+1\). Then, if at \(t+1\) the environment plays \(x=2\), then \((x\geq 2)\) holds but there is no \(y\) such that both \((y>1)\) and \((y<2)\). However, for \(\mathcal{T}=\mathcal{T}_{\mathbb{R}}\), \(\varphi_{\mathcal{T}}\) is realizable (consider the system strategy to always play \(y=1.5\)). The Boolean abstraction method transforms \(\varphi_{\mathcal{T}}\) into a purely Boolean specification \(\varphi_{\mathbb{B}}\) that allows to perform automatic _LTL_ realizability checking. For instance, for \(\mathcal{T}=\mathcal{T}_{\mathbb{Z}}\), the Booleanized version of \(\varphi_{\mathcal{T}}\) is the following: \[\varphi_{\mathbb{B}}=\varphi^{\prime\prime}\wedge\square(\varphi^{\text{legal}}\rightarrow\varphi^{\text{extra}}),\] where \(\varphi^{\text{legal}}\) encodes that \(e_{0}\), \(e_{1}\) and \(e_{2}\) characterize a partition of the input decisions of the environment: \((e_{0}\lor e_{1}\lor e_{2})\wedge(e_{0}\rightarrow\neg(e_{1}\lor e_{2}))\wedge(e_{1}\rightarrow\neg(e_{0}\lor e_{2}))\wedge(e_{2}\rightarrow\neg(e_{0}\lor e_{1}))\).
Also \(\varphi^{\prime\prime}=\square((s_{0}\rightarrow\bigcirc s_{1})\wedge(\neg s_{0}\rightarrow s_{2}))\) is a direct translation of \(\varphi_{\mathcal{T}}\), where \(s_{0}\) abstracts the literal \((x<2)\), \(s_{1}\) abstracts \((y>1)\) and \(s_{2}\) abstracts \((y<x)\). Solely replacing literals with fresh system variables over-approximates the power of the system, therefore we need an additional formula \(\varphi^{\text{extra}}\) that encodes the original power of each player in \(\varphi_{\mathcal{T}}\): \[\varphi^{\text{extra}}:\left(\begin{array}{ccc}&\big{(}e_{0}&\rightarrow&s_{01\overline{2}}\lor s_{0\overline{1}2}\big{)}\\ \wedge&\big{(}e_{1}&\rightarrow&s_{\overline{0}1\overline{2}}\lor s_{\overline{0}\,\overline{1}2}\big{)}\\ \wedge&\big{(}e_{2}&\rightarrow&s_{\overline{0}12}\lor s_{\overline{0}\,\overline{1}2}\big{)}\end{array}\right),\] where \(e_{0},e_{1},e_{2}\in\mathbb{B}\) belong to the environment and where \(s_{01\overline{2}}=(s_{0}\wedge s_{1}\wedge\neg s_{2})\), \(s_{0\overline{1}2}=(s_{0}\wedge\neg s_{1}\wedge s_{2})\), \(s_{\overline{0}1\overline{2}}=(\neg s_{0}\wedge s_{1}\wedge\neg s_{2})\), \(s_{\overline{0}12}=(\neg s_{0}\wedge s_{1}\wedge s_{2})\) and \(s_{\overline{0}\,\overline{1}2}=(\neg s_{0}\wedge\neg s_{1}\wedge s_{2})\), where \(s_{0},s_{1},s_{2}\in\mathbb{B}\) belong to the system. The sub-formulae \(s_{01\overline{2}}\), \(s_{0\overline{1}2}\), \(s_{\overline{0}1\overline{2}}\), \(s_{\overline{0}12}\) and \(s_{\overline{0}\,\overline{1}2}\) represent the _choices_ of the system; that is, given a decision \(e_{k}\) of the environment, the system can _react_ with one of the choices \(c_{i}\) in the disjunction implied by \(e_{k}\). Note that \(\varphi^{\text{legal}}\) encodes that \(e_{0},e_{1},e_{2}\) is a (finite) partition of the domain of the (infinite) valuations of the environment, where \(e_{0}\) abstracts its decision \(x\) such that \((x<2)\), \(e_{1}\) represents \(x\) such that \((x=2)\) and \(e_{2}\) represents \((x>2)\). Note that if the considered \(\mathcal{T}\) is different, \(\varphi_{\mathbb{B}}\) may also differ. ## Description of the Approach For synthesis modulo theories it is not enough to synthesize a controller for the Booleanized \(\mathit{LTL}_{\mathcal{T}}\) specifications, because the actual controller will receive inputs in \(\mathcal{T}\) from the environment and produce outputs from complex values in \(\mathcal{T}\). For instance, consider a specification where the environment controls an integer variable \(x\) and the system controls an integer variable \(y\) in the specification \(\varphi^{\mathcal{T}}=\square(y>x)\). In this paper we propose a general alternative approach, shown in Fig. 1, which we call _dynamic LTL\({}_{\mathcal{T}}\) synthesis_. This method consists of statically computing a Boolean controller for \(\varphi^{\mathbb{B}}\) (which has been Booleanized from \(\varphi^{\mathcal{T}}\)), and dynamically combining it with a method to provide models of formulae in \(\mathcal{T}\). At runtime, at each instant of time, (1) given the valuations \([\overline{x}\leftarrow\overline{v}]\) of the environment (where \(\overline{v}\) are actual input values for each environment variable \(x\in\mathcal{T}\)), then (2) the partitioner discretizes this valuation generating a Boolean input for the Boolean controller; (3) the controller responds with a choice \(c_{i}\in\mathbb{B}\) (which corresponds to a verdict on the Boolean valuations of literals in the formula). Our controller still needs to produce actual values of the output variables that make the verdict of the literals be as in \(c_{i}\), for which a formula of the form \(\exists\overline{y}\).
\(c_{i}^{\mathcal{T}}(\overline{y})\) is generated (where \(c_{i}^{\mathcal{T}}(\overline{y})\) is the \(\mathcal{T}\) formula that contains one conjunct per literal, and the input variables replaced by their values). This formula represents all the values that the system controls that result in the choice \(c_{i}\) that the Boolean controller has output. By the correctness of the Booleanization process, this formula must be satisfiable. Stage (4), called the provider, uses an SMT solver to produce a model \(\overline{w}\) of \(\exists\overline{y}\). \(c_{i}^{\mathcal{T}}(\overline{y})\), so \([\overline{y}\leftarrow\overline{w}]\) will guarantee the original specification \(\varphi^{\mathcal{T}}\). Note that we replace \(\overline{x}\) by the input valuation \(\overline{v}\) in \(c_{i}^{\mathcal{T}}(\overline{y})\), so \(c_{i}^{\mathcal{T}}(\overline{y})\) only has \(\overline{y}\) as variables. Execution. We now illustrate, using the running example \(\varphi_{\mathcal{T}}\), how the dynamic approach behaves in practice. Specification \(\varphi_{\mathcal{T}}\) is unrealizable for \(\mathcal{T}_{\mathbb{Z}}\), but a slight modification makes it realizable. If we replace \((y<x)\) with \((y\leq x)\) we obtain \(\varphi^{\prime}_{\mathcal{T}}=\square(R_{0}\wedge R_{1}^{\prime})\), where: \[R_{0}:(x<2)\rightarrow\bigcirc(y>1)\qquad\qquad R_{1}^{\prime}:(x\geq 2)\rightarrow(y\leq x)\] Specification \(\varphi^{\prime}_{\mathcal{T}}\) is realizable in \(\mathcal{T}_{\mathbb{Z}}\) (consider the strategy of the system to always play \(y=2\)). The Booleanized version of \(\varphi^{\prime}_{\mathcal{T}}\) is \(\varphi^{\prime}_{\mathcal{B}}=\varphi^{\prime\prime}\wedge\square([(e_{0}\lor e_{1})\wedge(e_{0}\leftrightarrow\neg e_{1})]\rightarrow\varphi^{\text{extra}\prime})\), where \(\varphi^{\prime\prime}=\square((s_{0}\rightarrow\bigcirc s_{1})\wedge(\neg s_{0}\rightarrow s_{2}))\) and \(\varphi^{\text{extra}\prime}\) is: \[\varphi^{\text{extra}\prime}:\left(\begin{array}{ccc}&\big{(}e_{0}&\rightarrow&\big{(}s_{01\overline{2}}\lor s_{0\overline{1}2}\big{)}\big{)}\\ \wedge&\big{(}e_{1}&\rightarrow&\big{(}s_{\overline{0}12}\lor s_{\overline{0}1\overline{2}}\lor s_{\overline{0}\,\overline{1}2}\big{)}\big{)}\end{array}\right),\] where \(e_{0},e_{1}\in\mathbb{B}\) belong to the environment and represent \((x<2)\) and \((x\geq 2)\), respectively. Note that in \(\varphi^{\prime}_{\mathbb{B}}\) there are no separate \(e_{k}\) for \((x=2)\) and \((x>2)\). We show a concrete execution in Tab. 1, where we see how the \(\mathcal{T}\)-controller responds to a few \(\mathcal{T}\)-inputs. For instance, in the first step, the input \(x=5\) is discretized into the Boolean decision \(e_{1}\), which is passed to the Boolean controller. The controller responds \(s_{\overline{0}12}=\neg s_{0}\wedge s_{1}\wedge s_{2}\) to this input, which is translated into \(s_{\overline{0}12}^{\mathcal{T}}=\neg(x<2)\wedge(y>1)\wedge(y\leq x)\). Figure 1: The dynamic synthesis architecture.
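Concretely, one step of stages (2)-(4) can be sketched as follows; this is a minimal hypothetical illustration (our own code, with the z3 SMT solver playing the role of the provider), not the authors' implementation:

```python
from z3 import Int, IntVal, Solver, And, Not, sat

def partitioner(x_val):
    # Discretize the T-input (T = T_Z here): e0 abstracts x < 2, e1 abstracts x >= 2.
    return 'e0' if x_val < 2 else 'e1'

def provider(choice_T, x_val):
    # Solve "exists y. c_i^T(y)[x <- v]" and extract a witness for the system output y.
    y, x = Int('y'), IntVal(x_val)
    s = Solver()
    s.add(choice_T(y, x))
    assert s.check() == sat            # guaranteed by correctness of the Booleanization
    return s.model()[y]

# The choice returned by the Boolean controller in the step above,
# s_{0bar 1 2}: not(x < 2) and (y > 1) and (y <= x).
choice = lambda y, x: And(Not(x < 2), y > 1, y <= x)

print(partitioner(5))                  # 'e1', fed to the Boolean controller
print(provider(choice, 5))             # some y with 1 < y <= 5, e.g. 2
```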
Alternative definitions for LTL modulo theories [1] have been developed for finite traces, but allowing temporal operators within predicates, which makes the logic undecidable. Similar undecidability is reported in [1]. Other approaches (e.g., [1, 13, 14]) restrict expressivity whether temporal-wise, theory-wise or both. Some works (e.g., [1, 1, 15, 16] consider synthesis or realizability of first-order theories, but none of them offers termination guarantees and they only consider some temporal fragments. Our approach guarantees termination of the computation of the controller if the used theory is decidable in the \(\exists^{*}\forall^{*}\) fragment and runtime guarantees termination in each timestep if the SMT solver supports the theory. Moreover, all these approaches above adapt one specific technique and implement it in a monolithic way, whereas Boolean abstraction allows us the construct the general dynamic architecture, since it generates an equalizable (Boolean) LTL specification. Note that our dynamic approach benefits from all advantages of using synthesis from Boolean abstractions and is fully automatic (unlike [15]). Temporal Stream Logic (TSL) [16] extends LTL with complex data that can be related accross time and [16, 17, 18] use extensions of TSL to theories. Again, realizability (and thus synthesis) is undecidable in all these works. In comparison, our Boolean abstraction cannot relate values accross time but provides a decidable synthesis procedure. Also, TSL is undecidable already for safety, the theory of equality and Presburger arithmetic. More precisely, TSL is only known to be decidable for three fragments (see Thm. 7 in [16]). Conclusion.We have studied the problem of \(\mathit{LTL}_{\mathcal{T}}\) synthesis which is more challenging than \(\mathit{LTL}_{\mathcal{T}}\) realizability modulo theories, since synthesis implies computing a system that receives valuations in \(\mathcal{T}\) and provides valuations in \(\mathcal{T}\). We propose an _dynamic_ approach that first discretizes the input from the environment, then uses a Boolean controller synthesized from the Booleanized specification of \(\mathit{LTL}_{\mathcal{T}}\), and finally produces a reaction using a procedure that provides models of existential formulae of \(\mathcal{T}\).
2306.14720
Experimental study of underwater explosions below a free surface: bubble dynamics and pressure wave emission
The current work experimentally studies the complex interaction between underwater explosion (UNDEX) bubbles and a free surface. We aim to reveal the dependence of the associated physics on the key factor, namely, the dimensionless detonation depth $\gamma$ (scaled by the maximum equivalent bubble radius). Four typical bubble behavior patterns are identified with the respective range of $\gamma$: (i) bubble bursting at the free surface, (ii) bubble jetting downward, (iii) neutral collapse of the bubble, and (iv) quasi-free-field motion. By comparison of the jet direction and the migration of the bubble centroid, a critical value of $\gamma$ is identified beyond which the effects of the free surface on UNDEX bubbles can be ignored. Good agreements are obtained between the experimental data and the unified theory for bubble dynamics by Zhang et al. Additionally, the dependence of the pressure signals in the flow field on $\gamma$ is investigated, including the peak pressure, impulse, and energy dissipation in the UNDEX.
Ming-Kang Li, Shiping Wang, Shuai Zhang, Hemant Sagar
2023-06-26T14:13:50Z
http://arxiv.org/abs/2306.14720v1
# Experimental study of underwater explosions below a free surface: bubble dynamics and pressure wave emission ###### Abstract The current work experimentally studies the complex interaction between underwater explosion (UNDEX) bubbles and a free surface. We aim to reveal the dependence of the associated physics on the key factor, namely, the dimensionless detonation depth \(\gamma\) (scaled by the maximum equivalent bubble radius). Four typical bubble behavior patterns are identified with the respective range of \(\gamma\): (i) bubble bursting at the free surface, (ii) bubble jetting downward, (iii) neutral collapse of the bubble, and (iv) quasi-free-field motion. By comparison of the jet direction and the migration of the bubble centroid, a critical value of \(\gamma\) is identified beyond which the effects of the free surface on UNDEX bubbles can be ignored. Good agreements are obtained between the experimental data and the unified theory for bubble dynamics by Zhang et al. [1]. Additionally, the dependence of the pressure signals in the flow field on \(\gamma\) is investigated, including the peak pressure, impulse, and energy dissipation in the UNDEX. ## I Introduction Underwater explosion (UNDEX) plays a vital role in the national defense field [2]. However, there are many fundamental problems to be solved regarding UNDEX. Generally, attention in previous studies has focused on the shock wave, the bubble pulse, and the evolution of the water surface. The shock wave is characterized by a high peak but a short duration [3; 4; 5], and usually causes local damage to a floating structure. The bubble pulsation process is more complex and is highly dependent on the boundary condition [6]. Contrary to the shock wave, the bubble pulse is characterized by a lower pressure magnitude but a longer duration. The impulse of the bubble pulse is thought to be at the same magnitude level as that of the shock wave [2]. When the bubble's oscillation frequency matches the natural frequency of the marine structure, a violent resonant response may be excited [7; 8], resulting in significant structural damage. The pulsation of a bubble in an infinite fluid field can be well predicted by various analytical models [9; 10; 11], such as the Rayleigh-Plesset model [12], the Keller-Miksis model [13], etc. When the bubble is generated in a vortex, it assumes complex evolution patterns. Zhang et al. [14] revealed the influence of viscosity, surface tension and buoyancy on vortex bubble entrainment and provided new insights into the control of vortex bubble entrainment. When the bubble is initiated near a boundary, the mutual interaction between the oscillating bubble and the boundary, the so-called Bjerknes effect, renders the bubble shape aspherical [15; 16; 17; 18]. Usually, a strong jet [19] drives the bubble closer to or farther from the boundary. Zhang et al. [20] originally proposed the multiple vortex ring model and discovered the mechanism of a toroidal bubble splitting near a rigid boundary. It has been confirmed by various experiments [21; 22; 23] and numerical simulations [24; 25; 26] that the Bjerknes force drives the bubble away from the boundary when the bubble is located beneath the water surface. It was pointed out by Wang et al. [27] that the repulsive force is derived from a stagnation point along the symmetry axis between the top of the bubble and the free surface when the bubble contracts.
According to Bernoulli's principle, a high-pressure region exists at the stagnation point, which redirects the incoming flow in the downward direction. It has also been discussed in Ref. [28] that this jet may also result from the combined effect of a shorter Rayleigh time at the bubble top (higher curvature at the bubble top leads to faster collapse) [29] and the higher universal driving force above the bubble [30]. The variation of the relative strength of this Bjerknes force and buoyancy causes the bubble to exhibit different behavioral patterns. Many numerical studies have been conducted to investigate the interaction between the oscillatory bubble and the free surface, including bubble shape evolution at different buoyancy parameters \(\delta\) and standoff distances \(\gamma\)[27; 31], the dynamics of two bubbles along the axis [25], bubble and free surface dynamics in shallow underwater explosions [32], etc. Apart from these, some experiments have been conducted for validation [33; 34; 35; 36] as well as to investigate the phenomena that are hard for numerical simulations to clarify: bubble migration over multiple bubble oscillation cycles [37], the interaction of the bubble and the free surface when the bubble inception position is extremely close to the free surface [22], etc. There are generally three kinds of experimental methods to study bubble dynamics, namely: the laser-induced bubble [38], the spark-generated bubble [38], and the underwater explosion (UNDEX) bubble [39]. The laser-induced bubble has an ideal spherical shape during growth, and often these bubbles were investigated at microscale [40]. The spark-generated bubble [41] is the most widely used method as an alternative for investigating UNDEX bubble dynamics, owing to its convenience, safety, economy, and ability to study high-pressure bubble dynamics under some non-dimensional parameters, such as the standoff parameter \(\gamma\) and the buoyancy parameter \(\delta\), at reduced ambient pressure. However, it has been analyzed by Hung et al. [42] that the products of the spark-generated bubble come from the dissolved air and water, which will disintegrate into the surrounding water when the bubble collapses due to the high pressure. That results in a reduction in the energy of the bubble compared with the UNDEX bubble, especially after one bubble cycle. The pressure induced by the first shock wave cannot be captured for a spark-generated bubble, as the detonation process is not involved. Therefore, the necessity of conducting real underwater explosion experiments remains. Because of the excessive cost and safety concerns, underwater experiments for scientific investigations are usually limited to small-charge (\(R_{\rm max}\approx 0.25\) m) [39; 5] or mini-charge (\(R_{\rm max}\approx 0.15\) m) [43; 42] scales. The maximum radius of the bubble in our experiments was about 0.4 m, which is larger than that in the aforementioned literature. There are three phenomena for a high-pressure bubble near the free surface that have previously attracted much attention: the water plume rising and splashing phenomenon, the bubble dynamics patterns at different distances to the free surface, and the shock wave emission characteristics. The latter two phenomena are discussed in the scope of the current study. Usually, the buoyancy parameter \(\delta\) is small for the conventional experimental-scale bubble. Zhang et al. [37] studied bubble dynamics at variable atmospheric pressure, which showed completely different bubble dynamics at large buoyancy parameters.
Brett et al. [44] studied the characteristics of the bubble collapse pressure wave near a free surface based on a mini-charge UNDEX experiment, in which they found that the bubble pulse pressure reaches a maximum value when the bubble's migration is not observable. Apart from these, the energy dissipation mechanism is another important issue related to bubble pulsation that has not been frequently discussed, and the existing test cases focus only on the free field condition [45; 46]. There are generally three sources of energy dissipation [2; 46], namely heat transfer, induced turbulent flow, and the compressibility of the fluid, among which compressibility is thought to be the main source for an UNDEX bubble. Previous researchers usually concentrated on one aspect in their investigations, whereas in the present study we systematically investigate these issues based on large-scale UNDEX experiments. In light of the state of the art stated above, a series of UNDEX experiments with a larger bubble radius \(R_{\rm max}\approx 0.4\) m beneath the free water surface were conducted. The bubble dynamics and migration processes were captured by a high-speed camera. The temporal pressure curves of the shock wave and the bubble pulse were measured by pressure sensors. Based on the pressure measurements, the laws of the bubble pulse peak, the bubble pulse impulse, and the shock wave impulse with gauge distance as well as with the depth of the gauge point were obtained. The energy dissipation of the bubble-water system at different standoff distances \(\gamma\) was investigated from the pressure curves along with the recorded images. Overall, our study gives an overview of large-scale scientific underwater explosion investigations at various relative depths. In addition, our findings regarding pressure peaks and images may be helpful for an in-depth understanding of full-scale underwater explosions and their detonation strength. ## II Experiment setup and data processing The UNDEX experiments are carried out in a 4 \(\times\) 4 \(\times\) 4 m\({}^{3}\) cubical tank made of steel walls with a thickness of about 1 cm. The cubical container has in total three windows for various purposes. An observation window is located on one side for high-speed photography, as shown in Fig.1(a), while the other two windows are fixed on the neighboring walls of the tank for illumination and further observation. The main illumination was sunlight, with spotlights serving as auxiliary illumination. Bubble shapes were captured by a high-speed camera (Phantom V12.1) at a speed of 9150-16000 frames per second. The captured images had a resolution of 480 \(\times\) 600 pixels with a calibrated resolution of 2 mm per pixel. A lower resolution reduces the quality of the images and provides fewer details; to compensate for the high imaging speed, we were therefore strict with the resolution of the images. In our case, the maximum bubble size (\(\sim\)0.4 m) was covered by 200 pixels per bubble radius, which captures the overall global features of the bubble dynamics acceptably. A simple truss structure was placed at the top of the tank. The explosive charge was attached to a string with its upper end fixed at the center of the truss and its lower end attached to a counterweight to keep the string straight. The captured images containing the shape of the bubble are processed further to quantify the bubble dynamics.
The charge type used in this study is RDX (Research Department Explosive), which is detonated by an electric detonator in consideration of safety. Two piezoelectric pressure sensors (PCB\({}^{\copyright}\)) were used to measure the transient pressure. Their resolution is 0.07 kPa, which is negligibly small compared with the shock wave and the bubble pulse, as will be shown in Fig.4 and Fig.5. After the explosion, a bubble filled with high-pressure explosion gases is formed, during which the chemical energy of the charge turns into the potential energy inside the bubble. Driven by the imbalance between the internal bubble pressure and the external pressure at the bubble wall, the bubble experiences several cycles of expansion and contraction until its energy is entirely dissipated. The bubble keeps its spherical shape during the expansion phase after detonation. The bubble enters the stage of collapse when it contracts to a small volume. A high-speed jet is formed at the end of the collapse, after which the bubble rebounds. During this process, the bubble no longer remains spherical. Hence the equivalent radius \(R\) is used in this study, which is obtained by estimating the volume of the bubble \(V\) and then applying the formula \(R=\sqrt[3]{3V/4\pi}\). The volume of the bubble is obtained in a slice-by-slice manner, see Fig.1(b). For volume estimation, we assumed the bubble shape to be axisymmetric during the first two bubble oscillations. Accordingly, we establish a coordinate system with its origin at the charge center and slice the bubble image from top to bottom into about 20 to 30 sections. Each section is assumed to be the frustum of a cone. Hence the volume of the bubble is obtained by \[V=\sum_{i=1}^{n}\pi(z_{i}-z_{i+1})(r_{i}^{2}+r_{i}r_{i+1}+r_{i+1}^{2})/3 \tag{1}\] where \(r\) and \(z\) are the radial and vertical coordinates, respectively. As the migration of the bubble will be considered in a later section, the centroid of the bubble is calculated by \[Z=\frac{\sum_{i=1}^{n}\left[\pi(z_{i}-z_{i+1})^{2}(3r_{i}^{2}+2r_{i}r_{i+1}+r_{i+1}^{2})/12+\pi z_{i+1}(z_{i}-z_{i+1})(r_{i}^{2}+r_{i}r_{i+1}+r_{i+1}^{2})/3\right]}{V} \tag{2}\] The movement of a high-pressure bubble near the free surface is influenced by two forces: buoyancy and the Bjerknes force. These two forces act in opposite directions, and their relative strength affects the bubble dynamics near the free surface. As the Bjerknes force is strongly influenced by the bubble's distance to the free surface, a standoff parameter \(\gamma\) is introduced as follows: \[\gamma=\frac{H}{R_{\rm max}}, \tag{3}\] where \(H\) denotes the detonation depth and \(R_{\rm max}\) is the maximum equivalent radius of the bubble at the respective water depth. Surface tension is another quantity that can affect the bubble dynamics [47; 48]. To take the surface tension into account, the Weber number is introduced [49]: \(We=R_{\rm max}P_{\infty}/\sigma\), in which \(\sigma\) is the surface tension coefficient. Taking \(R_{\rm max}=0.4\) m, \(P_{\infty}=1\times 10^{5}\) Pa, and \(\sigma=7.28\times 10^{-2}\) N/m, we obtain \(We=5.5\times 10^{5}\). In Li et al. [49], the authors showed that already at \(We=1.2\times 10^{3}\) the volume evolution is identical to that at \(We=\infty\). Thus, in our UNDEX experiments, the surface tension effect can be ignored owing to the significantly higher Weber number.
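To make the slice-by-slice reconstruction concrete, the following minimal Python sketch evaluates Eqs.(1)-(2) and the equivalent radius from digitised slice coordinates. It is an illustration only: the function name and the spherical test profile are our own assumptions, not part of the experimental processing code.

```python
import numpy as np

def bubble_volume_centroid(z, r):
    """Bubble volume, centroid, and equivalent radius from axisymmetric
    slices, following Eqs. (1)-(2). z and r are the slice-boundary heights
    (top to bottom) and radii, both of length n+1."""
    z, r = np.asarray(z, float), np.asarray(r, float)
    dz = z[:-1] - z[1:]                                   # z_i - z_{i+1}
    # Conical-frustum volume of each slice, Eq. (1):
    v = np.pi * dz * (r[:-1]**2 + r[:-1]*r[1:] + r[1:]**2) / 3.0
    V = v.sum()
    # First moment of each frustum about z = 0 (numerator of Eq. (2)):
    m = np.pi * dz**2 * (3*r[:-1]**2 + 2*r[:-1]*r[1:] + r[1:]**2) / 12.0 \
        + z[1:] * v
    Z = m.sum() / V
    return V, Z, (3.0 * V / (4.0 * np.pi))**(1.0 / 3.0)

# Sanity check on a sphere of radius 0.4 m centred at the origin,
# sliced into 30 sections (illustrative values, not measured data):
z = np.linspace(0.4, -0.4, 31)
r = np.sqrt(np.maximum(0.16 - z**2, 0.0))
V, Z, R_eq = bubble_volume_centroid(z, r)
print(V, Z, R_eq)   # close to 4*pi*0.4**3/3 ≈ 0.268 m^3, 0 m, 0.4 m
```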
To measure the strength of buoyancy, a buoyancy parameter [37] is defined as: \[\delta=\sqrt{\frac{\rho gR_{\rm max}}{p_{\infty}}}, \tag{4}\] where \(\rho\) and \(g\) denote the density of water and the gravitational acceleration, respectively, and \(p_{\infty}\) is the ambient pressure consisting of the atmospheric pressure and the still-water pressure at the initial charge center. Throughout this paper, the non-dimensional length and time, denoted by the superscript "*", are scaled by \(R_{\rm max}\) and \(R_{\rm max}(\rho/\Delta p)^{1/2}\), respectively. ## III Theoretical model Zhang et al. [1] proposed, for the first time, a unified equation for bubble dynamics that simultaneously considers boundaries, bubble interactions, the ambient flow field, gravity, bubble migration, fluid compressibility, etc. Zhang's equation [1] is a significant breakthrough and a milestone in the theoretical research of bubble dynamics after the Rayleigh-Plesset equation [12; 50] (1917, 1949), the Gilmore equation [51] (1952), and the Keller equation [13; 52] (1956, 1980). The oscillation and migration of a bubble in a compressible fluid can be described by their unified bubble equation in the elegant mathematical form: \[\left(\frac{C-\dot{R}}{R}+\frac{\mathrm{d}}{\mathrm{d}t}\right)\left[\frac{R^{2}}{C}\left(\frac{1}{2}\dot{R}^{2}+\frac{1}{4}v^{2}+h\right)\right]=2R\dot{R}^{2}+R^{2}\ddot{R}, \tag{5}\] where each overdot denotes a time derivative, and \(R\), \(C\), \(v\), and \(h\) are the bubble radius, the sound speed, the migration velocity, and the enthalpy difference at the bubble surface, respectively. The above equation is coupled with the following bubble migration equation: \[C_{a}R\dot{v}+(3C_{a}\dot{R}+\dot{C}_{a}R)v-gR+\frac{3}{8}C_{d}S(v)=0 \tag{6}\] where \(C_{a}\) is the added-mass coefficient, \(C_{d}\) is the drag coefficient, \(g\) is the gravitational acceleration, and \(S(\cdot)=(\cdot)|\cdot|\) is the signed square operator. In the unified equation for the underwater explosion bubble, the shock wave propagation is considered, and the initial conditions for the bubble expansion are obtained by solving the Euler equations; for more details, see Zhang et al. [1]. As shown in Eq.(6), the drag coefficient \(C_{d}\) and the added-mass coefficient \(C_{a}\) need to be determined when solving the bubble oscillation equation. Here a preliminary experiment was conducted to justify the reliability of the current experiments as well as to determine proper values for \(C_{a}\) and \(C_{d}\). A free field experiment was conducted with 10 grams (about 13 grams of equivalent TNT) of explosive charge at 2 m depth. Two pressure sensors were located at radial distances of 1.11 m and 1.75 m from the charge. The time series of representative instances of the bubble dynamics are shown in Fig.2. During the first cycle of the bubble (frames 1-7), the shock wave reached the surface of the observation window right after the detonation, which caused cavitation on the window surface (see frame 2 of Fig.2). The bubble expanded to its maximum volume at frame 3 and then entered the contraction phase. On the verge of collapse, the bubble shape had become aspherical (see frame 6 of Fig.2). The bottom part of the bubble contracted more quickly than the upper part and therefore flattened first. Thereafter, an upward jet formed at the bottom and threaded through the bubble.
Finally, the jet penetrated the upper side of the bubble wall nearly at the same time as the bubble contracted to its minimum volume. The bubble split into two parts: the upper bubble bulk and the lower toroidal bubble (see the enlarged view in frame 8 of Fig.2). After that, the two distinct bubbles started to rebound, and a pulsation pressure wave was generated by the bubbles. When the pressure wave reached the surface of the window, cavitation occurred on the window again (see frame 9). Simultaneously, the two bubbles were observed to coalesce. A protrusion formed at the upper end owing to the violent upward jet: the jet carried a mixture of bubble contents and water and penetrated the upper end of the bubble wall. At the end of the second cycle (frames 14-16), no obvious jet was observed, unlike in the first cycle, because of the opacity of the bubble cloud. The bottom surface of the bubble flattened and collapsed faster in the second cycle as well, which indicates that buoyancy still influenced the bubble dynamics. The maximum radius in the free field experiment is 35.0 cm, which is small compared to the distance from the detonation center to the boundary (2 m). The time history of the equivalent radius of the bubble and the pressures measured at the two probes are shown in Fig.3, Fig.4, and Fig.5, respectively. The setups of the two pressure sensors are given in Table.2. In the equivalent-radius time history, the experimental data and the solution of the unified equation for the UNDEX bubble [1] (\(C_{d}=1.5\), \(C_{a}=0.2\)) are compared. In terms of radius, the two curves match well with respect to the maximum radius and the period of the first cycle. During the second period, some discrepancies appear, with smaller bubble sizes and shorter oscillation periods in the experiments. In terms of pressure peaks and trends, the theoretical model matches the experimental data well. The comparisons show that the data of this simple experiment are reliably validated by the theoretical model; therefore, one can rely on the experimental data obtained subsequently with the same experimental setup. To prepare for the discussion of the energy loss during bubble collapse in Section V.4, the energy budget of the bubble is briefly outlined here. The process of detonation and the transformation of chemical energy into the internal energy of the explosive products are not considered in the present study. According to the conservation of energy, the total energy of the bubble-water system can be written as: \[E=E_{k}+E_{p} \tag{7}\] in which \(E_{k}\) and \(E_{p}\) are the kinetic energy and the potential energy, respectively. According to Li et al. [35] (2019) and Tian et al. [53] (2021), the potential energy can be written in non-dimensional form as: \[E_{p}=V(1-\delta^{2}z_{c})+\frac{\varepsilon V}{\gamma-1}\left(\frac{V_{0}}{V}\right)^{\gamma} \tag{8}\] in which \(z_{c}\), \(\varepsilon\), \(\gamma\), \(V_{0}\), and \(V\) are the vertical location of the bubble center, the strength parameter, the specific heat ratio, the equivalent initial volume of the bubble, and the transient volume of the bubble, respectively (here \(\gamma\) denotes the specific heat ratio rather than the standoff parameter). The first term on the right-hand side of Eq.(8) denotes the gravitational potential \(E_{p}^{g}\) of the water and the second term is the internal energy \(E_{p}^{b}\) of the bubble. As the mass of the bubble is very small, its kinetic energy and gravitational potential are usually not taken into consideration.
The oscillation of the bubble is accompanied by the inter-transformation of the energies mentioned above as well as by the energy loss carried away by the pressure wave. For ease of analysis, we assume that the energy of the bubble-water system is conserved during each cycle and that energy is lost only at the beginning of the rebounding phase. At the start, the internal energy \(E_{p}^{b}\) of the explosive gas is at its maximum. By doing work on the external water, \(E_{p}^{b}\) is transformed into \(E_{k}\) and \(E_{p}^{g}\). Because of the inertia of the bubble expansion, the bubble continues to expand even when the internal pressure equals the ambient pressure. During this stage, both \(E_{k}\) and \(E_{p}^{b}\) transform into \(E_{p}^{g}\). When the bubble expands to its maximum volume, \(E_{k}\) is assumed to be zero, so the potential energy makes up the majority. To analyze the magnitude of the gravitational potential, the experimental parameters are substituted into Eq.(8). As \(\delta^{2}\approx 0.03\), we have \(E_{p}^{g}\approx V\). The volume ratio \(V_{0}/V\) in the second term of Eq.(8) can be taken from the theoretical model, see Fig.3. As \(R_{0}/R_{\text{max}}\approx 0.1\), \(V_{0}/V\approx 0.001\), which indicates that \(E\approx V\) when the bubble expands to its maximum volume. The overall energy of the system in each cycle can therefore be measured by the respective maximum volume of the bubble, which is referred to as the volume-based approach in this paper. It has also been pointed out by Lee et al. [46] (2007) that the cube of the bubble oscillation period, \(T^{3}\), can likewise indicate the energy of the bubble, which is referred to as the period-based approach. These two approaches are principally equivalent if a spherical bubble model is assumed. To quantitatively investigate the portion of the energy carried by the pressure wave, the recorded pressure curve is utilized via the following formula [2]: \[E_{w}=\frac{4\pi r^{2}}{\rho c}\int P^{2}dt, \tag{9}\] where \(r\) is the distance from the gauge point to the detonation position and \(P\) is the excess pressure. Eq.(9) will be used in the discussion of the energy loss in Section V.4. Figure 1: (a) A sketch of the experimental setup. A truss structure is placed at the top of the tank. A string hangs straight with its upper end fixed at the center of the truss structure and its lower end attached to a counterweight. The explosive charge is attached to the string at the preset depth. (b) The sketch for calculating the volume and centroid of the bubble in a slice-by-slice manner. A local coordinate system is established at the detonation point. The bubble image is divided into 20 to 30 sections, whose volumes and moments about the coordinate axis are calculated respectively. ## IV Bubble dynamics patterns near free surface This section elaborates on a series of experiments close to the water surface conducted to investigate the dynamics of the UNDEX bubble, in which 10 grams of RDX charge is used for all the experimental cases. Twelve different standoff distances are investigated, including one repeated experiment at \(H=0.3\) m and two repeated experiments at \(H=0.8\) m; detailed information on the experimental test cases can be found in Table.1. Four bubble dynamics patterns are observed in our experiments, which are described and illustrated below. Figure 3: The time variation of the equivalent radius of the bubble from the experiment and the Zhang equation [1] for the underwater explosion bubble with \(C_{d}=1.5\), \(C_{a}=0.2\).
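For readers who wish to reproduce a Fig.3-style radius history without implementing the full unified equation (5)-(6), the incompressible Rayleigh-Plesset equation [12; 50] with an adiabatic gas law offers a rough stand-in (it neglects compressibility, migration, and the free surface, so it cannot reproduce the cycle-to-cycle energy loss). All parameter values in the sketch below are illustrative assumptions, not the fitted experimental ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

rho   = 1000.0    # water density, kg/m^3
p_inf = 1.213e5   # ambient pressure at 2 m depth, Pa
R0    = 0.02      # initial gas-bubble radius, m (assumed)
p0    = 1.5e8     # initial gas pressure, Pa (assumed)
kappa = 1.25      # polytropic exponent (assumed)

def rp_rhs(t, y):
    # Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_gas - p_inf)/rho,
    # with an adiabatic gas law for the internal pressure.
    R, Rd = y
    p_gas = p0 * (R0 / R)**(3.0 * kappa)
    return [Rd, ((p_gas - p_inf) / rho - 1.5 * Rd**2) / R]

sol = solve_ivp(rp_rhs, (0.0, 0.08), [R0, 0.0],
                rtol=1e-8, atol=1e-9, max_step=5e-5)
R, t = sol.y[0], sol.t
imax = np.argmax(R)                 # maximum volume of the first cycle
imin = imax + np.argmin(R[imax:])   # first collapse (minimum radius)
# With these assumed values: R_max ~ 0.34 m and a first period of
# roughly 60 ms -- the right order of magnitude for the 10 g test.
print("R_max = %.2f m, first period = %.1f ms" % (R[imax], 1e3 * t[imin]))
```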
Figure 2: The bubble dynamics within two bubble oscillations in the free field at a depth of 2 m. The black frame number is marked in the northwest corner and the corresponding physical time in ms is marked in the southwest corner of each frame hereafter, unless illustrated otherwise. The white frame numbers denote enlarged views of the bubble's shape at the respective times. At the transition between expansion and contraction, the bubble moves upward under the effect of buoyancy. At the minimum volume (frames 8 and 16), a pressure wave is generated which causes cavitation on the window. ### Bubble bursting at free surface We observed the bubble bursting phenomenon at the free surface when the detonation depth is very small (\(\gamma<0.41\)). The bubble pulsation was not observed and the bubble assumed a hemispherical shape in the first oscillation cycle. Hence, the \(R_{\rm max}\) in Table.1 is half of the maximum horizontal width \(R_{\rm w}\) during the entire process. Fig.6 presents the bubble dynamics for \(\gamma=0.14\). Instantly after the detonation, a significant amount of bulk cavitation could be observed in the water (frame 1 of Fig.6). The bubble boundary appeared crystal clear at the early stage of the expansion phase (frames 2-3, Fig.6). The mixture of explosive gas and vapor inside the bubble was sprayed out, which is an important characteristic of a UNDEX bubble bursting at the surface. Later on, the bubble wall turned opaque in frame 4. We think this may be caused by the impact of falling water particles inside the bubble's opened cavity and their splashing on the bubble wall. We found that as \(\gamma\) increases, the time required for the formation of this opacity is prolonged. The non-dimensional times at which the opacity appears at the bubble wall are 0.22, 0.57, 0.92, and 1.12 for \(\gamma=0.14\), 0.41, 0.66, and 0.75, respectively. This phenomenon is not exclusive to bubble bursting, as it also occurred at \(\gamma=0.66\) and 0.75 during the bubble's contraction stage, in which the bubble remained intact. We conducted a repeated experiment at \(\gamma=0.75\) in which we did not find this opacity. The uncertainty in observing the opacity suggests that \(\gamma=0.75\) approaches the critical value for its formation. The formation of opacity may affect the characteristics of the bubble's pulsation pressure significantly but does not seem to influence the migration of the bubble centroid, as will be shown in Fig.12 and Fig.15 and elaborated later. After the water droplets splashed over the bubble's bottom wall, a protrusion formed at the bottom (frame 5 of Fig.6) and penetrated the bubble wall. It has been analyzed by Tian et al. (2018) [54] that this phenomenon occurs due to the breaking and re-closure of the bubble. After the bubble breaks, the airflow can make the displaced water at the surface rejoin along the vertical axis. The rejoined water impacts itself, and two violent opposite jets are formed simultaneously at the surface. Figure 4: The comparison of the recorded pressure curves and the results from the Zhang equation [1] with \(C_{d}=1.5\), \(C_{a}=0.2\) at the position \(r=1.11\) m. Left: shock wave stage. Right: bubble oscillation stage. Figure 5: The comparison of the recorded pressure curves and the results from the Zhang equation [1] with \(C_{d}=1.5\), \(C_{a}=0.2\) at the position \(r=1.75\) m. Left: shock wave stage. Right: bubble oscillation stage.
The upper jet becomes the so-called "water spike" and the lower jet results in the penetration of the bubble wall as well as in the formation of opacity on the bubble wall. This phenomenon is sensitive to the detonation depth when \(\gamma\approx 0\). According to our observations, if the charge is located with its upper end touching the water surface, this protrusion can be observed; if the charge is located with its lower end touching the water surface, the protrusion does not form. As the protrusion developed further and moved downward, the whole bubble became a cloud of bubbles (frames 7-9, Fig.6) with no distinct continuous boundary. The upper part of the bubble moved slowly upward towards the free surface while the detached bottom part moved more quickly, see frames 10-12 of Fig.6. During this process, the upper part of the bubble continued to rise, and the gas inside it ultimately leaked into the atmosphere. The standoff distance was so small that the water layer above the bubble could not suppress the expansion of the high-pressure gas inside the bubble, which therefore broke out of the top surface. On the one hand, high-temperature explosive gas leaked into the atmosphere; on the other hand, the surrounding air at atmospheric pressure flowed into the bubble through its opened part. Together, these two mechanisms ultimately reduce the total energy content of the bubble significantly. According to our pressure measurements, no pulse pressure was recorded by the sensors during the bubble bursting. The time history of the bubble width at different \(\gamma\) is shown in Fig.7 and compared with the solutions of the Rayleigh-Plesset equation [12] and the Zhang equation [1]. It is found that the experimental data match the RP equation well in the early stage (\(t^{*}<0.1\)). After that, the data of all three experiments deviate from the analytical solution, and the deviation time increases with \(\gamma\). After the deviation point, the widening speed in the experiments is higher than that of the RP equation. To account for this phenomenon, we postulate that the leakage of gases does not take place instantly: it takes some time for the explosive gas to escape to the external atmosphere. Therefore, at the earlier stage of bubble expansion, there are no significant differences between the experimental data and the analytical estimations. For an intact bubble, the internal pressure quickly falls below the ambient pressure and the expansion speed decelerates. For the bursting bubble, by contrast, air flows into the bubble owing to the interaction with the atmosphere, which increases the internal pressure of the bubble. Consequently, the moment at which the internal pressure drops below the ambient pressure is delayed, which explains the faster expansion speed of the bursting bubble. As the free surface effect is considered in the Zhang equation, its expansion time history is closer to the experimental data, which demonstrates the improvement brought by the proposed equation. ### Jetting downward When the detonation depth is larger, the sphericity and integrity of the bubble were maintained during the first oscillation cycle, and from this depth onward the bubble assumed the pulsation characteristic. Fig.8 shows the captured sequences of the bubble shape evolution at \(\gamma=1.79\). \begin{table} \begin{tabular}{c c c c} NO.
& Depth \(H\) (m) & Maximum radius \(R_{\max}\) (m) & Standoff parameter \(\gamma\) \\ \hline 1 & 0.0 & 0.39 & 0.0 \\ 2 & 0.05 & 0.37 & 0.14 \\ 3 & 0.15 & 0.38 & 0.41 \\ 4 & 0.25 & 0.38 & 0.66 \\ 5 & 0.3 & 0.4 & 0.75 \\ 6 & 0.3 & 0.4 & 0.75 \\ 7 & 0.5 & 0.39 & 1.28 \\ 8 & 0.7 & 0.39 & 1.79 \\ 9 & 0.8 & 0.39 & 2.05 \\ 10 & 0.8 & 0.38 & 2.11 \\ 11 & 0.8 & 0.38 & 2.11 \\ 12 & 0.9 & 0.39 & 2.31 \\ 13 & 1.0 & 0.38 & 2.63 \\ 14 & 1.2 & 0.37 & 3.16 \\ 15 & 1.3 & 0.37 & 3.47 \\ \end{tabular} \end{table} Table 1: Experiment cases and parameters Figure 6: The bubble dynamics of the bubble bursting at the standoff parameter \(\gamma=0.14\). Cavitation took place at the water surface instantly after the detonation (frame 1). Soon the bubble jetted downward (frames 5-6) and became a cloud of water and air (frame 7). There was no obvious pulsation of the bubble in the bubble bursting pattern. After the bubble expanded to its maximum volume (frame 3, Fig.8), it started to contract, during which the upper surface of the bubble flattened first (frame 4, Fig.8). A downward re-entrant jet is then inferred to have formed (frames 5-6), based on extensive bubble experiments from other sources and on numerical simulations. When the bubble collapsed to its minimum volume, the jet penetrated the bubble wall and the bubble became a toroidal bubble (frames 7-8, Fig.8). The jet carried some portion of the gas along with it, and a small bubble cloud was thereby separated from the bulk bubble. This small bubble exhibited higher-frequency pulsation than the bulk bubble; after four cycles of pulsation, its pulsation had weakened significantly compared with the bulk bubble, indicating that the kinetic energy of the small bubble had nearly vanished. The bubble separation phenomenon was observed at \(\gamma=1.79\), where the main bubble was divided into an upper bulk bubble and a lower detached bubble. It influenced the energy dissipation to a certain extent, which will be described in Section V.4. At the rebounding stage, the toroidal bubble coalesced into one singly-connected bubble (frame 9). At the end of the second oscillation cycle, a downward jet was again formed, resulting in the migration of the bubble constantly away from the free surface. At this stage, the bubble dynamics were mainly influenced by the Bjerknes force resulting from the surface, so the bubble was constantly repelled downward, away from the free surface. This dynamics pattern is characterized as constant downward migration of the bubble in our experiments, which covers the test cases with standoff parameters from \(\gamma=0.66\) to \(\gamma=1.79\). ### Neutral collapse We increased the detonation depth further so that a balance between the Bjerknes force and buoyancy could be obtained at \(H=0.8\) m (\(\gamma\approx 2.1\)). Under such conditions, the dynamics of the bubble is expected to be more complex. The experiments for this case were repeated three times, and the maximum equivalent bubble radius, the bubble oscillation period, and the time history of the bubble's equivalent radius are remarkably similar. Some discrepancies exist among the three repeated experiments at the collapse stage, where the bubble attains its minimum volume and instabilities are significant. Fig.9 shows the bubble evolution that will be referred to as type I in the scope of this study. It can be seen that at the end of the first cycle (frames 6-8, Fig.9),
the bubble shrank horizontally faster, which resulted in an annular jet. After jetting, the bubble split into two separate parts. These two distinct parts started to coalesce along their contact line (frames 9-10) into a single bulk cavity at the rebounding stage. At the end of the second oscillation cycle, the bubble attained a flat lateral oval shape (see frame 13). In the third oscillation period, the annular jet formed again (frames 17-18). When the bubble split, two opposite jets formed, pushing the two individual parts away from each other (see frame 18). Fig.10 shows another type of neutral collapse pattern, referred to as type II. It shows that the bubble collapses nearly spherically at the end of the first oscillation cycle, and no significant annular jetting phenomenon was observed during the entire collapse process. For the third variant (type III), a weak downward jet was observed every time the bubble collapsed, and the bubble migrated slightly downward. The comparison among the time histories of the equivalent radius of the bubble for these three types of bubble dynamics is presented in Fig.11. All three curves of the equivalent bubble radius are nearly identical except for a few discrepancies in the second cycle. The difference observed in the maximum equivalent radius of the second cycle suggests that some energy loss occurred during the second oscillation cycle, which will be discussed in Section V.4. Though the bubble dynamics patterns observed in the images of Fig.9 and Fig.10 at \(H=0.8\) m are somewhat different, the variation of the equivalent radius over time is nearly the same. This indicates that the bubble dynamics are unstable when the bubble collapses to its minimum volume around this point. The source of this instability might be a slight imbalance in magnitude between the Bjerknes force and the buoyancy force; small perturbations in the experimental setup might also have caused the bubble to evolve differently. ### Quasi-free field movement Once the standoff distance is larger than twice the maximum equivalent radius of the bubble, i.e. \(\gamma>2.1\), the bubble dynamics is similar to that in a free field. The discrepancy is that the migration of the bubble, as well as the upward migration speed at the end of the second cycle, is not as large as in the free field (\(H=2\) m). On the one hand, though the bubble jets upward during the first cycle, this does not mean that the repellent force from the free surface is negligible: the repellent force is merely smaller than the buoyancy force. On the other hand, the bubble's migration in the first cycle should be considered, because its depth decreases as the bubble migrates. For the free surface cases, the bubble centroid migrates upward during the first cycle. This results in a strengthened repellent force from the surface, which further influences the bubble's migration. Figure 7: The growth of the bubble width \(R_{\rm w}\) scaled by \(R_{\rm max}\) for a bursting bubble at the free surface. The analytical solutions from the Rayleigh-Plesset equation [12] and the Zhang equation [1] are added for comparison. Figure 8: The bubble dynamics at the standoff parameter \(\gamma=1.79\). When the bubble reached its minimum volume (frame 7), a downward jet penetrated the bubble wall and carried some portion of the bubble content with it (frames 7-9). The carried small bubble oscillates much faster than the bubble bulk (frames 10-11). Figure 10: The bubble dynamics of the neutral collapse of type II. The bubble oscillates nearly spherically and no obvious jetting phenomenon is observed. The bubble slightly migrates upward under the effect of buoyancy.
Figure 9: The bubble dynamics of the neutral collapse of type I within three bubble oscillations. This bubble dynamics is characterized by an annular jet splitting the bubble at the end of the first and the second cycles (frames 7 and 17). ## V More discussions ### Migration of the bubble The migration of the bubble is an indicator of the magnitude of the combined effect of the Bjerknes force and buoyancy on the bubble. The migrations of the bubble centroid at different standoff parameters during the two consecutive cycles are shown in Fig.12. The bubble assumes a nearly spherical shape when it first expands, during which the centroid barely moves, except for the cases with \(\gamma<1.79\), because of the upward direction of the overall pressure gradient along the axis. The direction and strength of the jet at the end of the collapse determine the corresponding migration direction and speed. At the rebounding stage of the second cycle, the migration speed decelerates. The relatively large discrepancies in the three repeated experiments at \(\gamma=2.1\) again illustrate the instability of the bubble when contracting to the minimum volume at this depth. Generally, free field experiments are conducted deep enough (\(\gamma>7\)) below the surface to get rid of the free surface effect, whether for an underwater explosion bubble, a spark-generated bubble, or a laser-induced bubble. However, no quantitative standard exists to determine this depth, and currently there is no agreement on how to identify whether the bubble dynamics are influenced by the free surface or not. Kannan et al. [55] postulated that the free surface need not be considered when the re-entrant jet is suppressed during the first collapse phase. Based on this criterion, \(\gamma=2.1\) is the limit of the standoff distance beyond which the free surface effect is negligible, as demonstrated in our experiments. However, we consider this criterion somewhat loose: a suppressed re-entrant jet means that the Bjerknes force and buoyancy have the same magnitude, which still indicates some involvement of the free surface effect. By comparing the time histories of the migration curves of the free surface cases with that of the free field experiment, we noticed that the bubble migration curves at \(\gamma=3.16\) and \(3.47\) match well with that in the free field. For a quantitative analysis, we computed the time integral of the displacement, i.e., the area below the migration curve, for \(\gamma=3.16\), \(3.47\), and the free field experiment, and found that the relative errors among them are within 3%. Also, as explained in Section V.2, the non-dimensional bubble oscillation periods of these two cases are close to that in the free field condition; the discrepancies among them are simply due to the difference in ambient hydrostatic pressure. This quantitative comparison suggests that \(\gamma=3.16\) is the critical standoff distance beyond which the free surface need not be considered. This critical standoff distance depends mainly on the specific conditions, such as the buoyancy effect, viscosity, etc. If the buoyancy parameter decreases, this critical standoff distance is expected to increase, as the buoyancy effect is weakened.
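As a sketch of the quantitative comparison above, the time integral of the centroid displacement can be evaluated with the trapezoidal rule on digitised migration curves. The curves below are placeholders with an assumed shape; the actual comparison uses the measured curves of Fig.12:

```python
import numpy as np

def migration_area(t, z):
    """Time integral of the centroid displacement (area below the
    migration curve), via the trapezoidal rule."""
    return np.trapz(z, t)

# Placeholder curves with an assumed shape on a common time grid;
# the actual comparison uses the digitised curves of Fig. 12:
t = np.linspace(0.0, 4.0, 200)          # non-dimensional time
z_free = 0.05 * t**2 / (1.0 + t)        # free-field migration (assumed)
z_316  = 1.02 * z_free                  # gamma = 3.16 case (assumed)
a0, a1 = migration_area(t, z_free), migration_area(t, z_316)
print("relative error: %.1f%%" % (100.0 * abs(a1 - a0) / a0))  # 2.0%
```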
In the experiments by Kannan et al. [55], this critical standoff distance was obtained at \(\gamma=6\) for deionized water, based on their criterion that a re-entrant jet suppressed in the first collapse phase is the condition for ignoring the free surface effect. The maximum bubble radius in their experiment was a few millimeters, for which the buoyancy effect is significantly smaller than in our experiments; their results are thus in accordance with our assumption. In Fig.12, the spark-generated bubble experiment at standoff distance \(\gamma=0.66\) from Zhang et al. [38] is added for comparison. The maximum equivalent bubble radius in Zhang et al. [38] is about 27.75 mm, which is significantly smaller than in the UNDEX experiments of our study (\(\sim\)0.4 m). Comparing the spark-generated bubble experiment at \(\gamma=0.66\) with the UNDEX experiment at \(\gamma=0.75\), it is found that the migration magnitude of the smaller-scale bubble is relatively larger than that of the UNDEX bubble, not only in the initial upward stage but also in the later consecutive downward migration stage. As the buoyancy effect is significantly smaller for the spark-generated bubble, the migration of the bubble is more susceptible to the free surface at nearly the same standoff distance \(\gamma\). ### Bubble's oscillation period The bubble's oscillation period is a quantity that researchers are particularly concerned about, especially the relation between the bubble's oscillation period and the natural frequency of a structure, because of the possibility of resonance. Some researchers have investigated the bubble's oscillation period when the bubble is initiated near a boundary [33; 38; 42]. Generally, the free surface decreases the bubble's oscillation period, indicating faster bubble expansion and contraction. Following the derivation by Rayleigh [12] for the free field condition, a Rayleigh-like period [1] can be determined for the standoff distance \(\gamma\) by taking the free surface condition into consideration, see Eq.(10). \[T^{*}=\sqrt{6}\int_{0}^{1}\sqrt{\frac{x^{3}}{1-x^{3}}\left(1-\frac{x}{2\gamma}\right)}\,dx \tag{10}\] To our knowledge, unlike for the Rayleigh period, an analytical solution of Eq.(10) cannot be obtained for an arbitrary standoff distance \(\gamma\); instead, it is solved numerically. The fitting curve of the Rayleigh-like period of Eq.(10) over the range \(1\leq\gamma\leq 3.2\) is \[T^{*}=0.012\gamma^{5}-0.1479\gamma^{4}+0.73\gamma^{3}-1.85\gamma^{2}+2.51\gamma+0.151 \tag{11}\] Fig.13 shows the variation of the bubble's oscillation period with the standoff distance \(\gamma\) from the experiments as well as from Eq.(5) and Eq.(11). The results from Eq.(5) are calculated with the optimal parameters derived from the free field experiment: \(C_{d}=1.5\) and \(C_{a}=0.2\). It shows that the non-dimensional bubble oscillation period increases with the standoff distance and that the theoretical model (Eq.5) can reliably predict the bubble's oscillation period. The Rayleigh-like period slightly underestimates the results from Eq.(5). The reason may be that the internal bubble pressure is not considered in its derivation; it can nevertheless be used as a reference period, just like the Rayleigh period in the free field condition. Figure 11: The time variation of the non-dimensional bubble radius over two bubble oscillations. The three curves nearly overlap with each other in the first cycle and their differences are only revealed in the second cycle.
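The numerical evaluation of Eq.(10) is straightforward once the integrable \((1-x)^{-1/2}\) endpoint singularity (from \(1-x^{3}=(1-x)(1+x+x^{2})\)) is factored out; the sketch below, our own illustration, compares the quadrature with the polynomial fit of Eq.(11):

```python
import numpy as np
from scipy.integrate import quad

def rayleigh_like_period(gamma):
    """Non-dimensional first period from Eq. (10).

    Since 1 - x^3 = (1 - x)(1 + x + x^2), the integrand carries an
    integrable (1-x)^(-1/2) endpoint singularity, handled here with
    quad's algebraic weight w(x) = (1 - x)^(-1/2)."""
    g = lambda x: np.sqrt(x**3 * (1.0 - x / (2.0*gamma)) / (1.0 + x + x**2))
    I, _ = quad(g, 0.0, 1.0, weight='alg', wvar=(0.0, -0.5))
    return np.sqrt(6.0) * I

def fitted_period(gamma):
    """Polynomial fit of Eq. (11), valid for 1 <= gamma <= 3.2."""
    return np.polyval([0.012, -0.1479, 0.73, -1.85, 2.51, 0.151], gamma)

for gamma in (1.0, 2.0, 3.2):
    print(gamma, rayleigh_like_period(gamma), fitted_period(gamma))
# As gamma -> infinity the quadrature tends to the Rayleigh period ~1.83.
```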
Contrary to the free field condition, where the pressure is assumed to remain constant at infinity, the air stays at atmospheric pressure in the immediate vicinity of the bubble when the bubble is near a free surface. This results in an increased external pressure around the bubble that makes it contract more swiftly. We suppose this accounts for the decreased bubble oscillation period when the detonation position is closer to the free surface. A comparison with experimental data in the existing literature shows that our free surface oscillation periods are generally larger than the data from Ref. [34], in which the bubble was induced by laser. This may be due to the dissolvability of the bubble's internal gaseous content: the internal content of spark-generated or laser-induced bubbles tends to disintegrate into the surrounding water under high internal pressure. This mechanism reduces the overall internal energy, which in turn decreases the bubble's ability to resist the external pressure. It should be noted that the non-dimensional oscillation period in the free field case (1.79) is slightly lower than the Rayleigh period (1.83). We postulate that this is caused by the waves reflected from the tank walls. As this discrepancy is not significant, we focused only on the free surface effects, which are much more influential on the bubble dynamics, and neglected the boundary effect. ### Pressure characteristics near a free surface The pressure characteristics of shock waves and bubble pulses have always been of research interest because they constitute the direct loads on floating structures. In the free field condition, the bubble is free from the influence of boundaries, and the shock wave pressure at a fixed point decreases exponentially with time; see the experimental results in Cui et al. (2016) [39] and the empirical formulas in Zamyshlyayev and Yakovlev [56] (1973). If the free surface is taken into consideration, a rarefaction wave is reflected because of the significant difference in acoustic impedance between air and water. The rarefaction wave decreases the local pressure significantly below the saturation pressure, causing bulk cavitation under the free surface, as shown in Fig.6. In this section, we present the pressures measured near the free surface with two pressure sensors. The test conditions and positions of the two pressure sensors, i.e. the gauge points, are shown in Table.2. Fig.14 shows the time history of the measured pressures at \(\gamma=2.63\), in which the shock wave, the first bubble-induced pulse, and the second bubble-induced pulse have been captured precisely. Figure 12: The migration of the bubble centroid at different standoff distances \(\gamma\). The bubble in the free field as well as those that move downward (\(\gamma<2.1\)) are presented with solid lines. The upward movements (\(\gamma>2.1\)) are presented with separated points, and the bubble migrations in the transitional region (\(\gamma\approx 2.1\)) are presented with solid lines combined with separated points. The spark-generated bubble experimental data from Zhang et al. [38] are added for comparison. Figure 13: The first bubble oscillation period at different standoff distances \(\gamma\). The free surface experiments and the free field experiment are all presented and compared with the laser-induced bubble experiment [34], the mini-charge UNDEX experiment [42], as well as with equation (5).
In Fig.14(b), an abrupt jump in the pressure signal is observed, which can be attributed to the arrival of the rarefaction wave. This can be verified theoretically: the reflected rarefaction wave can be deemed to radiate from a fictitious image charge obtained by mirroring the real charge at the free surface. The speed of the rarefaction wave can be approximated by the speed of sound in water. By calculating the distances from the two charges to the sensor, the arrival time can be calculated analytically. For the experimental setup of Fig.14, the analytical value is estimated as 0.56 ms, which is close to our experimental result of 0.62 ms. The second peak after the cavitation is due to the wave reflected from the boundary, as its arrival time matches well that of a wave reflected from the tank walls. For the initial stage of the shock wave (Fig.14(c)), the pressure curve matches well the empirical fitting curve [56]: \[P=P_{\text{max}}e^{-t/\theta} \tag{12}\] in which \(e\) is the base of the natural exponential and \(\theta\) is the exponential damping constant. The damping constant \(\theta\) denotes the time in which the shock wave pressure decreases from its maximum peak \(P_{\text{max}}\) to the value \(P_{\text{max}}/e\), and it characterizes the shock wave damping at the early stage. Hence it is thought to be independent of the standoff distance \(\gamma\) and of the bubble dynamics. Its value must be determined for the later time-integral calculation of the pressure; it ranged from 0.02 ms to 0.036 ms in our experiments, and the mean value of 0.028 ms was adopted in the later calculations. For the bubble pulse pressure (Fig.14(d)), a clear rising and falling trend lasting several milliseconds is observed, whereas the shock wave exhibits a sudden jump lasting only about half a millisecond (see Fig.14(b)). Detailed information on these two types of pressure waves will be discussed separately. The decreased peak pressure between the first and the second bubble pulse indicates that a portion of the bubble energy is lost during each collapse phase. As shown in Section IV.1, we observed two different phenomena at the depth \(H=0.3\) m: droplets splashing on the bubble interface when the bubble collapses in one experiment, and the absence of droplet splashing in the repeated experiment. The setups of the pressure sensors for these two experiments were identical, which enables us to compare the influence of droplet splashing on the bubble's pulse pressure. The bubble pulse pressures measured by both sensors in the two experiments are compared in Fig.15. The comparison shows two totally different kinds of pressure curves, while the time histories of the pressure at different distances are similar for both cases (see Fig.15(a)(c) and (b)(d)). For the case with droplets splashing on the bubble wall, the curve is oscillatory and contains multiple peaks; for the case without droplets impacting the bubble surface, the pressure graph consists of a clear single peak. The magnitude of this single peak is larger than that of the former case at the same distance (see Fig.15(a)(b) and (c)(d)). By calculating the time integral of the pressure, i.e. the pressure impulse, it is found that the impulse for the case without droplets impacting the bubble wall is larger at the respective distance.
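The mirror-charge timing estimate quoted above can be reproduced with a few lines; the geometry is taken from Table.2 (case \(\gamma=2.63\)), while the sound speed is an assumed value:

```python
import math

H, d_s, x_s = 1.0, 0.6, 1.0   # charge depth, sensor depth and horizontal
                              # distance, m (Table 2, gamma = 2.63 case)
c = 1450.0                    # sound speed in water, m/s (assumed)

r_direct = math.hypot(x_s, H - d_s)   # real charge -> sensor
r_mirror = math.hypot(x_s, H + d_s)   # image charge above the surface
dt = (r_mirror - r_direct) / c
print("rarefaction delay = %.2f ms" % (1e3 * dt))   # 0.56 ms
```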
We think this might be caused by the disintegration of the bubble bulk into smaller daughter bubbles, each of which collapses separately, accounting for the multiple peaks in the curve. To comprehensively assess the action of the pressure over time, the impulse is also taken into account, which is obtained via the following formula: \[I=\int_{t_{\text{lower}}}^{t_{\text{upper}}}Pdt, \tag{13}\] where \(t_{\text{lower}}\) and \(t_{\text{upper}}\) denote the lower and upper limits of the integration range, respectively. As shown in Fig.14(c), the shock wave pressure has subsided to a low level after a time interval of \(5\theta\), and the area below the pressure curve during this interval accounts for the majority of the pressure impulse. Hence, \(t_{\text{upper}}\) equals \(5\theta\) in our study, as also suggested by Cole [2] (1948). According to our observations, the reflected rarefaction wave arrives much later than \(5\theta\); therefore, the shock wave impulse is assumed to be unaffected by the rarefaction wave. The variations of the peak pressure and the impulse of the shock wave with the gauge distance \(r\) are shown in Fig.16. Regression analysis shows that the peak \(P_{\text{max}}\) follows a \(1/r^{1.11}\) dependence on the distance \(r\), which is remarkably close to the empirical \(1/r^{1.13}\) relation. As for the shock wave impulse \(I_{s}\), it is found that \(I_{s}\) and \(r\) follow a \(1/r^{1.29}\) relation. Like for the shock wave, the pressure characteristics of the bubble pulse can also be analyzed through the pressure peak and the pressure impulse. The estimation of the pressure peak is relatively simple, while two points need to be considered for the impulse \(I_{\text{b}}\): the integration limits and the baseline for the pressure calculation. Unlike for the shock wave, no strict rule is applicable to these two points. As shown in Fig.4, the pressure signal remains below zero most of the time. Theoretically, the bubble impulse should be calculated when the pressure is above the hydrostatic pressure. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{sensor 1 (S1)} & \multicolumn{3}{c}{sensor 2 (S2)} \\ \cline{4-9} NO. & depth (m) & standoff parameter & depth (m) & horizontal distance (m) & distance (m) & depth (m) & horizontal distance (m) & distance (m) \\ \hline 1 & 0.3 & 0.75 & 0.4 & 0.75 & 0.76 & 0.35 & 0.55 & 0.55 \\ 2 & 0.3 & 0.75 & 0.4 & 0.75 & 0.76 & 0.35 & 0.55 & 0.55 \\ 3 & 0.5 & 1.28 & 0.67 & 0.8 & 0.82 & 0.52 & 1.3 & 1.3 \\ 4 & 0.7 & 1.79 & 0.7 & 0.8 & 0.8 & & & \\ 5 & 0.8 & 2.05 & 0.7 & 1.0 & 1.0 & & & \\ 6 & 0.8 & 2.1 & 0.67 & 0.8 & 0.81 & 0.52 & 1.3 & 1.33 \\ 7 & 0.8 & 2.1 & 0.35 & 0.55 & 0.71 & 0.4 & 0.75 & 0.85 \\ 8 & 1.0 & 2.63 & 0.6 & 1.0 & 1.08 & 0.4 & 0.8 & 1.0 \\ 9 & 1.2 & 3.16 & 0.25 & 1.0 & 1.38 & 0.5 & 0.6 & 0.92 \\ 10 & 2.0 & 5.71 & 1.35 & 0.9 & 1.11 & 0.5 & 0.9 & 1.75 \\ \hline \hline \end{tabular} \end{table} Table 2: The setups of the pressure sensors. The columns give the detonation depth, the standoff parameter, and, for each sensor, its depth, horizontal distance, and radial distance from the charge. Figure 14: The measured pressure at the standoff parameter \(\gamma=2.63\). The pressure sensor is located 1.08 m away from the charge. (a) The time history of the pressure wave during the whole process. (b) The pressure wave in the shock wave stage after the detonation. (c) The measured pressure and the pressure obtained from the empirical formula [56] at the early stage of the shock wave. (d) The time history of the first bubble pulse induced by the bubble collapse.
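For the shock wave, the damping constant \(\theta\) of Eq.(12) and the impulse of Eq.(13) over \([0,5\theta]\) can be extracted as sketched below. The trace here is synthetic (assumed peak and decay), standing in for a recorded signal:

```python
import numpy as np

def fit_theta(t, p):
    """Damping constant of Eq. (12) from the early shock decay, via a
    log-linear least-squares fit: ln P = ln P_max - t/theta."""
    slope, _ = np.polyfit(t, np.log(p), 1)
    return -1.0 / slope

def shock_impulse(t, p, theta):
    """Shock-wave impulse of Eq. (13), integrated over [0, 5*theta]."""
    m = t <= 5.0 * theta
    return np.trapz(p[m], t[m])

# Synthetic trace standing in for a recorded shock wave (assumed values):
theta_true, p_max = 2.8e-5, 6.0e6          # 0.028 ms, 6 MPa
t = np.linspace(0.0, 2.0e-4, 2000)
p = p_max * np.exp(-t / theta_true)
early = t <= 2.0 * theta_true              # fit only the early decay
theta = fit_theta(t[early], p[early])
print(theta, shock_impulse(t, p, theta))   # ~2.8e-5 s, ~p_max*theta*(1-e^-5)
```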
However, the recorded pressure curves were generally oscillatory, which makes it difficult to identify the integration range (\(t_{\text{lower}}\) and \(t_{\text{upper}}\) in Eq.(13)) for calculating the impulse. In our study, \(t_{\text{lower}}\) is chosen as the time at which an obvious rise appears in the pressure curve, and \(t_{\text{upper}}\) as the time at which the pressure subsides to nearly the value it had at \(t_{\text{lower}}\). According to the analysis of Cole [2] (1948), the time during which the pressure remains positive takes up 22% of the bubble's oscillation period, whereas in our experiments the integration range is about 2 ms at most, i.e. roughly 3% of the bubble's oscillation period. To compensate for this discrepancy, a lower limit value \(I_{b}^{\text{lower}}\) and an upper limit value \(I_{b}^{\text{upper}}\) for the impulse are obtained by choosing two baselines. For the lower limit value \(I_{b}^{\text{lower}}\), the pressure baseline at the time \(t_{\text{lower}}\) is chosen to be zero; for the upper limit value \(I_{b}^{\text{upper}}\), the baseline at the time \(t_{\text{lower}}\) is 0.1 MPa. The selected baseline does not affect the bubble pulse peak but does influence the time-integral quantities. The ultimate impulse \(I_{b}\) is the mean of the above results, i.e. \((I_{b}^{\rm upper}+I_{b}^{\rm lower})/2\). As the charge weight is identical in all our experiments, the reduced pressure peak \(P_{\rm max}\cdot r\) and the reduced impulse \(I_{b}\cdot r\) are used to investigate the relation of \(P_{\rm max}\) and \(I_{b}\) with the gauge distance \(r\) at different standoff distances \(\gamma\), as shown in Fig.17. As can be seen in Fig.17(a), the experimental data for the reduced pressure are mainly distributed along the dashed line \(P\cdot r=1.5\), which is the calculated mean value of these concentrated points. This indicates a roughly \(1/r\) relation between \(P_{\rm max}\) and \(r\). It can also be seen, however, that the points at \(\gamma=2.1\) lie above the dashed line. Referring to the aforementioned bubble dynamics patterns, the bubble collapses neutrally at \(\gamma=2.1\), where the Bjerknes force and buoyancy are roughly balanced. As mentioned by Brett et al. [44], a local high pressure is also captured at the point where the bubble remains nearly stationary; the difference is that this point in Brett et al. [44] is not related to the neutral collapse point referred to in our experiments. It is observed that the highest point at \(\gamma=2.1\) comes from the case in which the bubble collapses spherically (type II of the neutral collapse, see Fig.10). This feature can be attributed to the full compression of the gaseous contents inside the bubble [44], which partially supports the energy loss mechanism discussed in Section V.4. For the reduced impulse shown in Fig.17(b), there is no single line along which most of the experimental data collapse. Figure 16: Right: shock wave peak at different gauge distances. Left: shock wave impulse at different gauge distances. Figure 15: The measured bubble pulse pressure at the standoff parameter \(\gamma=0.75\). (a)(c) The bubble pulse pressure at the two sensor points S1 (radial distance 0.75 m) and S2 (radial distance 0.55 m) with droplets splashing on the bubble wall, respectively. (b)(d) The bubble pulse pressure at the two sensor points S1 (radial distance 0.75 m) and S2 (radial distance 0.55 m) without droplets splashing on the bubble wall, respectively.
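The baseline sensitivity of the bubble pulse impulse can be illustrated as follows. The pulse is synthetic, and the two baseline values reflect our reading of the zero and 0.1 MPa conventions described above (an interpretation, since the exact sign convention is not spelled out in the text):

```python
import numpy as np

def bubble_impulse(t, p, t_lower, t_upper, baseline):
    """Bubble-pulse impulse (Eq. (13)) above a chosen pressure baseline."""
    m = (t >= t_lower) & (t <= t_upper)
    return np.trapz(p[m] - baseline, t[m])

# Synthetic single-peak pulse standing in for a recorded curve (assumed):
t = np.linspace(0.0, 4.0e-3, 4000)
p = 1.2e6 * np.exp(-((t - 2.0e-3) / 5.0e-4)**2)   # ~1.2 MPa, ~1 ms wide
t_lo, t_up = 1.0e-3, 3.0e-3
I_lower = bubble_impulse(t, p, t_lo, t_up, baseline=0.0)     # zero baseline
I_upper = bubble_impulse(t, p, t_lo, t_up, baseline=-1.0e5)  # 0.1 MPa shift
I_b = 0.5 * (I_lower + I_upper)                              # reported mean
print(I_lower, I_upper, I_b)
```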
It can be observed that a higher pressure peak does not guarantee a higher impulse. The obtained experimental data can be divided into two regimes by the dashed line in Fig.17(b): the points above the dashed line can be considered to be distributed along a common line, which means that these points follow the \(1/r\) relation derived by Cole [2] (1948), while the points below the dashed line are scattered independently. With the non-dimensional gauge depth (scaled by the maximum radius) marked alongside, it can be noticed that the gauge depth may be responsible for the distribution of the points stated above: the gauge points above the dashed line are all placed deep enough, while the gauge points below the dashed line are all placed closer to the free surface. The most contrasting example is the free field case (\(\gamma=5.7\)): the bubble pulse peaks at the two gauge points follow the \(1/r\) relation very well, while the impulse at the smaller gauge depth (1.01) is significantly smaller than that at the larger gauge depth (3.08). This shows that the free surface has an enormous influence on the magnitude of the bubble pulse impulse. As the recorded wave profile is the superposition of the direct wave emitted by the bubble and the reflected wave from the boundary, the magnitude of the bubble pulse impulse is thought to be mainly influenced by the rarefaction wave from the surface. Unlike the shock wave, the bubble pulse has a much wider pulse width, which makes it vulnerable to the reflected rarefaction wave. When the gauge point is close to the free surface, the reflected rarefaction wave follows right after the direct wave. From Fig.17, we can see that the non-dimensional depth \(h^{*}=1.58\) is the critical depth beyond which the reflected wave does not influence the magnitude of the impulse in our experiments. When the gauge point is deeper than this depth, the bubble impulse \(I_{b}\) again conforms to the \(1/r\) relation with the gauge distance \(r\). The ratio of the bubble pulse impulse to the shock wave impulse ranges from 0.84 to 2.1 (with most of the data above 1). It thus appears that the impulse of the bubble pressure pulse is generally larger than that of the shock wave. The mechanisms of impulse and shock wave emission are of research interest and need to be taken into consideration to comprehensively analyze the loads during the underwater explosion process. ### Energy loss during the first collapse As indicated by Lee et al. [46], most of the energy of the bubble is radiated in the form of a pressure wave, which may cause severe damage to nearby structures. In our experiments, the second bubble pulse is either very small or not measurable by the pressure sensors, indicating that the remaining energy of the bubble during the second oscillation cycle is nearly negligible. This reveals that the majority of the energy loss takes place during the first collapse, which is the focus of the current study. Here, the energy loss parameter is defined as \(\alpha=1-E_{n+1}/E_{n}\), in which \(n\) denotes the oscillation cycle number. The energy loss due to bubble collapse can then be calculated via the following two formulas: \[\alpha=1-\frac{V_{n+1}}{V_{n}} \tag{14}\] \[\alpha=1-\left(\frac{T_{n+1}}{T_{n}}\right)^{3} \tag{15}\] Fig.18 presents the calculated energy loss parameter during the first collapse against the standoff distance, based on both the volume-based and the period-based approaches.
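Eqs.(14)-(15) reduce to one-liners; the sketch below evaluates both definitions on illustrative (assumed) radius and period readings of the kind extracted from curves like Fig.11:

```python
def alpha_volume(V1_max, V2_max):
    """Energy loss parameter from maximum volumes, Eq. (14)."""
    return 1.0 - V2_max / V1_max

def alpha_period(T1, T2):
    """Energy loss parameter from oscillation periods, Eq. (15)."""
    return 1.0 - (T2 / T1)**3

# Illustrative (assumed) readings of the kind extracted from Fig. 11:
R1, R2 = 0.38, 0.24          # maximum equivalent radii, cycles 1 and 2, m
T1, T2 = 56.0, 38.0          # oscillation periods, ms
print(alpha_volume(R1**3, R2**3))   # volumes scale as R^3 -> ~0.75
print(alpha_period(T1, T2))         # ~0.69
```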
It shows that although the absolute values of the two approaches at a given standoff distance \(\gamma\) differ, the overall tendencies of \(\alpha\) against \(\gamma\) are identical: the energy loss parameter \(\alpha\) increases with \(\gamma\) until the neutral collapse point \(\gamma=2.1\), after which it decreases as \(\gamma\) increases further. A reasonable way to explain this phenomenon is that the free surface influences the energy loss by affecting the bubble dynamics. In the process of the bubble contracting to its minimum volume, i.e. during collapse, \(E_{p}^{g}\) is released and transformed into the potential energy of the bubble \(E_{p}^{b}\) and the kinetic energy of the liquid \(E_{k}\). As the bubble over-contracts due to inertia, part of \(E_{k}\) turns into \(E_{p}^{b}\). When the bubble starts to rebound, the internal pressure of the bubble is considerably higher than the ambient pressure of the liquid. This extreme discontinuity generates the bubble's pulsation wave, which carries away a portion of the energy derived from \(E_{p}^{b}\). The more kinetic energy is transformed into the internal energy of the bubble, the more energy tends to be dissipated by the pulse wave. Accordingly, when the bubble is close to the surface, a strong jet is produced which carries much kinetic energy; the migration curves in Fig.12 also reveal the kinetics of this migration. As the detonation point goes deeper, the migration kinetic energy decreases and the internal energy increases correspondingly. At the point of neutral collapse, the vertical migration of the bubble is small, as shown in Fig.12, and the gaseous content inside the bubble can be fully compressed for a spherically oscillating bubble. That is why the pressure sensor recorded the highest pulse pressure for the spherically oscillating bubble. Beyond the point of neutral collapse, the free surface effect is reduced and buoyancy starts to dominate. The bubble forms an upward jet, which in turn increases \(E_{k}\) and decreases \(E_{p}^{b}\); consequently, a reduced energy loss occurs as the detonation depth increases further. It should be noticed that more energy is lost at the depth \(\gamma=1.79\), which results from the jet carrying some portion of the gas with it when it penetrates the bubble surface, see Fig.8 (frames 7-8). This means that the loss of mass, i.e. of explosive content, is also an important source of energy loss. Here, the proportion of energy carried by the pressure wave radiated at the first collapse is assessed by Eq.(9). As the recorded pressure is the superposition of the direct wave and the reflected wave, \(E_{w}\) will be overestimated for a wave reflected from a rigid boundary or underestimated for the rarefaction wave from the surface. As analyzed before, we consider the points above the dashed line in Fig.17(b) to be unaffected by the free surface, and these are selected to calculate \(E_{w}\). The calculated results are shown in Table.3, where \(\Delta E\) is the total energy loss at the first collapse based on the volume-ratio measurement and the values of \(E_{w}\) include the upper and lower limits, as in the calculation of the impulse. The ratio \(E_{w}/\Delta E\) indicates the proportion of the radiated energy of the pressure wave to the total lost energy. It shows that this ratio exhibits some dependence on the depth, which resembles that of the energy loss parameter.
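The \(E_{w}\) entries of Table.3 follow from Eq.(9) applied to the recorded pulse; a minimal sketch on a synthetic trace (assumed amplitude and width, density and sound speed as stated):

```python
import numpy as np

def radiated_wave_energy(t, p, r, rho=1000.0, c=1450.0):
    """Energy carried by the pressure wave, Eq. (9):
    E_w = 4*pi*r^2/(rho*c) * integral of P^2 dt."""
    return 4.0 * np.pi * r**2 / (rho * c) * np.trapz(p**2, t)

# Synthetic bubble-pulse trace (assumed amplitude and width):
t = np.linspace(0.0, 2.0e-3, 2000)
p = 1.5e6 * np.exp(-((t - 1.0e-3) / 2.5e-4)**2)   # ~1.5 MPa peak
print(radiated_wave_energy(t, p, r=1.0))   # a few kJ, the order of Table 3
```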
The small portion of radiated energy at \(H=0.7\) m may be due to the larger denominator \(\Delta E\), which is caused by the loss of explosive product. Looking closer at the specific values, the difference between the upper and lower limits of the ratio can be as large as 10%, and all of this difference comes solely from the choice of the pressure baseline used to calculate the integral in Eq.(9). The results also show that the radiated energy accounts for approximately 60% to 80% of the total energy lost during the first collapse.

Figure 17: (a) The reduced bubble pulse peak pressure \(P_{\text{max}}\cdot r\) against the standoff distance \(\gamma\). The dashed line denotes the mean value of the concentrated points. (b) The reduced bubble pulse impulse \(I_{b}\cdot r\) against the standoff distance \(\gamma\), with the non-dimensional gauge depth marked aside. The dashed line divides the experimental data into two parts: one is the region where the bubble pulse impulse has been reduced by the rarefaction wave from the free surface; the other is the region where the calculated bubble pulse impulses are not affected by the rarefaction wave.

Figure 18: The variation of the energy loss parameter during the first collapse with the standoff distance \(\gamma\), based on the period-based approach and the volume-based approach, respectively. The dashed lines denote the energy loss parameter in the free field. The tendencies based on the two approaches are the same, though there are some discrepancies in the specific values.

\begin{table} \begin{tabular}{c c c c c} Depth \(H\) (m) & \(\gamma\) & \(\Delta E\) (J) & \(E_{w}\) (J) & \(\frac{E_{w}}{\Delta E}\) \\ \hline 0.5 & 1.28 & 12644 & 6473-7927 & 0.512-0.627 \\ 0.7 & 1.79 & 15082 & 6785-8214 & 0.450-0.545 \\ 0.8 & 2.1 & 14505 & 7557-9442 & 0.521-0.651 \\ 0.8 & 2.1 & 12994 & 8927-9954 & 0.687-0.766 \\ 1 & 2.63 & 11637 & 7576-8623 & 0.651-0.741 \\ 2 & 5.71 & 8634 & 5992-7012 & 0.694-0.812 \\ \end{tabular} \end{table} Table 3: The energy loss calculated with the volume-ratio approach at several depths, together with the energy radiated by the pressure wave.

## VI Conclusion

A series of UNDEX experiments were conducted with varying standoff distance \(\gamma\). The variations of the bubble collapse patterns, oscillation period, centroid migration, and energy loss against the dimensionless standoff parameter \(\gamma\) are systematically investigated. Additionally, the characteristics of the shock wave and bubble pulsation pressure beneath the free surface are measured. The significant findings and conclusions drawn from our study are as follows:

(1) Four patterns of UNDEX bubble dynamics are identified for different regimes of the standoff parameter: (i) \(0\leq\gamma\leq 0.41\): bubble bursting at the free surface, (ii) \(0.41<\gamma<2.05\): bubble jetting downward, (iii) \(2.05\leq\gamma\leq 2.11\): neutral collapse, and (iv) \(2.11<\gamma<3.16\): quasi-free-field movement of the bubble. In our UNDEX experimental setup, \(\gamma=3.16\) is considered the critical standoff distance at which the effect of the free surface on the bubble dynamics becomes negligible, in terms of jet direction and centroid migration.

(2) The oscillation period decreases with decreasing detonation depth. A satisfactory agreement is obtained between the Zhang equation [1] and our experimental data.
Derived from the Zhang equation [1], the Rayleigh-like period can reliably predict the bubble oscillation period when the bubble is close to the free surface (\(\gamma>1\)).

(3) The decay of the bubble pulsation pressure with the distance \(r\) follows a \(1/r\) law, except for the neutral collapse condition. The strength of the bubble pulse can be weakened by the disintegration of the integrated bubble into daughter bubbles. Additionally, the ratio of the impulse of the bubble pulse to that of the shock wave is found to be between 0.84 and 2.1 (most data are above 1), which demonstrates the importance of the bubble pulse in underwater explosions.

(4) The energy loss parameter \(\alpha\) (defined as \(\alpha=1-E^{2}/E^{1}\), where \(E^{i}\) denotes the bubble energy during the \(i\)th cycle) increases with \(\gamma\) until the neutral collapse position at \(\gamma\approx 2.1\), after which it decreases with \(\gamma\). The loss of the explosive product is found to be an important source of the lost energy. Additionally, the proportion of the radiated energy to the total energy loss is found to increase with \(\gamma\), with a local maximum at the neutral collapse position; this proportion reaches 70% to 80% in the free-field experiment.

## VII Acknowledgement

Special thanks are given to Dr. Liu Nian-Nian for his assistance with the experiments.

## VIII Declaration of interests

The authors report no conflict of interest.
2307.14409
Exploring the Bitcoin Mesoscale
The open availability of the entire history of the Bitcoin transactions opens up the possibility to study this system at an unprecedented level of detail. This contribution is devoted to the analysis of the mesoscale structural properties of the Bitcoin User Network (BUN), across its entire history (i.e. from 2009 to 2017). What emerges from our analysis is that the BUN is characterized by a core-periphery structure, a deeper analysis of which reveals a certain degree of bow-tieness (i.e. the presence of a Strongly-Connected Component, an IN- and an OUT-component together with some tendrils attached to the IN-component). Interestingly, the evolution of the BUN structural organization experiences fluctuations that seem to be correlated with the presence of bubbles, i.e. periods of price surge and decline observed throughout the entire Bitcoin history: our results, thus, further confirm the interplay between structural quantities and price movements observed in previous analyses.
Nicolò Vallarano, Tiziano Squartini, Claudio J. Tessone
2023-07-13T14:54:19Z
http://arxiv.org/abs/2307.14409v1
# Exploring the early Bitcoin Mesoscale

###### Abstract

The open availability of the entire history of the Bitcoin transactions opens up the possibility to study this system at an unprecedented level of detail. This contribution is devoted to the analysis of the _mesoscale_ structural properties of the Bitcoin User Network (BUN), across its entire history (i.e. from 2009 to 2017). What emerges from our analysis is that the BUN is characterized by a _core-periphery_ structure, a deeper analysis of which reveals a certain degree of _bow-tieness_ (i.e. the presence of a Strongly-Connected Component, an IN- and an OUT-component together with some tendrils attached to the IN-component). Interestingly, the evolution of the BUN structural organization experiences fluctuations that seem to be correlated with the presence of _bubbles_, i.e. periods of price surge and decline observed throughout the entire Bitcoin history: our results, thus, further confirm the interplay between structural quantities and price movements observed in previous analyses.

1. Bitcoin. 2. Networks. 3. Motifs. 4. Transactions.

## 1 Introduction

Introduced in 2008 by Satoshi Nakamoto with the release of a white paper [1], Bitcoin is the first and most widely adopted _cryptocurrency_. Loosely speaking, it consists of a decentralised peer-to-peer network to which users connect to exchange native tokens (i.e. the _bitcoins_). After having been validated by the so-called _miners_ - according to the consensus rules that are part of the Bitcoin protocol [2, 3] - each transaction is included in a replicated database, i.e. the _blockchain_. The cryptographic protocols Bitcoin rests upon aim at preventing the possibility for the same digital token to be spent more than once, in the absence of a central third party that guarantees the validity of the transactions themselves [1, 4]: remarkably, the transaction-verification mechanism Bitcoin relies on allows its entire transaction history to be openly accessible - a feature that, in turn, allows it to be analyzed in one's preferred representation. The structural properties of Bitcoin have only recently begun to be investigated: in [5], the authors consider the Bitcoin user network at the _macroscale_, in order to check for its small-worldness; in [6], the authors explore the evolution of the _local_ properties of four different representations of Bitcoin (i.e. both the _user_ and the _address_ network, at both the _daily_ and the _weekly_ time scale) with the aim of investigating their relationship with the price movements; in [7], the authors analyze the so-called Bitcoin Lightning Network (BLN), highlighting the increasingly centralized character of such a network structure. With the present paper, we aim at contributing to this stream of research by studying the structure of the Bitcoin User Network (BUN) at the _mesoscale_.

## 2 Data

As said before, Bitcoin relies on a decentralised public ledger, the blockchain, that records all transactions of bitcoins among users. A transaction is nothing else than a set of input and output addresses: the output addresses that are said 'unspent', i.e. not yet recorded on the ledger as input addresses, can be claimed, and therefore spent, only by the owner of the corresponding cryptographic key. This is the reason why one speaks of _pseudonymity_: an observer of the blockchain can see all unspent addresses but cannot link them to the actual owners.
Techniques exist, however, to infer the identity of users (hereby, this term will indicate 'groups of addresses'): they rest upon the so-called _heuristics_, i.e. sets of rules taking advantage of the implementation of the Bitcoin protocol.

Bitcoin Address Network (BAN). The Bitcoin Address Network is the simplest network that can be constructed from the blockchain records: it is a directed, weighted graph whose nodes represent addresses. The direction and the weight of links are provided by the input-output relationships defining the transactions recorded on the blockchain. The only free parameter is represented by the temporal window chosen for the data aggregation. In this paper, we chose to aggregate these data at a daily time scale, i.e. the shortest scale that still guarantees that the resulting network is connected.[1]

Bitcoin User Network (BUN). Since the same owner may control several addresses, one can derive a network of users whose nodes are _clusters of addresses_. These clusters are derived by implementing different _heuristics_: let us now provide a brief description of the ones that have been employed here, which have been derived from the state-of-the-art literature.[9, 10, 11, 12] The first heuristics is the so-called _multi-input heuristics_: it is based on the assumption that two (or more) addresses that are part of the input of the same transaction are controlled by the same user. The key idea behind this heuristics is that the private keys of all input addresses must be accessible to the creator of a transaction in order to produce it. It is generally believed to be the safest heuristics for clustering addresses. The second heuristics is the so-called _change-address identification heuristics_ and is based upon the observation that transaction outputs must be fully spent upon re-utilisation; hence, the transaction creator usually also controls one of the output addresses. More specifically, we assume that if an output address is new and the amount transferred to it is lower than all the inputs, then it must belong to the input user. Whenever the Bitcoin User Network is mentioned in the paper, we refer to the representation obtained by clustering the nodes of the BAN according to a combination of the two heuristics above. Naturally, users can employ different wallets that are not necessarily linked together by transactions: as a consequence, the user network we obtain should not be considered a perfect representation of the actual network of users but, rather, an attempt to group addresses while minimising the presence of false positives. Once the addresses are grouped into 'users', we build the network as follows: any two nodes \(i\) and \(j\) are connected via a directed edge from \(i\) to \(j\) if at least one transaction from one of the addresses defining \(i\) to one of the addresses defining \(j\) occurs at the considered time scale. In the present work, when not otherwise stated, we consider the Bitcoin User Network at a weekly time scale.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline Bubble & Start & End & Days \\ \hline 1 & 25 May 2012 & 18 Aug 2012 & 84 \\ \hline 2 & 3 Jan 2013 & 11 Apr 2013 & 98 \\ \hline 3 & 7 Oct 2013 & 23 Nov 2013 & 47 \\ \hline 4 & 31 Mar 2017 & 18 Dec 2017 & 155 \\ \hline \end{tabular} \end{table} Table 1: The four Bitcoin bubbles as detected in [8].
Price bubbles data. The valuation of cryptocurrencies is an emerging field of study, and there is no unanimous consensus on what the real value of Bitcoin should be, nor on the actual definition used to detect Bitcoin bubbles. In this paper we rely on the bubbles identified in [8]. The method developed in [8] uses a generalized version of Metcalfe's law to determine Bitcoin's fundamental value, which is shown to have been heavily exceeded on at least four occasions, listed in Table 1.

## 3 Results

Connected Components. As a first inspection, we observe the evolution of the connected components of the transaction networks over time. Aside from the initial phase lasting until fall 2010, we can detect the emergence of both a huge Weakly-Connected Component (WCC) and a huge Strongly-Connected Component (SCC) at two different points in time. Figure 1, panel (c), shows the ratio between the largest connected component and the second largest one: as one can see, both at the weak and at the strong level, the huge connected components emerge after 2012. In panel (a) we observe that, after the initial phase, two clearly different trends emerge at the strong and weak levels: the size of the huge weakly connected component is stable around 80% of the total number of nodes, while the huge strongly connected component oscillates between 10% and 40% of the nodes; specifically, we observe a long plateau from 2014 to 2016, where the size of the huge strongly connected component is around 40% of the total nodes, after which it returns to the stable value of 10%. Finally, panel (b) shows the number of connected components: it depicts a situation where the large majority of connected components are composed of a very small number of nodes. It seems clear that, while the large majority of nodes are connected, the paths drawn by transactions are usually one-way, which explains the reduced size of the strongly connected component. Nodes outside the main cluster, moreover, tend to be isolated, which explains the large number of connected components.

Figure 1: The evolution of the connected components (CC) over time. Panel (a) shows the percentage of nodes in the huge weakly and strongly connected components. Panel (b) depicts the number of connected components over time. Panel (c) shows the ratio between the largest connected component (LCC) and the second largest connected component.

Bitcoin Disassortativity. Let us now consider the assortativity of the Bitcoin Transaction Networks. A network is said to be assortative when nodes with large degree tend to connect to each other, as opposed to disassortative networks, where nodes with large degree connect to nodes with low degree. Following [13], on undirected networks one defines the assortativity coefficient \(r_{und}\) as

\[r_{und}=\frac{\sum_{j,k}jk(e_{jk}-q_{j}q_{k})}{\sigma_{q}^{2}} \tag{1}\]

where the sum runs over the 'excess degrees': imagine reaching a vertex by following a specific edge; the 'excess degree' of that vertex is its degree minus the edge just followed. \(q_{k}\) is the 'excess degree' probability distribution, reading

\[q_{k}\propto(k+1)\,p_{k+1} \tag{2}\]

(with \(p_{k+1}\) being the plain degree distribution), \(\sigma_{q}^{2}\) is its variance and \(e_{jk}\) is the fraction of edges in the network connecting nodes of excess degree \(j\) with nodes of excess degree \(k\). Naturally, \(\sum_{j}e_{jk}=q_{k}\).
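As a practical aside, such assortativity coefficients can be computed with standard network libraries; the following minimal sketch uses networkx on a random toy graph (a stand-in for a weekly BUN snapshot, not the actual data) and also covers the four directed variants introduced below:

```python
import networkx as nx

# Toy stand-in for one weekly snapshot of the BUN
G = nx.gnp_random_graph(1000, 0.005, directed=True, seed=42)

# Undirected assortativity coefficient, Eq. (1), on the undirected projection
r_und = nx.degree_assortativity_coefficient(G.to_undirected())

# The four directed variants; ('out', 'in') corresponds to Eq. (3)
r_dir = {(x, y): nx.degree_assortativity_coefficient(G, x=x, y=y)
         for x in ("out", "in") for y in ("out", "in")}
print(r_und, r_dir)
```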
When considering directed networks [14], instead, four variants of the aforementioned Pearson coefficient can be calculated, i.e. the ones accounting for the correlations between out-degrees and out-degrees, out-degrees and in-degrees, in-degrees and out-degrees, and in-degrees and in-degrees. For example, one of these variants reads

\[r^{out-in}_{dir}=\frac{\sum_{j,k}jk(e_{jk}-q^{out}_{j}q^{in}_{k})}{\sigma_{q^{out}}\sigma_{q^{in}}} \tag{3}\]

where \(e_{jk}\) now represents the percentage of edges starting from nodes whose out-degree is \(j\) and ending on nodes whose in-degree is \(k\). Naturally, it also holds true that \(\sum_{j}e_{jk}=q^{in}_{k}\). Plotting the evolution of the aforementioned coefficients for our BUNs reveals their weakly disassortative nature (see Figure 2). In particular, since \(r^{out-in}_{dir}\) is 'asymptotically' zero, one can conclude that \(e_{jk}\simeq q^{out}_{j}q^{in}_{k}\) - and analogously for the other indices of directed assortativity (the years until 2011 can be considered a 'transient' period during which the Bitcoin ecosystem was still of reduced size, hence sensitive to even small structural changes).

Figure 2: Evolution of the four directed variants of Newman's assortativity coefficient, revealing the (weakly) disassortative character of our BUNs. Moreover, since \(r_{dir}^{out-in}\) is 'asymptotically' zero, one can conclude that \(e_{jk}\simeq q_{j}^{out}q_{k}^{in}\) (and analogously for the other indices).

Bitcoin core-periphery structure. Our first result concerns the Bitcoin _core-periphery-ness_. In order to analyse it, we have run a recently-proposed method [15] based on the multivariate extension of the _surprise_ score function. The application of the multivariate surprise to the partition induced by the bow-tie structure reveals that it indeed induces a significant core-periphery structure (the surprise score function is steadily below the threshold of 5%), the core being the SCC and the periphery being composed of all the other nodes. Figure 3 shows the evolution of the percentage of nodes composing the core and the periphery of our BUNs. As expected from the results concerning the SCC, the periphery contains the vast majority of nodes throughout the entire Bitcoin history. As Figure 3 shows, there seem to be three different phases: the first one coincides with the biennium 2012-2014, during which the core portion of the BUN steadily rises until it reaches 40% of the network; afterwards, during the biennium 2014-2016, it remains quite constant; then, during the last two years covered by our dataset (i.e. 2016-2018), the core portion of the BUN shrinks and the percentage of nodes belonging to it goes back to pre-2012 values.

Figure 3: Evolution of the percentage of nodes belonging to the core portion of the Bitcoin User Network.

In order to gain insight into the correlations between the evolution of purely topological quantities and the Bitcoin price, let us plot the trend of the _temporal z-score_, defined as

\[z^{(t)}_{X}=\frac{X^{(t)}-\overline{X}}{s_{X}} \tag{4}\]

for a generic quantity \(X\), where the mean \(\overline{X}=\frac{1}{Y}\sum_{t^{\prime}}X^{(t^{\prime})}\) (with \(Y\) the sample size) and the standard deviation \(s_{X}=\sqrt{\overline{X^{2}}-\overline{X}^{2}}\) have been computed over a sample of values covering the six months before time \(t\). As Figures 4 and 5 show, the calculation of the temporal z-score for the percentage of core nodes/periphery links reveals the presence of peaks in correspondence of the first three bubbles (identified by the shaded areas), thus indicating the existence of periods during which the price and the structural quantities of interest co-evolve. In particular, while during the bubbles the number of links within the core and the periphery rises significantly with respect to the previous temporal interval, the periods in between the bubbles are instead characterized by a decrease in the statistical significance of the same quantities.
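Operationally, the temporal z-score of Eq. (4) is a rolling standardization. A minimal pandas sketch, assuming a synthetic weekly series in place of the actual core-node percentages and a 26-week window as a proxy for the six-month sample:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
core_frac = pd.Series(rng.normal(0.3, 0.05, 300))  # hypothetical weekly series

# Mean and population standard deviation over the 26 weeks preceding t
roll = core_frac.rolling(window=26)
mean = roll.mean().shift(1)
std = roll.std(ddof=0).shift(1)

z = (core_frac - mean) / std  # temporal z-score, Eq. (4)
```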
Bitcoin bow-tie structure. The core-periphery structure of the BUN is characterized by an additional structure, known as the _bow-tie_ structure. The definition of _bow-tieness_ rests upon the concept of _reachability_: we say that \(j\) is _reachable_ from \(i\) if there exists a path from \(i\) to \(j\). A directed graph is said to be _strongly connected_ if any two nodes are mutually reachable. Mutual reachability is an equivalence relation on the vertices of a graph, the equivalence classes being the strongly connected components of the graph itself. The bow-tie decomposition of a graph, hence, consists of the following sets of nodes:[16]

* Strongly-Connected Component: SCC \(\equiv S\). Each node within the Strongly-Connected Component (SCC) can be reached by any other node within it; this means that a directed path exists connecting each node with each other node;
* In-Component \(\equiv\{i\in V\setminus S\,|\,S\text{ is reachable from }i\}\);
* Out-Component \(\equiv\{i\in V\setminus S\,|\,i\text{ is reachable from }S\}\);
* Tubes \(\equiv\{i\in V\setminus(S\cup\text{IN}\cup\text{OUT})\,|\,i\text{ is reachable from IN and OUT is reachable from }i\}\);
* In-Tendrils \(\equiv\{i\in V\setminus S\,|\,i\text{ is reachable from IN and OUT is not reachable from }i\}\);
* Out-Tendrils \(\equiv\{i\in V\setminus S\,|\,i\text{ is not reachable from IN and OUT is reachable from }i\}\);
* Others \(\equiv V\setminus(S\cup\text{IN}\cup\text{OUT}\cup\text{TUBES}\cup\text{IN-TENDRILS}\cup\text{OUT-TENDRILS})\).

Generally speaking, a large SCC, incorporating a considerable fraction of the nodes, starts emerging in 2012, 'stabilizes' around mid-2013 and persists until 2016. More specifically, during the biennium 2012-2013 the SCC steadily rises until it reaches \(\simeq 30\%\) of the network size; afterwards, during the biennium 2014-2015, it remains quite constant; then, during the last two years covered by our data set (i.e. 2016-2018), it shrinks and the percentage of nodes belonging to it goes back to pre-2012 values. While in the biennium 2014-2016 the percentage of nodes constituting the SCC is larger than the percentage of nodes belonging to the other subsets, since 2016 this is no longer true: in fact, while both the SCC and the OUT-component shrink, the IN-component becomes the dominant portion of the network. Different results have been reported in [17]; however, this may be due to the different data collection and data mining processes implemented there.

Figure 4: Evolution of the temporal z-score \(z_{X}^{(t)}=\frac{X^{(t)}-\overline{X}}{s_{X}}\) for the percentage of core nodes (the mean \(\overline{X}\) and the standard deviation \(s_{X}\) have been computed over a sample of values covering the six months before time \(t\)). Peaks are clearly visible in correspondence of the bubbles identified by the shaded areas, thus indicating the existence of periods during which the price and the structural quantities of interest co-evolve.

Figure 6: Temporal evolution of the percentage of nodes constituting the bow-tie components listed above. While the SCC coincides with the core portion of the BUN, the periphery gathers all other node subsets. The green vertical line indicates the Mt. Gox breakdown.
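The decomposition follows directly from the definitions above. A minimal networkx sketch (which, for brevity, lumps tubes, tendrils and others into a single residual set) is:

```python
import networkx as nx

def bow_tie(g: nx.DiGraph):
    """Split nodes into SCC, IN- and OUT-components; the remaining nodes
    (tubes, tendrils, others) are returned together as a residual set."""
    scc = max(nx.strongly_connected_components(g), key=len)
    seed = next(iter(scc))
    out_c = nx.descendants(g, seed) - scc  # reachable from the SCC
    in_c = nx.ancestors(g, seed) - scc     # the SCC is reachable from them
    rest = set(g) - scc - in_c - out_c
    return scc, in_c, out_c, rest

g = nx.gnp_random_graph(500, 0.004, directed=True, seed=1)  # toy graph
for name, part in zip(("SCC", "IN", "OUT", "rest"), bow_tie(g)):
    print(name, len(part))
```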
Dyadic motifs. Let us now consider the structure of dyadic motifs, i.e. the patterns involving two nodes. Although the number of dyadic motifs is very limited and their definition is simple, they are of interest for studying the evolution of economic networks.[18, 19] On a directed network \(G(N,E)\), given two nodes \(i,j\in N\), one can observe three disjoint possibilities:

* a _reciprocated dyad_: \((i,j),(j,i)\in E\), meaning a connection exists both from \(i\) to \(j\) and vice versa. We denote the total number of reciprocated dyads in the network as \(L^{\leftrightarrow}\);
* a _non-reciprocated dyad_: \((i,j)\in E\) or \((j,i)\in E\), meaning only one of the two possible links exists. We denote the total number of non-reciprocated dyads as \(L^{\rightarrow}\);
* an _empty dyad_: \((i,j),(j,i)\notin E\), meaning no link exists between the two nodes. The total number of empty dyads is denoted by \(L^{\not\leftrightarrow}\).

Let us now study dyadic motifs by adopting a different approach with respect to the one employed so far. Instead of using the temporal \(z\)-score to spot 'temporal' outliers, i.e. values that are statistically significant with respect to a time average, let us consider an index that points out quantities that are not compatible with a given null model. To this end, we employ the Directed Binary Configuration Model (DBCM) as the null model, from the Exponential Random Graph Model (ERGM) family [18, 20]. In this framework, a \(z\)-score of the kind

\[z[X]=\frac{X(\mathbf{A}^{*})-\langle X\rangle}{\sigma[X]} \tag{5}\]

remains naturally defined, where \(X(\mathbf{A}^{*})\) is the empirical value of the quantity of interest (i.e. observed on the original network \(\mathbf{A}^{*}\)), while \(\langle X\rangle\) and \(\sigma[X]\) are, respectively, its expectation value and its standard deviation, both computed on the ensemble induced by the DBCM. The interpretation of this \(z\)-score is the following: values such that \(z[X]>+3\) signal that the empirical value is significantly larger than expected, while values such that \(z[X]<-3\) signal that the empirical value is significantly smaller than expected. In both cases one may conclude that the empirical value \(X(\mathbf{A}^{*})\) is not compatible with the specific model and something else is required to fully account for it. On the other hand, if \(-3\leq z[X]\leq+3\), there is no evidence of a significant deviation from the expected value and one may conclude that \(X(\mathbf{A}^{*})\) is completely explained by the constraints defining the model at hand. This exercise gives us an idea of how much the weekly BTC transaction networks differ from randomized networks derived from their degree distributions. The results are shown in Figure 7. The first thing that stands out are the out-of-scale values of the reciprocated-dyads \(z\)-score, which makes total sense: the BUNs (and, in general, the networks obtained from BTC transactions) are very sparse networks, where the large majority of nodes have very few connections; it is natural that each bilateral connection appears very odd in comparison to a randomized network with the same number of nodes and links.
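For reference, the empirical abundances of the three dyadic motifs (cf. Table 2 below) can be counted directly from the edge list; a small sketch on a toy directed graph:

```python
import networkx as nx

G = nx.gnp_random_graph(500, 0.01, directed=True, seed=1)  # toy graph

# Reciprocated dyads: each contributes two edges and is counted twice below
L_recip = sum(1 for u, v in G.edges() if G.has_edge(v, u)) // 2
# Non-reciprocated dyads: edges that do not belong to a reciprocated pair
L_single = G.number_of_edges() - 2 * L_recip
# Empty dyads: all remaining unordered node pairs
n = G.number_of_nodes()
L_empty = n * (n - 1) // 2 - L_recip - L_single
print(L_recip, L_single, L_empty)
```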
The interpretation of the behavior of empty and single dyads is analogous: by chance, a larger-than-observed number of non-reciprocated dyads is created, whence their over-representation within the DBCM ensemble and the negative \(z\)-score recovered by our analysis. In order to understand why this implies that the DBCM tends to create fewer-than-observed empty dyads, let us imagine 'destroying' a reciprocated dyad by decoupling the two paired links: in order to create more single dyads, one of the two links must be redirected towards a previously disconnected node; upon doing so, a reciprocated dyad disappears, as well as an empty dyad, while two single dyads are created.

\begin{table} \begin{tabular}{c|c} \hline Dyadic motif \(m\) & Abundance \(N_{m}\) \\ \hline \(L^{\not\leftrightarrow}\) & \(\sum_{i\neq j}(1-a_{ij})(1-a_{ji})\) \\ \(L^{\rightarrow}\) & \(\sum_{i\neq j}a_{ij}(1-a_{ji})\) \\ \(L^{\leftrightarrow}\) & \(\sum_{i\neq j}a_{ij}a_{ji}\) \\ \hline \end{tabular} \end{table} Table 2: Definition of the abundance of dyadic motifs.

Figure 7: Left panels: evolution of the empirical value of the dyads (from top to bottom: empty, non-reciprocated and reciprocated dyads). Right panels: evolution of the dyads' \(z\)-scores, computed over the ensemble induced by the Directed Binary Configuration Model. Dashed, gray lines signal the values of \(\pm 2\) and \(\pm 3\). Points are coloured according to the value of the log-return of the Bitcoin price in USD in that week. Shaded areas indicate periods during which the price grows.

Centrality and centralization. We computed four different centrality measures: degree centrality, closeness centrality, betweenness centrality and eigenvector centrality. The normalized closeness centrality of a node is the inverse of the average length of the shortest paths between that node and all the others: the more central a node is, the closer it is to all the other nodes. Betweenness centrality counts the number of times a node lies on the shortest path connecting two other nodes; it was introduced to measure the control a node has over the communication between the other nodes of the network. Eigenvector centrality measures the influence of a node over the network: each node is assigned a relative score, based on the idea that being connected to high-scoring nodes increases a node's own score. We condense the information given by the centralities for each network snapshot by considering two different aggregate indices derived from each centrality distribution: the Gini index and the centralization index. The Gini coefficient attempts to measure the unevenness of the distribution of a certain quantity1: given a set of values \(\{c_{i}\}_{i=1}^{N}\), the Gini index is defined as

\[G_{c}=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}|c_{i}-c_{j}|}{2N\sum_{i=1}^{N}c_{i}} \tag{6}\]

Footnote 1: Usually, it is employed to measure the unevenness of the income distribution[21]

and assumes values between 0 and 1; while a Gini index of 0 indicates perfect evenness (e.g. everyone has exactly the same income), a Gini index of 1 indicates perfect unevenness (e.g. a population whose entire income is concentrated in the hands of a single individual). Applying the Gini coefficient to the degrees of our BUNs sheds light on the (un)evenness of the nodes' degree distribution: while a value close to 0 would depict an ecosystem where all actors have exactly the same number of interactions with each other, a value close to 1 would indicate that a few nodes participate in the vast majority of transactions.
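Eq. (6) translates into a few lines of numpy; the pairwise form below is quadratic in memory, which is acceptable for a sketch but would need the sorted-array variant for very large snapshots:

```python
import numpy as np

def gini(values):
    """Gini coefficient of a set of centrality values, Eq. (6)."""
    c = np.asarray(values, dtype=float)
    pairwise = np.abs(c[:, None] - c[None, :]).sum()
    return pairwise / (2 * len(c) * c.sum())

print(gini([1, 1, 1, 1]))      # 0.0: perfectly even
print(gini([0, 0, 0, 100.0]))  # 0.75: highly concentrated
```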
Centralization indices are global measures intended to quantify the centralization of the entire network (instead of providing a rank of its nodes). In mathematical terms, the centralization reads

\[C_{c}=\frac{\sum_{i=1}^{N}(c^{*}-c_{i})}{\max\left\{\sum_{i=1}^{N}(c^{*}-c_{i})\right\}} \tag{7}\]

where \(c^{*}=\max\{c_{i}\}_{i=1}^{N}\) represents the empirical maximum value of the chosen centrality measure (i.e. computed on the network under consideration) and the denominator is calculated over a benchmark graph, defined as the one providing the maximum attainable value of the quantity \(\sum_{i=1}^{N}(c^{*}-c_{i})\). This benchmark graph is nothing other than a star graph (with the same number of nodes as the network under inspection). For each centrality, we compute the centralization index by computing the corresponding star-graph maximum; for the degree, this yields

\[C_{k}=\frac{\sum_{i=1}^{N}(k^{*}-k_{i})}{(N-1)(N-2)} \tag{8}\]

and the degree centralization reveals whether (and, in case, 'how much') Bitcoin has become similar to a star graph at a certain point during its history. From the results in Figure 8 it appears that, after a period of growth lasting until mid-2013, during which it reached values as large as 0.75, the Gini coefficient has decreased and is now steadily around the value of 0.5. Overall, we would like to stress that 0.5 is not a small value: in fact, it describes an ecosystem where 50% of the connections are incident to 1% of the nodes. It is also interesting to notice the big drop of the Gini coefficient in 2013: during that year, Mt. Gox (which managed \(\simeq\) 70% of transactions at the time[22]) started the downward spiral which eventually led to its bankruptcy in 2014: the halting of USD withdrawals, financial investigations and expensive lawsuits weakened the trading website's ability to stay on the market. The final blow was the public discovery of a huge theft of around 750,000 bitcoins, which had gone undetected for years. The huge decrease of the Gini coefficient may thus be related to Mt. Gox's loss of prominence in the Bitcoin ecosystem. On the other hand, bubble periods seem to have little correlation with the evolution of the Gini coefficient.

Figure 8: Temporal evolution of the Gini coefficient and of the centralisation indices for the four centrality measures under examination.

The evolution of the Gini coefficient may lead us to imagine that the Bitcoin ecosystem has become similar to a very centralized structure, much like a star graph, at some point during its history. In order to answer this question, we have computed the so-called _centralization index_ at the weekly time scale (from [7]). While during the initial phases of its life Bitcoin was indeed quite similar to a star graph, Figure 8 reveals that the degree centralization quickly stabilized around very small values. Overall, we may thus conclude that Bitcoin is not evolving towards a star-like structure where a single central node participates in all transactions. However, the large value of the Gini coefficient leads us to suspect that there may be several hubs: hence, the unrealistic picture of a single star-like structure may be replaced by the more realistic one depicting several 'locally star-like' structures. The centers of these structures are 'local hubs', i.e. vertices with a large number of connections, that are crossed by a large percentage of paths and that are connected among themselves.
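The degree centralization of Eq. (8) is equally compact; as a minimal sketch (a star graph yields exactly 1 by construction):

```python
import networkx as nx
import numpy as np

def degree_centralization(g):
    """Freeman degree centralization, Eq. (8)."""
    k = np.array([d for _, d in g.degree()], dtype=float)
    n = len(k)
    return (k.max() - k).sum() / ((n - 1) * (n - 2))

print(degree_centralization(nx.star_graph(99)))                  # 1.0
print(degree_centralization(nx.gnp_random_graph(100, 0.1, seed=3)))  # small
```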
In order to justify this apparent contradiction, in the appendix we show a toy model which partially reproduces the results.

Small-world. In order to explore the small-world properties of the Bitcoin Transaction Networks, we compute the evolution of the Average Path Length (APL) \(L\) and of the global clustering coefficient \(c\) for the BUNs in our dataset.[23] Because of computational constraints, the analysis is limited to the first half of the period under examination (until 2014). Please note that the measures are taken on the giant connected component. The results are displayed in Figure 9: in the period under examination, both measures seem to be stable. The clustering coefficient floats around very small values, of the order of \(10^{-2}\), while the APL is stable between 10 and 15, with rare exceptions. From Figure 9(a) it is clear that a linear relation between \(L\) and \(\log N\) does hold: this means that at least the closely-connected behaviour of small-world networks is present in the Bitcoin Transaction Networks. Keeping this in mind, we cannot ignore that the BUNs we analysed have a very low clustering, meaning that it is rare to observe two neighbouring nodes (i.e. users) of a node connected to each other. In order to put the clustering values in perspective, we plot the BUNs' coefficient against the clustering coefficient of an Erdos-Renyi randomization. In Figure 9(b) we can see that, while the magnitude of the clustering is small both for the original networks and for the Erdos-Renyi randomization, the original clustering is consistently at least one order of magnitude larger than the Erdos-Renyi one over time: this means that, while there are not many triangles, the Bitcoin Transaction Networks have more triangles than an equivalent random graph.

Figure 9: Panel (a): scatter plot of the logarithm of the number of nodes versus the average path length \(L\); each marker is a weekly BUN between 2011 and 2014. Panel (b): the time series of the BUNs' clustering coefficient versus the clustering coefficient of the Erdos-Renyi random graph generated with the same number of nodes and edges as the same-week BUN counterpart.
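The comparison underlying Figure 9(b) can be reproduced in a few lines; the sketch below uses a scale-free toy graph as a stand-in for a weekly BUN and contrasts its clustering with that of a size-matched Erdos-Renyi graph:

```python
import networkx as nx

G = nx.barabasi_albert_graph(2000, 2, seed=7)  # toy stand-in for a weekly BUN

n, m = G.number_of_nodes(), G.number_of_edges()
G_er = nx.gnm_random_graph(n, m, seed=7)  # same number of nodes and edges

c_obs = nx.average_clustering(G)
c_er = nx.average_clustering(G_er)
print(c_obs, c_er)  # the observed value is typically an order of magnitude larger
```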
Although we find that measures can explain well some price bubbles but not others, these results highlight that tracking properties of hubs in the transaction network is key for understanding the underlying mechanisms of a bubble. Moreover, at least in the first three Bitcoin bubbles, the behaviour of hubs significantly increased the systemic risk of the Bitcoin economy, eventually leading to systemic failure and sudden price crashes. These results also suggest that Bitcoin bubbles are difficult to forecast, but can be prevented, or at least alleviated, by introducing policies that aim at reducing the importance of large hubs in the network. In future work, we plan to extend our analysis by introducing new structural measures and by covering all the bubbles that happened to date. These structural changes suggest the presence of two hubs that centralise the market by selling bitcoins to most of the traders that enter the market during the bubble, resulting in a significant increase of the systemic risk. Indeed, if only a few hubs account for most of the transactions in the network, if at any point in time one of them fails, the whole network may crash. This is exactly what happened on April 10th 2013, when Mt Gox, the major Bitcoin exchange, broke under the high trading volume, triggering the burst of the bubble.
2305.02412
Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents
Pre-trained large language models (LLMs) capture procedural knowledge about the world. Recent work has leveraged LLM's ability to generate abstract plans to simplify challenging control tasks, either by action scoring, or action modeling (fine-tuning). However, the transformer architecture inherits several constraints that make it difficult for the LLM to directly serve as the agent: e.g. limited input lengths, fine-tuning inefficiency, bias from pre-training, and incompatibility with non-text environments. To maintain compatibility with a low-level trainable actor, we propose to instead use the knowledge in LLMs to simplify the control problem, rather than solving it. We propose the Plan, Eliminate, and Track (PET) framework. The Plan module translates a task description into a list of high-level sub-tasks. The Eliminate module masks out irrelevant objects and receptacles from the observation for the current sub-task. Finally, the Track module determines whether the agent has accomplished each sub-task. On the AlfWorld instruction following benchmark, the PET framework leads to a significant 15% improvement over SOTA for generalization to human goal specifications.
Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, Shrimai Prabhumoye
2023-05-03T20:11:22Z
http://arxiv.org/abs/2305.02412v2
# Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents

###### Abstract

Pre-trained large language models (LLMs) capture procedural knowledge about the world. Recent work has leveraged the LLM's ability to generate abstract plans to simplify challenging control tasks, either by action scoring or by action modeling (fine-tuning). However, the transformer architecture inherits several constraints that make it difficult for the LLM to directly serve as the agent: e.g. limited input lengths, fine-tuning inefficiency, bias from pre-training, and incompatibility with non-text environments. To maintain compatibility with a low-level trainable actor, we propose to instead use the _knowledge_ in LLMs to simplify the control problem, rather than solving it. We propose the Plan, Eliminate, and Track (**PET**) framework. The Plan module translates a task description into a list of high-level sub-tasks. The Eliminate module masks out irrelevant objects and receptacles from the observation for the current sub-task. Finally, the Track module determines whether the agent has accomplished each sub-task. On the AlfWorld instruction following benchmark, the **PET** framework leads to a significant 15% improvement over SOTA for generalization to human goal specifications.

## 1 Introduction

Humans can abstractly plan their everyday tasks without execution; for example, given the task "Make breakfast", we can roughly plan to first pick up a mug and make coffee, before grabbing eggs to scramble. Embodied agents endowed with this capability will generalize more effectively by leveraging common-sense reasoning. Recent work (Huang et al., 2022; Ahn et al., 2022; Yao et al., 2020) has used LLMs (Bommasani et al., 2021) for abstract planning for embodied or gaming agents. These works have shown incipient success in extracting procedural world knowledge from LLMs in linguistic form, with post-hoc alignment to executable actions in the environment. However, they treat the LLM as the actor and focus on adapting LLM outputs to executable actions, either through fine-tuning (Micheli and Fleuret, 2021) or constraints (Ahn et al., 2022). Using an LLM as the actor works for pure-text environments with limited interactions (Huang et al., 2022; Ahn et al., 2022) (consisting only of "picking/placing" objects), but limits generalization to other modalities. In addition, the scenarios considered have been largely simplified from the real world: Ahn et al. (2022) provides all available objects and possible interactions at the start and limits tasks to the set of provided objects/interactions, while Huang et al. (2022) limits the environment to objects on a single table. On the other hand, to successfully "cut some lettuce" in a real-world room, one first has to "find a knife", which can be non-trivial since there can be multiple drawers or cabinets (Chaplot et al., 2020; Min et al., 2021; Blukis et al., 2021). A more realistic scenario thus leads to a diverse, complicated set of tasks and a large, changing action space. Furthermore, the text description of the observation grows with the number of receptacles and objects the agent sees; combined with growing roll-outs, the state becomes too verbose to fit into any LLM.

Figure 1: PET framework. The Plan module uses an LLM to generate a high-level plan. The Eliminate module uses a QA model to mask irrelevant objects in the observation. The Track module uses a QA model to track the completion of sub-tasks.
In this work, we explore alternative mechanisms to leverage the prior knowledge encoded in LLMs without impacting the trainable nature of the actor. We propose a 3-step framework (Figure 1): Plan, Eliminate, and Track (PET). The **Plan** module simplifies complex tasks by breaking them down into sub-tasks. It uses a pre-trained LLM to generate a list of sub-tasks for an input task description, employing example prompts from the training set, similar to Huang et al. (2022); Ahn et al. (2022). The **Eliminate** module addresses the challenge of long observations. It uses a zero-shot QA language model to score and mask objects and receptacles that are irrelevant to the current sub-task. The **Track** module uses a zero-shot QA language model to determine if the current sub-task is complete, and then moves to the next sub-task. Finally, the **Action Attention** agent uses a transformer-based architecture to accommodate long roll-outs and a variable-length action space. The agent observes the masked observation and takes an action conditioned on the current sub-task. We focus on instruction following in indoor households on the AlfWorld (Shridhar et al., 2020) interactive text environment benchmark. Our experiments and analysis demonstrate that LLMs not only remove 40% of task-irrelevant objects from the observation through common-sense QA, but also generate high-level sub-tasks with 99% accuracy. In addition, multiple LLMs may be used in coordination with each other to assist the agent from different aspects. Our contributions are as follows:

1. **PET**: A novel framework for leveraging pre-trained LLMs with embodied agents; our work shows that each of P, E, and T serves a complementary role and that they should be addressed simultaneously to tackle control tasks.
2. An Action Attention agent that handles the changing action space of text environments.
3. A 15% improvement over SOTA for generalization to human goals via sub-task planning and tracking.

## 2 Related Work

Language Conditioned Policies. A considerable portion of prior work studies imitation learning (Tellex et al., 2011; Mei et al., 2016; Nair et al., 2022; Stepputtis et al., 2020; Jang et al., 2022; Shridhar et al., 2022; Sharma et al., 2021) or reinforcement learning (Misra et al., 2017; Jiang et al., 2019; Cideron et al., 2020; Goyal et al., 2021; Nair et al., 2022; Akakzia et al., 2020) policies conditioned on a natural language instruction or goal (MacMahon et al., 2006; Kollar et al., 2010). While some prior research has used pre-trained language embeddings to improve generalization to new instructions (Nair et al., 2022), such embeddings lack the domain knowledge that is captured in LLMs. Our PET framework enables planning, progress tracking, and observation filtering through the use of LLMs, and is designed to be compatible with any of the language-conditioned policies above.

LLMs for Control. LLMs have recently achieved success in high-level planning. Huang et al. (2022) show that pre-trained LLMs can generate plausible plans for day-to-day tasks, but the generated sub-tasks cannot be directly executed in an end-to-end control environment. Ahn et al. (2022) solves the executability issue by training an action-scoring model to re-weigh LLM action choices and demonstrates success on a robot. However, LLM scoring works for simple environments with actions limited to pick/place (Ahn et al., 2022), but fails in environments with more objects and more diverse actions (Shridhar et al., 2020). Song et al.
(2022) uses GPT3 to generate step-by-step low-level commands, which are then executed by the respective control policies; this work improves on Ahn et al. (2022) with more action diversity and on-the-fly re-planning. In addition, all the above LLMs require few-shot demonstrations of up to 17 examples, making the length of the prompt infeasible for AlfWorld. Micheli and Fleuret (2021) fine-tuned a GPT2-medium model on expert trajectories in AlfWorld and demonstrated impressive evaluation results. However, LM fine-tuning requires a fully text-based environment, consistent expert trajectories, and a fully text-based action space. Such requirements greatly limit generalization to other domains, and even to other forms of task specification. We show that our PET framework achieves better generalization to human goal specifications, on which the agents were not trained.

Hierarchical Planning with Natural Language. Due to the structured nature of natural language, Andreas et al. (2017) explored associating each task description with a modular sub-policy. Later works extend this approach by using a single conditional policy (Mei et al., 2016) or by matching sub-tasks to templates (Oh et al., 2017). Recent works have shown that LLMs are proficient high-level planners (Huang et al., 2022; Ahn et al., 2022; Lin et al., 2022), motivating us to revisit the idea of hierarchical task planning with progress tracking. To our knowledge, PET is the first work combining a zero-shot subtask-level LLM planner and a zero-shot LLM progress tracker with a low-level conditional sub-task policy.

Text Games. Text-based games are complex, interactive simulations where the game state and action space are expressed in natural language. They are fertile ground for language-focused machine learning research. In addition to language understanding, successful play requires skills like memory and planning, exploration (trial and error), and common sense. The AlfWorld (Shridhar et al., 2020) simulator extends a common text-based game simulator, TextWorld (Cote et al., 2018), to create text-based analogs of each ALFRED scene.

Agents for Large Action Spaces. He et al. (2015) learns representations for states and actions with two different models and computes the Q function as the inner product of the representations. While this could generalize to a large action space, they only considered a small number of actions. Fulda et al. (2017); Ahn et al. (2022) explore action elimination in the setting of affordances. Zahavy et al. (2018) trains a model to eliminate invalid actions on Zork using external environment signals; however, this functionality depends on the existence of an external elimination signal.

## 3 Plan, Eliminate, and Track

In this section, we explain our 3-step framework: Plan, Eliminate, and Track (PET). In the **Plan** module (\(\mathcal{M}_{\mathbf{P}}\)), a pre-trained LLM generates a list of sub-tasks for an input task description, using samples from the training set as in-context examples. The **Eliminate** module (\(\mathcal{M}_{\mathbf{E}}\)) uses a zero-shot QA language model to score and mask objects and receptacles that are irrelevant to the current sub-task. The **Track** module (\(\mathcal{M}_{\mathbf{T}}\)) uses a zero-shot QA language model to determine if the current sub-task is complete and moves to the next sub-task. Note that Plan is a generative task, while Eliminate and Track are classification tasks.
We also implement an attention-based **agent** (Action Attention), which scores each permissible action and is trained by imitation learning on the expert. The agent observes the masked observation and takes an action conditioned on the current sub-task.

Problem Setting. We define the task description as \(\mathcal{T}\), the observation string at time step \(t\) as \(\mathcal{O}^{t}\), and the list of permissible actions \(\{a_{i}^{t}\,|\,a_{i}^{t}\) can be executed\(\}\) as \(A^{t}\). For each observation string \(\mathcal{O}^{t}\), we define the receptacles and objects within the observation as \(r_{j}^{t}\) and \(o_{i}^{t}\) respectively. The classification between receptacles and objects is defined by the environment (Shridhar et al., 2020). For a task \(\mathcal{T}\), we assume there exists a list of sub-tasks \(\mathcal{S}_{\mathcal{T}}=\{s_{1},\ldots s_{k}\}\) that solves \(\mathcal{T}\).

### Plan

Tasks in the real world are often complex and need more than one step to be completed. Motivated by the ability of humans to plan high-level sub-tasks given a complex task, we design the **Plan** module (\(\mathcal{M}_{\mathbf{P}}\)) to generate a list of high-level sub-tasks for a task description \(\mathcal{T}\). Inspired by the contextual prompting techniques for planning with LLMs (Huang et al., 2022), we use an LLM as our plan module \(\mathcal{M}_{\mathbf{P}}\). For a given task description \(\mathcal{T}\), we compose the query question \(\mathcal{Q}_{\mathcal{T}}\) as "What are the middle steps required to \(\mathcal{T}\)?", and require \(\mathcal{M}_{\mathbf{P}}\) to generate a list of sub-tasks \(\mathcal{S}_{\mathcal{T}}=\{s_{1},\ldots s_{k}\}\). Specifically, we select the top 5 example tasks \(\mathcal{T}^{E}\) from the training set based on RoBERTa (Liu et al., 2019) embedding similarity with the query task \(\mathcal{T}\). We then concatenate the example tasks with the example sub-tasks in a query-answer format to build the prompt \(\mathcal{P}_{\mathcal{T}}\) for \(\mathcal{M}_{\mathbf{P}}\) (Fig. 2):

\[\mathcal{P}_{\mathcal{T}}=\mathtt{concat}(\mathcal{Q}_{\mathcal{T}^{E}_{1}},\mathcal{S}_{\mathcal{T}^{E}_{1}},\ldots,\mathcal{Q}_{\mathcal{T}^{E}_{n}},\mathcal{S}_{\mathcal{T}^{E}_{n}},\mathcal{Q}_{\mathcal{T}})\]

An illustration of our prompt format is shown in Figure 2, where \(\mathcal{T}=\)"heat some apple and put it in fridge", \(\mathcal{Q}_{\mathcal{T}^{E}_{1}}=\)"What are the middle steps required to put two spraybottles on toilet?", and \(\mathcal{S}_{\mathcal{T}^{E}_{1}}=\)"take a spraybottle, place the spraybottle in/on toilet, take a spraybottle, place the spraybottle in/on toilet". The expected list of sub-tasks to achieve this task \(\mathcal{T}\) is \(s_{1}=\)'take an apple', \(s_{2}=\)'heat the apple', and \(s_{3}=\)'place the apple in/on fridge'.

Figure 2: Plan Module (Sub-task Generation). 5 full examples are chosen from the training set based on RoBERTa embedding similarity with the task query description. The examples are then concatenated with the task query to form the prompt. Finally, we prompt the LLM to generate the desired sub-tasks.
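A minimal sketch of this prompt construction is given below; the sentence-transformers encoder and its checkpoint name are stand-ins for the RoBERTa embeddings used in the paper, and `train_tasks`/`train_subtasks` are hypothetical containers for the training annotations:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in encoder

encoder = SentenceTransformer("all-roberta-large-v1")  # assumed checkpoint

def build_plan_prompt(task, train_tasks, train_subtasks, k=5):
    """Pick the k most similar training tasks and assemble the few-shot prompt."""
    emb = encoder.encode([task] + train_tasks, normalize_embeddings=True)
    sims = emb[1:] @ emb[0]  # cosine similarity to the query task
    parts = []
    for i in np.argsort(-sims)[:k]:
        parts.append(f"What are the middle steps required to {train_tasks[i]}?")
        parts.append(", ".join(train_subtasks[i]))
    parts.append(f"What are the middle steps required to {task}?")
    return "\n".join(parts)
```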
### Eliminate

Typical AlfWorld scenes can start with around 15 receptacles, each containing up to 15 objects. In some close-to-worst cases, there can be around 30 openable receptacles (e.g. a kitchen with many cabinets and drawers), and it easily takes an agent with no prior knowledge more than 50 steps to find the desired object (repeating the process of visiting each receptacle, opening it, and closing it). A typical observation and receptacle query (cf. Fig. 3) looks as follows:

"You are in the middle of a room. Looking quickly around you, you see a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 2, a countertop 1, a diningtable 1, a drawer 5, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a garbagecan 1, a sinkbasin 1, and a microwave 1. Your task is to heat some apple and put it in the fridge. Where should you go?"

We observe that many receptacles and objects are irrelevant to specific tasks during both training and evaluation, and can be easily filtered out with common-sense knowledge about the tasks. For example, in Fig. 3 the task is to heat some apple: by removing irrelevant receptacles like the coffeemachine and garbagecan, or objects like the knife, we can significantly shorten our observation. We therefore propose to leverage the common-sense knowledge captured by large pre-trained QA models to design our Eliminate module \(\mathcal{M}_{\mathbf{E}}\), which masks out irrelevant receptacles and objects. For a task \(\mathcal{T}\), we create prompts in the format \(\mathcal{P}_{r}=\)"Your task is to: \(\mathcal{T}\). Where should you go to?" for receptacles and \(\mathcal{P}_{o}=\)"Your task is to: \(\mathcal{T}\). Which objects will be relevant?" for objects. Using the pre-trained QA model \(\mathcal{M}_{\mathbf{E}}\) in a zero-shot manner, we compute a score \(\mu_{o_{i}}=\mathcal{M}_{\mathbf{E}}(\mathcal{P}_{o},o_{i})\) for each object \(o_{i}\) and \(\mu_{r_{j}}=\mathcal{M}_{\mathbf{E}}(\mathcal{P}_{r},r_{j})\) for each receptacle \(r_{j}\) in the observation at every step. \(\mu\) represents the belief score of whether the common-sense QA model considers the object/receptacle relevant to \(\mathcal{T}\). We then remove \(o_{i}\) from the observation if \(\mu_{o_{i}}<\tau_{o}\), and remove \(r_{j}\) if \(\mu_{r_{j}}<\tau_{r}\). The thresholds \(\tau_{o},\tau_{r}\) are hyper-parameters.

Figure 3: Eliminate Module (Receptacle Masking). We use a pre-trained QA model to filter irrelevant receptacles/objects in the observation of each scene. As we can see, the original observation is too long and the receptacles shown in red are not relevant for task completion. These receptacles are filtered out by the QA model, making the observation shorter.
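The masking itself is a simple thresholding step. A minimal sketch, where `qa_score(prompt, candidate)` is an assumed wrapper around the QA model returning its relevance belief:

```python
def eliminate(task, receptacles, objects, qa_score, tau_r=0.4, tau_o=0.4):
    """Keep only receptacles/objects the QA model deems relevant to the task."""
    p_r = f"Your task is to: {task}. Where should you go to?"
    p_o = f"Your task is to: {task}. Which objects will be relevant?"
    kept_r = [r for r in receptacles if qa_score(p_r, r) >= tau_r]
    kept_o = [o for o in objects if qa_score(p_o, o) >= tau_o]
    return kept_r, kept_o
```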
for the current sub-task and the question as "Did you finish the task of \(s_{p}\)?". For efficiency, we set \(d:=\min(d+1,3)\) at each step. Note that \(d\) is reset to 1 whenever the progress tracker updates. Hence, the prompt template is \(\mathcal{P}_{a}=\mathtt{concat}(\mathcal{O}^{t-d},\ldots,\mathcal{O}^{t-1},\) "Did you finish the task of \(s_{p}\)?"). We feed \(\mathcal{P}_{a}\) to a pre-trained zero-shot QA model \(\mathcal{M}_{\mathbf{T}}\) and compute the probabilities of the tokens 'Yes' and 'No': \(p_{\mathcal{M}_{\mathbf{T}}}(``Yes"|\mathcal{P}_{a})\) and \(p_{\mathcal{M}_{\mathbf{T}}}(``No"|\mathcal{P}_{a})\). If \(p_{\mathcal{M}_{\mathbf{T}}}(``Yes"|\mathcal{P}_{a})>p_{\mathcal{M}_{\mathbf{T}}}(``No"|\mathcal{P}_{a})\), we increment the tracker \(p\) to track the next sub-task. If the tracking ends prematurely, meaning that \(p>len(\mathcal{S}_{\mathcal{T}})\) but the environment has not returned "done", we fall back to conditioning on \(\mathcal{T}\). We study the rate of premature ends in Section 4.4 in terms of precision and recall.

### Agent

Since the number of permissible actions can vary a lot by the environment, the agent needs to handle arbitrary dimensions of action space. While Shridhar et al. (2020) address this challenge by generating actions token-by-token, such a generation process leads to degenerate performance even on the training set. We draw inspiration from the field of text summarization, where models are built to handle variable input lengths. See et al. (2017) generate a summary through an attention-like "pointing" mechanism that extracts the output word by word. Similarly, an attention-like "pointing" model can be used to select an action from the list of permissible actions.

**Action Attention.** We are interested in learning a policy \(\pi\) that outputs the optimal action among the permissible actions. We sidestep the long-rollout and large-action-space problems by (1) representing observations by averaging over history, and (2) individually encoding actions (Fig. 5). In our proposed action attention framework, we first represent historical observations \(H^{t}\) as the average of the embeddings of all individual observations through history (Eq. 1), and \(H^{A}\) as the list of embeddings of all the current permissible actions (Eq. 2). Then, in Eq. 3, we compute the query \(Q\) using a transformer with a "query" head (\(\mathcal{M}_{\mathcal{Q}}\)) on the task embedding (\(\mathrm{Embed}(\mathcal{T})\)), the history embedding (\(H^{t}\)), the current observation embedding (\(\mathrm{Embed}(\mathcal{O}^{t})\)), and the list of action embeddings (\(H^{A}\)). In Eq. 4 we compute the key \(K_{i}\) for each action \(a_{i}\) using the same transformer with a "key" head (\(\mathcal{M}_{\mathcal{K}}\)) on the task embedding, the history embedding, the current observation embedding, and the embedding of action \(a_{i}\). Finally, we compute the dot-product of the query and keys as action scores for the policy \(\pi\) (Eq. 5).

\[H^{t} =\mathrm{avg}_{j\in[1,t-1]}\mathrm{Embed}(\mathcal{O}^{j}) \tag{1}\]
\[H^{A} =\left[\mathrm{Embed}(a_{1}^{t}),...,\mathrm{Embed}(a_{n}^{t})\right] \tag{2}\]
\[Q =\mathcal{M}_{\mathcal{Q}}\left(\mathrm{Embed}(\mathcal{T}),H^{t},\mathrm{Embed}(\mathcal{O}^{t}),H^{A}\right) \tag{3}\]
\[K_{i} =\mathcal{M}_{\mathcal{K}}\left(\mathrm{Embed}(\mathcal{T}),H^{t},\mathrm{Embed}(\mathcal{O}^{t}),\mathrm{Embed}(a_{i}^{t})\right) \tag{4}\]
\[\pi =\mathrm{softmax}\left([Q\cdot K_{i}\,|\,i\in\text{all permissible actions}]\right) \tag{5}\]
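As a concrete (and non-authoritative) rendering of Eqs. 1-5, the sketch below implements Action Attention in PyTorch. The paper specifies a 12-layer, 12-head transformer with hidden size 384, RoBERTa embeddings of dimension 1024, and two linear heads; the input projection and the first-token readout are our own illustrative choices.

```python
import torch
import torch.nn as nn

class ActionAttention(nn.Module):
    """Scores a variable-size set of permissible actions (Eqs. 1-5)."""
    def __init__(self, emb_dim=1024, hidden=384):
        super().__init__()
        self.proj = nn.Linear(emb_dim, hidden)  # map embeddings to hidden size
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=12,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=12)
        self.q_head = nn.Linear(hidden, hidden)  # "query" head M_Q
        self.k_head = nn.Linear(hidden, hidden)  # "key" head M_K

    def forward(self, task_emb, hist_obs_embs, obs_emb, action_embs):
        # Eq. 1: H^t is the average of past observation embeddings.
        h_t = hist_obs_embs.mean(dim=0)
        n = action_embs.size(0)
        # Shared context tokens: task, history, current observation.
        ctx = torch.stack([task_emb, h_t, obs_emb])          # (3, emb_dim)
        # Eq. 3: query from context plus all action embeddings (Eq. 2).
        q_in = self.proj(torch.cat([ctx, action_embs], 0)).unsqueeze(0)
        q = self.q_head(self.encoder(q_in)[:, 0])            # (1, hidden)
        # Eq. 4: one key per action, from context plus that action alone.
        k_in = self.proj(torch.cat([ctx.unsqueeze(0).expand(n, -1, -1),
                                    action_embs.unsqueeze(1)], dim=1))
        k = self.k_head(self.encoder(k_in)[:, 0])            # (n, hidden)
        # Eq. 5: action scores are query-key dot products.
        return torch.softmax(q @ k.T, dim=-1).squeeze(0)     # (n,)
```

## 4 Experiments and Results

We present our experiments as follows.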
First, we explain the environment setup and baselines for our experiments. Then we compare PET to the baselines on different splits of the environment. Finally, we conduct ablation studies and analyze the PET framework part by part. We show that PET generalizes better to human goal specifications under efficient behavior cloning training.

### Experimental Details

**AlfWorld Environment.** ALFWorld (Shridhar et al., 2020) is a set of TextWorld environments (Côté et al., 2018) that are parallels of the ALFRED embodied dataset (Shridhar et al., 2020). ALFWorld includes 6 task types that each require solving multiple compositional sub-goals. There are 3553 training task instances ({tasktype, object, receptacle, room}), 140 in-distribution evaluation task instances (seen split - tasks themselves are novel but take place in rooms seen during training) and 134 out-of-distribution evaluation task instances (unseen split - tasks take place in novel rooms). An example of a task could be: "Rinse the egg to put it in the microwave." Each training instance in AlfWorld comes with an expert, from which we collected our training demonstrations.

**Human Goal Specification.** The crowd-sourced human goal specifications for evaluation contain 66 unseen verbs and 189 unseen nouns (Shridhar et al., 2020). In comparison, the template goals use only 12 ways of goal specification. In addition, the sentence structure for human goal specifications is more diverse compared to the template goals. Therefore, human goal experiments are good for testing the generalization of models to out-of-distribution scenarios.

**Pre-trained LMs.** For the **Plan** module (sub-task generation), we experimented with the open-source GPT-Neo-2.7B (Black et al., 2021), and an industry-scale LLM with 530B parameters (Smith et al., 2022). For the **Eliminate** module (receptacle/object masking), we choose Macaw-11b (Tafjord and Clark, 2021), which is reported to have common-sense QA performance on par with GPT-3 (Brown et al., 2020) while being orders of magnitude smaller. We use a decision threshold of 0.4 for the Macaw score, below which objects are masked out. For the **Track** module (progress tracking), we use the same Macaw-11b model as in the Eliminate module to answer Yes/No questions.

**Actor Model Design.** Our **Action Attention** agent (\(\mathcal{M}_{\mathcal{Q}}\) and \(\mathcal{M}_{\mathcal{K}}\)) is a 12-layer transformer with 12 heads and hidden dimension 384. The last layer is then fed into two linear heads to generate \(K\) and \(Q\). For embedding of actions and observations, we use pre-trained RoBERTa-large (Liu et al., 2019) with embedding dimension 1024. For sub-task generation, we use ground-truth sub-tasks for training, and generated sub-tasks from the Plan module for evaluation.

**Experimental Setup.** Unlike the original benchmark (Shridhar et al., 2020), we experiment with models trained with behavior cloning. Although Shridhar et al. (2020) observe that models benefit greatly from DAgger training, DAgger assumes an expert that is well-defined at all possible states, which is inefficient and impractical. In our experiments, training is 100x slower with DAgger compared to behavior cloning (3 weeks for DAgger vs. 6 hours for behavior cloning). In addition, we demonstrate that our models surpass the performance of the BUTLER (Shridhar et al., 2020) agents trained with DAgger, even though our agent does not have the option to interact with the environment.
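As a concrete illustration of the Eliminate step configured above, the following is a minimal sketch of zero-shot masking with a scoring model. The `score` callable stands in for querying the QA model (Macaw in our setup) for a relevance belief in \([0,1]\); its interface is our own simplification.

```python
def eliminate(task, receptacles, objects, score, tau_r=0.4, tau_o=0.4):
    """Mask out receptacles/objects the QA model deems irrelevant.

    score(prompt, candidate) -> relevance belief in [0, 1]; a stand-in
    for the zero-shot QA model. The 0.4 thresholds follow the setting
    reported above.
    """
    p_r = f"Your task is to: {task}. Where should you go to?"
    p_o = f"Your task is to: {task}. Which objects will be relevant?"
    kept_r = [r for r in receptacles if score(p_r, r) >= tau_r]
    kept_o = [o for o in objects if score(p_o, o) >= tau_o]
    return kept_r, kept_o

# Example with a trivial stand-in scorer that keeps everything.
masked = eliminate("heat some apple and put it in fridge",
                   ["fridge 1", "coffeemachine 1"], ["apple 1", "knife 1"],
                   score=lambda p, c: 1.0)
```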
**Baselines.** Our first baseline is the BUTLER:BRAIN (**BUTLER**) agent (Shridhar et al., 2020), which consists of an encoder, an aggregator, and a decoder. At each time step \(t\), the encoder takes the initial observation \(s^{0}\), current observation \(s^{t}\), and task string \(s_{\text{task}}\) and generates representation \(r^{t}\). The recurrent aggregator combines \(r^{t}\) with the last recurrent state \(h^{t-1}\) to produce \(h^{t}\), which is then decoded into a string \(a^{t}\) representing an action. In addition, the BUTLER agent uses beam search to get out of stuck conditions in the event of a failed action. Our second baseline **GPT** (Micheli and Fleuret, 2021) is a GPT2-medium fine-tuned on 3553 demonstrations from the AlfWorld training set. Specifically, the GPT is fine-tuned to generate each action step word-by-word to mimic the rule-based expert using the standard maximum likelihood loss.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Template Goals} & \multicolumn{2}{c}{Human Goals} \\ Model & seen & unseen & seen & unseen \\ \hline BUTLER + DAgger* (Shridhar et al., 2020) & 40 & 35 & 8 & 3 \\ BUTLER + BC (Shridhar et al., 2020) & 10 & 9 & - & - \\ GPT (Micheli and Fleuret, 2021) & **91** & **95** & 42 & 57 \\ PET + Action Attention (Ours) & 70 & 67.5 & **52.5** & **60** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different models in terms of completion rate per evaluation split (seen and unseen), with template and human-annotated goal specifications. PET under-performs GPT on template goal specifications but generalizes better to human goal specifications. *We include the performance of BUTLER with DAgger for completeness. All other rows are trained without interaction with the environment: MLE for GPT and behavior cloning for BUTLER+BC and PET.

Figure 5: Agent (Action Attention). The Action Attention block is a transformer-based framework that computes a key \(K_{i}\) for each permissible action and outputs action scores as the dot-product between each key and the query \(Q\) from the observations.

### Overall Results on Template and Human Goals

We compare the performance of Action Attention assisted by PET with BUTLER (Shridhar et al., 2020) and fine-tuned GPT (Micheli and Fleuret, 2021) in Table 1. For human goal specifications, PET outperforms the SOTA (GPT) by 25% (relative) on the seen and 5% on the unseen split. Although PET under-performs GPT on template goal specifications, GPT requires fine-tuning on fully text-based expert trajectories and thus loses adaptability to different environment settings. Qualitatively, on human goal specification tasks, where the goal specifications are out-of-distribution, GPT often gets stuck repeating the same action after producing a single wrong move. On the other hand, since the Plan module of PET is not trained on the task, it generalizes to the variations in human goal specifications, as shown in Section 4.5. Quantitatively, GPT suffers a relative 50% performance drop transferring from template to human goal specifications, whereas PET incurs only a \(15\sim 25\%\) drop. The setting closest to PET is BUTLER with behavior cloning (BUTLER + BC). Since BUTLER + BC performs poorly, we also include DAgger training results. Nevertheless, Action Attention assisted by PET outperforms BUTLER with DAgger by more than 2x while being much more efficient.
### Ablations for Plan, Eliminate, and Track

In Table 3, we analyze the contribution of each PET module by sequentially adding each component to the Action Attention agent on 140 training trajectories sampled from the training set. The dataset size is chosen to match the size of the seen validation set, for an efficient and sparse setting. Note that we treat Plan and Track as a single module for this ablation since they cannot work separately. Adding Plan and Track greatly improves the completion rate, by roughly 60% (relative), which supports our hypothesis that solving embodied tasks step-by-step reduces their complexity. We observe a relatively insignificant improvement of about 3% in absolute performance when adding Eliminate without sub-task tracking. On the other hand, when applying Eliminate to sub-tasks with Plan and Track, we observe more than 60% relative improvement over Plan and Track alone. We therefore deduce that Plan and Track boost the performance of Eliminate during evaluation, since it is easier to remove irrelevant objects when the objective is focused on a sub-task.

### Automated Analysis of PET modules

**Plan Module.** We experiment with different LLMs such as GPT2-XL (Radford et al., 2019), GPT-Neo-2.7B (Black et al., 2021), and the 530B-parameter MT-NLG (Smith et al., 2022) models. Table 2 reports the generation accuracy and the RoBERTa (Liu et al., 2019) embedding cosine similarity against ground-truth sub-tasks. We observe that all LLMs achieve high accuracy on template goal specifications, where there is no variation in sentence structures. For human goal specifications, MT-NLG generates sub-tasks similar to ground truth in terms of embedding similarity, while the other, smaller models perform significantly worse.

**Eliminate Module.** We evaluate the zero-shot receptacle/object masking performance of Macaw on the three splits of AlfWorld. In Fig. 6, we illustrate the AUC curve of the relevance score that the model assigns to the objects vs. objects that the rule-based expert interacted with when completing each task. Since the Macaw QA model is queried in a zero-shot manner, it demonstrates consistent masking performance on all three splits of the environment, even on the unseen split. In addition, we note that receptacle accuracy is generally lower than object accuracy because of the counter-intuitive spawning locations described in Section 4.5. In our experiments, a decision threshold of 0.4 has a recall of 0.91 and reduces the number of objects in the observation by 40% on average.

**Track Module.** Since sub-task alignment information is not provided by the environment, we explore an alternative performance metric for the detection of the event of completion. Ideally, a sub-task tracker should record the last sub-task as "finished" if and only if the environment is "fully solved" by the expert. As an agreement measure, we report a precision of 0.99 and a recall of 0.78 for Macaw-11b, and a precision of 0.96 and a recall of 0.96 for Macaw-large. The larger model (Macaw-11b) is more precise but misses more detections, thereby limiting the theoretical performance to 78%. The smaller model is much less accurate according to human evaluation but does not limit the overall model performance in theory. In our experiments, we find that both models produce similar overall results, which may suggest that the overall results could be improved with LLMs that do better on both precision and recall.
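The progress-tracking loop analyzed above can be summarized in a short sketch. The 'Yes'/'No' token-probability comparison follows Section 3.3; the `yes_no_prob` interface to the QA model is our own simplification.

```python
def track_progress(subtasks, p, d, recent_obs, yes_no_prob):
    """Advance the progress tracker p over subtasks (1-indexed).

    recent_obs  -- list of recent observation strings (context window)
    yes_no_prob -- callable: prompt -> (P("Yes"), P("No")) from the QA model
    Returns the updated (p, d); d is the context length, capped at 3.
    """
    prompt = " ".join(recent_obs[-d:]) + \
        f" Did you finish the task of {subtasks[p - 1]}?"
    p_yes, p_no = yes_no_prob(prompt)
    if p_yes > p_no:
        return p + 1, 1          # sub-task done: advance, reset context
    return p, min(d + 1, 3)      # keep working on s_p, grow context

# If p exceeds len(subtasks) while the episode is unfinished, the agent
# falls back to conditioning on the full task description T (Section 3.3).
```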
### Qualitative Analysis

**Plan Module.** We show two types of failure examples for sub-task generation in Table 4. The first type of error is caused by generating synonyms of the ground truth, and the second type is caused by inaccuracies in the human goal specifications. Note that our Action Attention framework uses RoBERTa (Liu et al., 2019) embeddings for sub-tasks, which are known to be robust to synonym variations.

**Eliminate Module.** We observe that the main source of elimination error occurs when the module incorrectly masks a receptacle that contains the object of interest, so the agent fails to find such receptacles. This is often because some objects in the AI2Thor simulator do not spawn according to common sense. As noted in the documentation of the environment2, objects like Apple or Egg have a chance of spawning in unexpected receptacles like GarbageCan or TVStand. However, such spawns in AI2Thor are unlikely in real deployment; thus, the "mistakes" of our Eliminate module are reasonable.

Footnote 2: ai2thor.allenai.org/ithor/documentation/objects/object-types/

**Track Module.** Experimentally, we find that sub-task planning/tracking is particularly helpful for tasks that require counting procedures. As shown in Table 1, PET breaks the task of "Place two soapbar in cabinet" into two repeating sets of sub-tasks: "take soapbar\(\rightarrow\)place soapbar in/on cabinet". Sub-task planning and tracking therefore simplify the hard problem of counting.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Template Goals} & \multicolumn{2}{c}{Human Goals} \\ \hline LLM & seen & unseen & seen & unseen \\ \hline GPT-2 (Radford et al., 2019) & 94.29 (0.97) & 87.31 (0.94) & 10.07 (0.62) & 7.98 (0.58) \\ GPT-Neo-2.7B (Black et al., 2021) & **99.29 (1.00)** & 96.27 (0.98) & 4.70 (0.82) & 9.16 (0.80) \\ MT-NLG (Smith et al., 2022) & 98.57 (0.99) & **100 (1.00)** & **40.04 (0.94)** & **49.3 (0.94)** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of different LLMs for the **Plan** module in terms of accuracy and RoBERTa embedding cosine similarity (in brackets) against ground-truth sub-tasks, per evaluation split (seen and unseen), with template and human-annotated goals. MT-NLG, with 530B parameters, achieves the overall best performance on all dataset splits and greatly exceeds the performance of smaller models on hard tasks with human goal specifications. In addition, MT-NLG generates sub-tasks with almost perfect embedding similarity for all tasks.

\begin{table} \begin{tabular}{l c c} \hline \hline Model Ablations & seen & unseen \\ \hline Action Attention & 25 & 9 \\ Action Attention + Eliminate & 25 & 11 \\ Action Attention + Plan \& Track & 35 & 15 \\ Action Attention + PET & 52.5 & 27.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of different ablations of PET trained on a sampled set of 140 demonstrations from the training set, in terms of completion rate per evaluation split (seen and unseen). Applying the Eliminate module alone has an insignificant effect on overall performance compared to Plan & Track. However, applying the Eliminate module on sub-tasks together with Plan & Track results in a much more significant performance improvement.

Figure 6: Plot of AUC scores of zero-shot relevance identification across all tasks in the AlfWorld environment, with the Macaw-11b model. The ground truth is obtained as receptacles/objects accessed by the rule-based expert. **Top:** Receptacle relevance identification.
**Bottom:** Object relevance identification. The QA model achieves an average AUC-ROC score of 65 for receptacles and 76 for objects.

## 5 Conclusion, Limitations, and Future Work

In this work, we propose the Plan, Eliminate, and Track (PET) framework that uses pre-trained LLMs to assist an embodied agent in three steps. Our PET framework requires no fine-tuning and is designed to be compatible with any goal-conditioned embodied agent. In our experiments, we combine PET with a novel Action Attention agent that handles the dynamic action space in AlfWorld. Our Action Attention agent greatly outperforms the BUTLER baseline. In addition, since the PET framework is not trained to fit the training set tasks, it demonstrates better generalization to unseen human goal specification tasks. Finally, our ablation studies show that the Plan and Track modules together improve the performance of the Eliminate module to achieve the best performance. Our results show that LLMs can be a good source of common-sense and procedural knowledge for embodied agents, and multiple LLMs may be used in coordination with each other to further improve effectiveness.

One of the major limitations of our current system design is that the Track module (progress tracker) does not re-visit finished sub-tasks. If, for example, the agent is executing the sub-tasks [pick up a pan, put the pan on the countertop] and it picks up a pan but then puts it in the fridge (undoing the pickup action), the progress tracker does not take into consideration previous progress being undone, and the system may break in this situation. Future work can focus on adding sub-task-level dynamic re-planning to address this limitation, or explore other ways in which LLMs can assist the learning of the policy (e.g., reading an instruction manual about the environment).
2305.14270
NCC: Natural Concurrency Control for Strictly Serializable Datastores by Avoiding the Timestamp-Inversion Pitfall
Strictly serializable datastores greatly simplify the development of correct applications by providing strong consistency guarantees. However, existing techniques pay unnecessary costs for naturally consistent transactions, which arrive at servers in an order that is already strictly serializable. We find these transactions are prevalent in datacenter workloads. We exploit this natural arrival order by executing transaction requests with minimal costs while optimistically assuming they are naturally consistent, and then leverage a timestamp-based technique to efficiently verify if the execution is indeed consistent. In the process of designing such a timestamp-based technique, we identify a fundamental pitfall in relying on timestamps to provide strict serializability, and name it the timestamp-inversion pitfall. We find timestamp-inversion has affected several existing works. We present Natural Concurrency Control (NCC), a new concurrency control technique that guarantees strict serializability and ensures minimal costs -- i.e., one-round latency, lock-free, and non-blocking execution -- in the best (and common) case by leveraging natural consistency. NCC is enabled by three key components: non-blocking execution, decoupled response control, and timestamp-based consistency check. NCC avoids timestamp-inversion with a new technique: response timing control, and proposes two optimization techniques, asynchrony-aware timestamps and smart retry, to reduce false aborts. Moreover, NCC designs a specialized protocol for read-only transactions, which is the first to achieve the optimal best-case performance while ensuring strict serializability, without relying on synchronized clocks. Our evaluation shows that NCC outperforms state-of-the-art solutions by an order of magnitude on many workloads.
Haonan Lu, Shuai Mu, Siddhartha Sen, Wyatt Lloyd
2023-05-23T17:21:30Z
http://arxiv.org/abs/2305.14270v2
# NCC: Natural Concurrency Control for Strictly Serializable Datastores by Avoiding the Timestamp-Inversion Pitfall ###### Abstract Strictly serializable datastores greatly simplify the development of correct applications by providing strong consistency guarantees. However, existing techniques pay unnecessary costs for naturally consistent transactions, which arrive at servers in an order that is already strictly serializable. We find these transactions are prevalent in datacenter workloads. We exploit this natural arrival order by executing transaction requests with minimal costs while optimistically assuming they are naturally consistent, and then leverage a timestamp-based technique to efficiently verify if the execution is indeed consistent. In the process of designing such a timestamp-based technique, we identify a fundamental pitfall in relying on timestamps to provide strict serializability, and name it the timestamp-inversion pitfall. We find timestamp-inversion has affected several existing works. We present Natural Concurrency Control (NCC), a new concurrency control technique that guarantees strict serializability and ensures minimal costs--i.e., one-round latency, lock-free, and non-blocking execution--in the best (and common) case by leveraging natural consistency. NCC is enabled by three key components: non-blocking execution, decoupled response control, and timestamp-based consistency check. NCC avoids timestamp-inversion with a new technique: response timing control, and proposes two optimization techniques, asynchrony-aware timestamps and smart retry, to reduce false aborts. Moreover, NCC designs a specialized protocol for read-only transactions, which is the first to achieve the optimal best-case performance while ensuring strict serializability, without relying on synchronized clocks. Our evaluation shows that NCC outperforms state-of-the-art solutions by an order of magnitude on many workloads. ## 1 Introduction Strictly serializable datastores have been advocated by many recent works [19, 18, 12, 49, 56, 66, 32] because they provide a powerful abstraction of programming in a single-threaded, transactionally isolated environment, which greatly simplifies application development and prevents consistency anomalies [9]. However, only a few concurrency control techniques can provide such strong guarantees, i.e., they enforce strict serializability, and these existing techniques are expensive. Common techniques include distributed optimistic concurrency control (dOCC), distributed two-phase locking (d2PL), and transaction reordering (TR). They incur high overheads which manifest in extra rounds of messages (e.g., those for dOCC's validation), distributed lock management (e.g., as required by d2PL), blocking (e.g., as required by TR to exchange ordering information), and excessive aborts (e.g., in dOCC and d2PL). These overheads severely degrade the system's performance. These costs are paid to (1) prevent transactions from interleaving, and (2) ensure that later-issued transactions take effect after already-finished ones (i.e., in their real-time order), which are the two requirements of strict serializability. However, we find that these costs are unnecessary for many datacenter workloads where a transaction is executed within a datacenter and then replicated within/across datacenters. Many datacenter transactions do not interleave: First, many of them are dominated by reads [12], and the interleaving of reads returning the same value does not affect correctness.
Second, many of them are short [24, 27, 49, 62, 69], and short lifetimes reduce the likelihood of interleaving. Third, the advances in datacenter networking techniques reduce the variance in delivery times of concurrent requests [6, 14, 22], which results in less interleaving. Many datacenter transactions arrive at servers in an order that trivially satisfies their real-time order relationship. That is, if a transaction is issued in real-time after another one was committed, then the later transaction must arrive at the servers after the other arrived. Because many transactions do not interleave and their arrival order satisfies the real-time order constraints, intuitively, simply executing their requests in the order servers receive them (i.e., treating them as if they were simple non-transactional operations) would naturally satisfy strict serializability. We call these transactions _naturally consistent_. Natural consistency unleashes opportunities for low-cost, strictly serializable concurrency control, because, _ideally_, naturally consistent transactions could be safely executed without any concurrency control, incurring zero costs. However, existing techniques pay unnecessary overheads. For instance, dOCC still requires extra rounds of messages for validation, d2PL still acquires locks, and TR still blocks transactions for the exchange of ordering information, even if validation always succeeds, locks are always available, and nothing needs to be reordered. Therefore, this paper strives to make naturally consistent transactions, which are prevalent in datacenter workloads, as cheap as possible. In this paper, we present Natural Concurrency Control (NCC), a new concurrency control technique that guarantees strict serializability and ensures minimal costs--i.e., one-round latency, lock-free, and non-blocking execution--in the best (and common) case. NCC's design insight is to execute naturally consistent transactions in the order they arrive as if they were non-transactional operations while guaranteeing correctness without interfering with transaction execution. NCC is enabled by three key components. _Non-blocking execution_ ensures that servers execute transaction requests to completion without acquiring locks and make their results immediately visible to avoid blocking subsequent transactions, making transaction execution as cheap as that of non-transactional operations. _Decoupled response control_ separates the non-blocking execution of requests from the sending of their responses, guaranteeing that only correct (consistent) results are returned and there is no risk of cascading aborts. _Timestamp-based consistency check_ uses timestamps to reflect requests' execution order and ensures that transactions are consistent by examining the timestamps. In the process of designing the timestamp-based consistency check, we identified a correctness pitfall in timestamp-based, strictly serializable techniques. Specifically, these techniques sometimes fail to guard against an execution order that is serializable but non-strict. Consequently, executing transactions in such an order following the timestamps may incorrectly invert the real-time order relationship between transactions and thus violate strict serializability. We call this pitfall _timestamp-inversion_. Timestamp-inversion is subtle because it can arise only when a transaction interleaves with a set of non-conflicting transactions that have real-time order relationships.
The pitfall is fundamental, as we find that it has affected multiple prior works. NCC handles timestamp-inversion with a novel technique, response timing control, which is an integral part of decoupled response control. Response timing control enforces real-time order constraints without interfering with the non-blocking execution or relying on synchronized clocks. NCC also proposes two timestamp optimization techniques, asynchrony-aware timestamps and smart retry, which reduce false aborts by controlling the timestamps. Moreover, NCC designs a specialized protocol for read-only transactions, which, to the best of our knowledge, is the first to achieve the optimal performance [39] in the best case while ensuring strict serializability, without having to rely on synchronized clocks. We compare NCC with common strictly serializable techniques: dOCC, d2PL, and TR, and two serializable protocols, TAPIR [69] and MVTO [53]. We use three workloads: Google-F1, Facebook-TAO, and TPC-C (§5). The Google-F1 and Facebook-TAO workloads synthesize production-like workloads for Google's Spanner [12, 57] and Facebook's TAO [11], respectively. Both workloads are read-dominated. TPC-C [61] consists of few-shot transactions that are write-intensive. We further explore the workload space by varying the write fractions in Google-F1. NCC significantly outperforms dOCC, d2PL, and TR with 2-10\(\times\) lower latency and 2-20\(\times\) higher throughput. NCC outperforms (serializable) TAPIR with 2\(\times\) higher throughput and 2\(\times\) lower latency, and closely matches the performance of (serializable) MVTO. In summary, this work makes the following contributions:

* Identifies timestamp-inversion, a fundamental correctness pitfall in timestamp-based, strictly serializable concurrency control techniques.
* Proposes NCC, a new concurrency control technique that provides strict serializability and achieves minimal overhead in the best and common case by exploiting natural consistency in datacenter workloads.
* A strictly serializable read-only protocol with optimal best-case performance that does not rely on synchronized clocks.
* An implementation and evaluation that shows NCC outperforms existing strictly serializable systems by an order of magnitude, and closely matches the performance of systems that provide weaker consistency.

## 2 Background

This section provides the necessary background on transactional datastores, strict serializability, and the general strictly serializable techniques.

### Transactional Datastores

Transactional datastores are the back-end workhorse of many web applications. They typically consist of two types of machines, as shown in Figure 1. Front-end _client_ machines receive users' requests, e.g., managing a web page, and execute these requests on behalf of users by issuing transactions to the storage _servers_ that store the data. Servers are fault-tolerant, e.g., the system state is made persistent on disks and replicated via replicated state machines (RSM), e.g., Paxos [29]. Transactions are managed by coordinators, which can be co-located either with a server or the client. This paper adopts the latter approach, which avoids the delays of shipping the transaction from the client to a server (as would happen if servers were used as coordinators), while explicitly handling client failures.
The coordinator issues read/write operations to relevant servers, called participants, following the transaction's logic, which can be _one-shot_, i.e., it knows a priori which data to read/write and can send all requests in one step, or _multi-shot_, i.e., it takes multiple steps as the data read in one step determines which data to read/write in later steps. The system executes transactions following a concurrency control protocol, which ensures that transactions appear to take effect in an order that satisfies the system's consistency requirements. The stronger the consistency provided by the system, the easier it is to develop correct applications.

### Strict Serializability

_Strict serializability_ [23, 51], also known as external consistency [21], is often considered the strongest consistency model.1 It requires that (1) there exists a _total order_ of transactions, and (2) the total order must respect the _real-time order_, which means if transaction \(tx_{1}\) ends before \(tx_{2}\) starts, then \(tx_{1}\) must appear before \(tx_{2}\) in the total order. As a result, transactions appear to take effect one at a time in the order the system receives them.

Footnote 1: "Consistency" in this paper maps to the "I" (isolation level) in ACID.

**Formal definition.** We use Real-time Serialization Graphs (RSG) [1] to formalize the total order and real-time order requirements. An RSG is a directed graph that captures the order in which transactions take effect. A vertex in the graph is a committed transaction. There are two types of edges. An _execution edge_ \(tx_{1}\xrightarrow{\texttt{exe}}tx_{2}\) indicates that \(tx_{1}\) "affects" \(tx_{2}\), meaning that any of the following happens: \(tx_{1}\) creates some data version \(v_{i}\) and \(tx_{2}\) reads \(v_{i}\); \(tx_{1}\) reads some data version \(v_{j}\) and \(tx_{2}\) creates \(v\)'s next version that is after \(v_{j}\); or \(tx_{1}\) creates some data version \(v_{k}\) and \(tx_{2}\) creates \(v\)'s next version that is after \(v_{k}\). A chain of execution edges constructs a directed path between two vertices, denoted by \(tx_{1}\xrightarrow{\texttt{exe}*}tx_{2}\), meaning that \(tx_{1}\) "transitively" affects \(tx_{2}\) through some intermediary transactions. A _real-time edge_ \(tx_{1}\xrightarrow{\texttt{rto}}tx_{2}\) captures the real-time order between \(tx_{1}\) and \(tx_{2}\), meaning that \(tx_{1}\) commits before \(tx_{2}\)'s client issues \(tx_{2}\)'s first request. There exists a total order if and only if transactions do not circularly affect each other. That is, the subgraph that comprises all vertices and only execution edges is acyclic, meaning that the following invariant holds:

**Invariant 1:** \(\forall tx_{1},tx_{2}\;(tx_{1}\xrightarrow{\texttt{exe}*}tx_{2}\implies\neg(tx_{2}\xrightarrow{\texttt{exe}*}tx_{1}))\)

The (total) execution order respects the real-time order if and only if the execution edges (paths) do not _invert_ the real-time edges, meaning that the following invariant holds:

**Invariant 2:** \(\forall tx_{1},tx_{2}\;(tx_{1}\xrightarrow{\texttt{rto}}tx_{2}\implies\neg(tx_{2}\xrightarrow{\texttt{exe}*}tx_{1}))\)

These invariants correspond to the total order and real-time order requirements, respectively. Therefore, a system is strictly serializable if and only if for any history of its execution, both invariants hold.
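To make the two invariants operational, here is a small self-contained sketch that checks them over an explicit RSG. The adjacency-set encoding of the graph is our own illustrative choice.

```python
def reaches(rsg_exe, src, dst):
    """DFS over execution edges: is there a path src --exe*--> dst
    of length >= 1?"""
    stack, seen = list(rsg_exe.get(src, ())), set()
    while stack:
        tx = stack.pop()
        if tx == dst:
            return True
        if tx in seen:
            continue
        seen.add(tx)
        stack.extend(rsg_exe.get(tx, ()))
    return False

def strictly_serializable(rsg_exe, rto_edges):
    """Check Invariants 1 and 2 on a real-time serialization graph.

    rsg_exe   -- dict: tx -> set of txs it directly affects (exe edges)
    rto_edges -- iterable of (t1, t2) pairs with t1 --rto--> t2
    """
    # Invariant 1: no transaction transitively affects itself.
    inv1 = all(not reaches(rsg_exe, tx, tx) for tx in rsg_exe)
    # Invariant 2: no execution path inverts a real-time edge.
    inv2 = all(not reaches(rsg_exe, t2, t1) for (t1, t2) in rto_edges)
    return inv1 and inv2

# Example: t1 --exe--> t2 together with t2 --rto--> t1 violates Invariant 2.
print(strictly_serializable({"t1": {"t2"}}, [("t2", "t1")]))  # False
```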
By enforcing a total order and the real-time order, strictly serializable systems provide application programmers with the powerful abstraction of programming in a single-threaded, transactionally isolated environment, and thus they greatly simplify application development and eliminate consistency anomalies. For example, a web application typically comprises many internal services [50, 5]. If one service pushes a commit to production code that triggers a message notifying another service to commit a patch, then any service that uses a version of the code with the patch must also include the original commit, enforcing the real-time order denoted by _commit_ \(\xrightarrow{\texttt{rto}}\) _patch_. As another example, if an admin removes Alice from a shared album and then notifies Bob of the change (via a channel external to the system, e.g., a phone call), who then uploads a photo he does not want Alice to see, then Alice must not see Bob's photo, since _remove_Alice_ \(\xrightarrow{\texttt{rto}}\) _new_photo_. Such guarantees cannot be enforced by weaker consistency models, e.g., serializability, because they do not enforce real-time ordering that is established externally to the system.

### dOCC, d2PL, & Transaction Reordering

Figure 1: An overview of system architecture and transaction execution. NCC follows two-phase commit and is enabled by three design pillars: non-blocking execution, decoupled response control, and timestamp-based consistency check.

Due to the real-time order requirement, only a few general techniques provide strict serializability. They are dOCC, d2PL, and transaction reordering (TR). dOCC and d2PL typically require three round trips, one for each phase: execute, prepare, and commit. In the execute phase, the coordinator reads the data from the servers while writes are buffered locally at the coordinator. d2PL acquires read locks in this phase while dOCC does not. In the prepare phase, the coordinator sends prepare messages and the buffered writes to the participant servers. d2PL locks all participants while dOCC only locks the written data. dOCC must also validate that values read in the execute phase have not changed. If all requests are successfully prepared, i.e., locks are available and/or validation succeeds, the coordinator commits the transaction by applying writes and notifying the participants of the commitment; otherwise, the transaction is aborted and retried. Transaction reordering typically requires two steps. In the first step, the coordinator sends the requests to the servers, which make requests wait while recording their arrival order relative to those of concurrent transactions, i.e., constructing the execution and real-time edges in an RSG. This ordering information usually increases linearly in size with respect to the number of concurrent transactions. In the second step, the coordinator collects the ordering information from participants and sorts the requests to eliminate cycles, and servers execute the transactions in the sorted order. These techniques are expensive, e.g., they require multiple rounds of messages, locking, waiting, and aborts. We find that these overheads are unnecessary for most of the transactions in many datacenter workloads, and this observation has inspired the design in this paper.

## 3 Design Insight & Overview

This section explains natural consistency, which inspires our design, and overviews the key design components.
### Exploiting Natural Consistency

For many datacenter transactions, simply executing their requests in the order servers receive them, as if they were non-transactional read/write operations, would naturally satisfy strict serializability. In other words, they arrive at servers in an order that is already strictly serializable. We call these transactions _naturally consistent_. Key to natural consistency is the arrival order of transaction requests. Executing requests in the order they arrive trivially satisfies the real-time order between transactions, because a transaction that happens later in real-time, i.e., it starts after another transaction has been committed, must arrive at servers after the committed transaction has arrived. Many requests in datacenter workloads arrive in an order that is total, i.e., transactions do not circularly affect each other, due to the following reasons. First, many requests in real-world workloads are reads [11, 12], and reads do not affect other reads. For instance, reads that return the same value can be executed in any order without introducing cycles into their serialization graph (§2.2), and thus servers can safely execute them in their arrival order. Second, many transactions are short, e.g., they are one-shot [24, 27, 49, 62, 69, 46] or can be made one-shot using stored procedures [20, 65, 64, 58, 48], and thus their requests are less likely to interleave with others'. Third, the advances in datacenter networks reduce the variance of message delivery times [46, 47, 52], further reducing the likelihood of request interleaving.

Naturally consistent transactions present room for improvement in existing techniques. For instance, dOCC still requires validation messages while executing these transactions; however, these messages are not only unnecessary (e.g., they validate transactions that are already consistent), but even worse, create a _contention window_ (e.g., they lock the data between prepare and commit) which could cause consistent transactions to falsely abort, as shown in Figure 2(a). Ideally, the system could treat naturally consistent transactions as non-transactional operations and execute them in the order they arrive without any concurrency control, and strict serializability would still be guaranteed. Although this ideal case is impossible in general (e.g., we cannot differentiate transactions that are naturally consistent from those that are not before they arrive and apply concurrency control accordingly), our design is inspired by the prevalence of naturally consistent transactions, and aims to minimize costs for as many of these transactions as possible.

### Three Pillars of Design

To achieve minimal overheads for naturally consistent transactions, our design executes them in a manner that closely resembles non-transactional operations. This is made possible through three key components.

**Non-blocking execution.** Assuming transactions are naturally consistent, servers execute requests in the order they arrive, ensuring that if two transactions have a real-time order relationship, then they must be executed in their real-time order. Requests are executed "urgently" to completion without acquiring locks, and their results are immediately made visible to prevent blocking subsequent requests. As a result, transactions are executed as cheaply as non-transactional operations, without incurring contention windows.
**Decoupled response control.** Because not all transactions are naturally consistent, servers must prevent returning inconsistent results to clients and ensure there are no cascading aborts. This is achieved by decoupling requests' responses from their execution; responses are sent asynchronously, only once they are verified to be consistent. Inconsistent results are discarded and their requests are re-executed.

**Timestamp-based consistency check.** The consistency check must be as lightweight as possible, without interfering with server-side execution. We leverage timestamps to capture the arrival order (and thus the execution order) of requests and design a client-side checker that verifies if requests were executed in a total order, without incurring overheads such as extra messages (as in dOCC and TR) or locks (as in dOCC and d2PL).

Figure 1 shows at a high level how these three pillars support our design and depicts the life cycle of transactions:

* The user submits application requests to a client, which translates the requests into transactions.
* The (client) coordinator sends operations to the relevant servers, following the transaction's logic. The servers execute requests in their arrival order. Their responses are inserted into a queue and sent asynchronously. The responses include timestamps that capture requests' execution order.
* Responses are sent to the client when it is safe, determined by response timing control (RTC).
* The safeguard checks if transactions were executed in a total order by examining the timestamps in responses. The coordinator sends commit/abort messages to the servers and returns the results of committed transactions to the user in parallel, without waiting for servers' acknowledgments.
* Client failures are handled explicitly by leveraging a server as a backup coordinator.

**Limitations.** First, our design leverages natural consistency, which is often observed in short (e.g., one- or few-shot) datacenter transactions; thus, many-shot, long-lasting transactions that are more likely to interleave might not benefit from our design, though we support transactions with an arbitrary number of shots. Second, the timestamps associated with each request, including both reads and writes, must be replicated to correctly handle failures, which could lead to additional replication overhead; we detail this in Section 4.6.

Key to the correctness of our design is leveraging timestamps to verify a total order that respects the real-time order. Yet, we identify a correctness pitfall in relying on timestamps to ensure strict serializability, and this pitfall has affected multiple existing works.

## 4 Natural Concurrency Control

This section presents the basic components of NCC, explains how NCC avoids the timestamp-inversion pitfall, introduces two timestamp optimization techniques and a specialized algorithm for read-only transactions, and concludes with discussions of failure handling and correctness.

### Protocol Basics

We build NCC on the three design pillars (§3.2) to minimize the costs for naturally consistent transactions.

**Pre-timestamping transactions.** NCC processes a transaction in two phases: execute and commit. Figure 4.1 shows the client (coordinator)'s logic. The coordinator starts a transaction \(tx\) by pre-assigning it a timestamp \(t\), which is a combination of the client's physical time and client identifier, uniquely identifying \(tx\) (line 3). When two timestamps have the same value for the physical time, NCC breaks the tie by comparing their client identifiers, i.e., timestamps are unique.
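As a small illustration of this pre-timestamping scheme, unique and totally ordered timestamps can be realized as (physical time, client id) pairs compared lexicographically; the millisecond granularity and the `safeguard_ok` interface below are our own simplifications of the mechanisms described in this subsection.

```python
import time
from typing import NamedTuple

class Timestamp(NamedTuple):
    """Pre-assigned timestamp: physical time first, then client id.

    Tuple comparison encodes the tie-breaking rule: equal physical
    times are ordered by client identifier, so timestamps are unique
    and totally ordered across clients.
    """
    physical_ms: int
    client_id: int

def new_timestamp(client_id):
    return Timestamp(int(time.time() * 1000), client_id)

def safeguard_ok(pairs):
    """Client-side safeguard check (detailed below): commit iff all
    (t_w, t_r) pairs returned by the servers share a common point,
    i.e., one consistent snapshot intersects all of them."""
    return max(tw for tw, _ in pairs) <= min(tr for _, tr in pairs)

assert Timestamp(1004, 1) < Timestamp(1004, 2)  # client id breaks ties
assert safeguard_ok([(0, 4), (2, 3)])           # snapshot at 2 or 3
assert not safeguard_ok([(0, 4), (6, 6)])       # no overlap: reject
```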
\(t\) is included in all of \(tx\)'s requests that are sent to servers shot by shot, following \(tx\)'s application logic (lines 4 and 5). These timestamps accompany \(tx\) throughout its life cycle and will be used to verify if the results are consistent.

**Refining timestamps to match execution order.** Figure 4.2 details the server-side logic for request execution and commitment. Each key stores a list of versions in the order the server created them. A version has three fields: _value_, a pair of _timestamps_ \((t_{w},t_{r})\), and _status_. _value_ stores the data; \(t_{w}\) is the timestamp of the transaction that created the version; \(t_{r}\) is the highest timestamp of transactions that read the version; and _status_ indicates the state of the transaction that created the version: either (initially) _undecided_, or _committed_. An aborted version is removed immediately. The server always executes a request against the most recent version _curr_ver_, which is either undecided or committed (line 8). Specifically, the server executes a write by creating a new undecided version _new_ver_, which is now the most recent version of the key, ordered after _curr_ver_ (lines 11 and 12), and executes a read by reading the _value_ of _curr_ver_ (line 16). Notably, NCC's basic protocol can work with a single-versioned data store; multi-versioning is required only for smart retry, a timestamp optimization technique (§4.4). The server refines the most recent version's timestamp pair to match the order in which requests are executed. Specifically, a write request initializes _new_ver_'s \(t_{w}\) and \(t_{r}\) to the same value which is no less than the write's timestamp \(t\) and _curr_ver_'s \(t_{r}\) (line 10). Similarly, a read request updates \(curr\_ver\)'s \(t_{r}\) if needed (line 15). Figure 2(b) shows examples of how timestamps are refined for reads (\(tx_{1}\)-\(tx_{3}\)) and writes (\(tx_{4}\) and \(tx_{5}\)).

Figure 2: \(tx_{1}\) and \(tx_{2}\) are naturally consistent. dOCC incurs unnecessary validation costs, and \(tx_{2}\) could be falsely aborted due to lock unavailability. NCC can commit both transactions with timestamp pre-assignment, refinement, and the safeguard check (denoted by SG). These techniques are detailed in Section 4.1. Each version in NCC has a \((t_{w},t_{r})\) pair which is included in server responses. RTC means response timing control, detailed in Section 4.2.

**Non-blocking execution and response queues.** The server executes requests in a non-blocking manner and decouples their execution from responses. Specifically, a write creates a version and immediately makes it visible to subsequent transactions; a read fetches the value of the most recent version, whose _status_ could be undecided, without waiting for it to commit; the server prepares the response (lines 7, 13, and 16) and inserts it into a _response queue_ (lines 17 and 18), which asynchronously sends the responses to clients when it is safe. (Section 4.2 will detail response timing control, which determines when a response is safe to be sent so the real-time order is guaranteed.) Unlike d2PL and dOCC, which lock the data for at least one round-trip time in the execute and prepare phases (i.e., the contention window), non-blocking execution ensures that a transaction never exclusively owns the data without performing useful work.
As a result, the server never stalls, and thus CPUs are fully utilized to execute requests. Moreover, non-blocking execution eliminates the contention window completely and thus reduces false aborts.

**Client-side safeguard.** A server response includes the timestamp pair (\(t_{w},t_{r}\)) of the most recent version, e.g., \(new\_ver\) for a write and \(curr\_ver\) for a read. When the transaction has completed its logic (i.e., all shots are executed) and the client has received responses to all its requests, the safeguard checks if there exists a consistent snapshot that intersects all (\(t_{w},t_{r}\)) pairs in server responses, as shown in Figure 4.1. This intersecting snapshot identifies the transaction's synchronization point. The safeguard commits a transaction if the timestamp pairs overlap (lines 8 and 18-27). Figure 2(c) shows an example execution where NCC executes the same transactions as in Figure 2(a). The safeguard enables NCC to commit both transactions without unnecessary overhead such as dOCC's validation cost and false aborts. When the client has decided to commit or abort the transaction, the protocol enters the commit phase by sending the commit/abort messages to the participant servers. If the transaction is committed, the server updates the _status_ of the versions created by the transaction from undecided to committed; otherwise, the versions are deleted (lines 20-25, Figure 4.2). The client retries the aborted transaction. The client sends the results of the committed transaction to the user in parallel with the commit messages, without waiting for servers' acknowledgments (lines 11-16, Figure 4.1). NCC achieves minimal costs by urgently executing transactions in a non-blocking manner and by ensuring a total order with the lightweight timestamp-based safeguard. Yet, in order to provide strict serializability, NCC must enforce the real-time order between transactions by handling the timestamp-inversion pitfall, which we detail next.

### Response Timing Control

NCC avoids timestamp-inversion by disentangling the subtle interleaving, e.g., a transaction interleaving with a set of non-conflicting transactions that have real-time order dependencies (e.g., Figure 1), without relying on specialized clocks. To untangle the transactions, NCC proposes _response timing control_ (RTC), which controls the sending time of responses. It is safe to send the response of a request \(\mathit{req}_{1}\) when the following dependencies are satisfied:

1. If \(\mathit{req}_{1}\) reads a version created by \(\mathit{req}_{2}\) of another transaction, then \(\mathit{req}_{1}\)'s response is not returned until \(\mathit{req}_{2}\) is committed, and is discarded (\(\mathit{req}_{1}\) will be re-executed) if \(\mathit{req}_{2}\) is aborted.
2. If \(\mathit{req}_{1}\) is a write and there are reads that read the version which immediately precedes the one created by \(\mathit{req}_{1}\), then \(\mathit{req}_{1}\)'s response is not returned until the reads are committed/aborted.
3. If \(\mathit{req}_{1}\) is a write that creates a version immediately after the version created by \(\mathit{req}_{2}\) of another transaction, then \(\mathit{req}_{1}\)'s response is not returned until \(\mathit{req}_{2}\) is committed/aborted.

By enforcing the above dependencies, NCC controls the sending of responses so that the transactions which form the subtle interleaving are serialized in their real-time order.
For instance, in Figure 1, server \(A\) cannot send the response of \(tx_{1}\) until \(tx_{3}\) has been committed (assuming at least one of them writes to \(A\)). As a result, any transaction \(tx_{2}\) that begins after \(tx_{1}\) receives its response, i.e., \(tx_{1}\xrightarrow{\texttt{rto}}tx_{2}\), must be executed after \(tx_{1}\), and thus after \(tx_{3}\) as well, because all requests of \(tx_{3}\) have at least been executed when any of \(tx_{2}\)'s requests arrive and the server always executes transactions in their arrival order. This results in a total order, \(tx_{3}\xrightarrow{\texttt{exe}}tx_{1}\xrightarrow{\texttt{exe}}tx_{2}\), which respects the real-time order, enforcing Invariant 2, as shown in Part III of Figure 1.

NCC controls response timing by managing response queues, independently from request execution. NCC maintains one response queue per key. A queue item consists of four fields: _response_, which stores the response message of a request; the _request_ itself; \(\mathit{ts}\), which is the pre-assigned timestamp of the request; and \(\mathit{q\_status}\), which indicates the state of the request, initially _undecided_ and updated to either _committed_ or _aborted_ when the server receives the commit/abort message for this request (lines 26-29, Figure 4.2).

**Managing response queues.** Figure 4.3 details how NCC manages response queues. NCC iterates over the queue items from the head (i.e., the oldest response) until it finds the first response whose \(\mathit{q\_status}\) is undecided. This response satisfies the three dependencies because all responses before it have been decided, i.e., the requests it depends on are committed/aborted (lines 3-11). The server sends this response message to the client (lines 15-17). If this response is to a read request, then the server also sends all consecutive read responses that follow it, because all these read responses satisfy Dependency \(\mathrm{D}_{1}\) (lines 18-21).

**Fixing reads locally.** When the server receives an abort message for a write request, it must invalidate the responses of any reads that have fetched the value of the aborted write. This is necessary to avoid returning invalid results to the client and to prevent cascading aborts. Specifically, the server removes the response of such a read from the response queue and re-executes the read request, e.g., it fetches the current most recent version, prepares a new response, and inserts the new response at the tail of the queue (lines 7-10).

**Avoiding indefinite waits.** To prevent responses from circularly waiting on dependencies across different keys, NCC early-aborts a request (thereby aborting the transaction to which it belongs) if its pre-assigned timestamp is not the highest the server has seen _and_ its response cannot be sent immediately, i.e., it is not the head of the queue. Specifically, the server sends a special response to the client without executing the request. The special response includes a field \(\mathit{early\_abort}\) which allows the client to bypass the safeguard and abort the transaction. We omit the details from the pseudocode for simplicity and clarity.

Response timing control (RTC) is a general solution to timestamp-inversion, without the need for specialized clocks. It does not incur more aborts even when responses are not sent immediately, because response management is decoupled from request execution.
That is, whether a transaction is committed or aborted is solely based on timestamps, and RTC does not affect either pre-assignment or refinement of timestamps. Yet, NCC's performance also depends on how well timestamps capture the arrival order of (naturally consistent) transactions. In other words, timestamps that do not match transactions' arrival order could cause transactions to falsely abort even if they are naturally consistent. In the next two subsections, we discuss optimization techniques that enable timestamps to better match the arrival order.

### Asynchrony-Aware Timestamps

NCC proposes two optimizations: a proactive approach that controls how timestamps are generated before transactions start, and a reactive approach that updates timestamps to match the naturally consistent arrival order after requests are executed. This subsection discusses the proactive approach. The client pre-assigns the same timestamp to all requests of a transaction; however, these requests may arrive at their participant servers at very different physical times, which could result in a mismatch between timestamps and arrival order, as shown in Figure 3(a). Transactions \(tx_{1}\) and \(tx_{2}\) start around the same time and thus are assigned close timestamps by their clients, e.g., \(t_{1}=1004\) and \(t_{2}=1005\), respectively. Because the latency between \(B\) and \(CL_{1}\) is greater than that between \(B\) and \(CL_{2}\), \(tx_{1}\) may arrive at \(B\) later than \(tx_{2}\), but \(tx_{1}\) has a smaller timestamp. As a result, the safeguard may falsely reject \(tx_{1}\), e.g., server \(B\) responds with a refined timestamp pair (1006, 1006) which does not overlap with (1004, 1004), the timestamp pair returned by server \(A\). However, aborting \(tx_{1}\) is unnecessary because \(tx_{1}\) and \(tx_{2}\) are naturally consistent.

To tackle this challenge, NCC generates timestamps while accounting for the time difference, \(t_{\Delta}\), between when a request is sent by the client and when the server starts executing the request. Specifically, the client records the physical time \(t_{c}\) before sending the request to the server; the server records the physical time \(t_{s}\) before executing the request and piggybacks \(t_{s}\) onto the response sent back to the client; and the client calculates \(t_{\Delta}\) by finding the difference between \(t_{c}\) and \(t_{s}\), i.e., \(t_{\Delta}=t_{c}-t_{s}\). By measuring the end-to-end time difference, \(t_{\Delta}\) effectively masks the impact of queuing delays and clock skew. The client maintains a \(t_{\Delta}\) for each server it has contacted. An asynchrony-aware timestamp is generated by adding the client's current physical time and the greatest \(t_{\Delta}\) among the servers this transaction will access. For instance, given the values of \(t_{\Delta}\) shown in Figure 3(a), \(CL_{1}\) assigns \(tx_{1}\) timestamp 1014 (i.e., \(1004+10\)) and \(CL_{2}\) assigns \(tx_{2}\) 1010 (i.e., \(1005+5\)), and both transactions may successfully pass their safeguard check, capturing natural consistency.
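To make the proactive approach concrete, the following is a minimal sketch of asynchrony-aware timestamp pre-assignment. The bookkeeping interface (`record_response`, millisecond clocks) is our own simplification of the mechanism described above.

```python
class AsyncAwareClient:
    """Tracks per-server send-to-execute deltas t_delta (Section 4.3)."""
    def __init__(self, client_id):
        self.client_id = client_id
        self.t_delta = {}  # server -> last observed (t_c - t_s)

    def record_response(self, server, t_c, t_s):
        # t_c: client time when the request was sent; t_s: server time
        # when execution started (piggybacked on the response). The
        # end-to-end difference masks queuing delays and clock skew.
        self.t_delta[server] = t_c - t_s

    def pre_assign(self, now_ms, participants):
        # Add the greatest known delta among the servers this
        # transaction will access to the client's current time.
        delta = max((self.t_delta.get(s, 0) for s in participants),
                    default=0)
        return (now_ms + delta, self.client_id)

cl1 = AsyncAwareClient(client_id=1)
cl1.record_response("B", t_c=1004, t_s=994)  # illustrative: t_delta[B]=10
print(cl1.pre_assign(1004, ["A", "B"]))       # (1014, 1), as in Fig. 3(a)
```

### Smart Retry

NCC proposes a reactive approach to minimizing the performance impact of the safeguard's false rejects, which happen when timestamps fail to identify the naturally consistent arrival order, as shown in Figure 3(b). Initially, version \(A_{0}\) has a timestamp pair (\(t_{w}=0\), \(t_{r}=0\)), and \(B_{0}\) has (\(t_{w}=0\), \(t_{r}=5\)). The same transactions \(tx_{1}\) and \(tx_{2}\) as those in Figure 2(c) access both keys.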
Following NCC's protocol, \(tx_{1}\)'s responses contain the timestamp pairs (\(0,4\)) and (\(6,6\)) from \(A\) and \(B\), respectively, which will be rejected by the safeguard because they do not overlap. However, aborting \(tx_{1}\) is unnecessary because \(tx_{1}\) and \(tx_{2}\) are naturally consistent. Instead of aborting and re-executing the rejected transaction from scratch, which would waste all the work the server has done to execute it, NCC tries to "reposition" a rejected transaction with respect to the transactions before and after it so as to construct a total order. Specifically, NCC chooses the nearest time "in the future" and hopes the rejected transaction can be re-committed at that time. This is possible if the chosen time has not been taken by other transactions.

Figure 4.4 shows the pseudocode for smart retry. When the transaction fails the safeguard check, NCC suggests a new timestamp \(t^{\prime}\), which is the maximum \(t_{w}\) in the server responses. The client then sends smart retry messages that include \(t^{\prime}\) to the participant servers, which attempt to reposition the transaction's requests at \(t^{\prime}\). The server can reposition a request if no newer version was created before \(t^{\prime}\) (lines 3-5) and, if the request is a write, the version it created has not been read by any transactions (lines 6 and 7). The server updates the timestamps of the relevant versions if smart retry succeeds, e.g., the created version receives a new timestamp pair (\(t^{\prime}\), \(t^{\prime}\)), and \(t_{r}\) of the read version is updated to \(t^{\prime}\) if \(t^{\prime}\) is greater (lines 8-11). The client commits the safeguard-rejected transaction if all its smart retry requests succeed, and "truly" aborts it otherwise (lines 9 and 10, Figure 4.1).

Not only does smart retry avoid unnecessary "true" aborts, it also unleashes a higher degree of concurrency, as shown in Figure 2(c). The servers have executed a newer transaction when \(tx_{1}\)'s smart retry (SR) messages arrive, and both transactions can be committed even if the messages interleave, e.g., \(tx_{1}\)'s smart retry succeeds and \(tx_{2}\) passes its safeguard check, because \(tx_{2}\)'s pre-assigned timestamps have left enough room for repositioning \(tx_{1}\)'s requests. In contrast, validation-based techniques would unnecessarily abort \(tx_{1}\) (considering SR as dOCC's validation messages) due to the presence of the conflicting transaction \(tx_{2}\).

**Garbage collection.** Old versions are temporarily stored and quickly garbage collected as soon as they are no longer needed by undecided transactions for smart retry. Only the most recent versions are used to serve new transactions.

### Read-Only Transactions

NCC designs a specialized read-only transaction protocol for read-dominated workloads [42, 11, 12, 26, 39]. Similar to existing works, NCC optimizes read-only transactions by eliminating their commit phase, because they do not modify the system state and have nothing to commit. By eliminating commit messages, read-only transactions achieve the _optimal performance_ in the best case, i.e., one round of non-blocking messages with constant metadata [39, 40, 41]. Eliminating commit messages brings a new challenge to response timing control: write responses can no longer track their dependencies on preceding read-only transactions, as they do not know if and when those reads are committed/aborted (§4.2).
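Before showing how NCC tackles this challenge, the commit path described so far (the safeguard's timestamp-intersection check and the smart-retry suggestion) can be summarized in a short sketch. It is a minimal illustration of ours; the types and names (`TsPair`, `SafeguardCheck`, `SmartRetryTimestamp`) are hypothetical, not NCC's code.

```cpp
// Hypothetical client-side sketch: safeguard check and smart-retry timestamp.
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

struct TsPair { uint64_t tw, tr; };  // refined (t_w, t_r) from one server

// Commit iff some t_c satisfies t_w <= t_c <= t_r for every response,
// i.e., the returned (t_w, t_r) intervals all overlap.
std::optional<uint64_t> SafeguardCheck(const std::vector<TsPair>& pairs) {
  uint64_t lo = 0, hi = UINT64_MAX;
  for (const TsPair& p : pairs) {
    lo = std::max(lo, p.tw);
    hi = std::min(hi, p.tr);
  }
  if (lo <= hi) return lo;  // any t_c in [lo, hi] works
  return std::nullopt;      // false-reject candidate: try smart retry
}

// The nearest time "in the future": the maximum t_w among the responses.
uint64_t SmartRetryTimestamp(const std::vector<TsPair>& pairs) {
  uint64_t t_prime = 0;
  for (const TsPair& p : pairs) t_prime = std::max(t_prime, p.tw);
  return t_prime;
}
```

On a `nullopt` result, the client would send smart retry messages carrying `SmartRetryTimestamp(pairs)` to the participants, and commit only if all of them succeed (§4.4).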
To tackle this challenge, NCC aborts a read-only transaction if its requests could possibly cause the subtle interleaving that leads to timestamp-inversion. Specifically, each client tracks \(t_{ro}\), which is the \(t_{w}\) of the version created by the most recent write on a server, and the client maintains a map of \(t_{ro}\) for each server this client has contacted. A read-only transaction is identified by a Boolean field in the API, _IS_READ_ONLY_. The client pre-assigns the transaction a timestamp \(t\) and sends each read request to the participant server together with the \(t_{ro}\) of the server. To execute a read request, the server checks the version at \(t_{ro}\). If the version is still the most recent, the server continues to execute the read following the basic protocol; otherwise, the server immediately sends a special response that contains a field _ro_abort_, without executing the request. If any of the responses contain _ro_abort_, the client aborts this read-only transaction; otherwise, the client continues with the safeguard check and, if needed, smart retry, after which the client does not send any commit/abort messages. If the client can make the pre-assigned timestamp \(t\) no less than the greatest \(t_{ro}\) across the participant servers, then the read-only transaction is guaranteed to commit as long as there are no _ro_abort_ responses.

This protocol incurs more aborts in the worst case in exchange for reduced message overhead in the normal case, a trade-off that is worthwhile for read-dominated workloads: writes are few, so aborts are rare, and read-only transactions are many, so the savings in message cost are significant. This protocol also expedites the sending of responses to read-write transactions, because read-only transactions do not insert responses into the response queue, i.e., a write response depends only on the reads of preceding read-write transactions in Dependency D\({}_{2}\), not those of read-only transactions.

### Failure Handling

**Tolerating server failures.** NCC assumes servers never fail, as their state is typically made persistent on disk and replicated via state machine replication such as Paxos [28]. All state changes incurred by a transaction in the execute phase (e.g., \(t_{w}\) and \(t_{r}\) of each request) must be replicated for correctness. For instance, after a request is executed, the server inserts its response into the response queue and, in parallel, replicates the request to other replicas. Its response is sent back to the client when it is allowed by response timing control and when its replication is finished. Commit/abort and smart retry messages are also replicated. This basic scheme ensures correctness but incurs high overhead. We will investigate possible optimizations in future work; e.g., NCC could defer replication to the last shot of a transaction, where all state changes are replicated once and for all, without having to replicate each request separately. Server replication inevitably increases latency but does not introduce more aborts, because whether a transaction is committed or aborted is based solely on its timestamps, which are decided during request execution and before replication starts.

Figure 3: Optimizations that match the timestamps with transactions' arrival order. Asynchrony-aware timestamps proactively control the pre-assigned timestamps before execution. Smart retry reactively fixes false rejects by the safeguard after execution, thus avoiding aborting and re-executing transactions.
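Returning briefly to the read-only fast path above (§4.5): the server-side freshness check is essentially one comparison against the client-supplied \(t_{ro}\). The sketch below is ours, with hypothetical types and names (`Store`, `ExecuteReadOnly`), not NCC's implementation.

```cpp
// Hypothetical sketch of the server-side check for NCC's read-only fast path.
#include <cstdint>
#include <map>
#include <string>

struct Version { uint64_t tw; std::string value; };

// key -> versions ordered by t_w (most recent version = rbegin).
using Store = std::map<std::string, std::map<uint64_t, Version>>;

struct ReadReply { bool ro_abort; std::string value; };

ReadReply ExecuteReadOnly(const Store& store, const std::string& key,
                          uint64_t t_ro /* client's view of newest t_w */) {
  const auto& chain = store.at(key);  // assume the key exists
  const Version& newest = chain.rbegin()->second;
  if (newest.tw != t_ro) {
    // The client's knowledge is stale: a newer write exists, so the subtle
    // interleaving behind timestamp-inversion is possible. Abort early.
    return {true, {}};
  }
  // Still the most recent version: execute the read via the basic protocol
  // (timestamp refinement etc. elided in this sketch).
  return {false, newest.value};
}
```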
**Tolerating client failures.** NCC must handle client failures explicitly because clients are not replicated in most systems and NCC co-locates coordinators with clients. NCC adopts an approach similar to that in Sinfonia [4] and RIFL [30], which we briefly explain as follows. For a transaction \(tx\), one of the storage servers \(tx\) accesses is selected as the backup coordinator, and the other servers are cohorts. In the last shot of the transaction logic, which is identified by a field \(\mathit{IS\_LAST\_SHOT}\) in the requests, the client notifies the backup coordinator of the identities of the complete set of cohorts. Cohorts always know which server is the backup coordinator. When the client crashes, e.g., it is unresponsive for a certain amount of time, the backup coordinator reconstructs the final state of \(tx\) by querying the cohorts for how they executed \(tx\), and commits/aborts \(tx\) following the same safeguard and smart retry logic. Because computation is deterministic, the backup coordinator always makes the same commit/abort decision as the client would have made had the client not failed. To tolerate one client failure, NCC needs one backup coordinator, which is a storage server replicated in the usual way.

### Correctness

This section provides proof intuition for why NCC is safe and live. At a high level, NCC guarantees a total order, the real-time order, and liveness with the mechanisms (M\({}_{1}\)) the safeguard, (M\({}_{2}\)) non-blocking execution with response timing control, and (M\({}_{3}\)) early aborts, respectively. We provide a formal proof of correctness in a technical report [2].

**NCC is safe.** We prove that NCC guarantees strict serializability by demonstrating that both Invariants 1 and 2 are upheld. These two invariants correspond to the total order and real-time order requirements, respectively. We present a simplified version of our proof by reasoning about real-time and execution edges in the RSGs, while our arguments can be applied to execution paths by leveraging the transitivity of execution edges (§2.2). Intuitively, NCC commits all requests of a transaction at the same synchronization point, and the synchronization points of all committed transactions construct a total order. The synchronization point is identified by the safeguard searching for a timestamp \(t_{c}\) that intersects the \((t_{w},t_{r})\) pairs in server responses, and \(t_{c}\) is the commit timestamp (synchronization point) that uniquely identifies the transaction.

Specifically, we prove that the safeguard enforces Invariant 1, by contradiction. Assume (1) both \(tx_{1}\) and \(tx_{2}\) are committed, (2) \(tx_{1}\xrightarrow{\mathrm{exe}}tx_{2}\), and (3) \(tx_{2}\xrightarrow{\mathrm{exe}}tx_{1}\). Then, by the definition of execution edges (§2.2) and (3), we can derive that (4) there must exist requests \(req_{2}\) and \(req_{1}\) from \(tx_{2}\) and \(tx_{1}\), respectively, such that one of the following happens: \(req_{1}\) reads the version written by \(req_{2}\), \(req_{1}\) creates a version ordered after the version read by \(req_{2}\), or \(req_{1}\) creates a version ordered after the version created by \(req_{2}\). Then, by NCC's timestamp refinement logic, we must have \(t_{r2}<t_{w1}\), where \(t_{r2}\) is the \(t_{r}\) in \(req_{2}\)'s response and \(t_{w1}\) is the \(t_{w}\) in \(req_{1}\)'s.2 Let \(t_{c2}\) and \(t_{c1}\) denote the commit timestamp \(t_{c}\) of \(tx_{2}\) and \(tx_{1}\), respectively.
By the safeguard's logic, \(t_{w}\leq t_{c}\leq t_{r}\) for any \((t_{w},t_{r})\) pair. Thus, \(t_{c2}\leq t_{r2}\) and \(t_{w1}\leq t_{c1}\), which implies \(t_{c2}<t_{c1}\) because \(t_{r2}<t_{w1}\). By applying the same arguments to assumption (2), we can derive \(t_{c1}<t_{c2}\), which contradicts \(t_{c2}<t_{c1}\). Therefore, we have proved that NCC enforces a simplified version of Invariant 1.

Footnote 2: \(t_{r2}=t_{w1}\) if and only if \(req_{1}\) reads the version created by \(req_{2}\); for simplicity, we break the tie by following the requests' execution order.

We prove that NCC enforces Invariant 2 by considering two cases while assuming \(tx_{1}\xrightarrow{\mathrm{rto}}tx_{2}\). Case 1: \(tx_{1}\) and \(tx_{2}\) access some common data items. Then, we must have (5) \(tx_{1}\xrightarrow{\mathrm{exe}}tx_{2}\), because the servers must execute \(tx_{1}\)'s requests before \(tx_{2}\)'s on the overlapping data by NCC's protocol, i.e., the server executes requests in their arrival order. Then, it must be true that \(\neg(tx_{2}\xrightarrow{\mathrm{exe}}tx_{1})\), by Invariant 1 and (5). Case 2: \(tx_{1}\) and \(tx_{2}\) access disjoint data sets. Then, we prove the claim by contradiction. Assume \(tx_{2}\xrightarrow{\mathrm{exe}}tx_{1}\); then there must exist \(req_{2}\) and \(req_{1}\) in \(tx_{2}\) and \(tx_{1}\), respectively, such that \(req_{2}\) and \(req_{1}\) satisfy (4), by the definition of execution edges. \(req_{1}\)'s response is not returned until \(req_{2}\) is committed or aborted, by response timing control (§4.2). Then, \(req_{2}\) is issued before \(req_{1}\)'s client receives \(req_{1}\)'s response, because a request, e.g., \(req_{2}\), can be committed or aborted only after it is issued and executed. Thus, we can derive \(\neg(tx_{1}\xrightarrow{\mathrm{rto}}tx_{2})\), because \(tx_{2}\) has at least one request, e.g., \(req_{2}\), which starts before \(tx_{1}\) receives all its responses. That is, \(tx_{2}\) starts before \(tx_{1}\) is committed, which contradicts our assumption \(tx_{1}\xrightarrow{\mathrm{rto}}tx_{2}\). Therefore, Invariant 2 must hold.

**NCC is live.** NCC's non-blocking execution guarantees that requests are always run to completion, i.e., execution never stalls (§4.1). Blocking can happen only to the sending of responses due to response timing control, and NCC avoids circular waiting with early aborts (§4.2). Thus, NCC guarantees that transactions finish eventually. NCC's specialized read-only transaction protocol and optimization techniques such as asynchrony-aware timestamps and smart retry do not affect correctness, because transactions are protected by the three mechanisms (i.e., M\({}_{1}\), M\({}_{2}\), and M\({}_{3}\), summarized at the beginning of this subsection) regardless of whether the optimizations or the specialized protocol are used.

## 5 Evaluation

This section answers the following questions:

1. How well does NCC perform, compared to existing strictly serializable techniques dOCC, d2PL, and TR?
2. How well does NCC perform, compared to state-of-the-art serializable (weaker consistency) techniques?
3. How well does NCC recover from client failures?

**Implementation.** We developed NCC on Janus's framework [49]. We improved the framework by making it support multi-shot transactions, optimizing its baselines, and adding more benchmarks.
NCC's core protocols comprise \(\sim\)3 K lines of C++ code. We also show the results of NCC-RW, a version without the read-only transaction protocol, i.e., all transactions are executed as read-write transactions.

**Baselines.** The evaluation includes three strictly serializable baselines (dOCC, d2PL, and Janus) and two non-strictly-serializable baselines (MVTO and TAPIR). We choose d2PL and dOCC because they are the most common strictly serializable techniques. We choose Janus because it is the only open-source TR-based strictly serializable system we could find. We choose MVTO because it has the highest best-case performance among all (weaker) serializable techniques, presenting a performance upper bound. We choose TAPIR because it utilizes timestamp-based concurrency control. Our evaluation focuses on concurrency control and assumes servers never fail. Janus and TAPIR are unified designs of the concurrency control and replication layers, so we disabled their replication and compare only with their concurrency control protocols, shown as Janus-CC and TAPIR-CC, to make the comparison fair. We compare with two variants of d2PL. d2PL-no-wait aborts a transaction if the lock is not available. d2PL-wound-wait makes the transaction wait if it has a larger timestamp and aborts the lock-holding transaction otherwise. All baselines are fully optimized: we co-locate coordinators with clients (even though the baselines cannot handle client failures), combine the execute and prepare phases for d2PL-no-wait and TAPIR-CC, and enable asynchronous commitment, i.e., the client replies to the user without waiting for the acknowledgments of commit messages.

### Workloads and Experimental Setup

We evaluate NCC under three workloads that cover both read-dominated, "simpler" transactions and write-intensive, more "complex" ones. Google-F1 and Facebook-TAO synthesize real-world applications and capture the former; they are one-shot and read-heavy. TPC-C has multi-shot transactions and is write-intensive, capturing the latter. We also vary write fractions in Google-F1 to further explore the latter. Table 4 shows the workload parameters. Google-F1 parameters were published in F1 [57] and Spanner [12]. Facebook-TAO parameters were published in TAO [11]. TPC-C's New-Order, Payment, and Delivery are read-write transactions. Its Order-Status and Stock-Level are read-only. Janus's original implementation of TPC-C is one-shot, and we modified it to make Payment and Order-Status multi-shot, to demonstrate that NCC is compatible with multi-shot transactions and to evaluate its performance beyond one-shot (yet still relatively short) transactions.

**Experimental setting.** We use Microsoft Azure [45]. Each machine has 4 CPUs (8 cores), 16 GB of memory, and a 1 Gbps network interface. We use 8 machines as servers and 16-32 machines as clients that generate open-loop requests to saturate the servers. (The open-loop clients back off when the system is overloaded to mitigate queuing delays.) Google-F1 and Facebook-TAO have 1 M keys, with the popular keys randomly distributed to balance load. We run 3 trials for each test and 60 seconds for each trial. Experiments are CPU-bound (i.e., handling network interrupts).

### Result Overview

NCC outperforms the strictly serializable protocols dOCC, d2PL, and TR (Janus-CC), with 80%-20\(\times\) higher throughput and 2-10\(\times\) lower latency under various workloads (Figure 6) and write fractions (Figure 7(a)). NCC outperforms and closely matches the serializable systems TAPIR-CC and MVTO, respectively (Figure 6(b)).
NCC recovers from client failures with minimal performance impact (Figure 6(c)). Please note that Figure 6 and Figure 6(b) have log-scale axes. Figure 5 summarizes the takeaways of the performance improvements.

Figure 4: Workload parameters. RO and RW mean read-only and read-write transactions, respectively. TPC-C has a scaling factor of \(10\) districts per warehouse and \(8\) warehouses per server.

Figure 5: Two categories of natural consistency: Facebook-TAO and Google-F1 have low contention; TPC-C and Google-WF (varying write fractions) are write-intensive but arrive in order. TPC-C Payment and Order-Status are multi-shot.

### Latency vs. Throughput Experiments

Figure 6 shows that NCC's overall performance is strictly better than the baselines', i.e., higher throughput with the same latency and lower latency with the same throughput.

**Google-F1 and Facebook-TAO.** Figure 5(a) shows the results under Google-F1. The x-axis is the system throughput, and the y-axis shows the median read latency in log scale. A horizontal line (O.P.) marks the operating point with reasonably low latency (\(<10\) ms). At the operating point, NCC has 2-4\(\times\) higher throughput than dOCC and d2PL. We omit the results for Janus-CC to make the graph clearer, as we found that Janus-CC's performance is incomparable with (consistently worse than) the other baselines, because Janus-CC is designed for highly contended workloads and relies on heavy dependency tracking, which is more costly under low contention.

NCC has better performance because Google-F1 and Facebook-TAO are read-dominated and thus have few conflicts, making most of their transactions naturally consistent. NCC enables low overhead by leveraging natural consistency. In particular, its read-only transaction protocol executes the dominating reads with the minimum costs (Figure 5). For instance, at the operating point, about 99% of NCC's transactions passed their safeguard check and finished in one round trip. 99.1% of the transactions did not delay their responses, i.e., the real-time order dependencies were already satisfied when they arrived. That is, 99% of the transactions were finished by NCC within a single RTT without any delays. Of the 1% of transactions that did not pass the safeguard check initially, 70% passed the smart retry. Only 0.2% of the transactions were aborted and retried from scratch. All of them were committed eventually. As a result, NCC can finish most transactions with one round of messages (for the read-only ones) and a latency of one RTT (for both read-only and read-write), while dOCC and d2PL-wound-wait require three rounds of messages and a latency of two RTTs (asynchronous commitment saves one RTT). NCC has much higher throughput than d2PL-no-wait due to its novel read-only protocol, which requires one round of messages while d2PL-no-wait requires two. NCC's fewer messages translate to much lower latency under medium and high load due to lower queuing delay. d2PL-no-wait performs similarly to NCC-RW because NCC-RW executes read-only transactions by following its read-write protocol. However, NCC-RW outperforms d2PL-no-wait under higher load, because conflicts cause d2PL-no-wait to abort more frequently while NCC-RW has fewer false aborts by leveraging the natural arrival order. This is more obvious in the Facebook-TAO results, shown in Figure 5(b), because Facebook-TAO has much larger read transactions that are more likely to conflict with writes. The results of Facebook-TAO demonstrate similar takeaways.
**TPC-C.** Each experiment ran all five types of TPC-C transactions, and Figure 5(c) shows the latency and throughput (both in log scale) of New-Order, while the throughput of the other four types is proportional. NCC and NCC-RW have \(\sim\)20\(\times\) higher peak throughput with \(\sim\)10\(\times\) lower latency compared to dOCC. dOCC and d2PL-no-wait have many false aborts when load increases, due to conflicting writes. NCC and NCC-RW can execute most naturally consistent transactions with low costs, even if they conflict. For instance, NCC-RW has more than 80% of its transactions pass the safeguard check and fewer than 10% abort and retry from scratch. NCC-RW has 50% higher peak throughput than d2PL-wound-wait because NCC-RW requires only two rounds of messages, while d2PL-wound-wait requires three. NCC-RW has higher peak throughput than NCC because TPC-C has very few read-only transactions, which are also more likely to abort in NCC due to conflicting writes. Janus-CC's performance benefits mostly come from unifying the transaction and replication layers and are less significant in a single-datacenter setting, especially after we made some TPC-C transactions multi-shot.

### Additional Experiments

We show more experiments with Google-F1. We choose Google-F1 because it has both read-write and read-only transactions, while Facebook-TAO has only read-only transactions.

**Varying write fractions.** Figure 6(a) shows the throughput when we increase the write fraction up to 30%. Each system is run at \(\sim\)75% load according to Figure 5(a). The y-axis is the throughput normalized to the maximum throughput of each system during the experiment. The higher the write fraction, the more conflicts in the system. The results show that NCC-RW is the most resilient to increased conflicts, because NCC-RW can exploit more concurrency in those conflicting but naturally consistent transactions, i.e., NCC has fewer aborts. In contrast, the other protocols may falsely abort transactions due to failed validation (dOCC) or lock unavailability (the d2PL variants). NCC's read-only transactions are more likely to abort when writes increase, because frequent writes cause the client to have stale knowledge of the most recently executed writes on each server; as a result, NCC must abort the reads to avoid timestamp-inversion.

Figure 6: NCC achieves much lower latency under read-dominated workloads with its specialized read-only transaction algorithm, 50% lower latency under write-intensive workloads, and at least 80% higher throughput across workloads.

**Comparing with serializable systems.** Figure 6(b) compares NCC with MVTO and TAPIR-CC, which provide serializability, under Google-F1. NCC outperforms TAPIR-CC because NCC has fewer messages thanks to its read-only transaction protocol. MVTO and NCC have similar performance under low and medium load because they have the same number of messages and RTTs. Under high load, MVTO outperforms NCC when many read-only transactions in NCC are aborted: MVTO never aborts reads because it is allowed to read stale versions, while NCC must read the most recent version and handle timestamp-inversion. That is, MVTO presents a performance upper bound for strictly serializable systems, and NCC closely matches that upper bound.

**Failure recovery.** Figure 6(c) shows how well NCC-RW handles client failures under Google-F1.
We inject failures 10 seconds into the experiment by forcing _all_ clients to stop sending the commit messages of ongoing transactions while continuing to issue new transactions. Undelivered commit messages cause servers to delay the responses of later transactions due to response timing control, until the recovery mechanism is triggered after a timeout. We show two timeout values, 1 and 3 seconds. NCC-RW recovers quickly once failures are detected, and the failures have limited impact on throughput. In many realistic settings, failures of one or a few clients would have negligible impact because uncommitted reads do not block other reads. Similarly, NCC is minimally impacted by client failures because its read-only transactions do not send commit messages and thus never delay later writes.

## 6 Related Work

NCC proposes a new strictly serializable distributed protocol. Thus, this section places it in the context of existing strictly serializable techniques, single-machine concurrency control, and techniques that provide weaker consistency. At a high level, compared to these three categories of work, NCC respectively provides better performance, addresses a different problem setting, and provides stronger guarantees.

**General strictly serializable protocols.** As discussed in Section 2.3, existing general strictly serializable protocols are d2PL, dOCC, TR, or their variants, suffering extra costs when transactions are naturally consistent. For instance, Spanner's read-write transactions [12], Sinfonia [4], and Carousel [66] are variants of d2PL that must acquire locks. FaRM [15], FaRMv2 [56], and RIFL [30] are variants of dOCC that suffer extra validation costs, even though they use timestamp-based techniques to reduce validation aborts. AOCC [2] is a variant of dOCC and employs a data-shipping model to improve performance, e.g., it periodically sends data from servers to client caches, which is different from NCC's function-shipping model, i.e., NCC only controls the execution and does not migrate data. Rococo [48] and its descendant Janus [49] reorder transactions to minimize aborts. Similarly, Granola [13] requires an all-to-all exchange of timestamps between servers, incurring extra messages and RTTs. Our evaluation shows that NCC outperforms these techniques for real-world workloads where natural consistency is prevalent. When transactions are not naturally consistent, however, these techniques could outperform NCC. NCC achieves better performance by exploiting natural consistency. Figure 8 summarizes the performance and consistency properties of NCC and some representative distributed systems.

**Special strictly serializable techniques.** Some work utilizes a centralized sequencer to enforce strict serializability [7, 19, 32, 35, 43, 54, 71]. Because all transactions must contact the sequencer before execution (e.g., Eris [32]), besides the extra latency, the sequencer often becomes a single point of failure and a scalability bottleneck. Scaling out sequencers incurs extra costs, e.g., Calvin [60] requires all-to-all messages among sequencers for each transaction (epoch).

Figure 7: NCC performance with different write fractions (Google-WF), compared to serializable protocols (TAPIR-CC and MVTO), and under failures for the Google-F1 workload.

Some ensure strict serializability by moving all the data a transaction accesses to the same machine, e.g., LEAP [34]. Some rely on program analysis and are application-dependent, e.g., the homeostasis protocol [55].
Some rely on extensive gossip messages for liveness, which lowers throughput and increases latency, e.g., Ocean Vista [18], where the latency of a transaction cannot be lower than the gossiping delay of the slowest server, even if that server is not accessed by the transaction. General techniques such as NCC do not have the above limitations.

**Strictly serializable read-only transaction protocols.** To the best of our knowledge, the only existing strictly serializable read-only transaction protocol that has the optimal _best-case_ performance (e.g., one round of non-blocking messages with constant metadata) is Spanner [12]. Spanner ensures strict serializability by using d2PL for read-write transactions and by using synchronized clocks (TrueTime) for read-only transactions. TrueTime must be tightly bounded for both correctness and performance, which is achieved by Google's infrastructure using special hardware, e.g., GPS and atomic clocks [10], that is not generally available. For instance, CockroachDB [59], which began as an external Spanner clone, chose not to support strict serializability because it does not have access to such infrastructure [25]. In contrast, NCC's read-only transactions achieve optimal _best-case_ performance and provide strict serializability without requiring synchronized clocks.

**Single-machine concurrency control.** Concurrency control for single-machine databases is different from the distributed setting on which this paper focuses. First, some techniques are not feasible in a distributed setting. For instance, Silo [62] relies on atomic instructions, and MVTL [3] relies on shared lock state, both of which are challenging across machines. Second, most techniques, e.g., Silo [62] and TicToc [67], follow a multi-phase design and would be expensive if made distributed, e.g., they would need one round of inter-machine messages for each phase and distributed lock management, which would be unnecessary costs for naturally consistent transactions.

**Protocols for weaker consistency.** Many systems trade strong consistency for better performance. For instance, some settle for restricted transaction APIs, e.g., read-only and/or write-only transactions [37, 16, 36]. Some choose to support weaker consistency models, e.g., causal consistency and serializability [17, 37, 38, 44, 63, 68]. In contrast, NCC provides stronger consistency and supports general transactions, greatly simplifying application development.

## 7 Conclusion

Strictly serializable datastores are advocated for simplifying application development and reducing consistency anomalies. This paper presents NCC, a new design that provides strict serializability with minimal overhead by leveraging natural consistency in datacenter workloads. NCC identifies timestamp-inversion, a fundamental correctness pitfall into which several existing works have fallen. NCC significantly outperforms existing strictly serializable techniques and closely matches the performance of serializable systems.
2301.06383
Higgs boson decays to $B_c$ meson in the fragmentation-function approach
In the paper, we present a calculation of the decay widths for the Higgs boson decays to the $B_c$, $B_c^*$, $B_c(2^1S_0)$ and $B_c^*(2^3S_1)$ mesons using the fragmentation-function approach. In the calculation, the fragmentation functions up to order $\alpha_s^3$ based on the nonrelativistic QCD factorization theory are used, and the decay widths for $H\to Q+X$ and $H \to g+X$ at the partonic level are calculated up to order $\alpha_s$. The large logarithms of $m_H^2/m_{Bc}^2$ are resummed up to next-to-leading logarithmic accuracy by solving the evolution equations for the running quark masses and the fragmentation functions. Compared to the leading-order decay widths based on the nonrelativistic QCD approach, the decay widths based on the fragmentation-function approach that include the higher-order QCD corrections are reduced significantly. Our numerical results show that there are about $1.2\times 10^5$ $B_c$ events via the Higgs decays to be produced at the HL-LHC with $3ab^{-1}$, and about $1.6\times 10^6$ $B_c$ events via the Higgs decays to be produced at the HE-LHC with $15ab^{-1}$.
Xu-Chang Zheng, Xing-Gang Wu, Xi-Jie Zhan, Guang-Yu Wang, Hong-Tai Li
2023-01-16T12:07:23Z
http://arxiv.org/abs/2301.06383v2
# Higgs boson decays to \(B_{c}\) meson in the fragmentation-function approach

###### Abstract

In the paper, we present a calculation of the decay widths for the Higgs boson decays to the \(B_{c}\), \(B_{c}^{*}\), \(B_{c}(2^{1}S_{0})\) and \(B_{c}^{*}(2^{3}S_{1})\) mesons using the fragmentation-function approach. In the calculation, the fragmentation functions up to order \(\alpha_{s}^{3}\) based on the nonrelativistic QCD factorization theory are used, and the decay widths for \(H\to Q+X\) and \(H\to g+X\) at the partonic level are calculated up to order \(\alpha_{s}\). The large logarithms of \(m_{H}^{2}/m_{B_{c}}^{2}\) are resummed up to next-to-leading logarithmic accuracy by solving the evolution equations for the running quark masses and the fragmentation functions. Compared to the leading-order decay widths based on the nonrelativistic QCD approach, the decay widths based on the fragmentation-function approach, which include the higher-order QCD corrections, are reduced significantly. Our numerical results show that about \(1.2\times 10^{5}\) \(B_{c}\) events will be produced via the Higgs decays at the HL-LHC with \(3\,ab^{-1}\), and about \(1.6\times 10^{6}\) \(B_{c}\) events at the HE-LHC with \(15\,ab^{-1}\).

## I Introduction

The discovery of the Higgs boson at the LHC [1; 2] in 2012 was an important breakthrough in our understanding of fundamental interactions. After that, an important task is to accurately study the properties of the Higgs boson, including the Higgs couplings to the fundamental fermions and the gauge bosons, as well as the Higgs self-coupling, and to test whether these couplings are completely consistent with those predicted by the Standard Model (SM). Any measurement that deviates from the SM prediction may be a signal of new physics. The LHC has achieved great success in discovering the Higgs boson, and has studied some couplings of the Higgs boson, e.g., the couplings to the heavy vector bosons [3; 4; 5] and to the charged fermions of the third generation [6; 7; 8; 9; 10; 11; 12; 13; 14]. However, the precision of these measurements is restricted due to the limited number of Higgs events and the complicated hadronic background. After a period of shutdown, the LHC has just been upgraded to Run 3. During Run 3, more data will be collected than in the first two runs combined. Furthermore, the LHC is planned to be upgraded to the High-Luminosity LHC (HL-LHC) and the High-Energy LHC (HE-LHC) after Run 3. At the HL-LHC (\(\sqrt{s}=14\,\mathrm{TeV}\)), with an integrated luminosity of \(3\,ab^{-1}\), about \(1.6\times 10^{8}\) Higgs boson events will be produced; at the HE-LHC (\(\sqrt{s}=27\,\mathrm{TeV}\)), with an integrated luminosity of \(15\,ab^{-1}\), about \(2.2\times 10^{9}\) Higgs boson events will be produced [15]. In addition, several lepton colliders are under consideration, e.g., the Circular Electron-Positron Collider (CEPC) [16], the International Linear Collider (ILC) [17], the \(e^{+}e^{-}\) Future Circular Collider (FCC-ee) [18], and the Muon Collider [19; 20]. One of the advantages of lepton colliders is that the background is clean; thus, they are suitable for precision measurements of the properties of the Higgs boson. With these collider platforms, some rare decays of the Higgs boson, such as the Higgs decays to quarkonium, may be measured [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39].
These rare decays can be used to determine the magnitude of the Yukawa couplings of the Higgs boson to the heavy quarks, and they have distinctive signals to be detected at the high-luminosity or high-energy colliders. The \(B_{c}\) meson carries two different heavy flavors and provides a unique bound-state system for testing the SM. Combining these two aspects, the authors of Ref.[39] studied the Higgs decays to the \(B_{c}\) meson at leading-order (LO) accuracy, and they found that about \(1.4\times 10^{5}\) \(B_{c}\) events can be produced through the Higgs decays at the HL-LHC. In addition to studying the Higgs properties, this decay process can also be used to study the production mechanism of the \(B_{c}\) meson. Thus, it is attractive to present a more precise study of this decay process. In the present paper, we devote ourselves to reanalyzing this decay process with higher accuracy in the fragmentation-function approach.

There are large logarithms of the form \(\ln(m_{H}^{2}/m_{B_{c}}^{2})\) in the perturbative series of the decay width of the Higgs boson into a \(B_{c}\) meson, which come from two sources: the renormalization of the Yukawa couplings and the emission of collinear gluons. These large logarithms may spoil the convergence of the perturbative expansion; thus, it is important to sum them to all orders (in \(\alpha_{s}\)) in the calculation. It is noted that under the fragmentation-function approach, the large logarithms from these two sources can be resummed simultaneously. More explicitly, the large logarithms from the renormalization of the Yukawa couplings can be resummed by using the running quark masses for the heavy (\(b\) and \(c\)) quarks [40; 41], while the large logarithms from the collinear gluon emission can be resummed through solving the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations for the fragmentation functions of the \(B_{c}\) production [42; 43]. In this paper, we will resum these large logarithms up to next-to-leading logarithmic (NLL) accuracy under the fragmentation-function approach.

Because they carry two different heavy flavors, the excited states of the \(B_{c}\) meson below the BD threshold\({}^{1}\) will decay to the ground-state \(B_{c}\) meson with almost 100% probability through electromagnetic or strong interaction. Thus, the excited states are important sources of the ground-state \(B_{c}\) meson. Furthermore, the production of these excited states via the Higgs boson decays is also interesting in itself. Therefore, besides the decay width for the Higgs boson into the ground-state \(B_{c}\) meson, we will also calculate the decay widths for the Higgs boson decays into the \(S\)-wave excited states, e.g., \(B_{c}^{*}\), \(B_{c}(2^{1}S_{0})\), and \(B_{c}^{*}(2^{3}S_{1})\).

Footnote 1: The \(B_{c}\) excited states above the BD threshold will decay mainly into a pair of \(B\) and \(D\) mesons [44].

The paper is organized as follows. In Sec.II, we present useful formulas for the decay width of the Higgs boson into the \(B_{c}\) meson under the fragmentation-function approach. In Sec.III, numerical results and discussions are presented. Section IV is reserved as a summary.

## II Calculation formalism

In this section, we present useful formulas for the considered decay width under the fragmentation-function approach.
For simplicity, we only give the formulas for the ground-state \(B_{c}\) meson; the formulas for the excited states [i.e., \(B_{c}^{*}\), \(B_{c}(2^{1}S_{0})\), and \(B_{c}^{*}(2^{3}S_{1})\)] are similar to the ground-state \(B_{c}\) case. Under the fragmentation-function approach, the differential decay width for the decay channel \(H\to B_{c}+X\) can be written as \[\frac{d\Gamma_{H\to B_{c}+X}}{dz}=\sum_{i}\int_{z}^{1}\frac{dy}{y}\frac{d\hat{\Gamma}_{H\to i+X}(y,\mu_{F})}{dy}\,D_{i\to B_{c}}(z/y,\mu_{F})+\mathcal{O}(m_{B_{c}}^{2}/m_{H}^{2}), \tag{1}\] where \(d\hat{\Gamma}_{H\to i+X}(y,\mu_{F})/dy\) stands for the differential decay width of \(H\to i+X\) at the partonic level, \(D_{i\to B_{c}}(z/y,\mu_{F})\) stands for the fragmentation function for a parton \(i\) into the \(B_{c}\) meson, \(z=2p_{B_{c}}\cdot P_{H}/m_{H}^{2}\) denotes the energy fraction carried by the \(B_{c}\) meson from the Higgs boson, \(\mu_{F}\) denotes the factorization scale, and the sum extends over the parton species.

The decay widths for the Higgs boson into a parton can be calculated through perturbation theory. Up to now, the decay width for the Higgs boson into bottom quarks has been calculated up to order \(\alpha_{s}^{4}\) [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. However, the expressions for the differential decay widths \(d\hat{\Gamma}/dz\) of the Higgs boson into a quark or gluon are not given in those references. We calculate the differential decay widths for \(H\to Q+X\) and \(H\to g+X\) up to order \(\alpha_{s}\) in this work. In the calculation, we neglect the quark mass in the amplitudes and phase-space integrals except for the quark mass in the Yukawa coupling. This approximation will only lead to an error of \(\mathcal{O}(m_{Q}^{2}/m_{H}^{2})\). Then, we have \[\frac{d\hat{\Gamma}_{H\to Q+X}^{\text{NLO}}(y,\mu_{F})}{dy}=\frac{\sqrt{2}N_{c}G_{F}m_{H}\overline{m}_{Q}^{2}(\mu_{R})}{8\pi}\Bigg\{\delta(1-y)+\frac{\alpha_{s}(\mu_{R})}{2\pi}\Bigg[P_{QQ}^{(0)}(y)\ln\left(\frac{m_{H}^{2}}{\mu_{F}^{2}}\right)-3\,C_{F}\,\delta(1-y)\ln\left(\frac{m_{H}^{2}}{\mu_{R}^{2}}\right)+C_{Q}(y)\Bigg]\Bigg\}, \tag{2}\] \[\frac{d\hat{\Gamma}_{H\to g+X}^{\text{NLO}}(y,\mu_{F})}{dy}=\sum_{Q=b,c}\frac{\sqrt{2}N_{c}G_{F}m_{H}\overline{m}_{Q}^{2}(\mu_{R})\alpha_{s}(\mu_{R})}{8\pi^{2}}\Bigg[P_{gQ}^{(0)}(y)\ln\left(\frac{m_{H}^{2}}{\mu_{F}^{2}}\right)+C_{g}(y)\Bigg], \tag{3}\] where \(Q\) can be a quark or an antiquark, \(N_{c}=3\) is the number of quark colors, \(C_{F}=(N_{c}^{2}-1)/(2N_{c})\) is the quadratic Casimir operator, \(G_{F}\) is the Fermi constant, and \(\overline{m}_{Q}(\mu_{R})\) is the running quark mass defined in the modified-minimal-subtraction (\(\overline{\text{MS}}\)) scheme. The expressions of the LO splitting functions are \[P_{QQ}^{(0)}(y)=C_{F}\left[\frac{3}{2}\delta(1-y)+\frac{1+y^{2}}{(1-y)_{+}}\right], \tag{4}\] \[P_{gQ}^{(0)}(y)=C_{F}\,\frac{1+(1-y)^{2}}{y}. \tag{5}\] The expressions of the \(C_{i}(y)\) functions in Eqs.(2) and (3) are \[C_{Q}(y)=C_{F}\Bigg\{\left(\frac{3}{2}+\frac{2\pi^{2}}{3}\right)\delta(1-y)-\frac{3}{2}\frac{1}{(1-y)_{+}}+2\left[\frac{\ln(1-y)}{1-y}\right]_{+}+\frac{5}{2}+\frac{y}{2}+4\,\frac{\ln y}{1-y}-(1+y)\left[2\ln y+\ln(1-y)\right]\Bigg\}, \tag{6}\] \[C_{g}(y)=C_{F}\Bigg\{\frac{1+(1-y)^{2}}{y}\left[2\ln y+\ln(1-y)\right]+y\Bigg\}. \tag{7}\]
In the calculation, the ultraviolet (UV) divergences are removed by renormalization, and the renormalization of the quark mass is carried out in the usual \(\overline{\rm MS}\) scheme. Besides the UV divergences, there are infrared (IR) soft and collinear divergences appearing in the virtual and real corrections. The IR soft divergences are canceled between the virtual and the real corrections, while the IR collinear divergences (which should be absorbed into the bare fragmentation functions) are subtracted according to the \(\overline{\rm MS}\) scheme. To avoid large logarithms appearing in \(d\hat{\Gamma}_{H\to i+X}/dz\), we will set the renormalization and factorization scales as \(\mu_{R}=\mu_{F}=m_{H}\) in the following calculation.

The fragmentation functions for a parton into the \(B_{c}\) meson can be calculated based on the NRQCD factorization theory, i.e., \[D_{i\to B_{c}}(z,\mu_{F})=\sum_{n}d_{i\to(c\bar{b})[n]}(z,\mu_{F})\langle{\cal O}^{B_{c}}(n)\rangle, \tag{8}\] where \(d_{i\to(c\bar{b})[n]}(z,\mu_{F})\) is the short-distance coefficient (SDC) for the \((c\bar{b})[n]\) pair production, which can be calculated through perturbative QCD, and \(\langle{\cal O}^{B_{c}}(n)\rangle\) is the long-distance matrix element (LDME) for the transition of a \((c\bar{b})[n]\) pair into the \(B_{c}\) meson, which can be estimated through phenomenological models, e.g., potential models. The sum extends over the intermediate Fock states. In the lowest nonrelativistic approximation, only the Fock state \(n={}^{1}S_{0}^{[1]}({}^{3}S_{1}^{[1]})\) needs to be considered in the production of the \(B_{c}(B_{c}^{*})\) meson.

The LO fragmentation functions for \(\bar{b}\to B_{c}(B_{c}^{*})\) and \(c\to B_{c}(B_{c}^{*})\) were first correctly calculated by the authors of Refs.[53; 54]. They extracted the fragmentation functions from the processes \(Z\to B_{c}(B_{c}^{*})+b+\bar{c}\) by taking the approximation \(m_{B_{c}}/m_{Z}\to 0\). Their results were confirmed by subsequent calculations in Refs.[55; 56] using different methods. For a long time, the NLO fragmentation functions for \(B_{c}\) production were absent. Recently, with the development of loop-diagram calculation techniques, the NLO fragmentation functions for \(\bar{b}\to B_{c}(B_{c}^{*})\) and \(c\to B_{c}(B_{c}^{*})\) have been given in Ref.[57]. Furthermore, the fragmentation functions for \(g\to B_{c}(B_{c}^{*})\), which start at order \(\alpha_{s}^{3}\), have been obtained in Refs.[58; 59]. In this work, we will adopt the fragmentation functions up to order \(\alpha_{s}^{3}\) obtained in Refs.[57; 58].

In order to avoid large logarithms appearing in \(d\hat{\Gamma}_{H\to i+X}/dz\), we have set the factorization scale as \(\mu_{F}=m_{H}\) in Eq.(1). However, large logarithms of \(m_{H}^{2}/m_{B_{c}}^{2}\) will then appear in the fragmentation functions. To resum the large logarithms in the fragmentation functions, we first calculate the fragmentation functions up to order \(\alpha_{s}^{3}\) at the initial scales \(\mu_{R0}=\mu_{F0}=m_{b}+m_{c}\) using the codes developed in our previous works [57; 58].
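Numerically, the master formula, Eq. (1), is a plain one-dimensional convolution between the partonic spectrum and the fragmentation function; before describing the evolution step next, the toy sketch below (our own illustration, with placeholder integrands) shows that structure. A real implementation would also need a subtraction term for the plus-distributions in Eq. (2), which this sketch ignores.

```cpp
// Toy sketch of the convolution in Eq. (1):
//   dGamma/dz = sum_i \int_z^1 (dy/y) [dGamma_i/dy](y) * D_i(z/y).
#include <functional>

// Simple trapezoidal quadrature of f on [a, b].
double Trapezoid(const std::function<double(double)>& f,
                 double a, double b, int n = 2000) {
  double h = (b - a) / n, sum = 0.5 * (f(a) + f(b));
  for (int k = 1; k < n; ++k) sum += f(a + k * h);
  return sum * h;
}

// One parton channel; in practice one sums over i = b-bar, c, g.
double dGammaDz(double z,
                const std::function<double(double)>& dGamma_dy,  // partonic
                const std::function<double(double)>& D) {        // FF at mu_F
  return Trapezoid([&](double y) { return dGamma_dy(y) * D(z / y) / y; },
                   z, 1.0 - 1e-9);
}
```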
Then the fragmentation functions at \(\mu_{F}=m_{H}\) can be obtained through solving the DGLAP equations, i.e., \[\frac{d}{d\,\ln\mu_{F}^{2}}D_{i\to B_{c}}(z,\mu_{F})=\frac{\alpha_{s}(\mu_{F})}{2\pi}\sum_{j}\int_{z}^{1}\frac{dy}{y}P_{ji}(y,\alpha_{s}(\mu_{F}))D_{j\to B_{c}}(z/y,\mu_{F}), \tag{9}\] where \(P_{ji}(y,\alpha_{s}(\mu_{F}))\) are the splitting functions, which can be expanded in powers of \(\alpha_{s}\): \[P_{ji}(y,\alpha_{s}(\mu_{F}))=P_{ji}^{(0)}(y)+\frac{\alpha_{s}(\mu_{F})}{2\pi}P_{ji}^{(1)}(y)+{\cal O}(\alpha_{s}^{2}). \tag{10}\] The LO splitting functions for \(Q\to Q\) and \(Q\to g\) have been given in Eqs.(4) and (5), and the LO splitting functions for \(g\to Q\) and \(g\to g\) are \[P_{Qg}^{(0)}(y)=T_{F}\left[y^{2}+(1-y)^{2}\right], \tag{11}\] \[P_{gg}^{(0)}(y)=2C_{A}\left[\frac{y}{(1-y)_{+}}+\frac{1-y}{y}+y(1-y)\right]+\frac{1}{6}\delta(1-y)(11C_{A}-4n_{f}T_{F}), \tag{12}\] where \(C_{A}=N_{c}\) and \(T_{F}=1/2\). The NLO corrections to these splitting functions have been obtained in Refs.[60; 61; 62; 63; 64]; they are too lengthy to be reproduced here.

It is nontrivial to solve these integro-differential equations. In this work, we adopt the program FFEVOL [65] to solve the DGLAP equations numerically. In solving the DGLAP equations, the NLO fragmentation functions at \(\mu_{F0}=m_{b}+m_{c}\) are used as the boundary conditions, and the NLO splitting functions are used as the evolution kernel. After the evolution of the fragmentation functions from the initial factorization scale \(\mu_{F0}=m_{b}+m_{c}\) to the final factorization scale \(\mu_{F}=m_{H}\), the large logarithms of \(m_{H}^{2}/(m_{b}+m_{c})^{2}\) are resummed up to NLL accuracy.

## III Numerical results and discussion

To do the numerical calculation, the input parameters are adopted as follows: \[G_{F}=1.16638\times 10^{-5}\,{\rm GeV}^{-2},\quad m_{H}=125.3\,{\rm GeV},\] \[\overline{m}_{b}(\overline{m}_{b})=4.18\,{\rm GeV},\quad\overline{m}_{c}(\overline{m}_{c})=1.27\,{\rm GeV},\] \[|R_{1S}(0)|^{2}=1.642\,{\rm GeV}^{3},\quad|R_{2S}(0)|^{2}=0.983\,{\rm GeV}^{3}, \tag{13}\] where the values for the Fermi constant and the masses are taken from the Particle Data Group (PDG) [66]. \(R_{1S}(0)\) and \(R_{2S}(0)\) are the radial wave functions at the origin for the \((c\bar{b})\) bound states, which are taken from the calculation based on the Buchmuller-Tye potential model [67].

The running masses at \(\mu_{R}=m_{H}\) can be obtained through solving the renormalization-group equation, i.e., \[\frac{d\,\overline{m}_{Q}(\mu_{R})}{d\,\ln\mu_{R}^{2}}=-\overline{m}_{Q}(\mu_{R})\sum_{i\geq 0}\gamma_{m,i}\left(\frac{\alpha_{s}(\mu_{R})}{\pi}\right)^{i+1}, \tag{14}\] where the first two coefficients [68; 69] are \[\gamma_{m,0}=1,\quad\gamma_{m,1}=\frac{1}{16}\left(\frac{202}{3}-\frac{20}{9}n_{f}\right), \tag{15}\] and \(n_{f}\) is the number of active flavors. We solve this renormalization-group equation by using the Mathematica package RunDec [70], and only the first two coefficients on the right-hand side of Eq.(14) are kept (i.e., the obtained running masses at \(\mu_{R}=m_{H}\) reach NLL accuracy). Then we have \[\overline{m}_{b}(m_{H})=2.78\,\text{GeV},\quad\overline{m}_{c}(m_{H})=0.60\,\text{GeV}. \tag{16}\] In the calculation of the fragmentation functions in Ref.[57], the heavy quark masses are renormalized in the on-shell (OS) scheme.
The OS (pole) masses for the heavy quarks can be obtained from the \(\overline{\text{MS}}\) masses through \(m_{Q}=\overline{m}_{Q}(\overline{m}_{Q})[1+4\alpha_{s}(\overline{m}_{Q})/(3\pi)]\) [71; 72; 73; 74; 75], and we have \[m_{b}=4.58\,\text{GeV},\quad m_{c}=1.50\,\text{GeV}. \tag{17}\] For the strong coupling constant, we adopt the two-loop formula, i.e., \[\alpha_{s}(\mu_{R})=\frac{4\pi}{\beta_{0}L}\left(1-\frac{\beta_{1}\ln L}{\beta_{0}^{2}L}\right), \tag{18}\] where \(L=\ln(\mu_{R}^{2}/\Lambda_{QCD}^{2})\), \(\beta_{0}=11-2n_{f}/3\), and \(\beta_{1}=102-38n_{f}/3\). According to \(\alpha_{s}(m_{Z})=0.1185\), we obtain \(\alpha_{s}(\overline{m}_{c})=0.420\), \(\alpha_{s}(\overline{m}_{b})=0.228\), \(\alpha_{s}(m_{b}+m_{c})=0.204\), and \(\alpha_{s}(m_{H})=0.113\).

### Comparison of the results at the LO level

In the fragmentation-function approach, some terms which are suppressed by powers of \(m_{B_{c}}^{2}/m_{H}^{2}\) are neglected. In order to see the magnitude of those neglected higher-power (in \(m_{B_{c}}^{2}/m_{H}^{2}\)) contributions, we compare the decay widths calculated by the "direct" NRQCD and the fragmentation-function approaches. Here, we only present the comparison at the LO level. The differential decay width for the decay \(H\to B_{c}+X\) under the (direct) NRQCD approach can be written as \[d\Gamma_{H\to B_{c}+X}=\sum_{n}d\widehat{\Gamma}_{H\to(c\bar{b})[n]+X}\langle\mathcal{O}^{B_{c}}(n)\rangle, \tag{19}\] where \(d\widehat{\Gamma}_{H\to(c\bar{b})[n]+X}\) is the SDC for the \((c\bar{b})[n]\) pair production, which can be calculated through perturbation theory. At the LO level, there are four Feynman diagrams responsible for the decay \(H\to B_{c}+X\). The details of the LO calculation based on the direct NRQCD approach can be found in Ref.[39]. In the calculation, we adopt the package FeynArts [76] to generate the Feynman diagrams and the amplitudes, and use FeynCalc [77; 78] to carry out the Dirac traces.

The partial decay widths for \(H\to B_{c}+X\) and \(H\to B_{c}^{*}+X\) under the direct NRQCD approach and the fragmentation-function approach\({}^{2}\) are presented in Tables 1 and 2. Here, for consistency, the quark masses are taken as the corresponding pole masses and the strong coupling is taken as \(\alpha_{s}(m_{b}+m_{c})=0.204\) under both approaches. In the tables, the contributions from the \(\bar{b}\)-fragmentation and the \(c\)-fragmentation, as well as the total contribution, are presented explicitly. For the direct NRQCD approach, the first two Feynman diagrams in Fig.1 are responsible for the \(\bar{b}\)-fragmentation contribution, while the last two Feynman diagrams are responsible for the \(c\)-fragmentation contribution\({}^{3}\). The interference contribution comes from the interference of the first two Feynman diagrams and the last two Feynman diagrams; the fragmentation-function approach cannot give the interference contribution.

Figure 1: LO Feynman diagrams for the decays \(H\to B_{c}(B_{c}^{*}\cdots)+X\).

\begin{table} \begin{tabular}{c c c} \hline \hline Contributions & Direct NRQCD & FF approach \\ \hline \(\bar{b}\)-fragmentation & 1.62 & 1.68 \\ \(c\)-fragmentation & \(3.46\times 10^{-3}\) & \(3.70\times 10^{-3}\) \\ Interference & \(-6.98\times 10^{-3}\) & - \\ Total & 1.62 & 1.68 \\ \hline \hline \end{tabular} \end{table} Table 2: The LO partial decay width (unit: keV) for \(H\to B_{c}^{*}+X\) under the direct NRQCD approach and the fragmentation-function (FF) approach.
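Before turning to the tables, the inputs quoted in Eqs. (16)-(18) are easy to cross-check with a short standalone program. The sketch below is ours, not the authors' code: it fits \(\Lambda_{QCD}\) in Eq. (18) to \(\alpha_{s}(m_{Z})=0.1185\) with \(n_{f}=5\), reproducing \(\alpha_{s}(m_{H})\approx 0.113\), and reproduces the pole masses in Eq. (17) from the one-loop conversion, using the \(\alpha_{s}\) values quoted in the text.

```cpp
// Cross-check (ours) of Eq. (18) and the pole-mass conversion.
#include <cmath>
#include <cstdio>

const double PI = 3.141592653589793;
const int nf = 5;
const double b0 = 11.0 - 2.0 * nf / 3.0;   // beta_0 = 23/3
const double b1 = 102.0 - 38.0 * nf / 3.0; // beta_1 = 116/3

double AlphaS(double mu, double lambda) {  // two-loop formula, Eq. (18)
  double L = std::log(mu * mu / (lambda * lambda));
  return 4.0 * PI / (b0 * L) * (1.0 - b1 * std::log(L) / (b0 * b0 * L));
}

int main() {
  // Bisection for Lambda_QCD such that alpha_s(m_Z) = 0.1185.
  double lo = 0.05, hi = 0.5, mZ = 91.1876;
  for (int i = 0; i < 60; ++i) {
    double mid = 0.5 * (lo + hi);
    (AlphaS(mZ, mid) > 0.1185 ? hi : lo) = mid;
  }
  double lambda = 0.5 * (lo + hi);
  std::printf("alpha_s(m_H) = %.3f  (quoted: 0.113)\n",
              AlphaS(125.3, lambda));

  // m_Q = mbar_Q(mbar_Q) [1 + 4 alpha_s(mbar_Q)/(3 pi)], with the quoted
  // alpha_s(mbar_b) = 0.228 and alpha_s(mbar_c) = 0.420.
  std::printf("m_b = %.2f GeV  (quoted: 4.58)\n",
              4.18 * (1.0 + 4.0 * 0.228 / (3.0 * PI)));
  std::printf("m_c = %.2f GeV  (quoted: 1.50)\n",
              1.27 * (1.0 + 4.0 * 0.420 / (3.0 * PI)));
  return 0;
}
```

(Note: below the \(b\)-quark threshold one should switch to \(n_{f}=4\) with a matched \(\Lambda_{QCD}\); the \(n_{f}=5\) fit above is only used at \(\mu_{R}=m_{H}\).)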
From the tables, we can see that the differences between the decay widths under the two approaches are very small, i.e., the fragmentation-function approach gives a good approximation to the direct NRQCD approach for the \(\bar{b}\)-fragmentation as well as for the \(c\)-fragmentation. This means that the higher-power terms neglected in the fragmentation-function approach are very small in the two decay processes. Furthermore, we can see that the \(c\)-fragmentation contributions are about 3 orders of magnitude smaller than the corresponding \(\bar{b}\)-fragmentation contributions. There are two reasons for the very small \(c\)-fragmentation contributions: one is that the magnitude of the Yukawa coupling \(Hc\bar{c}\) is smaller than that of \(Hb\bar{b}\), and the other is that the fragmentation probability of \(c\to B_{c}\) is smaller than that of \(\bar{b}\to B_{c}\).

The differential decay widths \(d\Gamma/dz\) of \(H\to B_{c}+X\) and \(H\to B_{c}^{*}+X\) under the direct NRQCD approach and the fragmentation-function approach are shown in Figs. 2 and 3. From the figures, we can see that the curves from the two approaches are very close. The difference between the two approaches is relatively small at large \(z\) values, and relatively large at small \(z\) values.

Figure 2: Comparison of the differential decay widths \(d\Gamma/dz\) of \(H\to B_{c}+X\) calculated based on the direct NRQCD approach and the fragmentation-function (FF) approach. The upper panel shows the contributions of the \(\bar{b}\)-fragmentation and the \(c\)-fragmentation, respectively; the lower panel shows the total contribution. In order to put the results from the \(\bar{b}\)-fragmentation and the \(c\)-fragmentation into one figure, the curves for the \(c\)-fragmentation are multiplied by a factor of 50.

Figure 3: Comparison of the differential decay widths \(d\Gamma/dz\) of \(H\to B_{c}^{*}+X\) calculated based on the direct NRQCD approach and the fragmentation-function (FF) approach. The upper panel shows the contributions of the \(\bar{b}\)-fragmentation and the \(c\)-fragmentation, respectively; the lower panel shows the total contribution. In order to put the results from the \(\bar{b}\)-fragmentation and the \(c\)-fragmentation into one figure, the curves for the \(c\)-fragmentation are multiplied by a factor of 50.

### Contribution from the \(Ht\bar{t}\) coupling

From the comparison of the LO results under the two approaches given in the last subsection, we found that the fragmentation-function approach (without the resummation of large logarithms) gives a good approximation to the direct NRQCD approach, i.e., the power corrections in the fragmentation-function approach are negligible at the LO level. At the NLO, there are nonfragmentation Feynman diagrams induced by a triangle top-quark loop, as shown in Fig.4. Compared with the fragmentation contribution, the contribution from these Feynman diagrams is suppressed by powers of \(m_{B_{c}}^{2}/m_{H}^{2}\) but enhanced by the \(Ht\bar{t}\) coupling. Therefore, before giving the results from the fragmentation-function approach up to the NLL accuracy, it is important to see how much these triangle top-loop diagrams contribute. In fact, the authors of Ref.[39] have calculated the contribution from the triangle top-quark loop diagrams. They obtained a rather strange result, i.e., the triangle top-loop contribution in the \(B_{c}^{*}\) case is one order of magnitude smaller than that in the \(B_{c}\) case.
To further illustrate the reason for the smallness of the triangle top-loop contribution in the \(B_{c}^{*}\) case, we recalculate the contribution from the triangle top-loop diagrams here. Furthermore, the authors of Ref.[39] found that the interference contribution between Fig.4 and Fig.1 is very small for both the \(B_{c}\) and \(B_{c}^{*}\) cases. However, we cannot conclude from the small interference contribution that the contribution from the square of Fig.4 is also small: because the topologies of Fig.4 and Fig.1 are significantly different, they may be dominated by different phase-space regions. Hence, in addition to the contribution from the interference between Fig.4 and Fig.1, we also calculate the contribution from the square of Fig.4.

In Table 3 and Fig.5, the contributions from the triangle top-quark loop are presented\({}^{4}\). In the calculation, the top-quark mass is taken as \(m_{t}=172.8\,\mathrm{GeV}\) [66], and the other parameters take the same values as those in the last subsection. From Table 3, we can see that the interference contribution in the \(B_{c}^{*}\) case is one order of magnitude smaller than that in the \(B_{c}\) case.

Footnote 4: Adopting the same input parameters as in Ref.[39], we are able to reproduce the numerical results for the contribution of the interference between the triangle top-quark loop diagrams and the LO diagrams given in Table 4 of Ref.[39].

This can be understood from the distribution shown in Fig.5. The distribution of the interference contribution in the \(B_{c}^{*}\) case is negative at large \(z\) values, which indicates that there is a large cancellation between the contributions from different phase-space regions. Furthermore, we can also see that the contributions (the interference contribution as well as the squared contribution) from the triangle top-quark loop are very small compared with the LO contributions.

Figure 4: Feynman diagrams induced by the triangle top-quark loop.

Figure 5: Contributions of the triangle top-loop diagrams to \(d\Gamma/dz\), where "Interference" denotes the contribution coming from the interference between Fig.1 and Fig.4, and "Square" denotes the contribution coming from the square of Fig.4.

\begin{table} \begin{tabular}{c c c} \hline \hline Contributions & \(B_{c}\) & \(B_{c}^{*}\) \\ \hline Interference of Fig.1 and Fig.4 & \(4.39\times 10^{-2}\) & \(5.43\times 10^{-3}\) \\ Square of Fig.4 & \(2.04\times 10^{-3}\) & \(5.09\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Contributions (unit: keV) of the triangle top-loop diagrams to the decay widths.

### Results up to NLL accuracy under the fragmentation-function approach

From the analysis presented in the above two subsections, we believe that the neglected power-suppressed terms in the fragmentation-function approach are small in the decays \(H\to B_{c}+X\) and \(H\to B_{c}^{*}+X\), even up to the NLO level. In this subsection, we present the decay widths calculated based on the fragmentation-function approach up to the NLL accuracy.

In Table 4, the partial decay widths for the Higgs decays to \(B_{c}\), \(B_{c}^{*}\), \(B_{c}(2^{1}S_{0})\) and \(B_{c}^{*}(2^{3}S_{1})\) are presented, where the contributions from the different fragmentation channels and the contribution from the triangle top-quark loop are given explicitly.
The calculation method for these fragmentation contributions has been described in Sec.II, i.e., the factorization scale is taken as \(\mu_{F}=m_{H}\) in Eq.(1), and the fragmentation functions at \(\mu_{F}=m_{H}\) are obtained through the DGLAP evolution from the initial factorization scale \(\mu_{F0}=m_{b}+m_{c}\) (a schematic form of this evolution equation is given after the figures below). Hence, the large logarithms of \(m_{H}^{2}/(m_{b}+m_{c})^{2}\), which arise from the collinear gluon emission and the renormalization of the Yukawa couplings, have been resummed in the numerical results for these fragmentation contributions. The input parameters have been given at the beginning of this section. From Table 4, we can see that the decay widths up to NLL accuracy are significantly smaller than the corresponding LO decay widths shown in Tables 1 and 2. This indicates that the higher-order corrections, especially the large logarithmic terms, are important in these decay processes. Therefore, the resummation of large logarithms should be taken into consideration for giving high-precision predictions. The contributions from the \(c\)-fragmentation and the \(g\)-fragmentation are very small compared to the \(\bar{b}\)-fragmentation contribution. Moreover, the \(g\)-fragmentation contribution is negative for these processes. In Figs.6, 7, 8 and 9, the differential decay widths \(d\Gamma/dz\) for the \(B_{c}\), \(B_{c}^{*}\), \(B_{c}(2^{1}S_{0})\) and \(B_{c}^{*}(2^{3}S_{1})\) states are shown. In the figures, the contributions of the different fragmentation channels and the triangle top-quark loop to \(d\Gamma/dz\) are shown explicitly. From these figures, we can see that the differential decay widths are dominated by the \(\bar{b}\)-fragmentation contribution for all \(z\) values. The \(c\)-fragmentation and the \(g\)-fragmentation contributions mainly come from the small-\(z\) region. However, even for small \(z\) values, these contributions are small compared to the \(\bar{b}\)-fragmentation contribution.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Contributions & \(B_{c}\) & \(B_{c}^{*}\) & \(B_{c}(2^{1}S_{0})\) & \(B_{c}^{*}(2^{3}S_{1})\) \\ \hline \hline \(\bar{b}\)-fragmentation & 0.673 & 0.766 & 0.403 & 0.459 \\ \(c\)-fragmentation & \(1.47\times 10^{-3}\) & \(1.25\times 10^{-3}\) & \(8.80\times 10^{-4}\) & \(7.48\times 10^{-4}\) \\ \(g\)-fragmentation & \(-1.80\times 10^{-3}\) & \(-2.45\times 10^{-3}\) & \(-1.07\times 10^{-3}\) & \(-1.47\times 10^{-3}\) \\ Triangle top-loop & \(4.59\times 10^{-2}\) & \(1.05\times 10^{-2}\) & \(2.75\times 10^{-2}\) & \(6.29\times 10^{-3}\) \\ Total & 0.719 & 0.775 & 0.430 & 0.465 \\ \hline \hline \end{tabular} \end{table} Table 4: The partial decay widths (unit: keV) for the Higgs decays to \(B_{c}\), \(B_{c}^{*}\), \(B_{c}(2^{1}S_{0})\) and \(B_{c}^{*}(2^{3}S_{1})\), where the contributions from the three fragmentation channels and the triangle top-loop contribution are given explicitly.

Figure 6: The differential decay width \(d\Gamma/dz\) for \(H\to B_{c}+X\), where the contributions from the three fragmentation channels and the triangle top-quark loop are shown explicitly. In order to put the results into one figure, the curve for the \(c\)-fragmentation is multiplied by a factor of 50, and the curve for the \(g\)-fragmentation is multiplied by a factor of -50.

Figure 7: The differential decay width \(d\Gamma/dz\) for \(H\to B_{c}^{*}+X\), where the contributions from the three fragmentation channels and the triangle top-quark loop are shown explicitly. In order to put the results into one figure, the curve for the \(c\)-fragmentation is multiplied by a factor of 50, and the curve for the \(g\)-fragmentation is multiplied by a factor of -50.
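For reference, the evolution used above is the standard DGLAP equation for the fragmentation functions, written here schematically (conventions for the timelike splitting kernels \(P_{ji}\) vary between references; this is a reminder of the textbook form, not a new result of this work):

\[\frac{\mathrm{d}\,D_{i\to B_{c}}(z,\mu_{F})}{\mathrm{d}\ln\mu_{F}^{2}}=\frac{\alpha_{s}(\mu_{F})}{2\pi}\sum_{j}\int_{z}^{1}\frac{\mathrm{d}y}{y}\,P_{ji}(y,\alpha_{s}(\mu_{F}))\,D_{j\to B_{c}}\!\left(\frac{z}{y},\mu_{F}\right),\qquad i,j\in\{\bar{b},\,c,\,g,\ldots\},\]

with the boundary condition given by the initial fragmentation functions at \(\mu_{F0}=m_{b}+m_{c}\).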
### Uncertainty analysis

In this subsection, we estimate the theoretical uncertainties for these partial decay widths. The main uncertainty sources for these decay widths include the factorization and renormalization scales, the heavy quark masses, the Higgs boson mass, and the \(c\bar{b}\) radial wave functions at the origin. The dependence of the decay widths on the Higgs boson mass mainly comes from the partonic decay widths \(d\hat{\Gamma}_{H\to i+X}(y,\mu_{F})/dy\,(i=Q,g)\), which contain a global factor of \(m_{H}\). The uncertainty of the world-average value of the Higgs boson mass given by the PDG is about \(0.2\,\mathrm{GeV}\) [66], which is only about \(0.2\%\) of the Higgs boson mass. Therefore, the uncertainties of the decay widths caused by the Higgs boson mass are only about \(0.2\%\) of their central values. Since these uncertainties are very small, we neglect this uncertainty source in the following uncertainty estimation.

There are several factorization and renormalization scales involved in the calculation based on the fragmentation-function approach: the initial (lower) factorization and renormalization scales (\(\mu_{F0}\) and \(\mu_{R0}\)) for the initial fragmentation functions, and the final (upper) factorization and renormalization scales (\(\mu_{F}\) and \(\mu_{R}\)). In the calculation presented in the last subsection, these scales are set as \(\mu_{F0}=\mu_{R0}=m_{b}+m_{c}\) and \(\mu_{F}=\mu_{R}=m_{H}\). In the uncertainty estimation, we vary them by a factor of 2 from their central values, i.e., \(\mu_{F0}=\mu_{R0}\in[(m_{b}+m_{c})/2,2(m_{b}+m_{c})]\) and \(\mu_{F}=\mu_{R}\in[m_{H}/2,2m_{H}]\). Then we obtain the uncertainties caused by the lower and upper scales:

\[\Gamma_{H\to B_{c}+X}=0.719^{+0.087+0.037}_{-0.099-0.032}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}+X}=0.775^{+0.008+0.041}_{-0.072-0.035}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}(2^{1}S_{0})+X}=0.430^{+0.053+0.022}_{-0.059-0.019}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}(2^{3}S_{1})+X}=0.465^{+0.004+0.024}_{-0.044-0.022}\,\mathrm{keV}, \tag{20}\]

where the first uncertainty is caused by the lower factorization and renormalization scales, and the second uncertainty is caused by the upper factorization and renormalization scales. For the uncertainties caused by the heavy quark masses, we estimate them by using the errors \(\overline{m}_{b}(\overline{m}_{b})=4.18^{+0.04}_{-0.03}\,\mathrm{GeV}\) and \(\overline{m}_{c}(\overline{m}_{c})=1.27\pm 0.02\,\mathrm{GeV}\) given by the PDG. We obtain the uncertainties caused by the heavy quark masses:

\[\Gamma_{H\to B_{c}+X}=0.719^{+0.007+0.029}_{-0.005-0.035}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}+X}=0.775^{+0.012+0.037}_{-0.007-0.042}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}(2^{1}S_{0})+X}=0.430^{+0.005+0.018}_{-0.003-0.021}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}(2^{3}S_{1})+X}=0.465^{+0.006+0.021}_{-0.005-0.026}\,\mathrm{keV}, \tag{21}\]

where the first uncertainty is caused by the bottom quark mass, while the second uncertainty is caused by the charm quark mass.
For the wave functions at the origin, we have adopted the values based on the Buchmüller-Tye potential model given in Ref.[67]. However, the authors of Ref.[67] did not give an error estimate for the wave functions. In order to estimate the uncertainties from the wave functions, we take the values based on the Buchmüller-Tye potential model as the central values, while taking the values based on the logarithmic potential and the Cornell potential as the boundary values for the wave functions. The values of the radial wave functions based on the three potential models are shown in Table 5.

\begin{table} \begin{tabular}{c c c c} \hline \hline Level & BT ([67]) & Logarithmic ([67]) & Cornell ([79]) \\ \hline 1S & 1.642 & 1.508 & 1.994 \\ 2S & 0.983 & 0.770 & 1.144 \\ \hline \hline \end{tabular} \end{table} Table 5: Model dependence of the radial wave functions at the origin (unit: \(\mathrm{GeV}^{3}\)) for the \(c\bar{b}\) mesons, where “BT” denotes the Buchmüller-Tye potential model.

Figure 8: The differential decay width \(d\Gamma/dz\) for \(H\to B_{c}(2^{1}S_{0})+X\), where the contributions from the three fragmentation channels and the triangle top-quark loop are shown explicitly. In order to put the results into one figure, the curve for the \(c\)-fragmentation is multiplied by a factor of 50, and the curve for the \(g\)-fragmentation is multiplied by a factor of -50.

Figure 9: The differential decay width \(d\Gamma/dz\) for \(H\to B_{c}^{*}(2^{3}S_{1})+X\), where the contributions from the three fragmentation channels and the triangle top-quark loop are shown explicitly. In order to put the results into one figure, the curve for the \(c\)-fragmentation is multiplied by a factor of 50, and the curve for the \(g\)-fragmentation is multiplied by a factor of -50.

We obtain the uncertainties caused by the wave functions as follows:

\[\Gamma_{H\to B_{c}+X}=0.719^{+0.154}_{-0.059}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}+X}=0.775^{+0.166}_{-0.063}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}(2\,^{1}S_{0})+X}=0.430^{+0.070}_{-0.093}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}(2\,^{3}S_{1})+X}=0.465^{+0.076}_{-0.101}\,\mathrm{keV}. \tag{22}\]

Adding the uncertainties from different sources in quadrature, we obtain the total theoretical uncertainties for these partial decay widths as follows:

\[\Gamma_{H\to B_{c}+X}=0.719^{+0.183}_{-0.125}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}+X}=0.775^{+0.176}_{-0.110}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}(2\,^{1}S_{0})+X}=0.430^{+0.092}_{-0.114}\,\mathrm{keV},\]
\[\Gamma_{H\to B_{c}^{*}(2\,^{3}S_{1})+X}=0.465^{+0.083}_{-0.115}\,\mathrm{keV}. \tag{23}\]

## IV Summary

In the present paper, we have calculated the partial decay widths for the Higgs boson decays to the \(B_{c}\), \(B_{c}^{*}\), \(B_{c}(2\,^{1}S_{0})\), and \(B_{c}^{*}(2\,^{3}S_{1})\) mesons based on the fragmentation-function approach. The decay widths and the differential distributions are obtained, and the theoretical uncertainties for the decay widths are estimated. In the calculation, the fragmentation functions up to order \(\alpha_{s}^{3}\) for \(B_{c}(B_{c}^{*})\) production calculated in previous works are used as the initial fragmentation functions. The large logarithms that arise from the renormalization of the Yukawa couplings and the collinear gluon emissions are resummed up to NLL accuracy by solving the evolution equations for the running heavy-quark masses and the fragmentation functions.
After including these higher-order contributions, the decay widths are reduced significantly compared to the LO predictions. To gauge the size of the higher-power terms (in \(m_{B_{c}}^{2}/m_{H}^{2}\)) that are neglected in the fragmentation-function approach, we compared the decay widths under the direct NRQCD approach with those under the fragmentation-function approach at the LO level. The results show that the decay widths under the two approaches are very close to each other, i.e., those higher-power terms are very small at the LO level. We also studied the contributions induced by the triangle top-quark loop, which are enhanced by the \(Ht\bar{t}\) coupling. The results show that these contributions are very small compared to the fragmentation contributions. Moreover, we found that the interference between the triangle top-quark loop diagrams and the LO Feynman diagrams for \(H\to B_{c}^{*}+X\) exhibits a strong cancellation between different phase-space regions. This leads to a much smaller contribution from the triangle top-quark loop in the \(B_{c}^{*}\) case than in the \(B_{c}\) case. Since the \(B_{c}\) excited states below the \(BD\) threshold decay to the ground state \(B_{c}\) with almost 100% probability, the total decay width for the Higgs boson decay to the \(B_{c}\) meson is approximately equal to the sum of the decay widths for the Higgs boson decays to the \(c\bar{b}\) meson states below the \(BD\) threshold. Adding the decay widths for the \(S\)-wave states (\(B_{c}\), \(B_{c}^{*}\), \(B_{c}(2\,^{1}S_{0})\), and \(B_{c}^{*}(2\,^{3}S_{1})\)) shown in Eq.(23), and using \(\Gamma_{H}\approx 3.2\,\mathrm{MeV}\) [67], we obtain a total branching fraction for the Higgs boson decays into the \(B_{c}\) meson of about \(7.47\times 10^{-4}\). Based on this branching fraction, we estimate that about \(1.2\times 10^{5}\) \(B_{c}\) events will be produced via Higgs boson decays at the HL-LHC with \(3\,\mathrm{ab}^{-1}\), and about \(1.6\times 10^{6}\) \(B_{c}\) events at the HE-LHC with \(15\,\mathrm{ab}^{-1}\). Therefore, these decay processes may be studied at the future HL-LHC and HE-LHC, and provide a complementary method for measuring the bottom-quark Yukawa coupling. **Acknowledgments:** This work was supported in part by the Natural Science Foundation of China under Grants No.12005028, No.12175025 and No.12147102, by the China Postdoctoral Science Foundation under Grant No.2021M693743, by the Fundamental Research Funds for the Central Universities under Grant No.2020CQJQY-Z003, by the Chongqing Natural Science Foundation under Grant No.CSTB2022NSCQ-MSX0415, and by the Chongqing Graduate Research and Innovation Foundation under Grant No.ydstd1912.
2306.02142
TransDocAnalyser: A Framework for Offline Semi-structured Handwritten Document Analysis in the Legal Domain
State-of-the-art offline Optical Character Recognition (OCR) frameworks perform poorly on semi-structured handwritten domain-specific documents due to their inability to localize and label form fields with domain-specific semantics. Existing techniques for semi-structured document analysis have primarily used datasets comprising invoices, purchase orders, receipts, and identity-card documents for benchmarking. In this work, we build the first semi-structured document analysis dataset in the legal domain by collecting a large number of First Information Report (FIR) documents from several police stations in India. This dataset, which we call the FIR dataset, is more challenging than most existing document analysis datasets, since it combines a wide variety of handwritten text with printed text. We also propose an end-to-end framework for offline processing of handwritten semi-structured documents, and benchmark it on our novel FIR dataset. Our framework uses an encoder-decoder architecture for localizing and labelling the form fields and for recognizing the handwritten content. The encoder consists of Faster-RCNN and Vision Transformers. Further, the Transformer-based decoder is trained with a domain-specific tokenizer. We also propose a post-correction method to handle recognition errors pertaining to domain-specific terms. Our proposed framework achieves state-of-the-art results on the FIR dataset, outperforming several existing models.
Sagar Chakraborty, Gaurav Harit, Saptarshi Ghosh
2023-06-03T15:56:30Z
http://arxiv.org/abs/2306.02142v1
TransDocAnalyser: A Framework for Offline Semi-structured Handwritten Document Analysis in the Legal Domain ###### Abstract State-of-the-art offline Optical Character Recognition (OCR) frameworks perform poorly on semi-structured handwritten domain-specific documents due to their inability to localize and label form fields with domain-specific semantics. Existing techniques for semi-structured document analysis have primarily used datasets comprising invoices, purchase orders, receipts, and identity-card documents for benchmarking. In this work, we build the first semi-structured document analysis dataset in the legal domain by collecting a large number of First Information Report (FIR) documents from several police stations in India. This dataset, which we call the FIR dataset, is more challenging than most existing document analysis datasets, since it combines a wide variety of handwritten text with printed text. We also propose an end-to-end framework for offline processing of handwritten semi-structured documents, and benchmark it on our novel FIR dataset. Our framework uses an encoder-decoder architecture for localizing and labelling the form fields and for recognizing the handwritten content. The encoder consists of Faster-RCNN and Vision Transformers. Further, the Transformer-based decoder is trained with a domain-specific tokenizer. We also propose a post-correction method to handle recognition errors pertaining to domain-specific terms. Our proposed framework achieves state-of-the-art results on the FIR dataset, outperforming several existing models. Keywords: Semi-structured document, Offline handwriting recognition, Legal document analysis, Vision Transformer, FIR dataset. ## 1 Introduction Semi-structured documents are widely used in many different industries. Recent advances in digitization have increased the demand for analysis of scanned or mobile-captured semi-structured documents. Many recent works have used different deep learning techniques to solve some of the critical problems in the processing and layout analysis of semi-structured documents [34, 16, 23]. Semi-structured documents consist of printed, handwritten, or hybrid (both printed and handwritten) text forms. In particular, hybrid documents (see Figure 1) are more complex to analyze, since they require segregation of printed and handwritten text and subsequent recognition. With recent advancements, OCR accuracy has improved for printed text; however, recognition of handwritten characters is still a challenge due to variations in writing style and layout. Earlier works have focused on techniques for layout analysis, named-entity recognition, offline handwriting recognition, etc., but sufficient work has _not_ been done on developing an end-to-end framework for processing semi-structured documents. A general end-to-end framework can be easily fine-tuned for domain-specific requirements. In this paper we present the first framework for semi-structured document analysis applied to legal documents. There have been many works on legal documents, such as on case document summarization [6], relevant statute identification from legal facts [31], pretraining language models on legal text [32], and so on. But almost all prior research in the legal domain has focused on textual data, and _not_ on document images.

Figure 1: Examples of First Information Report (FIR) documents from different police stations in India. The FIR dataset developed in this paper consists of a wide variety of such semi-structured FIR documents containing both printed and handwritten text.
In particular, the challenges involved in document processing and layout analysis of legal documents remain unaddressed, even though these tasks have become important due to the increasing availability of scanned/photographed legal documents. In this work, we build the first dataset for semi-structured document analysis in the legal domain. To this end, we focus on **First Information Report** (FIR) documents from India. An FIR is usually prepared by police stations in some South Asian countries when they first receive a complaint from the victim of a crime (or someone on behalf of the victim).1 An FIR usually contains many details, such as the date, time, place, and details of the incident, the names of the person(s) involved, a list of the statutes (written laws, e.g., those set by the Constitution of a country) that might have been violated by the incident, and so on. The FIRs are usually written on a printed form, where the fields are filled in by hand by police officials (see examples in Figure 1). It is estimated that more than 6 million FIRs are filed every year across thousands of police stations in various states in India. Such high volumes lead to inconsistent practices in terms of handwriting, layout structure, scanning procedure, scan quality, etc., and introduce huge noise in the digital copies of these documents. Footnote 1: [https://en.wikipedia.org/wiki/First_information_report](https://en.wikipedia.org/wiki/First_information_report) Our target fields of interest while processing FIR documents are the handwritten entries (e.g., the name of the complainant, the statutes violated), which are challenging to identify due to the wide variation in handwriting. To form the dataset, which we call the **FIR dataset**, we created the metadata for the target fields by collecting the actual text values from the police databases, and also annotated the documents with the layout positions of the target fields. The FIR dataset is made publicly available at [https://github.com/LegalDocumentProcessing/FIR_Dataset_ICDAR2023](https://github.com/LegalDocumentProcessing/FIR_Dataset_ICDAR2023). The FIR dataset is particularly challenging since its documents are of mixed type, with both printed and handwritten text. Traditional OCR identifies blocks of text strings in documents and recognizes the text from images by parsing from left to right [19]. NLP techniques like named-entity recognition (NER), which use raw text to find the target fields, cannot be applied easily, since traditional OCRs do not work well on mixed documents with handwritten and printed characters occurring together. Another drawback of traditional OCRs in this context is their inability to recognise domain-specific words due to their general language-based vocabulary. In this work, we propose a novel framework for analysing such domain-specific semi-structured documents. The contributions of the proposed framework are as follows: 1. We use a Faster-RCNN + Vision Transformer-based encoder trained for target field localization and classification. We also deploy a BERT-based text decoder that is fine-tuned to incorporate legal domain-specific vocabulary. 2. We use a domain-specific pretrained language model [32] to improve the recognition of domain-specific text (legal statutes, Indian names, etc.).
This idea of using a domain-specific language model along with OCR is novel, and has wider applicability to other domains (e.g., finance, healthcare, etc.) where this technique can be used to achieve improved recognition from domain-specific documents. 3. We improve the character error rate (CER) by reducing ambiguities in the OCR output through a novel domain-specific post-correction step. Using domain knowledge, we created a database for each target field (such as Indian names, Indian statutes, etc.) and replace low-confidence ambiguous words from the OCR using a combination of a TF-IDF vectorizer and a K-Nearest Neighbour classifier. This novel post-correction method to handle recognition errors pertaining to proper nouns enables our proposed framework to outperform state-of-the-art OCR models by large margins. To summarize, in this work we build the first legal domain-specific dataset for semi-structured document analysis. We also develop a framework to localise the handwritten target fields, and fine-tune a transformer-based OCR (TrOCR) to extract handwritten text. We further develop post-correction techniques to improve the character error rate. To our knowledge, the combination of Faster-RCNN and TrOCR with other components, such as Vision Transformers and legal domain-specific tokenizers, to create an end-to-end framework for processing offline handwritten semi-structured documents is novel, and can be useful for the analysis of similar documents in other domains as well. ## 2 Related Work We briefly survey four types of prior works related to ours: (i) related datasets, (ii) works addressing target field localization and classification, (iii) handwritten character recognition, and (iv) works on post-OCR correction methods. **Related Datasets:** There exist several popular datasets for semi-structured document analysis. FUNSD [22] is a very popular dataset for information extraction and layout analysis. The FUNSD dataset is a subset of the RVL-CDIP dataset [17], and contains 199 annotated financial forms. The SROIE dataset [21] contains 1,000 annotated receipts with 4 different entities, and is used for receipt recognition and information extraction tasks. The CloudScan Invoice dataset [29] is a custom dataset for invoice information extraction; it contains 8 entities in printed text. Note that no such dataset exists in the legal domain, and our FIR dataset is the first of its kind. Also, the existing datasets contain only printed text, while the dataset we build contains a mixture of printed and handwritten text (see Table 2 for a detailed comparison of the various datasets). **Localization and Labelling of field components:** Rule-based information extraction methods (such as the method developed by Kempf et al. [10] and many others) can be useful when documents are of high quality and do not contain handwritten characters. But when document layouts involve huge variations, noise, and handwritten characters, keyword-based approaches fail to provide good results. Template-based approaches also fail due to scanning errors and layout variability [36, 1, 2]. Srivastava et al. [12] developed a graph-based deep network for predicting the associations between field labels and field values in handwritten form images. They considered forms in which the field label comprises printed text and the field value can be handwritten text; this is similar to what we have in the FIR dataset developed in this work.
To perform association between the target field labels and values, they formed a graphical representation of the textual scripts using their associated layout positions. In this work, we remove the dependency of previous works [12] on OCR by using layout information of the images to learn the positions of the target fields, and extract the image patches using state-of-the-art object detection models such as [37, 35, 33]. Zhu et al. [37] proposed attention modules that only attend to a small set of key sampling points around a reference, which can achieve better performance than the baseline model [8] with 10\(\times\) fewer training epochs. Tan et al. [35] used a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multi-scale feature fusion. Ren et al. [33] proposed an improved version of their earlier work [14], which provides comparable performance to [37, 35] with lower latency and computational resources on the FIR dataset. Hence, we use the Faster-RCNN model in this framework for localization and classification of the field components. **Handwritten Character Recognition:** Offline handwriting recognition has been a long-standing research interest. The works [3, 4, 5] presented novel features based on the structural properties of the strokes and their spatial relations within a character, as visible from different viewing directions on a 2D plane. Diesendruck et al. [11] used Word Spotting to directly recognise handwritten text from images. The conventional text recognition task is usually framed as an encoder-decoder problem, where traditional methods [19] leveraged a CNN-based [24] encoder for image understanding and an LSTM-based [20] decoder for text recognition. Chowdhury et al. [9] combined a deep convolutional network with a recurrent encoder-decoder network to map an image to a sequence of characters corresponding to the text present in the image. Michael et al. [28] proposed a sequence-to-sequence model combining a convolutional neural network (as a generic feature extractor) with a recurrent neural network to encode both the visual information and the temporal context between characters in the input image. Further, Li et al. [25] used, for the first time, an end-to-end Transformer-based encoder-decoder OCR model for handwritten text recognition and achieved SOTA results. The model [25] is convolution-free, unlike previous methods, and does not rely on any complex pre/post-processing steps. The present work leverages this model and extends its application to the legal domain. **Post-OCR correction:** Rectifying errors in the text recognised by the OCR would require extensive training, which is computationally heavy. Further, post-OCR error correction requires a large amount of annotated data, which may not always be available. After the introduction of the attention mechanism and the BERT model, much work has been done to improve OCR results using language-model-based post-correction techniques. However, Neural Machine Translation based approaches, as used by Duong et al. [13], are not useful in the case of form text due to the lack of adequate context and neighbouring words. We extend the idea used in the work of Trstenjak et al. [7], where edit distance and cosine similarity are used to find matching words. In this paper we use K-nearest neighbour search with edit distance to find the best matches for words predicted by the OCR with a low confidence score.
## 3 The FIR Dataset First Information Report (FIR) documents contain details about incidents of cognisable offence, written at police stations based on a complaint. FIRs are usually filed by a police official filling up a printed form; hence the documents contain both printed and handwritten text. In this work, we focus on FIR documents written at police stations in India. Though the FIR forms used across different Indian states mostly have a common set of fields, there are some differences in their layout (see examples in Fig. 1). To diversify the dataset, we included FIR documents from the databases of various police stations across several Indian states - West Bengal1, Rajasthan2, Sikkim3, Tripura4 and Nagaland5. Footnote 1: [http://bidhannagarcitypolice.gov.in/fir_record.php](http://bidhannagarcitypolice.gov.in/fir_record.php) Footnote 2: [https://home.rajasathan.gov.in/content/homeportal/en.html](https://home.rajasathan.gov.in/content/homeportal/en.html) Footnote 3: [https://police.sikkim.gov.in/visitor/fir](https://police.sikkim.gov.in/visitor/fir) Footnote 4: [https://tripurapolice.gov.in/west/fir-copies](https://tripurapolice.gov.in/west/fir-copies) Footnote 5: [https://police.nagaland.gov.in/fir-2/](https://police.nagaland.gov.in/fir-2/) As stated earlier, an FIR contains many fields, including the name of the complainant, the names of suspected/alleged persons, the statutes that may have been violated, the date and location of the incident, and so on. In this work, we selected _four target fields_ from FIR documents for the data annotation and recognition task: (1) _Year_ (the year in which the complaint is being recorded), (2) _Complainant's name_ (the name of the person who lodged the complaint), (3) _Police Station_ (the name of the police station that is responsible for investigating the particular incident), and (4) _Statutes_ (Indian laws that have potentially been violated in the reported incident; these laws give a good indication of the type of the crime). We selected these four target fields because we were able to collect the gold standard for them from some of the police databases. Also, digitizing these four fields would enable various societal analyses, such as analysis of the nature of crimes at different police stations, temporal variations in crimes, and so on. **Annotations:** We manually analysed more than 1,300 FIR documents belonging to different states, regions, police stations, etc. We found that FIR documents from the same region / police station tend to have similar layouts and form structures. Hence we selected a subset of 375 FIR documents with reasonably varying layouts / form structures, so that this subset covers most of the different variations. These 375 documents were manually annotated, using the LabelMe annotation tool10 to mark the bounding boxes of the target fields. Footnote 10: [https://github.com/wkentaro/labelme](https://github.com/wkentaro/labelme) Figure 2 shows some samples of various entities present in our dataset, and Figure 3 shows examples of ground-truth annotations for two of the entities in Figure 2. In the ground truth, each bounding box has four coordinates (X_left, X_width, Y_right, Y_height) which describe the position of the rectangle containing the field value for each target field, as sketched below.
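For concreteness, a single ground-truth record under this coordinate convention might look as follows. This is an illustrative sketch with made-up file names and coordinate values, not the released annotation schema:

```python
# One annotated document: each target field carries a label and a bounding box
# expressed as (X_left, X_width, Y_right, Y_height), per the convention above.
record = {
    "image": "fir_example_0001.png",  # hypothetical file name
    "fields": [
        {"label": "Year",             "bbox": (612, 58, 103, 21)},
        {"label": "Police Station",   "bbox": (144, 210, 98, 24)},
        {"label": "Statute",          "bbox": (301, 176, 152, 22)},
        {"label": "Complainant Name", "bbox": (120, 240, 410, 26)},
    ],
}
```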
**Train-test split:** During the annotation of our dataset, we identified 79 distinct types of large-scale variations and layout distortions/deformations, which we split across the training and testing sets. We divided our dataset (of 375 document images) such that 300 images are included in the training set and the other 75 images are used as the test set. During training, we used 30% of the training set as a validation set. Table 1 shows the statistics of the training and test splits.

**Preprocessing the images:** For Faster-RCNN we resized the document images to a size of 1180 \(\times\) 740, and used the bounding boxes and label names to train the model to predict and classify the bounding boxes. We convert the dataset into the IAM dataset format [27] to fine-tune the transformer OCR.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline Split & Images & Layouts & Words & Labels \\ \hline Training & 300 & 61 & 1,830 & 1,230 \\ Testing & 75 & 18 & 457 & 307 \\ \hline \end{tabular} \end{table} Table 1: FIR dataset statistics

Figure 2: Samples of various entities present in First Information Reports, with different writing styles, distortions and scales.

**Novelty of the FIR dataset:** We compare our FIR dataset11 with other datasets for semi-structured document analysis in Table 2. The FIR dataset contains both printed and handwritten information, which makes it unique and complex compared to several other datasets. Additionally, the FIR dataset is the first dataset for semi-structured document analysis in the legal domain. Footnote 11: [https://github.com/LegalDocumentProcessing/FIR_Dataset_ICDAR2023](https://github.com/LegalDocumentProcessing/FIR_Dataset_ICDAR2023)

\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline Dataset & Category & \#Images & Printed & Handwritten & \#Entities \\ \hline FUNSD [22] & Form & 199 & ✓ & ✗ & 4 \\ \hline SROIE [21] & Receipt & 1000 & ✓ & ✗ & 4 \\ \hline CloudScan Invoice [29] & Invoice & 326571 & ✓ & ✗ & 8 \\ \hline FIR (**Ours**) & Form & 375 & ✓ & ✓ & 4 \\ \hline \end{tabular} \end{table} Table 2: Comparison of the FIR dataset with other similar datasets

## 4 The TransDocAnalyser Framework

We now present TransDocAnalyser, a framework for offline processing of handwritten semi-structured documents, which adopts a Faster-RCNN and Transformer-based encoder-decoder architecture, with post-correction to improve performance.

### The Faster-RCNN architecture

Faster-RCNN [33] is a popular object detection algorithm that has been adopted in many real-world applications. It builds upon the earlier R-CNN [15] and Fast R-CNN architectures. We pass the input images through the Faster-RCNN network to obtain the domain-specific field associations and to extract the image patches from the documents.

Figure 3: Examples of ground-truth annotations for two of the entities shown in Figure 2.

Our modified Faster-RCNN architecture consists of three main components (as schematically shown in Figure 4): (1) Backbone Network, (2) Region Proposal Network (RPN), and (3) ROI Heads, as detailed below.

**(1) Backbone Network:** A ResNet-based backbone network is used to extract multi-scale feature maps from the input, named P2, P3, P4, P5 and so on, which are scaled as 1/4th, 1/8th, 1/16th and so on. This backbone is FPN-based (Feature Pyramid Network) [26], a multi-scale design that makes detection invariant to object size.

**(2) Region Proposal Network (RPN):** Detects regions of interest (ROI), along with a confidence score, from the multi-scale feature maps generated by the backbone network. A fixed-size kernel is used for region pooling. The regions detected by the RPN are called _proposal boxes_.

**(3) ROI Heads:** The input to the box head comprises (i) the feature maps generated by a Fully Connected Network (FCN); (ii) the _proposal boxes_ from the RPN - 1,000 boxes with their predicted labels - which the box head uses to crop and prepare the feature maps; and (iii) the ground-truth bounding boxes from the annotated training set. ROI pooling uses the proposal boxes detected by the RPN, crops the corresponding rectangular areas of the feature maps, and feeds them into the head networks. Using the box head and mask head together in the Faster-RCNN network, as inspired by He et al. [18], improves the overall performance. During training, the box head makes use of the ground-truth boxes to accelerate training, while the mask head provides the final predicted bounding boxes and confidence scores. At inference time, the head network uses the non-maximum suppression (NMS) algorithm to remove overlapping boxes and selects the top-k results as the predicted output, based on thresholds on their confidence score and intersection over union (IOU).

Figure 4: Modified Faster-RCNN based architecture for target field localization and labelling
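For orientation, a minimal sketch of such a detector in a generic framework is shown below. It assumes a recent torchvision (the `weights="DEFAULT"` API) and illustrates the overall setup only; it is not the authors' exact modified architecture, which additionally employs a mask head:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# A ResNet-50 + FPN Faster-RCNN with one class per target field plus background.
NUM_CLASSES = 1 + 4  # background + {Year, Statute, Police Station, Complainant Name}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# At inference, torchvision applies NMS internally and returns, per image,
# a dict with 'boxes', 'labels' and 'scores' that can be thresholded at 0.5.
model.eval()
```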
### The TrOCR architecture

Once the localized images for a target field (e.g., complainant name) are generated by Faster-RCNN, the image patches are flattened and sent to the Vision Transformer (ViT) based encoder model. We use TrOCR [25] as the backbone model for our fine-tuning (see Figure 5). TrOCR [25] is a Transformer-based OCR model which consists of a pretrained vision Transformer encoder and a pretrained text decoder. The ViT encoder is trained on the IAM handwriting dataset, and we fine-tune it on our FIR dataset. We use the output patches from the Faster-RCNN network as input to the ViT encoder, and fine-tune it to generate features. As we provide the raw image patches received from Faster-RCNN directly to the ViT encoder, we did not apply any pre-processing or layout enhancement technique to improve the quality of the localised images. On the contrary, we feed in the noisy localised images cropped from the _form fields_ directly, so that the model learns to suppress noise features during training. We also replace the default text decoder (RoBERTa) with InLegalBERT [32], an Indian legal-domain-specific BERT-based text decoder, as shown in Fig. 5. InLegalBERT [32] is pre-trained on a huge corpus of about 5.4 million Indian legal documents, including court judgements of the Indian Supreme Court and other higher courts of India, and various Central Government Acts. To recognize characters in the cropped image patches, the images are first resized into square boxes of 384 \(\times\) 384 pixels and then flattened into a sequence of patches, which are encoded by ViT into high-level representations and decoded by InLegalBERT into the corresponding characters step by step. We evaluate and penalise the model based on the Character Error Rate (CER). The CER calculation is based on the Levenshtein distance: we count the minimum number of character-level operations required to transform the ground-truth text into the predicted OCR output. CER is computed as \(CER=(S+D+I)/N\), where \(S\) is the number of substitutions, \(D\) is the number of deletions, \(I\) is the number of insertions, and \(N\) is the number of characters in the reference text (a short sketch of this metric follows Figure 5).

Figure 5: TrOCR architecture with custom enhancements. The text decoder uses a domain-specific InLegalBERT [32] based tokenizer. OCR predictions go for post-correction if the confidence score is less than the threshold. We convert the OCR prediction into a TF-IDF vector and search the domain-specific field database to find the nearest match.
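As a minimal illustration of the metric, a plain-Python edit-distance sketch (not the evaluation code used in the paper):

```python
def cer(reference: str, prediction: str) -> float:
    """Character Error Rate via Levenshtein distance: (S + D + I) / N."""
    n, m = len(reference), len(prediction)
    # dp[i][j] = edit distance between reference[:i] and prediction[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == prediction[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[n][m] / max(n, 1)

print(cer("Amar Prakash", "Amar Prakesh"))  # one substitution over 12 chars -> ~0.083
```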
### KNN-based OCR Correction

For each word predicted by the OCR, if the confidence score is less than a threshold of 0.7, we consider the OCR output ambiguous for that particular word. In such cases, the predicted word goes through a post-correction step, which we now describe (see Figure 6). For each target field, we create a database of relevant values and terms (which could be written in the field) from various sources available on the Web. Table 3 shows a very small subset of some of the field-specific databases, such as Indian names, Indian surnames, and Indian statutes (Acts and Sections). We converted each database into a set of TF-IDF vectors (see Figure 6). Here TF-IDF stands for Term Frequency times Inverse Document Frequency. The TF-IDF scores are computed using n-grams of groups of letters; in our work we used \(n=3\) (trigrams) for generating the TF-IDF vectors for the OCR-predicted words as well as for the entities in the databases. For a given OCR output, based on the associated field name - already available from the field classification by Faster-RCNN - we use the K-Nearest Neighbour (KNN) classifier to select the appropriate vectorized database. KNN returns the best matches with a confidence score based on the distance between the search vector (the OCR output) and the vectors in the chosen database. If the confidence score returned by KNN is greater than 0.9, then the OCR-predicted word is replaced with the word found by the nearest-neighbour search.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Names & Surnames & Police Stations & Statutes / Acts \\ \hline \hline Anamul & Haque & Baguiati & IPC (Indian Penal Code) \\ \hline Shyam & Das & Airport & D.M. Act (Disaster Management Act) \\ \hline Barnali & Pramanik & Newton & D.C. Act (Drug and Cosmetics Act) \\ \hline Rasida & Begam & Saltlake & NDPS Act \\ \hline \end{tabular} \end{table} Table 3: Excerpts from field-specific databases used to prepare TF-IDF vectorized records for the KNN search. All databases contain India-specific entries.

Figure 6: Term Frequency-Inverse Document Frequency (TF-IDF) vectorizer based K-Nearest Neighbour model for post-correction of the OCR output
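A compact sketch of this post-correction step, assuming scikit-learn; the database entries mirror Table 3, and the thresholds mirror the 0.7 OCR-confidence and 0.9 match-confidence values above (the function and variable names are illustrative, not the authors' code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Illustrative field-specific database (e.g., Indian statutes from Table 3).
field_db = ["IPC", "D.M. Act", "D.C. Act", "NDPS Act"]

# Character-trigram TF-IDF, as described in the text.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3))
db_vectors = vectorizer.fit_transform(field_db)
knn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(db_vectors)

def post_correct(ocr_word, ocr_conf, conf_threshold=0.7, sim_threshold=0.9):
    """Replace a low-confidence OCR word with its nearest database entry."""
    if ocr_conf >= conf_threshold:
        return ocr_word  # OCR is confident enough: keep its prediction
    query = vectorizer.transform([ocr_word])
    dist, idx = knn.kneighbors(query)
    similarity = 1.0 - dist[0][0]  # cosine similarity from cosine distance
    return field_db[idx[0][0]] if similarity >= sim_threshold else ocr_word

# Relaxed match threshold for this tiny toy database:
print(post_correct("NDPS Atc", ocr_conf=0.63, sim_threshold=0.5))  # -> "NDPS Act"
```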
## 5 Experimental settings

We ran all experiments on a Tesla T4 GPU with CUDA version 11.2, using the CUDA-enabled Torch framework 1.8.0. In the first stage of the TransDocAnalyser framework, we trained the Faster-RCNN from scratch using the annotated dataset (the training set). Table 4 shows the settings used for training the Faster-RCNN model. Prior to training, input images are resized to 1180 \(\times\) 740. For memory optimization, we run the model in two steps: first for 1,500 iterations and then for 1,000 iterations on the stored model. We tried batch sizes (BS) of 16, 32 and 64, and finalized BS as 64 because of the improvement in performance and training time. We used the trained Faster-RCNN model to detect and crop out the bounding boxes of each label from the original documents (as shown in Fig. 2) and created our dataset to fine-tune the ViT encoder. We also created a metadata file mapping each cropped image with its corresponding text, as described in [27], to fine-tune the decoder. Table 5 shows the parameter settings used for fine-tuning the TrOCR model. Image patches are resized to 384 \(\times\) 384 pixels to fine-tune the ViT encoder. In the TrOCR model configuration, we replaced the tokenizer and decoder settings with those based on InLegalBERT. We tried batch sizes (BS) of 2, 4, 8, 16, 32 and 64; BS = 8 provided the best result on the validation set. We fine-tuned the encoder and decoder of the OCR for 40 epochs and obtained the final results. The KNN-based OCR correction module used n-grams with \(n=1,2,3,4\) to generate the TF-IDF vectors of the field-specific databases. Using \(n=3\) (trigrams) and KNN with \(K=1\) provided the best results.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|c|} \hline Base Model & Base Weights & Learning Rate & Iterations & \# of Classes & IMS/batch & Image Size \\ \hline ResNet 50 & Mask RCNN & 0.00025 & 2500 & 4 & 4 & 1180 \(\times\) 740 \\ \hline \end{tabular} \end{table} Table 4: Faster-RCNN model training parameters

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Feature Extractor & Tokenizer & Max Len & N-gram & Penalty & \# of Beams & Optimizer \\ \hline google-vit-patch16-384 & InLegalBERT & 32 & 3 & 2.0 & 4 & AdamW \\ \hline \end{tabular} \end{table} Table 5: Transformer OCR (TrOCR) parameters used for model fine-tuning

## 6 Results

In this section, we present the results of the proposed framework TransDocAnalyser in three stages: (i) the performance of Faster-RCNN on localization and labelling of the target fields (Table 6); (ii) samples of OCR results with confidence scores (Table 7); and (iii) a comparison of the performance of the proposed framework with existing OCR methods (Table 8).

Table 6 shows the results of field label detection using Faster-RCNN on both the test and validation sets of the FIR dataset. The performance is reported in terms of Recall (Re), Precision (Pr), F1 (the harmonic mean of Recall and Precision) and mean Average Precision (mAP). For localization and labelling, a prediction is considered correct if both the IOU (with the ground truth) and the confidence are higher than 0.5. The results show that our model performs well, with the best and worst results obtained for the fields 'Year' (F1 = 0.97) and 'Name' (F1 = 0.80), respectively. This variation in the results is intuitive, since names have far more variation than years. Figure 7 shows examples of outputs of Faster-RCNN on some documents from the test set of the FIR dataset. The predicted bounding boxes are highlighted in green rectangles, and the predicted class names are marked in red on top of each bounding box.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline Dataset split & Target field & Re \(\uparrow\) & Pr \(\uparrow\) & F1 \(\uparrow\) & mAP \(\uparrow\) \\ \hline Validation & Year & 0.98 & 0.96 & 0.97 & 0.97 \\ & Statute & 0.85 & 0.82 & 0.83 & 0.84 \\ & Police Station & 0.96 & 0.90 & 0.93 & 0.93 \\ & Complainant Name & 0.84 & 0.76 & 0.80 & 0.77 \\ \hline Test & Year & 0.97 & 0.96 & 0.97 & 0.96 \\ & Statute & 0.84 & 0.87 & 0.86 & 0.80 \\ & Police Station & 0.93 & 0.88 & 0.91 & 0.91 \\ & Complainant Name & 0.80 & 0.81 & 0.81 & 0.74 \\ \hline \end{tabular} \end{table} Table 6: Performance of field labelling on the FIR dataset (validation set and test set). Re: Recall, Pr: Precision, F1: F1-score, mAP: mean average precision.

Figure 7: Examples of localization and labelling of target fields by Faster-RCNN. The predicted bounding boxes are highlighted in green on the images. The associated class labels are highlighted in red.
The output of Faster-RCNN provides bounding boxes and field names for each image, from which image patches are generated and sent to the encoder-decoder architecture. Table 7 shows some examples of image patches and the fine-tuned TrOCR predictions for those patches. It can be seen that the name "Amar Prakash" is predicted as "Amar Prakesh" with a confidence score below the threshold of 0.7 (which was decided empirically). As the prediction confidence is below the threshold, this output goes to the post-correction method proposed in this work.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Image Patches** & **OCR Results** & **Confidence Score** \\ \hline & 2019 & 0.89 \\ \hline & Lian Min Thang & 0.77 \\ \hline & Nscbi Airport & 0.79 \\ \hline & Amar Prakesh & 0.63 \\ \hline & 379 & 0.96 \\ \hline \end{tabular} \end{table} Table 7: Fine-tuned TrOCR predictions on the generated image patches (the patch images themselves are not reproduced here).

Table 8 compares the final performance of our proposed framework TransDocAnalyser with Google Tesseract and Microsoft TrOCR for handwritten recognition on the proposed FIR dataset.12 The performances are reported in terms of Character Error Rate (CER), Word Error Rate (WER), and BLEU scores [30]. Lower values of CER and WER indicate better performance, while higher BLEU scores are better.

Footnote 12: We initially compared Tesseract with TrOCR-Base, and found TrOCR to perform much better. Hence subsequent experiments were done with TrOCR only.

We achieve state-of-the-art results using the proposed TransDocAnalyser framework, which outperforms the other models by a large margin (see Table 8). While the TrOCR + InLegalBert model also performed well, our proposed framework TransDocAnalyser (consisting of a vision-transformer-based encoder, the InLegalBert tokenizer and KNN-based post-correction) achieved the best results across all four target fields of the FIR dataset.

## 7 Conclusion

In this work, we (i) developed the first dataset for semi-structured handwritten document analysis in the legal domain, and (ii) proposed a novel framework for offline analysis of semi-structured handwritten documents in a particular domain. Our proposed TransDocAnalyser framework, comprising Faster-RCNN, TrOCR, a domain-specific language model/tokenizer, and KNN-based post-correction, outperformed existing OCRs. We hope that the FIR dataset developed in this work will enable further research on legal document analysis, which is gaining importance world-wide and especially in developing countries. We also believe that the TransDocAnalyser framework can be easily extended to semi-structured handwritten document analysis in other domains as well, with a little fine-tuning.

**Acknowledgement:** This work is partially supported by research grants from Wipro Limited (www.wipro.com) and IIT Jodhpur (www.iitj.ac.in).
2310.16809
Exploring OCR Capabilities of GPT-4V(ision) : A Quantitative and In-depth Evaluation
This paper presents a comprehensive evaluation of the Optical Character Recognition (OCR) capabilities of the recently released GPT-4V(ision), a Large Multimodal Model (LMM). We assess the model's performance across a range of OCR tasks, including scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually-rich documents. The evaluation reveals that GPT-4V performs well in recognizing and understanding Latin content, but struggles with multilingual scenarios and complex tasks. Specifically, it shows limitations when dealing with non-Latin languages and complex tasks such as handwritten mathematical expression recognition, table structure recognition, and end-to-end semantic entity recognition and pair extraction from document images. Based on these observations, we affirm the necessity and continued research value of specialized OCR models. In general, despite its versatility in handling diverse OCR tasks, GPT-4V does not outperform existing state-of-the-art OCR models. How to fully utilize pre-trained general-purpose LMMs such as GPT-4V for OCR downstream tasks remains an open problem. The study offers a critical reference for future research in OCR with LMMs. Evaluation pipeline and results are available at https://github.com/SCUT-DLVCLab/GPT-4V_OCR.
Yongxin Shi, Dezhi Peng, Wenhui Liao, Zening Lin, Xinhong Chen, Chongyu Liu, Yuyi Zhang, Lianwen Jin
2023-10-25T17:38:55Z
http://arxiv.org/abs/2310.16809v2
# Exploring OCR Capabilities of GPT-4V(ision): A Quantitative and In-depth Evaluation ###### Abstract This paper presents a comprehensive evaluation of the Optical Character Recognition (OCR) capabilities of the recently released GPT-4V(ision), a Large Multimodal Model (LMM). We assess the model's performance across a range of OCR tasks, including scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually-rich documents. The evaluation reveals that GPT-4V performs well in recognizing and understanding Latin content, but struggles with multilingual scenarios and complex tasks. Specifically, it shows limitations when dealing with non-Latin languages and complex tasks such as handwritten mathematical expression recognition, table structure recognition, and end-to-end semantic entity recognition and pair extraction from document images. Based on these observations, we affirm the necessity and continued research value of specialized OCR models. In general, despite its versatility in handling diverse OCR tasks, GPT-4V does not outperform existing state-of-the-art OCR models. How to fully utilize pre-trained general-purpose LMMs such as GPT-4V for OCR downstream tasks remains an open problem. The study offers a critical reference for future research in OCR with LMMs. Evaluation pipeline and results are available at [https://github.com/SCUT-DLVCLab/GPT-4V_OCR](https://github.com/SCUT-DLVCLab/GPT-4V_OCR). ## 1 Introduction The emergence of ChatGPT [1] marks a significant milestone in the field of Artificial Intelligence (AI). Concurrently, it has ignited a surge in Large Language Model (LLM) research across both academia and industry, with models such as GLM-130B [2], Alpaca [3], Vicuna [4], LLaMA [5], ERNIE Bot [6], Qwen [7], and Baichuan2 [8]. The success of LLMs has also spurred the development of Large Multimodal Models (LMMs). Many initiatives are now striving to expand the multimodal capabilities of LLMs, including BLIP-2 [9], OpenFlamingo [10], LLaVA [11], MiniGPT4 [12], and mPLUG-Owl [13]. Particularly, the recent release of GPT-4V(ision) [14] presents a significant breakthrough in the domain of LMMs. Researchers across diverse fields are eager to comprehend the capabilities of GPT-4V, with those in the Optical Character Recognition (OCR) domain displaying particular curiosity about its potential to address OCR tasks. While the official report qualitatively demonstrates GPT-4V's abilities in several OCR-related tasks (including text recognition, expression recognition, and document understanding), quantitative assessment and in-depth analysis are urgently needed, as they would provide valuable insights and essential references for future research. To this end, we conduct a quantitative evaluation of GPT-4V on mainstream OCR tasks, including Scene Text Recognition (STR) [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30], Handwritten Text Recognition (HTR) [31, 32, 33, 34, 35, 36, 37, 38, 39], Handwritten Mathematical Expression Recognition (HMER) [40, 41, 42, 43, 44, 45, 46, 47], Table Structure Recognition (TSR) [48, 49, 50, 51, 52, 53, 54, 55], and Information Extraction from Visually-rich Documents (VIE) [56, 57, 58, 59, 60, 61, 62, 63, 64, 65].
For the above tasks, we employ some commonly used benchmarks in the OCR domain for evaluation: **(1) STR:** CUTE80 [66], SCUT-CTW1500 [67], Total-Text [68], WordArt [69], ReCTS [70] and MLT19 [71]; **(2) HTR:** IAM [72] and CASIA-HWDB [73]; **(3) HMER:** CROHME2014 [74] and HME100K [42]; **(4) TSR:** SciTSR [75] and WTW [76]; **(5) VIE:** FUNSD [77] and the XFUND [78] Chinese subset (XFUND-zh). The evaluation results suggest that GPT-4V does not match the performance of specialized OCR models. Specifically, GPT-4V demonstrates superior performance on Latin content but encounters limitations when dealing with other languages. Furthermore, GPT-4V struggles in complex scenarios for tasks such as HMER, TSR, and VIE. Based on the experimental results, we try to address an important question: **do specialized models still hold research value in the OCR field?** Given the three critical drawbacks of GPT-4V - namely, limited performance in multilingual and complex scenarios, high inference costs, and challenges in updating - we argue that existing LMMs struggle to simultaneously handle various OCR tasks [79]. Therefore, we affirm the _continued research value_ of specialized models in the OCR field. However, it is still crucial to leverage the potential of LMMs like GPT-4V for future OCR research. There are three potential directions worth investigating: semantic understanding enhancement, downstream task finetuning, and auto/semi-auto data construction. ## 2 Experiments We evaluate GPT-4V on the following OCR tasks: scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually-rich documents. The evaluation was conducted through the web-based dialogue interface of GPT-4V, in which we directly uploaded the image and prompt, and then extracted the relevant answers from the generated responses. The prompts for each task were meticulously designed. Additionally, to prevent interference from contextual information, we used a separate dialogue window for each image. Due to the conversation limits of GPT-4V (50 conversations per 3 hours), we sub-sampled the datasets with a large number of samples. ### Scene text recognition **Dataset** We focus on both word-level text recognition and end-to-end text spotting. For word-level text recognition, we employ CUTE80 [66], SCUT-CTW1500 [67], Total-Text [68] and WordArt [69] in English, and ReCTS [70] in Chinese. We randomly select 50 images from each dataset above for evaluation. The datasets are downloaded from 1. Footnote 1: [https://github.com/Yuliang-Liu/MultimodalOCR](https://github.com/Yuliang-Liu/MultimodalOCR) * **CUTE80** comprises 80 images specifically curated for the purpose of evaluating curved text. * **SCUT-CTW1500** is a comprehensive curved text dataset encompassing a total of 1500 images. * **Total-Text** has 1,555 scene images, collected with curved text in mind. * **WordArt** consists of 6316 artistic text images, and primarily features challenging artistic text. * **ReCTS** is a large-scale dataset of 25,000 images, which mainly focuses on reading Chinese text on signboards. In the end-to-end text spotting task, we use MLT19 [71] to evaluate the multilingual capabilities of GPT-4V. For each language, we randomly select 20 images from the training set.
Additionally, to investigate the impact of image resolution on recognition results, we select 20 English images from the aforementioned subset and resize their long sides to 128, 256, 512, 1024, and 2048 pixels, respectively.

* **MLT19** is a dataset for Multi-Lingual scene Text (MLT) detection and recognition, which consists of 20,000 images containing text from 10 languages.

**Prompt:** For word-level English text recognition, we use the following prompt: _"What is the scene text in the image?"_, while for ReCTS we translate this prompt into Chinese. The prompt in end-to-end text spotting is: _"What are all the scene text in the image? Do not translate."_

**Metric:** For the evaluation of word-level recognition, we employ word accuracy ignoring case and symbols (WAICS) [80] as the metric. In the task of end-to-end text spotting, the predictions of GPT-4V and ground truths (GT) are split with spaces and then evaluated using precision and recall. Precision represents the ratio of correctly identified words to those generated by GPT-4V, while recall is the ratio of correctly identified words to the total number of GT words. We also compute the F1 score as follows:

\[F1=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}} \tag{1}\]

**Results and analysis:** The results are shown in Table 1, Table 2 and Table 3, respectively. We visualize some examples in Figure 1. Based on the results, we draw the following insights: **(1) There is a substantial accuracy disparity between the recognition of English and Chinese text.** As shown in Table 1, the performance of English text recognition is commendable. Conversely, the accuracy of Chinese text recognition is zero (ReCTS). We speculate that this may be due to the lack of Chinese scene text images as training data in GPT-4V. **(2) GPT-4V exhibits a strong ability to recognize Latin characters, surpassing its performance in other languages.** As shown in Table 2, it can be observed that GPT-4V performs significantly better in English, French, German, and Italian, compared to non-Latin-alphabet languages. This suggests noticeable limitations in GPT-4V's multilingual OCR capabilities. **(3) GPT-4V supports input images with different resolutions.** As shown in Table 3, there is a positive correlation between the input image resolution and the recognition performance. This suggests that, unlike previous LMMs that resize images to a fixed size, GPT-4V supports input images with variable resolutions. Meanwhile, we hypothesize that the image encoder of GPT-4V employs a fixed patch size; therefore, increasing the resolution of the input image leads to a longer sequence, which helps the model capture more information.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & CUTE80 & SCUT-CTW1500 & Total-Text & WordArt & ReCTS \\ \hline GPT-4V & 88.0\% & 62.0\% & 66.0\% & 62.0\% & 0 \\ Supervised-SOTA & 98.6\% & 87.0\% & 90.1\% & 68.2\% & 97.4\% \\ \hline \hline \end{tabular} \end{table} Table 1: Results of word-level scene text recognition. The SOTA of CUTE80 and WordArt are achieved by [80] and [81], respectively. [82] reported the SOTA on SCUT-CTW1500 and Total-Text. The SOTA of ReCTS can be found at 3.
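The end-to-end spotting metric above reduces to a simple word-matching computation. The following is a minimal sketch in Python; treating the split words as multisets is our assumption, since the text does not specify how repeated words are counted.

```python
from collections import Counter

def spotting_scores(pred_text: str, gt_text: str):
    """Precision, recall, and F1 for end-to-end text spotting: predictions
    and ground truth are split on spaces, and a predicted word counts as
    correct if it matches a not-yet-matched GT word."""
    pred, gt = Counter(pred_text.split()), Counter(gt_text.split())
    correct = sum((pred & gt).values())  # multiset intersection
    precision = correct / max(sum(pred.values()), 1)
    recall = correct / max(sum(gt.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1

# Two of the three predicted words appear in the ground truth.
print(spotting_scores("HOTEL entrance 24h", "HOTEL main entrance"))
```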
\begin{table} \begin{tabular}{l l c c c} \hline \hline Method & Language & Precision \(\uparrow\) & Recall \(\uparrow\) & F1 \(\uparrow\) \\ \hline \multirow{11}{*}{GPT-4V} & Arabic & 16.44\% & 16.67\% & 16.55\% \\ & English & 86.57\% & 78.77\% & 82.49\% \\ & French & 83.0\% & 83.84\% & 83.42\% \\ & Chinese & 1.2\% & 1.56\% & 1.36\% \\ & German & 73.65\% & 86.29\% & 79.47\% \\ & Korean & 10.83\% & 12.39\% & 11.56\% \\ & Japanese & 11.9\% & 11.9\% & 11.9\% \\ & Italian & 62.7\% & 67.52\% & 65.02\% \\ & Bangla & 2.53\% & 2.63\% & 2.58\% \\ & Hindi & 7.29\% & 8.33\% & 7.78\% \\ \cline{2-5} & All languages & 43.04\% & 45.42\% & 44.2\% \\ \hline Supervised-SOTA & All languages & 74.16\% & 52.91\% & 61.76\% \\ \hline \hline \end{tabular} \end{table} Table 2: Results of MLT19. The SOTA of end-to-end text spotting in MLT19 can be found at 4.

Footnote 4: [https://rrc.cvc.uab.es/?ch=15&com=evaluation&task=4](https://rrc.cvc.uab.es/?ch=15&com=evaluation&task=4)

\begin{table} \begin{tabular}{l c c c} \hline \hline Image size & Precision \(\uparrow\) & Recall \(\uparrow\) & F1 \(\uparrow\) \\ \hline 128 & 45.52\% & 57.28\% & 50.73\% \\ 256 & 73.88\% & 86.21\% & 79.57\% \\ 512 & 85.82\% & 83.21\% & 84.49\% \\ 1024 & 90.30\% & 84.72\% & 87.42\% \\ 2048 & 92.54\% & 86.01\% & 89.16\% \\ \hline \hline \end{tabular} \end{table} Table 3: Impact of image resolution on recognition performance on the MLT19 English subset.

### Handwritten text recognition

**Dataset:** To evaluate GPT-4V's capability in handwritten text recognition, we employ two commonly used handwritten datasets: IAM [72] (in English) and CASIA-HWDB [73] (in Chinese). We randomly sample 50 pages and 50 text lines from each of the test sets of IAM and CASIA-HWDB for evaluation.

* **IAM** comprises 1,539 pages and 13,353 lines of handwritten English text.
* **CASIA-HWDB** is an offline handwritten Chinese dataset, which contains about 5,090 pages and 1.35 million character samples of 7,356 classes (7,185 Chinese characters and 171 symbols).

**Prompt:** For IAM, we use the prompt: _"Recognize the text in the image."_ as input. For CASIA-HWDB, we use a Chinese translation of this prompt, which means _"Please tell me directly, what are all the text in the image?"_

**Metric:** Two metrics are used for evaluating handwritten English text: Word Error Rate (WER) and Character Error Rate (CER) [83]. To evaluate the performance on handwritten Chinese text, we use the AR and CR metrics [36].

**Results and analysis:** The results are shown in Tables 4 and 5. **(1) There is also a significant performance gap between English and Chinese handwritten text.** This phenomenon is consistent with the findings in Section 2.1, which collectively suggests that GPT-4V performs well in English text recognition while facing notable challenges in Chinese. **(2) GPT-4V exhibits significant hallucinations in Chinese text recognition.** As shown in Figure 2 (c) and (d), the responses generated by GPT-4V demonstrate a high degree of fluency in both grammar and semantics. However, they substantially deviate from the textual content of the ground truth (GT), appearing to produce nonsensical information in a seemingly earnest manner.

### Handwritten mathematical expression recognition

**Dataset:** For this task, we employ two representative datasets: CROHME2014 [74] and HME100K [42]. We randomly select 50 images from the test sets of each of these two datasets for evaluation.
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Page-level} & \multicolumn{2}{c}{Line-level} \\ \cline{2-5} & WER \(\downarrow\) & CER \(\downarrow\) & WER \(\downarrow\) & CER \(\downarrow\) \\ \hline GPT-4V & 9.84\% & 3.32\% & 33.42\% & 13.75\% \\ Supervised-SOTA & 8.29\% & 2.89\% & 21.47\% & 6.52\% \\ \hline \hline \end{tabular} \end{table} Table 4: Results of IAM. The SOTA of page-level IAM in the WER and CER metrics is achieved by [84] and [85], respectively, and the line-level SOTA is achieved by [86].

Figure 1: Illustration of word-level scene text recognition. In the answers of GPT-4V, we highlight the characters that match the GT in green and characters that do not match in red. GPT-4V can recognize curved, slanted, and artistic English text, while common-style Chinese text cannot be recognized.

* **CROHME2014** is a classical online dataset for handwritten mathematical expression recognition, which comprises 9,820 samples of mathematical expressions.
* **HME100K** is a large-scale handwritten mathematical expression recognition dataset, which contains 100k images from ten thousand writers and is mainly captured by cameras.

**Prompt:** In this task, we use _"This is an image of a handwritten mathematical expression. Please recognize the expression above as LaTeX."_ as the prompt.

**Metric:** The metrics we employ are the expression-level correct rate and the correct rates allowing at most one to three errors [74].

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Page-level} & \multicolumn{2}{c}{Line-level} \\ \cline{2-5} & AR \(\uparrow\) & CR \(\uparrow\) & AR \(\uparrow\) & CR \(\uparrow\) \\ \hline GPT-4V & 0.97\% & 36.54\% & -3.45\% & 11.85\% \\ Supervised-SOTA & 96.83\% & 96.99\% & 97.70\% & 97.91\% \\ \hline \hline \end{tabular} \end{table} Table 5: Results of CASIA-HWDB. The SOTA of page-level CASIA-HWDB in the AR and CR metrics is achieved by [87] and [88], respectively, and the line-level SOTA is achieved by [36].

Figure 2: Illustration of handwritten text recognition. (a), (b), (c), and (d) are samples of page-level IAM, line-level IAM, page-level CASIA-HWDB, and line-level CASIA-HWDB, respectively. In the responses of GPT-4V, we highlight characters that match the GT in green and characters that do not match in red. For English text, GPT-4V demonstrates excellent performance. In contrast, for Chinese text, GPT-4V has generated a passage of text that is semantically coherent, but it is not associated with the ground truth text (GT).

**Results and analysis:** The results are shown in Table 6. Based on the analysis of the failed cases, we draw the following findings. **(1) GPT-4V appears to be limited when dealing with camera-captured and poor-handwriting scenarios.** As shown in Table 6, the performance on HME100K (which features camera-captured images and poor handwriting) drops significantly compared to CROHME2014. As shown in Figure 3, (a) and (c) are examples from CROHME2014, and (b) and (d) are from HME100K; GPT-4V performs well on the former, but poorly on the latter. **(2) GPT-4V exhibits certain challenges in fine-grained character recognition.** Among the failed cases, we observed instances where GPT-4V occasionally missed small-scale characters. Two examples are shown in Figure 3 (e) and (f). For these two examples, GPT-4V has omitted a superscript and a subscript, respectively. This finding aligns with the evaluation results of Liu et al. [79] on other multimodal models, suggesting that GPT-4V may also suffer from certain fine-grained perceptual issues.
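Both the WER/CER metrics of Section 2.2 and the expression-level correct rates used here reduce to Levenshtein edit distance, computed over characters, words, or LaTeX tokens. A minimal sketch follows; splitting the LaTeX strings on whitespace is our simplifying assumption, as the official CROHME evaluation uses a proper symbol-level tokenization.

```python
def levenshtein(a, b) -> int:
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def cer(pred: str, gt: str) -> float:
    """Character error rate: character-level edit distance over the GT length.
    WER is the same computation applied to pred.split() and gt.split()."""
    return levenshtein(pred, gt) / max(len(gt), 1)

def expr_correct(pred: str, gt: str, max_errors: int = 0) -> bool:
    """Expression-level correctness allowing up to `max_errors` token edits."""
    return levenshtein(pred.split(), gt.split()) <= max_errors

print(cer("recognltion", "recognition"))                       # ~0.09: one substitution
print(expr_correct(r"x ^ 2 + 1", r"x ^ 2 + y", max_errors=1))  # True
```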
### Table structure recognition

**Dataset:** For this task, we employ SciTSR [75] and WTW [76].

* **SciTSR** is a dedicated dataset created to address the task of table structure recognition in scientific papers. The dataset consists of 12,000 training samples and 3,000 test samples.
* **WTW**'s images are collected in the wild. The dataset is split into training/testing sets with 10,970 and 3,611 samples, respectively.

**Prompt:** For both SciTSR and WTW, we use the prompt _"Please read the table in this image and return a html-style reconstructed table in text, do not omit anything."_ as input.

**Metric:** To evaluate the performance of GPT-4V in table structure recognition, we use the TEDS-S metric [48], a variation of Tree-Edit-Distance-based Similarity (TEDS) [48] that disregards the textual content of the cells and only evaluates the accuracy of the table structure prediction.

**Results and analysis:** The results are shown in Table 7. We gain two important findings based on the results: **(1) GPT-4V struggles with complex tables.** GPT-4V demonstrates outstanding performance when handling tables with structured layouts and consistent text distributions, such as Figure 4 (a). However, when dealing with other types of tables, including those with numerous empty cells, uneven text distribution, skewing, rotation, or densely packed arrangements, its performance noticeably declines. **(2) Content omission issues are observed in GPT-4V when processing lengthy tables.** Despite emphasizing the requirement of "do not omit anything" in the prompt, we still observed some instances of content omission in the responses, particularly in the case of a large table. A typical example is shown in Figure 4 (e): the table image in Figure 4 (c) contains many rows, but GPT-4V only reconstructs three of them.

Table 7: The TEDS-S of SciTSR and WTW. The SOTA of SciTSR and WTW are both achieved by [52].

Figure 4: Illustration of table structure recognition. (a) and (c) are two input images, and (b) and (d) are the corresponding visualized images of GPT-4V's html-style output sequences. (e) is the output sequence of (c), where the elements with which GPT-4V indicates the omitted content are highlighted in red.

### Information Extraction from Visually-rich Documents

**Dataset:** We evaluate GPT-4V on FUNSD [77] and the XFUND [78] Chinese subset (XFUND-zh).

* **FUNSD** is a commonly used form understanding benchmark, which contains 199 scanned form-like documents with noisy images.
* **XFUND** is a multilingual extension of FUNSD that covers seven languages (Chinese, Japanese, French, Italian, German, Spanish, and Portuguese).

We evaluate GPT-4V on the Semantic Entity Recognition (SER) and the end-to-end Pair Extraction tasks. The SER task requires the model to identify the category of each text segment, which is predefined as header, question, answer, or other in FUNSD and XFUND. The end-to-end pair extraction task asks the model to extract all the key-value pairs in the given document image. We use the full test sets (both FUNSD and XFUND-zh contain 50 samples) for performance evaluation.

**Prompt:** For FUNSD, we use the following prompt for SER: _Please read the text in this image and return the information in the following JSON format (note xxx is placeholder, if the information is not available in the image, put "N/A" instead). "header": [xxx,...], "key": [xxx,...], "value": [xxx,...]_

It is important to highlight that we redefined the official entity types of "_question_" and "_answer_" as "_key_" and "_value_" to maintain consistency with the Pair Extraction task. For end-to-end Pair Extraction, we use the following prompt: _You are a document understanding AI, who reads the contents in the given document image and tells the information that the user needs. Respond with the original content in the document image, do not reformat. No extra explanation is needed. Extract all the key-value pairs from the document image._

**Metric:** For the SER task, we employ the entity-level F1-score [60] for performance evaluation. Additionally, the Normalized Edit Distance (NED) is also calculated, as is done in other end-to-end VIE methods [65]. However, due to limitations in GPT-4V's ability to generate precise bounding boxes for entities, we aligned predictions with the ground truth using the principle of minimum edit distance.

**Results and analysis:** The SER and Pair Extraction results are shown in Tables 8 and 9, respectively. We found that: **(1) GPT-4V might have constraints in comprehending the spatial arrangement of documents.** As shown in Figure 5, some text content located near the top of the page, which lacks both visual and semantic alignment with the _header_ category, is erroneously identified as a _header_. Additional visualizations are presented in Figure 6. It is evident that GPT-4V excels in analyzing documents with straightforward layouts but struggles to comprehend those featuring intricate layouts. **(2) GPT-4V tends to generate new keys for non-key-value-pair contents.** For instance, as shown in Figure 7, the contents "09 / 17 / 97 10:55" at the header part are recognized as "Date: 09/18/97", "Time: 10:55", "Fax Number: 503 841 1898", "Company: LORILLARD PTLD", "Page Number: 001".

Figure 5: Illustration of error cases of the SER task. The text content enclosed within the red box is incorrectly identified as _header_ entities.

Figure 6: Illustration of entity prediction on full document images in the FUNSD dataset. Due to GPT-4V's limited capability in recognizing Chinese characters, we have excluded examples from the XFUND-zh dataset in this context. Zoom in for the best review.

Figure 7: Illustration of error cases of the Pair Extraction task. The text content enclosed within the red box is incorrectly identified as entity pairs.
## 3 Discussions

**Do specialized models still hold research value in the OCR field?** There are three main drawbacks of GPT-4V. (1) Based on the experimental results in Section 2, GPT-4V's ability in OCR is limited to Latin contents, and it struggles to cope with multilingual and complex scenarios. (2) The inference cost and delay are significantly high, thereby posing usability challenges in some practical scenarios. (3) The long cycle and complex process of updating make it difficult to promptly address minor issues. Considering the aforementioned shortcomings and the limited OCR capabilities of some other LMMs [79], we believe that existing LMMs struggle to simultaneously excel in various OCR tasks. Therefore, **we _contend that specialized models in the field of OCR continue to hold significant value for research_.**

**How can we fully leverage the potential of LMMs like GPT-4V in the OCR domain?** These are some possible strategies. **(1) Semantic understanding enhancement:** A significant characteristic of LMMs lies in their outstanding semantic capabilities after extensive training on large-scale data. Since semantic understanding is a crucial factor in document comprehension and some related tasks, harnessing the semantic potential of LMMs can greatly enhance the performance in these tasks. **(2) Downstream task finetuning:** Another approach that fully leverages the prior knowledge of LMMs is fine-tuning, especially in scenarios with limited data. Fine-tuning allows the model to adapt to specific tasks or domains, thus improving the performance [89]. **(3) Auto/semi-auto data construction:** Using LMMs for automatic/semi-automatic data annotation and generation will substantially reduce the cost of manual labeling, which is an effective strategy for tackling the difficulties of data acquisition [90].

## 4 Limitations

There are three main limitations of our work. First, the test samples of our evaluation are small in scale (mostly 50 samples per dataset) due to the conversation limits (50 conversations per 3 hours) of GPT-4V. This could potentially limit the generalizability of the results. Second, our assessment primarily focuses on mainstream OCR tasks and does not include other OCR-related tasks. Hence, the findings might not cover the full spectrum of the OCR capabilities of GPT-4V. Third, only the zero-shot capacity of GPT-4V in OCR was evaluated, without exploring few-shot scenarios. As a result, the potential benefits of further training or fine-tuning the model for specific tasks are not addressed. Few-shot scenarios with techniques such as in-context learning [91] are worth exploring in the future.

## 5 Conclusion

In this paper, we present a comprehensive evaluation of the OCR capabilities of GPT-4V through a variety of experiments. For the first time, we offer not only qualitative demonstrations but also a quantitative performance analysis of GPT-4V across a wide spectrum of tasks. These tasks encompass scene text recognition, handwritten text recognition, handwritten mathematical expression recognition, table structure recognition, and information extraction from visually-rich documents. Our findings, grounded in meticulous experimental results, provide an in-depth analysis of the strengths and limitations of GPT-4V. Although the model shows a strong ability to accurately recognize Latin content and supports input images of variable resolutions, it displays notable struggles with multilingual and complex scenarios. Additionally, the high inference costs and the challenges associated with continuous updating pose significant barriers to the real-world deployment of GPT-4V. Therefore, we contend that specialized models in the field of OCR continue to hold significant value for research. Despite these limitations, GPT-4V and other existing general LMMs could still significantly contribute to the development of the OCR field in several ways. These would include enhancing semantic understanding, fine-tuning for downstream tasks, and facilitating auto/semi-auto data construction.

In summary, this paper presents a first-of-its-kind, in-depth quantitative evaluation of GPT-4V's performance in OCR tasks. We will continuously update the evaluation results in the future, and we hope the findings in this paper will provide valuable insights and strategies for researchers and practitioners working on OCR tasks using large multimodal models.
2307.15717
Utilizing Large Language Models for Natural Interface to Pharmacology Databases
The drug development process necessitates that pharmacologists undertake various tasks, such as reviewing literature, formulating hypotheses, designing experiments, and interpreting results. Each stage requires accessing and querying vast amounts of information. In this abstract, we introduce a Large Language Model (LLM)-based Natural Language Interface designed to interact with structured information stored in databases. Our experiments demonstrate the feasibility and effectiveness of the proposed framework. This framework can generalize to query a wide range of pharmaceutical data and knowledge bases.
Hong Lu, Chuan Li, Yinheng Li, Jie Zhao
2023-07-26T17:50:11Z
http://arxiv.org/abs/2307.15717v1
# Utilizing Large Language Models for Natural Interface to Pharmacology Databases

###### Abstract

The drug development process necessitates that pharmacologists undertake various tasks, such as reviewing literature, formulating hypotheses, designing experiments, and interpreting results. Each stage requires accessing and querying vast amounts of information. In this abstract, we introduce a Large Language Model (LLM)-based Natural Language Interface designed to interact with structured information stored in databases. Our experiments demonstrate the feasibility and effectiveness of the proposed framework. This framework can generalize to query a wide range of pharmaceutical data and knowledge bases.
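To make the abstract's idea concrete, the following is a minimal sketch of the kind of natural-language-to-SQL loop such an interface implies. Everything here is an illustrative assumption rather than the authors' implementation: the `complete()` helper stands in for an arbitrary LLM call, and the pharmacology table is a toy schema.

```python
import sqlite3

def complete(prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g., a hosted API client)."""
    raise NotImplementedError

def answer(question: str, db: sqlite3.Connection) -> list:
    # Expose the schema so the model can ground its SQL in real tables/columns.
    schema = "\n".join(
        row[0] for row in db.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table'")
    )
    sql = complete(
        f"Schema:\n{schema}\n\n"
        f"Write a single SQLite query that answers: {question}\nSQL:"
    )
    return db.execute(sql).fetchall()

# Toy pharmacology schema (hypothetical):
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE drug_targets(drug TEXT, target TEXT, ic50_nm REAL)")
# answer("Which drugs inhibit EGFR below 10 nM?", db)
```

In practice, such a loop would also validate the generated SQL (e.g., restrict it to read-only statements) before executing it against a production database.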
2304.13636
AutoCure: Automated Tabular Data Curation Technique for ML Pipelines
Machine learning algorithms have become increasingly prevalent in multiple domains, such as autonomous driving, healthcare, and finance. In such domains, data preparation remains a significant challenge in developing accurate models, requiring significant expertise and time investment to search the huge search space of well-suited data curation and transformation tools. To address this challenge, we present AutoCure, a novel and configuration-free data curation pipeline that improves the quality of tabular data. Unlike traditional data curation methods, AutoCure synthetically enhances the density of the clean data fraction through an adaptive ensemble-based error detection method and a data augmentation module. In practice, AutoCure can be integrated with open source tools, e.g., Auto-sklearn, H2O, and TPOT, to promote the democratization of machine learning. As a proof of concept, we provide a comparative evaluation of AutoCure against 28 combinations of traditional data curation tools, demonstrating superior performance and predictive accuracy without user intervention. Our evaluation shows that AutoCure is an effective approach to automating data preparation and improving the accuracy of machine learning models.
Mohamed Abdelaal, Rashmi Koparde, Harald Schoening
2023-04-26T15:51:47Z
http://arxiv.org/abs/2304.13636v1
# AutoCure: Automated Tabular Data Curation Technique for ML Pipelines

###### Abstract

Machine learning algorithms have become increasingly prevalent in multiple domains, such as autonomous driving, healthcare, and finance. In such domains, data preparation remains a significant challenge in developing accurate models, requiring significant expertise and time investment to search the huge search space of well-suited data curation and transformation tools. To address this challenge, we present AutoCure, a novel and configuration-free data curation pipeline that improves the quality of tabular data. Unlike traditional data curation methods, AutoCure synthetically enhances the density of the clean data fraction through an adaptive ensemble-based error detection method and a data augmentation module. In practice, AutoCure can be integrated with open source tools, e.g., Auto-sklearn, H2O, and TPOT, to promote the democratization of machine learning. As a proof of concept, we provide a comparative evaluation of AutoCure against 28 combinations of traditional data curation tools, demonstrating superior performance and predictive accuracy without user intervention. Our evaluation shows that AutoCure is an effective approach to automating data preparation and improving the accuracy of machine learning models.

data curation, data quality, data augmentation, machine learning, tabular data

## I Introduction

**Data Quality Problems:** In recent decades, machine learning (ML) has had a strong impact on several application domains, including autonomous driving, gaming, healthcare, logistics, and finance [9]. Developing ML models in such domains typically involves data acquisition from multiple distinct sources. For instance, perception and state estimation systems in self-driving cars usually employ ML models trained on data collected from sensor readings, GPS fixes, and historical records [35]. After data acquisition, data preparation and transformation processes are commonly carried out to bring the collected data into a state suitable for model building. In fact, the performance of such ML models broadly depends on the quality of the input data [25]. Specifically, the predictive performance may drastically degrade whenever the input data is noisy or contains erroneous instances. In general, real-world data mostly suffers from heterogeneous error profiles due to improper join operations, noisy communication channels, inaccurate and/or incomplete manual data entry, etc. [16]. Due to such technical/human-related problems, different error types, e.g., outliers, pattern/rule violations, duplicates, inconsistencies, and implicit/explicit missing values, may simultaneously emerge in a data set.

**Challenges:** Since high data quality is a necessary prerequisite for improving the performance of ML models, the input data has to be properly curated before being employed for modeling tasks. A typical data curation pipeline usually begins with detecting erroneous instances before repairing them. In fact, there exist plenty of commercial and open source tools for error detection and repair, e.g., Talend, OpenRefine, Trifacta, Tamr, NADEEF, and HoloClean [2]. However, such tools still suffer from several problems. First, they usually require domain knowledge and skilled individuals who can formulate such knowledge as a set of rules/constraints. Exceptions to these rules/constraints are typically reported to the users to take appropriate corrective actions, either by correcting the data or by fine-tuning the data and/or the rule definitions.
Due to user involvement, it may become challenging to adopt these data curation tools without users possessing sufficient data expertise. A possible solution could be the adoption of an automated rule generation tool, e.g., Metanome [27], DCFinder [28], and RTClean [11]. Nevertheless, the performance of such tools, in terms of the quality of the generated rules, varies broadly for different datasets, according to our experiments [1]. Moreover, their high computational complexity hinders their integration with data curation tools. Second, most ML-based error detection tools, e.g., RAHA [23] and HoloDetect [15], cannot recognize the type of the detected errors, i.e., whether they are rule/pattern violations, outliers, missing values, or duplicates. They simply train a detection classifier which differentiates between erroneous and clean data instances. Due to the lack of knowledge about the detected error types, the task of selecting a well-suited data repair tool becomes non-trivial. As a workaround, one may implement multiple data repair tools. However, this solution may increase the complexity of the curation pipeline, since the search space of repair candidates drastically increases. For example, if a data instance \(X_{i}=x_{i,1},\cdots,x_{i,k}\) has an empty cell \(x_{i,j}=NaN\), where \(k\) is the number of columns, the cell \(x_{i,j}\) has to be properly imputed. Knowing that there exist plenty of imputation methods which generate distinct values \(a_{1},\cdots,a_{n}\), it becomes challenging to select the optimal repair candidate. To this end, it becomes necessary to further extend the data preparation pipeline via additional tools, e.g., BoostClean [20] and CPClean [17], which in turn increases the complexity.

**Proposed Method:** To overcome these challenges, we introduce a novel automated data curation pipeline, referred to as AutoCure, which comprises two main modules, namely the _adaptive ensemble-based error detection_ module and the _clean data augmentation_ module. The core idea behind AutoCure is to synthetically enhance the density of the clean fraction of the input data (which typically consists of clean and dirty data instances). According to information theory, the process of noise reduction is equivalent to adding more data with similar quality [32]. Building on this theory, AutoCure strives to increase the proportion of clean data instances for the sake of reducing the impact of noisy/erroneous data instances. Accordingly, we can avoid the problems of data repair by replacing it with a clean data augmentation module. To further clarify the idea behind AutoCure, Figure 1 shows an illustrative example of three different scenarios. Figure 1(a) shows an ML model (dashed curve) trained on a small data set, which includes a set of clean instances (green circles) and a set of noisy data instances (red circles). Due to the noisy instances, such an ML model fails to precisely describe the linear data set. To improve the predictive performance, we may repair the noisy data instances using a traditional data repair method, as depicted in Figure 1(b). However, the repair process is usually complex, involving user intervention and additional steps to select the best repair candidates. Moreover, traditional data repair methods cannot restore the true values of the noisy instances. A second strategy, adopted by AutoCure, for dealing with the poor data quality problem is to deliberately increase the density of clean instances, as Figure 1(c) depicts.
In this case, the impact of the dirty instances on the ML modeling process is significantly reduced. As a result, the generated ML model broadly resembles the curve shown in Figure 1(b).

**Summary of Contributions:** To sum up, the paper provides the following contributions: (1) We introduce a novel two-stage curation pipeline for dealing with erroneous tabular data through increasing the density of clean data instances. (2) We propose an adaptive ensemble-based error detection method. The proposed detector annotates a cell \(x_{i,j}\) as dirty if the cell is detected by at least \(k\) base detectors. AutoCure dynamically adapts the threshold \(k\) to overcome the _data exclusion_ problems which typically occur while extracting clean data instances from the input data set. Such data problems usually happen due to inaccurate detections of the base detectors (cf. Section III). (3) We conduct extensive experiments to evaluate AutoCure relative to the ground truth (to obtain the performance upper bound) and to a wide collection of baseline methods, in terms of modeling accuracy and training time. The results show that AutoCure consistently achieves comparable performance to the ground truth data while requiring less training time than running several repair methods to select the best repair candidates. To the best of our knowledge, AutoCure is the first work which automates data curation in ML pipelines via reducing the impact of erroneous data and considering the density of the clean fraction as a practical solution for data quality problems.

**Structure of the Paper:** The remainder of this paper is structured as follows. Section II introduces the different components in the proposed pipeline, together with highlighting the main assumptions. Section III presents the adaptive ensemble-based error detection method and explains the data exclusion problems. In Section IV, we discuss the augmentation of clean data using the so-called variational autoencoder (VAE). Section V introduces our proof-of-concept implementation, before presenting the obtained results for different data sets. Section VI discusses the related work, before Section VII concludes the paper with an outlook on future work.

## II Overview & Architecture

In this section, we present the architecture of AutoCure together with our assumptions. The proposed architecture consists of two modules, an adaptive ensemble-based error detector and a clean data augmentation module, which perform three main tasks, including (1) the detection of erroneous data instances, (2) the extraction of a clean data fraction including all classes, and (3) the generation of additional data from the same distribution as the clean fraction. Figure 2 gives an overview of all components with their respective inputs. At the outset, a dirty data set is used as an input to the adaptive ensemble-based error detection module. The rationale behind such a module is to maximize the detection recall, defined as the fraction of erroneous data instances that are detected. Moreover, such an adaptive approach enables us to deal with the data exclusion problems (cf. Section III) which usually occur due to poor detection performance. The adaptive ensemble-based error detection module implements a number of _base error detectors_, such as a missing values (MV) detector, an outliers detector, a duplicates detector, a rule violations detector, and ML-based error detectors (cf. Section V for more details).

Fig. 1: Three ML models trained on different types of data
Each base detector generates a list of indices corresponding to the erroneous instances detected by each method. To combine these detections, we implement an _adaptive voting mechanism_ where the instances detected by at least \(k\) methods are annotated as erroneous. After generating a list of erroneous data instances, a _data sampler_ is used to extract a clean data fraction. Before augmenting the clean data fraction, a _data checker_ scrutinizes it to detect possible occurrences of the relevant data exclusion problems. If one of these problems is detected, the data sampler iteratively adjusts the value of the voting threshold \(k\) so that all relevant data exclusion problems are resolved. The extracted clean data fraction is then used as an input to a _variational autoencoder_ (VAE) to generate a new set of data from the same distribution. As Figure 2 illustrates, the VAE module implements two feed-forward neural networks, namely an encoder and a decoder. The encoder learns the distribution of the latent space representation of the clean data fraction. Afterward, the decoder uses the sampled latent vector to generate data instances similar to the inputs. In this context, the optimization problem is to minimize (1) the reconstruction loss function, which compares the inputs with the decoder-generated values, and (2) the KL divergence, which statistically differentiates between the probability distributions of the inputs and the generated data. After generating additional clean data, a _data integrator_ merges the newly generated data with the original dirty data. The merged data set is then transformed using feature engineering and wrangling tools, e.g., normalization, embedding, and feature crossing, before training ML models on the transformed data. In our paper, we assume that the dirty data sets have heterogeneous error profiles. In other words, different error types may simultaneously exist in a data set, e.g., outliers and missing values. To enable clean data augmentation, we also assume that each dirty data set includes a set of clean instances and a set of dirty instances. In the next section, we present the implementation details of the adaptive ensemble-based error detection module.

## III Adaptive Error Detection

Before delving into the details of our adaptive ensemble-based error detection method, it is necessary to highlight that the existing _error-dependent_ detectors tackle only a subset of the errors in a data set. For instance, a missing value detector finds only null values while overlooking other errors. Similarly, outlier detectors usually employ statistical measures to differentiate between legitimate data and outliers while ignoring other error types, such as rule violations and duplicates. Accordingly, the low detection recall of such detectors prevents them from being individually employed. To maximize the detection recall, there exist several advanced methods which can be subsumed under two major classes, namely the _ML-based methods_ and the _ensemble methods_. The first class embodies the detectors which implement semi-supervised binary classifiers to differentiate between clean and dirty data instances [15, 26]. The second class comprises the detectors which combine the detections of several error-dependent methods [2] to improve the detection recall while providing a knob for controlling the detection precision (defined as the fraction of relevant instances, e.g., actual dirty data instances, among the detected instances).
Nevertheless, ML-based and ensemble detectors usually suffer from _relevant data exclusion_ problems when the main objective is to extract a clean fraction from the dirty data set, as required in AutoCure. Specifically, data exclusion problems typically arise when the extracted clean fraction is either (1) empty or (2) does not comprise all classes (i.e., unique labels in a data set) which exist in the dirty data set. The first type, referred to as the _attribute-level_ exclusion, commonly emerges when an error detector flags all data instances of a certain attribute as erroneous (cf. red instances in the attribute \(A2\) in Figure 3(a)). In this case, the data sampler will fail to extract a clean fraction because all records contain erroneous data instances. The second type, referred to as the _class-level_ exclusion, occurs when an error detector flags all data instances from one or more classes as erroneous (cf. red instances in Figure 3(b)). Accordingly, these classes (e.g., class "1" in Figure 3(b)) will not be represented in the clean fraction, which in turn broadly changes the data distribution of the augmented data.

Fig. 2: Architecture of the proposed method

In particular, both data exclusion problems may arise due to (1) the false positives of some error-dependent or ML-based detectors, and (2) fixing the value of the voting threshold \(k\), in ensemble detectors, for all data instances. To overcome the data exclusion problems, AutoCure introduces an _adaptive_ ensemble method which can dynamically adjust the voting threshold to preserve all classes in the extracted clean fraction. To this end, the proposed adaptive ensemble-based method makes the detection decisions iteratively. Specifically, the data sampler adjusts the value of the voting threshold \(k\) whenever it detects the occurrence of data exclusion problems. Figure 4 shows an example of tuning the voting threshold \(k\) while detecting errors in a dirty toy data set, which consists of five records \(R1\)-\(R5\) and two attributes \(A1\) and \(A2\). After running all available detectors, i.e., \(S1\)-\(S7\), the iterative voting mechanism begins with an initial value of the voting threshold, i.e., \(k=3\). In this scenario, the cell \(C12\) has been detected by three detection methods, i.e., \(s_{1}\), \(s_{3}\) and \(s_{4}\), while the cell \(C42\) has been detected by four detection methods, i.e., \(s_{1}\), \(s_{2}\), \(s_{5}\) and \(s_{6}\). According to the traditional Min-K voting mechanism, the cells \(C12\) and \(C42\) are flagged as erroneous since they have been detected by at least three methods. Consequently, the records containing these cells are to be excluded from the clean fraction, as shown in the top left table (i.e., \(D_{clean}\) for \(k=3\)). As a result, a class-level data exclusion problem emerges, where the clean fraction lacks records with the class "1". To resolve this problem, AutoCure updates the voting threshold \(k\) from three to four in the second iteration. In this case, the cell \(C12\) does not exceed the threshold. Accordingly, it is flagged as a clean cell, and thus its record will be included in the clean fraction, as shown in the top right table (i.e., \(D_{clean}\) for \(k=4\)).

To clarify the implementation details of AutoCure, Figure 5 demonstrates the pseudocode of the proposed data curation pipeline. At the outset, AutoCure implements an inventory of several error-dependent and ML-based detectors \(\mathcal{S}_{base}\).
Each detector \(s_{i}\) generates a list of indices of the erroneous instances, e.g., \(R_{i}=\{(r_{u},a_{v})\}\), where \(r_{u}\) and \(a_{v}\) are the \(u\)th record and the \(v\)th attribute in the dirty data set \(D_{dirty}\). Afterward, all detections of the \(m\) detectors \(R_{all}\) are merged, before estimating the cell counts \(Q_{cells}\), which quantify, for each cell \(c_{j}\), the number of times it has been detected (cf. lines 5-7). Initially, AutoCure assigns the current voting threshold \(k_{cur}\) a small value larger than one (e.g., \(k_{cur}=2\), since setting \(k_{cur}=1\) implies using only one error-dependent or ML-based detection method). In the first iteration, the algorithm uses an initial value, i.e., \(k_{cur}=k_{init}\), to decide for each cell whether it is erroneous (cf. lines 16 and 17). The final list of detections \(R_{ensemble}\) is then utilized to extract a clean fraction \(D_{clean}\), as shown in line 18. Before running the variational autoencoder, the algorithm checks for the occurrence of data exclusion problems (cf. lines 19-27). An attribute-level data problem can be detected if the size of the clean data fraction is equal to zero. If an attribute-level problem is detected, then the algorithm increases the temporary threshold \(k_{attr}\) by the value of the update rate \(\alpha_{u}\) (cf. line 20). Increasing the threshold clearly implies relaxing the voting mechanism, which in turn reduces the number of detected cells. In the subsequent iteration, the algorithm re-estimates the list of all detections \(R_{all}\) and the clean fraction \(D_{clean}\). Next, it checks again for possible attribute-level data exclusion, before inspecting \(D_{clean}\) for class-level exclusion problems. To this end, it simply compares the number of classes in the dirty data, _classes_(\(D_{dirty}\)), and in the clean fraction, _classes_(\(D_{clean}\)). If a class-level problem is detected, AutoCure estimates the list of missing classes \(L_{miss}\) (cf. lines 22-23). In this case, AutoCure deliberately differentiates between the cells belonging to records whose classes are included in \(L_{miss}\), e.g., \(C12\) and \(C42\) in Figure 4, and all other cells, e.g., \(C52\). The rationale behind this differentiation is to achieve a high detection precision while ensuring that all classes in \(D_{dirty}\) also exist in the clean fraction \(D_{clean}\). To this end, AutoCure relaxes the voting algorithm for the cells belonging to the missing classes, e.g., \(C12\) and \(C42\), to facilitate their inclusion in the clean fraction. In contrast, AutoCure becomes stricter with other cells, e.g., \(C52\), since their classes are already represented in the clean fraction. For this purpose, AutoCure adopts two different temporary thresholds, \(k_{attr}\) and \(k_{class}\). It updates the threshold \(k_{class}\) when a class-level exclusion occurs. In the next iteration, the algorithm checks whether each cell \(c_{j}\) belongs to the records of the missing classes (cf. line 12). If that is the case, it employs the temporary threshold \(k_{class}\), which is always greater than \(k_{attr}\) by the value of the update rate \(\alpha_{u}\), for making the detection decisions. Otherwise, it uses the threshold \(k_{attr}\).

Fig. 3: Examples of data exclusion problems

Fig. 4: Example of adapting the voting threshold

The algorithm keeps iterating until all data exclusion problems are resolved.
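A minimal sketch of this adaptive voting loop is shown below. For clarity, it collapses the separate \(k_{attr}\)/\(k_{class}\) thresholds of Figure 5 into a single threshold that is relaxed until the clean fraction is non-empty and covers every class; the per-cell differentiation for missing classes is omitted.

```python
def adaptive_min_k(cell_counts, record_cells, labels, k_init=2, k_max=7, alpha=1):
    """Flag a cell as erroneous when at least k base detectors agree, and
    relax k until the extracted clean fraction is non-empty (no attribute-level
    exclusion) and covers every class (no class-level exclusion)."""
    clean, dirty = [], set()
    k = k_init
    while k <= k_max:
        dirty = {cell for cell, votes in cell_counts.items() if votes >= k}
        clean = [r for r, cells in record_cells.items()
                 if not any(c in dirty for c in cells)]
        if clean and {labels[r] for r in clean} == set(labels.values()):
            break
        k += alpha  # relax the vote: fewer cells get flagged
    return clean, dirty

# Toy data in the spirit of Figure 4: vote counts from seven base detectors.
counts = {("R1", "A2"): 3, ("R4", "A2"): 4}
cells = {r: [(r, "A1"), (r, "A2")] for r in ("R1", "R2", "R3", "R4", "R5")}
labels = {"R1": 1, "R2": 0, "R3": 0, "R4": 1, "R5": 0}
print(adaptive_min_k(counts, cells, labels))  # keeps R1 once k reaches 4
```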
Afterward, the final clean fraction is merged with the dirty data, before carrying out the typical wrangling and feature engineering tasks.

## IV Clean Data Augmentation

In this section, we discuss the technical details of the clean data augmentation module. After extracting a clean data fraction, a data augmentation module is used to generate data from the same distribution as the clean fraction. In this regard, we examined three data augmentation methods dedicated to tabular data, including MODALS [6], Variational Autoencoders (VAE) [18], and Conditional Tabular Generative Adversarial Network (CTGAN) [12]. Through our experiments, we found that VAE outperforms the other two methods1. Specifically, we examined the three methods in an ML pipeline for several datasets. The VAE data augmentation achieved higher predictive accuracy (F1 scores \(\geq\) 0.98) than the other two methods. Therefore, we decided to integrate the VAE method with our adaptive error detection.

Footnote 1: For brevity, we omitted the results of our comparative study between the three augmentation methods in this paper.

In general, an autoencoder implements two components, namely an encoder and a decoder. The encoder compresses the input data set into a low-dimensional space, referred to as the _latent space_ representation. Afterward, the decoder exploits the latent space representation to recover the input data set. For this purpose, autoencoders typically have a reconstruction loss function to compare the original data with the values generated by the decoder. For data augmentation, autoencoders perform some variations in the latent space. To this end, the VAE module trains its encoder to extract the parameters of data distributions rather than the latent space representation [18]. As depicted in Figure 2, the encoder generates a two-dimensional vector consisting of the mean \(\mu_{x}\) and the variance \(\sigma_{x}\). Afterward, the VAE generates a set of data instances from the Gaussian distribution \(\mathcal{N}(\mu_{x},\sigma_{x})\) formed using the extracted parameters. Such a set of data instances is then used as an input to the decoder to generate additional clean data. In addition to the reconstruction loss, the VAE module employs the KL divergence to distinguish between the probability distributions of the original data set (i.e., the clean fraction in AutoCure) and the generated data. Generally, the KL divergence is a statistical measure which quantifies how a probability distribution differs from a reference distribution. The VAE module strives to reduce the value of the KL divergence via optimizing the mean and variance to simulate the input distribution. Aside from the optimization metrics, the process of sampling data from a distribution, parameterized by the generated means and variances, is not differentiable. In this case, it is relatively challenging to perform backpropagation (necessary to optimize the weights of the encoder and the decoder) over the random node \(Z\). To overcome this problem, the reparameterization process is performed to enable backpropagation through the random node. To this end, the reparameterizer turns the random node \(Z\sim\mathcal{N}(\mu_{x},\sigma_{x})\) into a differentiable function \(Z=\mu+\sigma\odot\epsilon\), where \(\epsilon\sim\mathcal{N}(0,1)\) represents the standard Gaussian distribution and is irrelevant for taking the gradients (which is a necessary step in the backpropagation process).
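A compact sketch of such a VAE for tabular data is given below, following the configuration reported in Section V (two hidden layers of 50 and 12 units, ReLU activations, and a loss combining the mean squared reconstruction error with the KL divergence); the latent dimension of 4 and the choice of PyTorch are our assumptions.

```python
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    def __init__(self, n_features: int, h1: int = 50, h2: int = 12, n_latent: int = 4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, h1), nn.ReLU(),
                                 nn.Linear(h1, h2), nn.ReLU())
        self.mu = nn.Linear(h2, n_latent)      # mean of the latent Gaussian
        self.logvar = nn.Linear(h2, n_latent)  # log-variance of the latent Gaussian
        self.dec = nn.Sequential(nn.Linear(n_latent, h2), nn.ReLU(),
                                 nn.Linear(h2, h1), nn.ReLU(),
                                 nn.Linear(h1, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
        # which keeps the sampling step differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# After training (e.g., with torch.optim.Adam), new clean instances are
# generated by decoding samples from the standard normal prior:
#   synthetic = model.dec(torch.randn(6000, n_latent))
```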
## V Performance Evaluation In this section, we assess the effectiveness of AutoCure relative to a set of baseline methods. Through the experiments, we seek to answer the following questions: (1) How does AutoCure compare to the baseline methods in terms of their impact on downstream ML models (the performance of such models is quantified in terms of the predictive accuracy and the training time)? (2) What is the minimum amount of augmented clean data required to yield a performance comparable to the ground truth? (3) What is the impact of the voting threshold \(k\) on the detection accuracy? (4) What is the impact of the error rate of a data set on the performance of AutoCure? (5) How does AutoCure contribute to the model performance? We first describe the setup of our evaluations, before discussing the results and the lessons learned throughout this study.

Fig. 5: Ensemble-based error detection algorithm

### _Experimental Setup_ We evaluated AutoCure using six real-world data sets covering different data sizes and different error rates. Table I summarizes the characteristics of each data set. Three of these data sets, i.e., Adult, Breast Cancer, and Smart Factory, are associated with classification (CL) tasks, whereas the other three data sets, i.e., Nasa, Housing, and Soil Moisture, have regression (REG) tasks. To control the evaluation environment, we opted to inject different realistic errors into the data sets. To this end, AutoCure leverages the BART tool [4] which provides systematic control over the amount of errors and how hard these errors are to repair. To inject errors using BART, we use a set of denial constraints to generate different error types, such as rule violation, outliers, and missing values. We compare the performance of AutoCure to a wide collection of baseline methods composed of different error detection and repair tools. Table II lists the error detection and repair methods. The combination of each detection and repair method represents a baseline method. For instance, the baseline B2 implies detecting errors in a dirty data set using dBoost and generating repair candidates using an ML-based imputation algorithm. In the list of repair methods, we also include the ground truth and dirty cases to show the performance upper- and lower-bound. The standard imputer replaces the erroneous data instances with the mean values of the numerical columns and with a dummy string for the categorical columns. Alternatively, the ML-based imputer employs a decision tree model to predict the erroneous numerical instances, while it employs missForest [33] for categorical data. The evaluation metrics comprise the detection accuracy, expressed in terms of the detection precision, recall, and F1 score, as well as the training time and the predictive accuracy. For the VAE module, both the encoder and the decoder have been implemented as feed-forward neural networks. The inputs to the VAE module are a training set, a test set, the dimensions of the clean fraction, the number of nodes in the hidden layers, and the number of latent factors. The number of nodes in the input layer of the encoder and in the output layer of the decoder has been set to the number of attributes in the clean fraction. The encoder and decoder comprise two hidden layers, and the number of nodes in the first and second hidden layers has been set to 50 and 12, respectively.
We employ the Adam optimizer to optimize the parameters against a custom loss, which combines the mean square error and the KL divergence. We add the ReLU activation function for each hidden layer in the encoder and the decoder. To examine the performance of AutoCure and the baseline methods, we implemented a neural network, using Keras, which can deal with regression, binary, and multi-class classification tasks2. AutoCure leverages a Bayesian-based informed search method, referred to as Optuna [3], to tune the learning rate, the number of hidden layers, and the number of units per layer. For all experiments, we fixed the number of training epochs to 500. All experiments have been repeated ten times, where the means of the ten runs are reported. For clarity, we omitted the standard deviations when they are less than 1%. We run all the experiments on an Ubuntu 16.04 LTS machine with 32 2.60 GHz cores and 128 GB memory. Footnote 2: The source code of AutoCure, the baseline methods, and the data sets, are to be publicly released with the final version of the paper. ### _Results_ _Voting Threshold._ Before delving into the results obtained for AutoCure and the baseline methods, we first assess the impact of changing the voting threshold to properly motivate the adaptive voting mechanism implemented in AutoCure. Figures 6(a) and 6(b) depict the detection accuracy of the traditional Min-k detection method while detecting erroneous instances in the Housing and Smart Factory data sets, respectively. Both figures show that increasing the voting threshold \(k\) has distinct influences on the detection recall (defined in Section II) and the detection precision (defined in Section III). For instance, Figure 6(a) demonstrates that increasing the voting threshold \(k\) usually leads to reducing the detection recall due to annotating fewer data instances as erroneous. On the other hand, increasing the threshold \(k\) causes the precision to improve thanks to the precise estimation of erroneous instances using a consensus among multiple error detection methods. In light of this trade-off, it is often not straightforward to accurately select the optimal value of \(k\) for a certain application scenario. Moreover, we noticed during our experiments that hard-coding the value of \(k\), as implemented in the traditional Min-k method, typically prevents us from extracting a _balanced_ clean data fraction. Therefore, AutoCure iteratively adjusts the value of \(k\) to combat the relevant data exclusion problems. _Predictive Accuracy._ In this set of experiments, we estimate the accuracy of a neural network trained on different versions of the examined data sets. For AutoCure, we set the number of generated instances to 6000 for all data sets. Figure 7 depicts the predictive performance of a neural network trained on a clean data set3 (cf. green area in Figure 7(a)) and a set of processed data sets (cf. blue area in Figure 7(a)). The processed data sets comprise the data sets curated by AutoCure (abbreviated as A in Figure 7) and the ones curated using different combinations of baseline tools. For the Adult data set, Figure 7(a) compares the performance of AutoCure, the clean data set, and 24 baseline methods in terms of the F1 score of the neural network. As the figure depicts, AutoCure achieves similar accuracy to the clean data set (both achieve an average F1 score of 89.5%).
It is important to notice that some curation combinations yield a similar performance as AutoCure, while others, e.g., Q2, Q3, K2, and K3, fail to accurately curate the dirty data set. Footnote 3: In this context, a clean data set means the ground truth version of the data. Figure 7(b) illustrates a similar evaluation of AutoCure and the baseline methods using the Breast Cancer data set. Again, AutoCure outperforms all baseline methods and achieves a similar performance as the clean data set (an average F1 score of 93.6% in case of AutoCure and 94% in case of the clean data set). Figure 7(c) depicts an evaluation of the compared methods using the Smart Factory data set. Here, AutoCure outperforms most baseline methods as well as the clean data set (on average by 46%). It is important to highlight that some baseline methods, e.g., E2, H2, and N2, achieve an F1 score of one. However, such a performance of these baseline methods broadly depends on the data set. For the Nasa data set, AutoCure outperforms most baseline methods (an average MSE of 9.3 in case of AutoCure and 15.3 in case of the clean data set). For the Soil Moisture data set, AutoCure achieves a reasonable accuracy (an average MSE of 11.7) relative to the clean version (an average MSE of 2.78) and some baseline methods (average MSE of 3.15 and 3.55 for N2 and H2, respectively). In fact, it is important to highlight the consistency of the results obtained by AutoCure over different data sets, while the other baseline methods lack such consistency, e.g., N2 and H2 perform poorly in case of the Nasa data set (Figure 7(d)). Finally, Figure 7(f) shows the results, in terms of MSE, in case of the Housing data set. Again, AutoCure achieves a similar performance as the clean data set (both achieve an average MSE of 12.2). _Training Time._ Aside from the predictive accuracy, we also evaluate AutoCure and the baseline methods in terms of the training time. Table III lists the average training times of the neural network trained on the clean and the curated data sets. Due to data augmentation, AutoCure causes the training time to be relatively increased for all data sets. However, for some data sets, such as Adult, the training time in case of AutoCure is only slightly higher than that using the clean data set (on average by circa 2.9 minutes). Moreover, a fair comparison should consider a realistic scenario, where data scientists typically lack knowledge about which curation tools to implement in their ML pipelines. Due to the existence of multiple traditional data curation methods, it becomes necessary to examine and execute several of them, as in BoostClean [20] and ActiveClean [21]. Accordingly, a practical estimation of the training time caused by the traditional curation methods should consider a combined estimation of all training times. In this case, AutoCure significantly outperforms the traditional curation methods (named "Combined" in Table III) for all data sets. _Augmentation Size._ In this set of experiments, we estimate, for each data set, the impact of the size of the augmented clean data on the predictive performance and the training time. A general remark from the experiments is that increasing the amount of augmented data broadly improves the predictive performance at the expense of increasing the training time. For instance, Figure 8(a) shows the F1 score of the neural network trained on a clean data version of the Adult data set (cf.
the red dashed line) and multiple versions curated by AutoCure with different sizes of the augmented clean data. The figure shows that only a small amount of augmented data (circa 1% of the size of the Adult data set) is sufficient to achieve the same performance as the clean data set. Similar results have been obtained in case of the Smart Factory data set, where 1% of the size of the clean data set is sufficient to reach the quality of the clean data set.

Fig. 6: Detection accuracy against different values of the threshold \(k\) (Min-k)

For smaller data sets, the experiments showed that more clean data instances have to be generated. In the case of the Breast Cancer data set, Figure 8(c) demonstrates that AutoCure requires more clean instances (at least by 375% relative to the size of the original data set) to achieve a similar accuracy to the clean data set. For the Nasa data set, AutoCure achieves a comparable performance to that of the clean data set when the size of the generated clean data is large enough (circa 200% of the size of the original data set) to minimize the impact of erroneous instances. Similar results have been obtained for the Soil Moisture data set (cf. Figure 8(e)). Figure 9 depicts the training times of different data sets curated by AutoCure. For example, Figure 9(a) illustrates the training time of the clean version (cf. the red dashed line) and several versions curated by AutoCure. As the figure depicts, AutoCure requires circa three minutes more than the clean version of the data set. In case of the Smart Factory data set, AutoCure requires slightly more time (at most by 18.4%) to satisfy the quality requirements. For the Breast Cancer data set, AutoCure requires an additional 100 seconds to achieve a similar accuracy to the clean data (cf. Figure 9(c)). Along a similar line, an additional 80 seconds are needed by AutoCure in case of the Nasa data set (cf. Figure 9(d)). For the Soil Moisture data set, due to the associated regression task, AutoCure has to generate numerous clean instances. As a result, AutoCure requires an additional training time of circa 400 seconds to reach the performance upper-bound. _Error Rate._ In this set of experiments, we assess the impact of the error rate on the performance of AutoCure and a set of baseline methods. We fixed the number of generated clean instances and gradually increased the amount of injected errors (i.e., missing values and numerical outliers). Figure 10 depicts the predictive accuracy of the neural network, in terms of MSE for regression tasks and F1 score for classification tasks, while being trained on the clean data set and the curated data sets using AutoCure and the baseline methods. In Figure 10(a), AutoCure shows that it is mostly agnostic to the amount of errors in the Nasa data set. Even with a fixed number of generated clean instances, the curve of AutoCure increases only slightly and remains below the red dashed line of the clean data set. Conversely, all traditional data curation methods perform poorly as the error rate increases.

Fig. 7: Modeling accuracy of clean datasets, AutoCure (denoted as "A"), and a set of baseline methods

Figures 10(b) and 10(c) show similar results for the Housing and Breast Cancer data sets. Figures 11(a)-11(c) show the training time of the neural network while increasing the error rate. Increasing the error rate has almost no influence on the training time for all examined data sets (except for some spikes which rarely occur at certain error rates).
_Overfitting._ The results, discussed above, show that AutoCure broadly improves the predictive performance while requiring no repair operations. However, such a high performance may be a result of overfitting. Therefore, in this set of experiments, we examine the generated neural network models, while using AutoCure, for overfitting. Figure 12 depicts the learning curves, with respect to the number of epochs and the predictive performance, for several data sets. For instance, in the case of the Smart Factory data set, Figure 12(a) demonstrates that the model performance, in terms of binary accuracy, grows over time until it reaches a plateau. Both curves for the training and the validation sets are close to each other. This result has been repeated for all other data sets (cf. Figures 12(b)-12(f)). Accordingly, the high performance of AutoCure, depicted in Figure 7, is not a result of overfitting, and it mainly occurs due to the high density of clean data instances. ### _Results Summary_ In this section, we highlight the main findings obtained in this work. The first finding revolves around the performance consistency. Specifically, our results revealed that some baseline methods may achieve similar or even better performance than AutoCure (e.g., accuracy of the models trained on data curated by E2, E3, and H2 in Figure 7(c)). Nevertheless, it is important to mention that these results are broadly data set dependent, i.e., the performance may vary from one data set to another. Moreover, multiple experiments have to be carried out to identify a well-suited curation strategy for each data set. In contrast, AutoCure delivers consistent results over different data sets. For all examined data sets, AutoCure managed to improve the quality of the data set through emphasizing the clean fractions. Another important finding is that small data sets usually require the generation of more clean data instances (cf. Figure 8(c)). This behavior of AutoCure arises because the percentage of original clean instances in small data sets is typically small. Accordingly, AutoCure generates a large number of clean instances to satisfy the quality requirements. This finding also applies to data sets with associated regression tasks, e.g., Housing, Soil Moisture, and Nasa (cf. Figures 8(d) and 8(e)). In fact, regression models are generally more sensitive to data errors than classification tasks, as found in our experiments.

Fig. 8: Modeling accuracy of AutoCure for different sizes of the augmented clean data

Fig. 9: Training time of AutoCure for different sizes of the augmented clean data

Fig. 10: Impact of error rate on the predictive accuracy

Fig. 11: Impact of error rate on the training time

For large data sets, e.g., Adult and Smart Factory, AutoCure requires much less generated data to achieve a similar accuracy to the clean data set. Furthermore, the results also showed that AutoCure, in some cases, increases the training time due to increasing the number of training instances (cf. Figures 9(c), 9(d) and 9(e)). Finally, the results confirmed that AutoCure can perform well even with high error rates, as shown in Figure 10. ## VI Related Work In this section, we report on the state-of-the-art techniques and tools relevant to the data curation problem. In fact, there exist plenty of data curation techniques and tools from academia and industry.
For instance, HoloClean [30] is an ML-agnostic data repair technique which infers the repair values by holistically employing multiple cleaning signals to build a probabilistic graphical model. To repair pattern violations and inconsistencies, OpenRefine [13] utilizes Google Refine Expression Language (GREL) as its native language to transform existing data or to create repair values. Similarly, BARAN [22] is a holistic configuration-free ML-based method for repairing different error types. To this end, BARAN trains incrementally updatable models which leverage the value, the vicinity, and the domain contexts of data errors to propose correction candidates. To further increase the training data, BARAN exploits external sources, such as Wikipedia page revision history. In fact, the above presented techniques, such as HoloClean, OpenRefine, and BARAN, do not consider the requirements imposed by the downstream ML applications. They tend to improve the data quality regardless of where and how the data comes from or how the data will be consumed. Therefore, a new set of techniques and tools has emerged, which strives to jointly optimize the cleaning and modeling tasks. In other words, these ML-oriented methods focus on selecting the optimal repair candidates with the objective of improving the performance of specific predictive models. Accordingly, these methods assume the availability of repair candidates from other ML-agnostic methods. For instance, BoostClean [20] deals with the error repair task as a statistical boosting problem. Specifically, it composes a set of weak learners into a strong learner. To generate the weak learners, BoostClean iteratively selects a pair of detection and repair methods, before applying them to the training set to derive a new model. ActiveClean [21] is another ML-oriented method, principally employed for models with convex loss functions. It formulates the data cleaning task as a stochastic gradient descent problem. Initially, it trains a model on a dirty training set, where such a model is to be iteratively updated until reaching a global minimum. In each iteration, ActiveClean samples a set of records and then asks an oracle to clean them to shift the model along the steepest gradient. A similar work is CPClean [17], which incrementally cleans a training set until it is certain that no more repairs can possibly change the model predictions. In fact, the ML-oriented methods do not introduce new data repair techniques, unlike HoloClean and BARAN. Instead, they tend to select already-existing repair candidates which may improve the predictive performance. With AutoCure, in comparison, such methods can be entirely avoided, since no repair candidates have to be generated in our solution. Accordingly, AutoCure can avoid the complexities of searching for the repair candidates and selecting the best subset from those candidates. Furthermore, the ML-oriented methods are usually tailored to specific optimization methods and ML models, e.g., ActiveClean is limited to problems with convex loss functions. In contrast, AutoCure can be utilized with all ML tasks and optimization methods. ## VII Conclusion & Future Work In this paper, we introduce a novel method for dealing with erroneous data, referred to as AutoCure. As an alternative to traditional data curation tools, AutoCure does not require the recognition of the detected error types, since it entirely ignores such noisy data instances.
Instead, it works with the clean instances, which can lead to better modeling performance when more clean instances exist. Moreover, AutoCure avoids the complicated processes of searching the huge space of repair candidates and of selecting the most suitable candidates. As a proof of concept, AutoCure has been evaluated using six data sets and over 28 combinations of data curation tools. The results showed that the data sets curated using AutoCure can achieve a performance similar to, or even better than, that of the clean data set. While AutoCure delivers an outstanding performance over multiple data sets, it can be further improved through: (1) integrating it with a data valuation tool, to reduce the training time, and (2) dynamically adapting the number of instances to be augmented depending on the size of the dirty data set and the associated ML task. ## Acknowledgment This work was supported (in part) by the Federal Ministry of Education and Research through grants 02L19C155, 01IS21021A (ITEA project number 20219).

Fig. 12: Learning curves of the neural networks trained using AutoCure
2306.12351
Progress on the union-closed conjecture and offsprings in winter 2022-2023
Mathematicians had little idea whether the easy-to-state union-closed conjecture was true or false even after $40$ years. However, last winter saw a surge of interest in the conjecture and its variants, initiated by the contribution of a researcher at Google. Justin Gilmer [arXiv:2211.09055] made a significant breakthrough by discovering a first constant lower bound for the proportion of the most common element in a union-closed family.
Stijn Cambie
2023-06-21T15:46:56Z
http://arxiv.org/abs/2306.12351v1
# Progress on the union-closed conjecture and offsprings in winter 2022-2023 ###### Abstract Mathematicians had little idea whether the easy-to-state union-closed conjecture was true or false even after 40 years. However, last winter saw a surge of interest in the conjecture and its variants, initiated by the contribution of a researcher at Google. Justin Gilmer made a significant breakthrough by discovering a first constant lower bound for the proportion of the most common element in a union-closed family. ## 1 Introduction of the Union-Closed conjecture The union-closed conjecture is due to Peter Frankl1, who constructed the elegant statement in 1979 after observing many of its implications. Before fully stating it, we need to define crucial concepts from set theory. Footnote 1: See also [https://en.wikipedia.org/wiki/Péter_Frankl](https://en.wikipedia.org/wiki/Péter_Frankl) and [https://www.nrc.nl/nieuws/2023/01/20/na-wiskundige-opwinding-of-the-union-closed-conjecture](https://www.nrc.nl/nieuws/2023/01/20/na-wiskundige-opwinding-of-the-union-closed-conjecture) The ground set is generally denoted with \([n]=\{1,2,\ldots,n\}\), where \(n\in\mathbb{N}\) is a finite number. A subset \(A\subseteq[n]\) is nothing more than a set containing integers between \(1\) and \(n\), e.g., \(A=\{2,4,6\}\subset[7]\). A family \(\mathcal{F}\subseteq 2^{[n]}\) is a collection of subsets of \([n]\). Here \(2^{[n]}\) contains all \(2^{n}\) possible subsets of \([n]\), which includes the empty set \(\emptyset\) as well. A family \(\mathcal{F}\) is called **union-closed** if for every \(A,B\in\mathcal{F}\), the union \(A\cup B\) belongs to \(\mathcal{F}\). This can be written as \(\mathcal{F}=\mathcal{F}\cup\mathcal{F}\), where the latter equals exactly \(\{A\cup B\mid A,B\in\mathcal{F}\}\). An example of such a family is presented in Figure 1. Another example, for every \(m\in\mathbb{N}\), is the family \(\mathcal{F}_{m}=\{A\mid A\subseteq[m]\lor A=[k]\text{ for some }m+1\leq k\leq m^{2}\}\), which consists of the \(2^{m}\) subsets of \([m]\), as well as \(m^{2}-m\) intervals consisting of the first \(k\) natural numbers.

Figure 1: Example of union-closed family

The Union-closed conjecture can now be formally stated as follows. **Conjecture 1** (Union-closed conjecture).: _If \(\mathcal{F}\neq\{\emptyset\}\) is a union-closed family with ground set \([n]\), then there exists an element \(i\in[n]\) such that at least half of the sets in \(\mathcal{F}\) contain \(i\)._ Considering our previous example \(\mathcal{F}_{m}\) for large \(m\), one can verify that only a small fraction of the elements of the ground set may be abundant (i.e., belong to at least half of the sets), and that the average proportion of sets to which the elements belong can tend to zero. Note that this conjecture would be (arguably) false when taking an infinite ground set \(\mathbb{N}\), e.g. by considering the (union-closed) family of finite subsets of \(\mathbb{N}\). This conjecture can also be formulated in many different ways. For example, one can consider bitstrings in \(\{0,1\}^{n}\) with the element-wise \(OR\)-operation. For instance, when \(n=4\) and \(\mathcal{F}=\{0011,1100,1111\}\), we note that \(0011+1100=1111\). This family is closed under the \(OR\)-operation, which corresponds to being union-closed in the initial formulation.
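The definitions above are easy to experiment with; the following is a small illustrative sketch (ours, with a toy family) that checks union-closedness directly and reports the proportion of the most frequent element, assuming the family contains at least one nonempty set:

```python
from itertools import combinations

def is_union_closed(family):
    """Check that A ∪ B ∈ F for all A, B ∈ F, i.e., F = F ∪ F."""
    fam = set(map(frozenset, family))
    return all(A | B in fam for A, B in combinations(fam, 2))

def max_element_fraction(family):
    """Largest fraction of sets containing a fixed ground-set element."""
    fam = list(map(frozenset, family))
    ground = set().union(*fam)  # assumes at least one nonempty set
    return max(sum(1 for S in fam if i in S) for i in ground) / len(fam)

F = [{1}, {2}, {1, 2}]
print(is_union_closed(F), max_element_fraction(F))  # True 0.666...
```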
Taking the complements of the sets, one obtains the Intersection-closed sets conjecture, which states that an intersection-closed family has an element in its ground set appearing in at most half of the sets. In [3, Sec. 3], one can also find a lattice-, graph-, and Salzborn-formulation. On November 17, 2022, Justin Gilmer [10], a researcher at Google working in machine learning, made a breakthrough by proving a first constant fraction for Conjecture 1. Within days, his result prompted others to post improvements and related results on the preprint server arXiv. In this note, we summarize the contributions and progress that were made in the winter of 2022-2023. We explain the main ideas of Gilmer's approach (Section 2), mention the subsequent extensions of his method (Sections 3 and 4) as well as an unsuccessful attempt (Section 5), and discuss other work related to the Union-closed conjecture (Section 6). ## 2 The observations and key elements in the proof by Gilmer A first elementary observation by Gilmer is that one can always prove a statement by proving its contrapositive. Since the statement of the union-closed conjecture is so simple already, perhaps no one had considered that before. The contrapositive of Conjecture 1 can be stated as follows. If a non-empty family \(\mathcal{F}\) has no element appearing in at least half of the sets of \(\mathcal{F}\), then \(\mathcal{F}\) is not a union-closed family. By remarking that \(A\cup A=A\) for every set \(A\), one knows that \(\mathcal{F}\subseteq\mathcal{F}\cup\mathcal{F}\), and thus \(|\mathcal{F}\cup\mathcal{F}|>|\mathcal{F}|\) whenever \(\mathcal{F}\) is not a union-closed family. While posing related questions and studying counterexamples to variants of Conjecture 1 similar to the ones in [8], Gilmer noted that the entropy of a family might play a role.2 The entropy \(H(X)\) of a discrete random variable \(X\) equals the Shannon entropy of its probability distribution. The latter is given by an explicit formula: if each possible outcome \(x\) belongs to a (finite) set \(A\) and has probability \(p_{x}\), then Footnote 2: More details on his journey/thought process can be found in [https://www.youtube.com/watch?v=4ZaP0EwjR_Ikt](https://www.youtube.com/watch?v=4ZaP0EwjR_Ikt) \[H(X)=-\sum_{x\in A}p_{x}\log_{2}p_{x}.\] When sampling uniformly at random from \(\mathcal{F}\), the entropy will equal \(\log_{2}|\mathcal{F}|\) and no higher entropy is possible. If one can sample from \(\mathcal{F}\cup\mathcal{F}\) in such a way that the entropy is larger than \(\log_{2}|\mathcal{F}|\), then one can conclude that \(|\mathcal{F}\cup\mathcal{F}|>|\mathcal{F}|.\) This is exactly the core of Gilmer's approach. More precisely, he proved the following statement. **Theorem 2**.: _Let \(A\) and \(B\) denote independent and identically distributed random variables that sample from a common distribution over subsets of \([n]\). Assume that for all \(i\in[n]\), \(\mathbb{P}[i\in A]\leq 0.01\). Then \(H(A\cup B)\geq 1.26H(A)\)._ As a corollary, by taking for \(A\) and \(B\) the uniform distribution over \(\mathcal{F}\), one knows that if \(\mathcal{F}\subset 2^{[n]}\) is a family for which every element is contained in no more than \(1\%\) of the sets, then \(|\mathcal{F}\cup\mathcal{F}|\geq|\mathcal{F}|\)3.
This implies that whenever \(|\mathcal{F}|\geq 2\), either \(|\mathcal{F}\cup\mathcal{F}|>|\mathcal{F}|\) (and so the family is not union-closed) or there is an element appearing in at least a \(0.01\) fraction of the sets in \(\mathcal{F}\). From this, one can conclude that Conjecture 1 is true with one half replaced by \(0.01\). **Example 3**.: Let \(\mathcal{F}=\{\{1\},\{2\}\}\) and thus \(\mathcal{F}\cup\mathcal{F}=\{\{1\},\{2\},\{1,2\}\}\). Let \(A\) and \(B\) be i.i.d. random variables that output a set of \(\mathcal{F}\) uniformly at random. Then \(\mathbb{P}(A=\{1\})=\mathbb{P}(A=\{2\})=\frac{1}{2}\) and analogously for \(B\), which implies \[\mathbb{P}(A\cup B=\{1\})=\,\mathbb{P}(A\cup B=\{2\})=\frac{1}{4}\,\,\text{ and}\,\,\mathbb{P}(A\cup B=\{1,2\})=\frac{1}{2}.\] Now \(H(A)=2\cdot\frac{1}{2}\log_{2}2=1\) and \(H(A\cup B)=2\cdot\frac{1}{4}\log_{2}4+\frac{1}{2}\log_{2}2=\frac{3}{2}\ (<\log_{2}3)\). Since \(\log_{2}(2)<H(A\cup B)\), we conclude that it is impossible that \(A\cup B\) takes values in a family with only \(2\) elements and thus \(|\mathcal{F}\cup\mathcal{F}|>|\mathcal{F}|\), i.e. Gilmer's method verifies that \(\mathcal{F}\) is not union-closed. **Example 4**.: Let \(\mathcal{F}=\binom{[3]}{\leq 2}\) and thus \(\mathcal{F}\cup\mathcal{F}=2^{[3]}\). Note that \(|\mathcal{F}|=7\) and every \(1\leq i\leq 3\) appears in exactly \(3\) sets and thus in a \(\frac{3}{7}\) fraction. Let \(A,B\) be i.i.d. random variables that output a set of \(\mathcal{F}\) uniformly at random. Then \[\mathbb{P}(A\cup B=\emptyset)=\frac{1}{49},\qquad\mathbb{P}(A\cup B=S)=\frac{3}{49}\ \text{for each }S\text{ with }|S|=1,\] \[\mathbb{P}(A\cup B=S)=\frac{9}{49}\ \text{for each }S\text{ with }|S|=2,\qquad\mathbb{P}(A\cup B=[3])=\frac{12}{49}.\] Now \[H(A)=7\cdot\frac{1}{7}\log_{2}(7)=\log_{2}(7)\approx 2.81,\] \[H(A\cup B)=\frac{1}{49}\log_{2}(49)+3\cdot\frac{3}{49}\log_{2}(49/3)+3\cdot\frac{9}{49}\log_{2}(49/9)+\frac{12}{49}\log_{2}(49/12)\approx 2.70,\] and thus \(H(A)>H(A\cup B)\). We conclude that this is an example for which Gilmer's method does not provide evidence that the family is not union-closed, even while the maximum fraction of occurrence of an element is \(\frac{3}{7}\). Note: Analogously, when \(\mathcal{F}=\binom{[5]}{\leq 3}\), one can verify that \(H(A)=\log_{2}(26)\approx 4.7\) and \(H(A\cup B)\approx 4.54\). Every element appears in a \(\frac{11}{26}\) fraction in this case.
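The computations in Examples 3 and 4 are easily reproduced; here is a short sketch (ours) comparing \(H(A)\) with \(H(A\cup B)\) for \(A,B\) i.i.d. uniform on a family of distinct sets:

```python
from itertools import combinations, product
from collections import Counter
from math import log2

def entropies(family):
    """Return (H(A), H(A ∪ B)) for A, B i.i.d. uniform on a family
    of pairwise distinct sets."""
    fam = list(map(frozenset, family))
    n = len(fam)
    pU = Counter()
    for A, B in product(fam, repeat=2):
        pU[A | B] += 1 / n**2
    return log2(n), -sum(p * log2(p) for p in pU.values())

print(entropies([{1}, {2}]))  # (1.0, 1.5), as in Example 3

F = [set(c) for r in range(3) for c in combinations(range(1, 4), r)]
print(entropies(F))           # (2.807..., 2.699...), as in Example 4
```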
## 3 Quick refinement of Gilmer's idea

The binary entropy function \(h(p)=-(p\log_{2}p+(1-p)\log_{2}(1-p))\) plays a role in the computations in the work of Gilmer. Noting that \(h(p)\leq h(2p-p^{2})\) whenever \(p\leq\psi:=\frac{3-\sqrt{5}}{2}\), Gilmer claimed that his ideas could be extended to prove a fraction equal to \(\psi\). The authors of [1, 5, 18, 15] quickly implemented this approach. All four of these papers essentially reduced Conjecture 1 for the constant \(\psi\) to the following key lemma, an inequality in one variable. **Lemma 5**.: _Let \(\phi=\frac{\sqrt{5}+1}{2}\) and \(0\leq x\leq 1,\) then \(h(x^{2})\geq\phi xh(x).\)_ The validity of this lemma was established in two different ways by [1] and Sawin [18]. The former used accurate computer calculations and applied interval arithmetic on three intervals, while the latter utilized a purely calculus-based approach. Thanks to some communication between the authors of [1] and [5], in [5] a reference to the formal proof of [1] was added. In [15] the lemma was split into two parts without formal proof, but both can be verified easily. A short and more elegant proof of Lemma 5 was given later by Boppana [2], even though the proof itself originates from 1989. This proof relies on the following extension of the classical Rolle's theorem, which follows from observations in e.g. [12]. **Theorem 6**.: _Let \(f\) be a differentiable function on an interval \(I\). Let \(m(f)\) be the sum of multiplicities of the roots of \(f\) in \(I\). Then \(m(f^{\prime})\geq m(f)-1.\)_ By iterating the theorem three times, one finds \(m(f)\leq m(f^{\prime\prime\prime})+3\). Applying this result to the function \(f(x)=h(x^{2})-\phi xh(x)\) and counting the multiplicities of the roots \(0,\frac{1}{\phi}\) and \(1\) of \(f\), the conclusion that \(f\) is nonnegative on \([0,1]\) follows quickly. Once Lemma 5 is derived, the proof of Conjecture 1 for the constant \(\psi\) (instead of \(0.5\)) is rather short in each of the papers [1, 5, 15, 18], as indicated e.g. by the total length of the paper by Chase and Lovett [5]. Their work has three steps. First, they extended the analytic claim (Lemma 5) to the bivariate function \(f(x,y):=\frac{h(xy)}{h(x)y+h(y)x}.\) Next, they prove a strengthened inequality between the entropy of \(A\cup B\) and those of \(A\) and \(B\), for random variables \(A\) and \(B\) (not necessarily identically distributed) on \(\{0,1\}^{n}\) for which every bit is \(1\) with a bounded probability. Finally, they finish the proof of their slightly more general statement that holds for approximate union-closed families. The latter are families for which the union of two randomly drawn sets belongs to the family with high probability. One example which certifies the sharpness of their proof can be derived from \(\mathcal{F}_{1}+\mathcal{F}_{2}=\{A\mid A\in\mathcal{F}_{1}\lor A\in\mathcal{F}_{2}\}\) where \(\mathcal{F}_{1}=\binom{[n]}{\psi n+n^{2/3}}\) and \(\mathcal{F}_{2}=\binom{[n]}{\geq(1-\psi)n}\). For this, one needs to note that \(|\mathcal{F}_{1}|\gg|\mathcal{F}_{2}|\) and that the union of two (i.i.d. uniformly sampled) random sets from \(\mathcal{F}_{1}\) belongs with very high probability to \(\mathcal{F}_{2}\). The expected size of the union is slightly larger (with an additional term of the order \(n^{2/3}\), i.e. \(\Theta(n^{2/3})\)) than \(n-(1-\psi)^{2}n=(1-\psi)n\), and since the standard deviation of the size is \(O(n^{1/2})\), the union almost surely belongs to \(\mathcal{F}_{2}\) as well. The conclusion is still valid when replacing the term \(n^{2/3}\) by any function \(g(n)\) for which \(n\gg g(n)\gg n^{1/2}\).

Figure 2: An approximate union-closed family whose elements appear in at most a \(\psi+o(1)\) fraction

In a different direction, in his paper, Gilmer included some ideas for a full resolution of Conjecture 1, but some of these directions were immediately proven not to hold by Sawin and Ellis [18, 7].
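Before moving on, Lemma 5 itself is easy to check numerically; the following sketch (ours, not a substitute for the formal proofs above) verifies the inequality on a fine grid, with equality attained at the roots \(0\), \(\frac{1}{\phi}\) and \(1\) counted in Boppana's argument:

```python
import numpy as np

phi = (np.sqrt(5) + 1) / 2

def h(x):
    # binary entropy; the convention h(0) = h(1) = 0 is enforced by clipping
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -(x * np.log2(x) + (1 - x) * np.log2(1 - x))

x = np.linspace(0.0, 1.0, 100_001)
assert np.all(h(x**2) - phi * x * h(x) >= -1e-9)  # h(x^2) >= phi * x * h(x)
```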
## 4 Further refinements and extensions related to Gilmer's work

Sawin [18] gave a suggestion to improve the bound further, which, given the sharpness of the above form for union-closed families, may be considered surprising. Here the essence lies in a question stated purely in terms of probability distributions. His suggestion was worked out by Yu [20] and Cambie [4]. Yu [20] initially considered the approach in a slightly more general form and made a lower bound computable by restricting to the suggestion of Sawin and applying [1, Lem. 5] and the Krein-Milman theorem [13] to bound the support (number of values with nonzero probability) of a joint distribution by 4. A numerical computation then yields a bound equal to (roughly) 0.38234. In parallel, Cambie [4] found an upper bound for Sawin's approach which indicates that the improvement is far smaller than expected and hoped for. The construction is a discrete probability distribution with only two values having nonzero probability, with the values determined by a system of equations involving the entropy function. Additionally, he proved that this value is sharp, by first reducing the support to 3 elements, where one of the elements equals 1. The conclusion is then derived from the combination of 3-dimensional plots, a numerical minimization problem and a more precise solution for the case where the support has exactly two elements, one of which equals 1. Finally, building upon the work of [5], Yuster [21] considered families that are almost \(k\)-union-closed, meaning that the union of \(k\) independent uniform random sets from \(\mathcal{F}\) belongs to \(\mathcal{F}\) with high probability. He conjectured a tight version for the minimum frequency (the proportion of sets containing the element) of some element in such families, with the threshold for this frequency being the unique real root in \([0,1]\) of \((1-x)^{k}=x\), denoted by \(\psi_{k}\). To understand the sharpness of his conjecture and the intuition behind the choice of \(\psi_{k}\), consider the union of \(\mathcal{F}_{1}=\binom{[n]}{\psi_{k}n+n^{2/3}}\) and \(\mathcal{F}_{2}=\binom{[n]}{\geq(1-\psi_{k})n}\). If at least one set from \(\mathcal{F}_{2}\) is included among the \(k\) sets drawn, the union is guaranteed to belong to \(\mathcal{F}_{2}\). If all \(k\) sets belong to \(\mathcal{F}_{1}\), the expected size of the union is \(n-(1-\psi_{k})^{k}n+\Theta(n^{2/3})\), and since the standard deviation is \(O(n^{1/2})\), the union almost surely belongs to \(\mathcal{F}_{2}\) as well. The conjecture is proven to be true for \(k\leq 4\), while for larger values of \(k\) a weaker bound is established.
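To make \(\psi_{k}\) concrete, the following short sketch (ours) computes it by bisection; note that \(\psi_{2}=\psi=\frac{3-\sqrt{5}}{2}\), since \((1-x)^{2}=x\) has exactly this root in \([0,1]\):

```python
def psi(k, tol=1e-12):
    """Unique root of (1 - x)^k = x in [0, 1], found by bisection:
    f(x) = (1 - x)^k - x is strictly decreasing, f(0) = 1, f(1) = -1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (1 - mid)**k > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(psi(1), psi(2))  # 0.5 and (3 - sqrt(5))/2 ≈ 0.3819660...
```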
## 5 The final Eureka moment, not yet

When Scandone [19] uploaded a preprint claiming the full resolution of the union-closed conjecture, there was initially excitement. However, upon closer examination it became clear that Scandone's proposed solution had several issues, including a significant flaw that requires revising the underlying construction. This was communicated to Scandone by Terence Tao, and the details of this issue are briefly explained later in this section. Nevertheless, Scandone's underlying idea holds potential and is worth mentioning for the valuable intuition it provides for Gilmer's approach. Let \(\mathcal{F}\) be a family which is not union-closed, so \(\mathcal{F}\cup\mathcal{F}\neq\mathcal{F}\). A random variable taking values in \(\mathcal{F}\) has entropy at most \(\log_{2}\lvert\mathcal{F}\rvert\), and equality occurs only for uniform sampling from \(\mathcal{F}.\) By considering various examples, e.g. \(\mathcal{F}=\{\{1\},\{2\}\},\) the reader can verify that there is no strategy to choose two random variables \(A,B\) which sample sets from \(\mathcal{F}\), such that \(A\cup B\) samples uniformly at random from \(\mathcal{F}\cup\mathcal{F}\). On the other hand, if for every set \(A\in\mathcal{F}\) the probability of obtaining it is almost equal to the original probability, and a few other sets from \((\mathcal{F}\cup\mathcal{F})\backslash\mathcal{F}\) occur with a small probability, the entropy can increase. The reason for this is that the derivative of \(h\) (plotted in Figure 3) is a continuously decreasing function on the interval \((0,1)\), with \(h^{\prime}(0)=+\infty\). To provide a more explicit explanation of Scandone's idea, we describe his proposed construction in detail. Let \(A,B\) be independent random variables that take any set of \(\mathcal{F}\) uniformly at random. Define a \(\mathcal{P}([n])\)-valued random variable \(A^{\delta}\) (depending on \(\delta\)) through the relation \[\Pr[A^{\delta}=X]=(1-\delta)\Pr[A=X]+\delta\Pr[A\cup B=X]\text{ for every }X\subseteq[n].\] For every \(X\in\mathcal{F}\), \(\Pr[A^{\delta}=X]\geq(1-\delta)\Pr[A=X]\) and thus for \(\delta\) sufficiently small, we have \(h(\Pr[A^{\delta}=X])-h(\Pr[A=X])\gtrsim\delta/|\mathcal{F}|h^{\prime}(1/|\mathcal{F}|)\).4 On the other hand, for \(X\in(\mathcal{F}\cup\mathcal{F})\backslash\mathcal{F}\), let the probability \(p:=\Pr[A\cup B=X]\). We have that \(h(\delta p)\sim-\delta p(\log\delta+\log p-1)\). By choosing \(\delta\) to be sufficiently small such that \(-\log\delta\) is much greater than \(\frac{1}{p}h^{\prime}(1/|\mathcal{F}|)\), we can ensure that \(H(A^{\delta})>H(A)\) holds. Footnote 4: To be precise, we assume \(|\mathcal{F}|\geq 3\) and \(\frac{2}{|\mathcal{F}|}+\delta<1\). Equivalently, the variable \(A^{\delta}\) can be obtained by considering, in addition to \(A\) and \(B\), a Bernoulli random variable of parameter \(\delta\), \(Z_{\delta}\), which determines whether we take \(A\cup B\) or only \(A\). The flaw in the argument is that, in the process of revealing all the digits of \(A^{\delta}\) (computed using the chain rule for the entropy), the indeterminacy provided by \(Z_{\delta}\) (and the consequent improvement of the bounds) is lost after the first step. More precisely, there is a step in the computations in which a conditional probability distribution has been erroneously replaced by its expected value, and this produces the aforementioned flaw in the argument. The comment of Tao can be rephrased as follows: "the idea of modifying the union operation by Gilmer is promising, but a single global bit \(Z_{\delta}\) is not sufficient to do the job, and a more involved construction is needed".
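A toy computation (ours) of the perturbed law on the example \(\mathcal{F}=\{\{1\},\{2\}\}\) from above illustrates the intended entropy gain of the construction, keeping in mind that, per Tao's comment, the full argument does not go through:

```python
from itertools import product
from collections import Counter
from math import log2

def H(p):
    return -sum(q * log2(q) for q in p.values() if q > 0)

F = [frozenset({1}), frozenset({2})]
pA = Counter({S: 1 / len(F) for S in F})    # law of A, uniform on F
pU = Counter()                              # law of A ∪ B
for A, B in product(F, repeat=2):
    pU[A | B] += 1 / len(F)**2

delta = 0.1
pAd = Counter({X: (1 - delta) * pA[X] + delta * pU[X]
               for X in pA.keys() | pU.keys()})  # law of A^delta
print(H(pA), H(pAd))  # 1.0 and ≈1.236: the perturbation raises the entropy
```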
## 6 A better understanding by progress in a different direction

In this final section, we conclude with the essence of a recent paper and two preprints on the union-closed conjecture, which consider different aspects and angles of attack on Conjecture 1. While Frankl's conjecture is about the existence of one abundant element (an element that appears in at least half of the sets) in the family, it is also natural to wonder if there are more abundant elements, assuming that all sets in the family are sufficiently large. The following conjecture by Cui and Hu [6] would imply Conjecture 1. **Conjecture 7**.: _If \(\mathcal{F}\) is a finite union-closed family of sets whose smallest set is of size at least \(2\), then there are at least two elements such that each belongs to more than half of the sets of \(\mathcal{F}\)._ At the end of 2022, the three authors of [11] considered this different direction and proved that the conjecture of Cui and Hu [6] is not true when replacing \(2\) by a larger integer. They proved (among other results) that there are families all of whose sets have size at least \(k\), where \(k\) can be arbitrarily large, which have only \(2\) abundant elements. The main construction is the family \(\mathcal{P}_{4}^{12}\). The family \(\mathcal{P}_{4}^{12}\) consists of all subsets \(S\) of \(\{0,1,\ldots,11\}\) of size at least \(4\) such that either \(\{0,1\}\subset S\), or \(0\in S\) and \(S\subseteq\{0,2,\ldots,10\}\), or \(1\in S\) and \(S\subseteq\{1,3,\ldots,11\}\). The reader can verify that \(|\mathcal{P}_{4}^{12}|=(2^{10}-11)+2\cdot 16=1045\), while every element \(2\leq i\leq 11\) only appears \(2^{9}-1+11=522\) times. One way to increase the size of sets in families with non-abundant elements is to duplicate an element within the sets. However, this creates blocks of size at least \(2\). A block is defined by Poonen [16] as a maximal set of elements that all belong to exactly the same sets of a family. Poonen also noted that to prove Conjecture 1, it is sufficient to focus on families for which no block is a singleton.

Figure 3: Plot of the binary entropy function \(h\)

Due to this, it is interesting to note that the construction of the family \(\mathcal{P}_{4}^{12}\) in [11] can be extended to such families. Let \(k\geq 3\) be a fixed integer and let \(n\) be a sufficiently large even integer as a function of \(k\) (\(n\geq 10k\) works). Let \(E_{n}=\{i\in[n]\mid i\equiv 0\pmod{2}\}\) and \(O_{n}=\{i\in[n]\mid i\equiv 1\pmod{2}\}\) be the set of even and odd integers in \([n]\) respectively. Consider the family \(\mathcal{P}_{k}^{n}\) consisting of subsets \(S\) of \([n]\) of size at least \(k\), such that either

* \(\{1,2\}\subset S\),
* \(S\subset E_{n}\) and \(2\in S\), or
* \(S\subset O_{n}\) and \(1\in S\).

It is clear that \(1\) and \(2\) are abundant elements. The other elements all appear equally often (by symmetry), and by a small bijection and counting argument, we conclude that these elements are not abundant whenever \[\binom{n-3}{k-3}<2\binom{n/2-2}{\geq k-1}.\] Since this is the case for \(n\) sufficiently large, the conclusion is clear.
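The counts claimed for \(\mathcal{P}_{4}^{12}\) above are small enough to verify by brute force; the following sketch (ours) enumerates the family:

```python
from itertools import chain, combinations

evens, odds = set(range(0, 12, 2)), set(range(1, 12, 2))

def in_family(S):
    """Membership test for P_4^12 as defined above."""
    return len(S) >= 4 and ({0, 1} <= S
                            or (0 in S and S <= evens)
                            or (1 in S and S <= odds))

all_subsets = chain.from_iterable(combinations(range(12), r)
                                  for r in range(13))
P = [frozenset(S) for S in all_subsets if in_family(frozenset(S))]
print(len(P))                       # 1045
print(sum(1 for S in P if 2 in S))  # 522 < 1045/2, so 2 is not abundant
```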
Another result, related to union-closed families and the smallest set size, was published in early 2023. Ellis, Ivan and Leader [9] proved that for every \(k\in\mathbb{N}\), there exists a union-closed family in which the (unique) smallest set has size \(k\), but where each element of this set has frequency \((1+o(1))\frac{\log k}{2k}\). As such, they proved, in the strongest possible sense, that focusing on the smallest set cannot work. They also proposed the problem of verifying the union-closed conjecture for a family for which they were unable to verify the statement. The latter was verified by Pulaj and Wood [17]. They also proved new bounds on the least number \(m\) (given \(k\) and \(n\)) such that every union-closed family \(\mathcal{F}\) containing any \(\mathcal{A}\subseteq\binom{[n]}{k}\) with \(|\mathcal{A}|=m\) as a subfamily satisfies Conjecture 1. We can conclude that despite the progress originating from the breakthrough of Justin Gilmer, the exact version of Conjecture 1 is still not proven. Mathematicians are still thinking about other directions or modifications of the strategy and hope to resolve Conjecture 1 in the future. Taking into account that the improvement by taking combinations suggested by Sawin [18] turned out to be far smaller than expected and hoped for, as illustrated by the example in [4], it seems that the focus should go towards essentially new ideas. In particular, the union-closed conjecture might be a manifestation of a more general behaviour, namely that \(|\mathcal{F}\cup\mathcal{F}|>|\mathcal{F}|^{c}\) for some \(c(\varepsilon)>1\) when every element of \([n]\) appears in less than a \(\frac{1}{2}-\varepsilon\) fraction of the sets in \(\mathcal{F}\).5 Footnote 5: communicated by Zachary Chase Note added: In June 2023, Liu [14] improved the constant slightly with a different method of coupling. ## Acknowledgements We thank Zachary Chase, Justin Gilmer, Raffaele Scandone and Lei Yu for private communication while writing this manuscript.
2303.01281
Relative homological algebra for bivariant K-theory
This survey article on relative homological algebra in bivariant K-theory is mainly intended for readers with a background knowledge in triangulated categories. We briefly recall the general theory of relative homological algebra in triangulated categories and later specialize it to the non-equivariant and the equivariant bivariant K-theory, where the action on C*-algebras is by a finite cyclic group. We conclude with the explicit computation of the universal abelian invariant for separable C*-algebras with the action of $\mathbb{Z}/4$ by automorphisms.
George Nadareishvili
2023-03-02T14:05:59Z
http://arxiv.org/abs/2303.01281v1
# Relative homological algebra for bivariant K-theory ###### Abstract. This survey article on relative homological algebra in bivariant K-theory is mainly intended for readers with a background knowledge in triangulated categories. We briefly recall the general theory of relative homological algebra in triangulated categories and later specialize it to the non-equivariant and the equivariant bivariant K-theory, where the action on C*-algebras is by a finite cyclic group. We conclude with the explicit computation of the universal abelian invariant for separable C*-algebras with the action of \(\mathbb{Z}/4\) by automorphisms. Supported by Shota Rustaveli National Science Foundation of Georgia, FR-18-10849 George Nadareishvili, A. Razmadze Mathematical Institute, [email protected] ## 1. Introduction Given a locally compact Hausdorff space \(X\), one can consider the algebra of continuous functions from \(X\) to the complex numbers vanishing at infinity. This construction is functorial and induces a contravariant equivalence between the category of locally compact Hausdorff spaces and the category of commutative C*-algebras with appropriate morphisms (see for example [26]). Consequently, every property of a locally compact Hausdorff space can be expressed in terms of its function algebra, and such a formulation will then usually extend to any (noncommutative) C*-algebra. In this way, C*-algebra theory can successfully be regarded as "noncommutative topology". Similar to topological spaces, the key component of noncommutative topology is the study of C*-algebras using homology functors. However, in the case of C*-algebras, under mild assumptions, there is a universal object among such theories. In particular, by the work of Higson [12], there exists a universal category \(\mathfrak{K}\mathfrak{K}\), admitting a functor from the category of all separable C*-algebras, such that any stable, additive functor into an additive category that preserves split short exact sequences factors through \(\mathfrak{K}\mathfrak{K}\). Similar statements (with respect to appropriate versions of \(\mathfrak{K}\mathfrak{K}\)) hold if we endow C*-algebras with an additional structure. Initially defined by Gennadi Kasparov [15] using the functional analysis of Fredholm bimodules, these categories (also referred to as Kasparov categories, bivariant K-theories or KK-theories) are exhibited by their universal category-theoretic characterizations as an indispensable tool in the study of C*-algebras. Arising as homotopy categories, Kasparov categories are naturally triangulated [18]. The techniques of triangulated categories proved fruitful for bivariant K-theory, most notably in connection to classification programs [7, 8, 10, 19, 20, 22] and the Baum-Connes conjecture [18]. Classification in KK-theories usually involves applying the tools of relative homological algebra. It proceeds by functorially pushing computations from a triangulated domain to an abelian setting, where more tools are available to tackle a problem. Results of this nature include (but are not limited to) spectral sequences for derived functor computation, universal coefficient theorems and, consequently, subcategory classifications. In Section 2 we will recall the known facts from relative homological algebra in triangulated categories. The theory is exceptionally rich, as demonstrated when first explored by J. Daniel Christensen [4] and Apostolos Beligiannis [1].
In Section 3 we will overview the triangulated structure of equivariant bivariant K-theory and specialize the subject and techniques developed in Section 2. The relative homological algebra for bivariant K-theory was developed by Ralf Meyer and Ryszard Nest in [17, 19]. To demonstrate an example, we will conclude with an explicit computation of the universal abelian invariant for the KK-category of C*-algebras with the action of the cyclic group of order \(4\). ## 2. Relative homological algebra in triangulated categories The facts recalled in this section can be found in [19] and [1]. **Definition 2.1**.: An additive category \(\mathfrak{A}\) is called _stable_ if it is equipped with an automorphism \[\Sigma_{\mathfrak{A}}\colon\mathfrak{A}\to\mathfrak{A}.\] For example, given an abelian category \(\mathfrak{A}\), we can construct a \(\mathbb{Z}/k\)-graded (we also allow \(k=0\), by defining \(\mathbb{Z}/0:=\mathbb{Z}\)) stable abelian category \(\mathfrak{A}^{\mathbb{Z}/k}\), by considering the category product \(\mathfrak{A}^{\mathbb{Z}/k}=\prod_{i\in\mathbb{Z}/k}\mathfrak{A}\) and the suspension functor \(\Sigma_{\mathfrak{A}^{\mathbb{Z}/k}}\) that shifts the \(i\)th component of objects (and morphisms) one place to the left. **Definition 2.2**.: _A stable homological functor_ is a homological functor into a stable abelian category that commutes with the suspension functor. For example, to a homological functor \(H\colon\mathfrak{T}\to\mathfrak{A}\) from some triangulated category \(\mathfrak{T}\) into an abelian category \(\mathfrak{A}\), we can associate a stable homological functor \(H_{*}\colon\mathfrak{T}\to\mathfrak{A}^{\mathbb{Z}/k}\) by defining \[H_{*}(A)_{i}=H_{i}(A):=H(\Sigma_{\mathfrak{T}}^{-i}A) \tag{2.1}\] for the \(i\)th component of the object \(H_{*}(A)\in\mathfrak{A}^{\mathbb{Z}/k}\). Homological algebra in the non-abelian setting is always relative; that is, one needs an additional structure to get started. For triangulated categories, this structure can be given by fixing a class of morphisms. **Definition 2.3**.: A collection of subgroups \(\mathfrak{I}(A,B)\subseteq\mathfrak{T}(A,B)\) for all pairs of objects \(A,B\) in a triangulated category \(\mathfrak{T}\), such that \[\mathfrak{T}(C,D)\circ\mathfrak{I}(B,C)\circ\mathfrak{T}(A,B)\subseteq \mathfrak{I}(A,D),\] for all \(A,B,C,D\in\mathfrak{T}\), is called an _ideal_\(\mathfrak{I}\) in \(\mathfrak{T}\). _Remark 2.4_.: Equivalently, we can think of \(\mathfrak{I}\) as fixing the subclass of exact triangles \[A\to B\to C\xrightarrow{f}\Sigma A\] with \(f\in\mathfrak{I}(C,\Sigma A)\). These are called _pure_ triangles in [1] (\(\mathfrak{I}\)-exact triangles below). For readers familiar with extriangulated categories, it should be remarked that under mild assumptions, pure triangles constitute an extriangulated subcategory, when one views \(\mathfrak{T}\) as an extriangulated category itself [13]. As an example of an ideal, consider a homological functor \(H\colon\mathfrak{T}\to\mathfrak{A}\) into an abelian category \(\mathfrak{A}.\) Then the kernel of \(H\) on morphisms \[\ker H(A,B)=\{f\in\mathfrak{T}(A,B)\mid H(f)=0\}\] clearly defines an ideal in \(\mathfrak{T}\). **Definition 2.5**.: An ideal \(\mathfrak{I}\) is called _homological_ if it is the kernel of a stable homological functor. Note that different functors can give rise to the same homological ideal by sharing a kernel. However, the resulting homological algebra will only depend on the ideal itself. We will only deal with homological ideals.
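As a guiding example, anticipating the specialization in Section 3 (and with notation \(\mathfrak{A}\mathfrak{b}^{\mathbb{Z}/2}\) for \(\mathbb{Z}/2\)-graded abelian groups, which is ours): by Bott periodicity, topological K-theory becomes a \(\mathbb{Z}/2\)-graded stable homological functor \(\mathrm{K}_{*}\colon\mathfrak{K}\mathfrak{K}\to\mathfrak{A}\mathfrak{b}^{\mathbb{Z}/2}\) as in (2.1) on Kasparov's category \(\mathfrak{K}\mathfrak{K}\). Its kernel \[\mathfrak{I}_{\mathrm{K}}(A,B)=\{f\in\mathfrak{K}\mathfrak{K}(A,B)\mid\mathrm{K}_{*}(f)=0\}\] is then the prototypical homological ideal in the sense of Definition 2.5.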
As elements of a kernel, the morphisms in \(\mathfrak{I}\) can be thought of as vanishing relative to \(\mathfrak{I}\) in \(\mathfrak{T}\). An application of the homological functor with \(\ker H=\mathfrak{I}\) explains the following terminology. **Definition 2.6**.: We call the exact triangle \[A\xrightarrow{f}B\xrightarrow{g}C\xrightarrow{h}\Sigma A\] \(\mathfrak{I}\)_-exact_ in \(\mathfrak{T}\), if \(h\in\mathfrak{I}(C,\Sigma A)\). In this situation, \(f\) is called \(\mathfrak{I}\)_-monic_ and \(g\) is called \(\mathfrak{I}\)_-epic_. Let \(\mathfrak{I}=\ker H\) be a homological ideal. **Definition 2.7**.: We say that a chain complex \(C_{\bullet}=(C_{n},d_{n})\) in \(\mathfrak{T}\) is \(\mathfrak{I}\)_-exact_ in degree \(n\) if \[H(C_{n+1})\xrightarrow{H(d_{n+1})}H(C_{n})\xrightarrow{H(d_{n})}H(C_{n-1})\] is exact at \(H(C_{n})\). We call a chain complex \(\mathfrak{I}\)_-exact_ if it is \(\mathfrak{I}\)-exact in every degree. ### Relative projective objects **Definition 2.8**.: A homological functor \(H\colon\mathfrak{T}\to\mathfrak{A}\) is called \(\mathfrak{I}\)_-exact_ if \(\mathfrak{I}\subseteq\ker H\); that is, \(H(f)=0\) for all \(f\in\mathfrak{I}(A,B)\). As for abelian categories, we define **Definition 2.9**.: An object \(A\in\mathfrak{T}\) is called \(\mathfrak{I}\)_-projective_ if the functor \(\mathfrak{T}(A,-)\) is \(\mathfrak{I}\)-exact. Let \(\mathfrak{P}_{\mathfrak{I}}\) denote the full subcategory of \(\mathfrak{I}\)-projective objects in \(\mathfrak{T}\). \(\mathfrak{P}_{\mathfrak{I}}\) is closed under suspensions, desuspensions, retracts and whatever coproducts exist in \(\mathfrak{T}\). Having candidates for projective objects, we continue by constructing homological algebra in analogy with the abelian case. **Definition 2.10**.: Let \(\mathfrak{I}\) be a homological ideal in \(\mathfrak{T}\) and \(A\in\mathfrak{T}\). We say that \(\pi\colon P\to A\) is a _one-step \(\mathfrak{I}\)-projective resolution_ if \(\pi\) is \(\mathfrak{I}\)-epic and \(P\in\mathfrak{P}_{\mathfrak{I}}\). An \(\mathfrak{I}\)_-projective resolution_ of \(A\) is an \(\mathfrak{I}\)-exact chain complex \[\cdots\to P_{n}\to P_{n-1}\to\cdots\to P_{0}\to A\] with \(P_{n}\in\mathfrak{P}_{\mathfrak{I}}\) for all \(n\in\mathbb{N}\). We will say that there are _enough \(\mathfrak{I}\)-projective objects_ in \(\mathfrak{T}\) if every object \(A\in\mathfrak{T}\) has a one-step \(\mathfrak{I}\)-projective resolution. Relative projective objects enjoy properties similar to projective objects in an abelian category. **Proposition 2.11** (Meyer-Nest [19, Proposition 3.26]).: _Every object in \(\mathfrak{T}\) has an \(\mathfrak{I}\)-projective resolution if and only if \(\mathfrak{T}\) has enough \(\mathfrak{I}\)-projective objects._ _Any map between objects of \(\mathfrak{T}\) can be lifted to a chain map between \(\mathfrak{I}\)-projective resolutions of these objects and this lifting is unique up to chain homotopy. Two \(\mathfrak{I}\)-projective resolutions of the same object are chain homotopy equivalent._ This allows us to define derived functors as in the classical case. Let \(F\colon\mathfrak{T}\to\mathfrak{A}\) be an additive functor into an abelian category \(\mathfrak{A}\). Denote by \(\mathfrak{H}\mathfrak{o}(\mathfrak{T})\) and \(\mathfrak{H}\mathfrak{o}(\mathfrak{A})\) the categories of chain complexes up to chain homotopy of \(\mathfrak{T}\) and \(\mathfrak{A}\), respectively.
Applying \(F\) termwise to a chain complex induces a functor \(\mathfrak{H}\mathfrak{o}(F)\colon\mathfrak{H}\mathfrak{o}(\mathfrak{T})\to \mathfrak{H}\mathfrak{o}(\mathfrak{A}).\) By Proposition 2.11, the construction of projective resolutions defines a functor \(P\colon\mathfrak{T}\to\mathfrak{H}\mathfrak{o}(\mathfrak{T})\). Denote by \(\operatorname{H}_{n}\colon\mathfrak{H}\mathfrak{o}(\mathfrak{A})\to\mathfrak{ A}\) the \(n\)th homology functor for \(n\in\mathbb{N}\). **Definition 2.12**.: For an additive functor \(F\colon\mathfrak{T}\to\mathfrak{A}\), the composite \[\mathbb{L}_{n}F\colon\mathfrak{T}\xrightarrow{P}\mathfrak{H}\mathfrak{o}( \mathfrak{T})\xrightarrow{\mathfrak{H}\mathfrak{o}(F)}\mathfrak{H}\mathfrak{o }(\mathfrak{A})\xrightarrow{\operatorname{H}_{n}}\mathfrak{A}\] is called the _\(n\)th left derived functor of \(F\)_. If \(F\colon\mathfrak{T}^{\operatorname{op}}\to\mathfrak{A}\) is now a contravariant additive functor, the composite \[\mathbb{R}^{n}F\colon\mathfrak{T}^{\operatorname{op}}\xrightarrow{P} \mathfrak{H}\mathfrak{o}(\mathfrak{T})\xrightarrow{\mathfrak{H}\mathfrak{o}(F )}\mathfrak{H}\mathfrak{o}(\mathfrak{A})\xrightarrow{\operatorname{H}^{n}} \mathfrak{A}\] is called the _\(n\)th right derived functor of \(F\)_. Relative derived functors share many properties with their abelian counterparts. There is even a spectral sequence relating a homological functor and its relative derived functors. We are not going to discuss this general construction. We only recall the hereditary case of the Universal Coefficient Theorem, where the spectral sequence collapses to a short exact sequence. Denote by \(\langle\mathfrak{P}_{\mathfrak{I}}\rangle\subseteq\mathfrak{T}\) the subcategory generated by the \(\mathfrak{I}\)-projective objects \(\mathfrak{P}_{\mathfrak{I}}\); that is, the smallest triangulated subcategory containing \(\mathfrak{P}_{\mathfrak{I}}\) and closed under whatever coproducts exist in \(\mathfrak{T}\). **Theorem 2.13** (Meyer-Nest [19, Theorem 4.4]).: _Let \(\mathfrak{I}\) be a homological ideal in a triangulated category \(\mathfrak{T}\). Let \(A\in\mathfrak{T}\) have an \(\mathfrak{I}\)-projective resolution of length one. Suppose also that \(A\in\langle\mathfrak{P}_{\mathfrak{I}}\rangle.\) Let \(F\colon\mathfrak{T}\to\mathfrak{A}\) be a homological functor and \(\tilde{F}\colon\mathfrak{T}^{\operatorname{op}}\to\mathfrak{A}\) a cohomological functor. Then there are natural short exact sequences_ \[0\to\mathbb{L}_{0}F(A)\to F(A)\to\mathbb{L}_{1}F(\Sigma A)\to 0\] \[0\to\mathbb{R}^{1}\tilde{F}(\Sigma A)\to\tilde{F}(A)\to\mathbb{R}^{0}\tilde{ F}(A)\to 0\] _Remark 2.14_.: If, for example, we take \(\tilde{F}=\mathfrak{T}(-,B)\) for any \(B\in\mathfrak{T}\) and denote \(\operatorname{Ext}^{n}_{\mathfrak{T},\mathfrak{I}}:=\mathbb{R}^{n}\tilde{F}\), then, under the assumptions of Theorem 2.13, we get the short exact sequence \[0\to\operatorname{Ext}^{1}_{\mathfrak{T},\mathfrak{I}}(\Sigma A,B)\to \mathfrak{T}(A,B)\to\operatorname{Ext}^{0}_{\mathfrak{T},\mathfrak{I}}(A,B)\to 0. \tag{2.2}\] ### The universal \(\mathfrak{I}\)-exact functor By the classical construction of Peter Freyd [11], any triangulated category admits a universal homological functor \(U\) into an abelian category of finitely presented functors, such that any other homological functor factors through \(U\) with an exact functor (unique up to natural isomorphism) between abelian categories. Notably, the relative version of this statement is also true. 
**Definition 2.15**.: An \(\mathfrak{I}\)-exact homological functor \(F\) is _universal_ if any other \(\mathfrak{I}\)-exact homological functor \(G\colon\mathfrak{T}\to\mathfrak{A}^{\prime}\) factors as \(G=\bar{G}\circ F\) with an exact functor \(\bar{G}\colon\mathfrak{A}\to\mathfrak{A}^{\prime}\) that is unique up to natural isomorphism. **Theorem 2.16** (Beligiannis [1, Section 3]).: _For every homological ideal \(\mathfrak{I}\) in a triangulated category \(\mathfrak{T}\), there exist an abelian category \(\mathcal{A}_{\mathfrak{I}}(\mathfrak{T})\) and a universal \(\mathfrak{I}\)-exact stable homological functor \(F\colon\mathfrak{T}\to\mathcal{A}_{\mathfrak{I}}(\mathfrak{T})\)._ Here, if no set-theoretic issues arise, the category \(\mathcal{A}_{\mathfrak{I}}(\mathfrak{T})\) can be obtained by localizing the category of finitely presented functors at a suitable Serre subcategory, quotienting out everything coming from the ideal \(\mathfrak{I}\). Under mild assumptions, the universal \(\mathfrak{I}\)-exact homological functor identifies the homological algebra in the target abelian category with the \(\mathfrak{I}\)-relative homological algebra in the domain triangulated category and allows the computation of relative derived functors using derived functors in the universal abelian category. More precisely, the following results hold. **Theorem 2.17** (Beligiannis [1, Proposition 4.19]).: _Let \(\mathfrak{I}\) be a homological ideal in a triangulated category \(\mathfrak{T}\) and let \(F\colon\mathfrak{T}\to\mathfrak{A}\) be a universal \(\mathfrak{I}\)-exact homological functor into an abelian category \(\mathfrak{A}\). Suppose that idempotent morphisms in \(\mathfrak{T}\) split and that there are enough \(\mathfrak{I}\)-projective objects in \(\mathfrak{T}\). Then there are enough projective objects in \(\mathfrak{A}\), and \(F\) induces an equivalence between the full subcategories of \(\mathfrak{I}\)-projective objects in \(\mathfrak{T}\) and of projective objects in \(\mathfrak{A}\)._ **Theorem 2.18** (Meyer-Nest [19, Theorem 3.41]).: _Let \(\mathfrak{I}\) be a homological ideal in a triangulated category \(\mathfrak{T}\) and let \(F\colon\mathfrak{T}\to\mathfrak{A}\) be a universal \(\mathfrak{I}\)-exact (stable) homological functor into an abelian category \(\mathfrak{A}\). Suppose that idempotent morphisms in \(\mathfrak{T}\) split and that there are enough \(\mathfrak{I}\)-projective objects in \(\mathfrak{T}\). If \(G\colon\mathfrak{T}\to\mathfrak{A}^{\prime}\) is any (stable) homological functor, then there is a unique right exact (stable) functor \(\bar{G}\colon\mathfrak{A}\to\mathfrak{A}^{\prime}\) such that \(\bar{G}\circ F(P)=G(P)\) for all \(P\in\mathfrak{P}_{\mathfrak{I}}\)._ _The left derived functors of \(G\) with respect to \(\mathfrak{I}\) and of \(\bar{G}\) are related by natural isomorphisms \(\mathbb{L}_{n}\bar{G}\circ F(A)=\mathbb{L}_{n}G(A)\) for all \(A\in\mathfrak{T}\) and \(n\in\mathbb{N}\)._ 
_There is a similar statement for cohomological functors, which specializes to natural isomorphisms_ \[\operatorname{Ext}^{n}_{\mathfrak{T},\mathfrak{I}}(A,B)\cong\operatorname{ Ext}^{n}_{\mathfrak{A}}(F(A),F(B)).\] _Remark 2.19_.: In light of Theorem 2.18, once we have a universal \(\mathfrak{I}\)-exact homological functor \(F\colon\mathfrak{T}\to\mathfrak{A}\), under the assumptions of Theorem 2.13, the exact sequence (2.2) takes the form \[0\to\operatorname{Ext}^{1}_{\mathfrak{A}}(F(\Sigma A),F(B))\to\mathfrak{T}(A,B)\to\operatorname{Hom}_{\mathfrak{A}}(F(A),F(B))\to 0.\] ### An example of the universal \(\mathfrak{I}\)-exact functor Now, we construct the example of the universal \(\mathfrak{I}\)-exact stable homological functor that is most relevant for us. Fix an at most countable set of objects \(\mathcal{C}\) in a triangulated category \(\mathfrak{T}\) with countable coproducts. Denote by \(\mathfrak{I}_{\mathcal{C}}\) the homological ideal defined as the kernel of the functor \[F_{\mathcal{C}}\colon\mathfrak{T}\to\prod_{C\in\mathcal{C}}\mathfrak{Ab}^{ \mathbb{Z}},\qquad A\mapsto\big{(}\mathfrak{T}(C,A)\big{)}_{C\in\mathcal{C}}.\] Assume that \(F_{\mathcal{C}}(A)\) is countable for all \(A\in\mathfrak{T}\). Let \(\mathfrak{C}\) denote the \(\mathbb{Z}\)-graded pre-additive full subcategory of \(\mathfrak{T}\) on the objects \(\mathcal{C}\). Denote by \(\mathfrak{Mod}(\mathfrak{C}^{\mathrm{op}})_{\mathrm{c}}\) the category of functors with countable values \(\mathrm{Funct}(\mathfrak{C}^{\mathrm{op}},\mathfrak{Ab}^{\mathbb{Z}})_{ \mathrm{c}}\), or equivalently the category of countable graded right modules over the category ring \(R^{\mathfrak{C}}\) of \(\mathfrak{C}\). Then the enrichment of \(F_{\mathcal{C}}\) to the functor \[F_{\mathfrak{C}}\colon\mathfrak{T}\to\mathfrak{Mod}(\mathfrak{C}^{\mathrm{op}})_{\mathrm{c}},\] with the right \(\mathfrak{C}\)-module structure on \(\big{(}\mathfrak{T}(C,A)\big{)}_{C\in\mathfrak{C}}\) coming from composition of morphisms in \(\mathfrak{T}\), is the universal \(\mathfrak{I}_{\mathcal{C}}\)-exact stable homological functor [20, Theorem 4.4]. ## 3. Universal invariants for bivariant K-theory In what follows, we will show two examples of applications of relative homological algebra to noncommutative topology. Both examples are of KK-theory viewed as a category, one with extra structure. KK-theory is a joint generalization of topological K-theory and K-homology for noncommutative spaces. However, keeping the audience in mind, we will define the category in question rather unconventionally, using the universal property mentioned in the introduction and not the more traditional Fredholm bimodule picture. **Definition 3.1**.: A _C*-algebra_\(A\) is a Banach algebra over the complex numbers together with a conjugate-linear map \({}^{*}\colon A\to A\), called the involution, such that \((ab)^{*}=b^{*}a^{*}\), \((a^{*})^{*}=a\), and \(\|a^{*}a\|=\|a^{*}\|\|a\|\) for all \(a,b\in A\). A ring homomorphism of C*-algebras that also preserves the involution is called a *-homomorphism. We denote by \(\mathfrak{C}^{*}\mathfrak{alg}\) the category with C*-algebras as objects and *-homomorphisms as arrows. Let \(G\) be a locally compact group. **Definition 3.2**.: A _\(G\)-C*-algebra_ is a C*-algebra \(A\) together with a strongly continuous representation of \(G\) by *-automorphisms \(G\to\mathrm{Aut}(A)\). By \(G\)-\(\mathfrak{C}^{*}\mathfrak{alg}\) we denote the category with \(G\)-C*-algebras as objects and \(G\)-equivariant *-homomorphisms as arrows. 
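The basic commutative example: for a locally compact space \(X\) equipped with a continuous \(G\)-action, the algebra \(C_{0}(X)\) of continuous functions vanishing at infinity becomes a \(G\)-C*-algebra under the action \[(g\cdot f)(x):=f(g^{-1}x)\qquad\text{ for }f\in C_{0}(X),\ g\in G,\ x\in X;\] the same formula for the action appears in Definition 3.13 below. 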
Whenever \(G\) is the trivial group, we recover the category \(\mathfrak{C}^{*}\mathfrak{alg}\), so it is sufficient to state definitions for the equivariant case only. For basic properties of the categories of C*-algebras we refer to [5] and [16]. As for rings, an _extension_ of \(G\)-C*-algebras is a diagram isomorphic to \(I\to A\to A/I\) in \(G\)-\(\mathfrak{C}^{*}\mathfrak{alg}\) for some \(G\)-invariant ideal \(I\) in a \(G\)-C*-algebra \(A\). We call an extension _split_ if it has a section in \(G\)-\(\mathfrak{C}^{*}\mathfrak{alg}\). Let \(\mathfrak{A}\) denote an exact category. If \(\mathfrak{A}\) is only additive to begin with, we can endow it with the trivial exact category structure in which all extensions are split. **Definition 3.3**.: A functor \(F\colon G\)-\(\mathfrak{C}^{*}\mathfrak{alg}\to\mathfrak{A}\) is called _split exact_ if for any split extension \(A\xrightarrow{i}B\xrightarrow{p}C\) with section \(s\colon C\to B\), the map \(\big{(}F(i),F(s)\big{)}\colon F(A)\oplus F(C)\to F(B)\) is an isomorphism. We will also use the notion of C*-stability. For this, we need a monoidal structure on \(G\text{-}\mathfrak{C}^{*}\mathfrak{alg}\); nevertheless, we will not define it here. We only note that there are two reasonable ways to complete the algebraic tensor product of \(G\)-C*-algebras \(A\) and \(B\), called the _minimal_ and _maximal_ tensor products, respectively. However, for a large class of C*-algebras these two constructions coincide; we will use the notation \(A\otimes B\) whenever this is the case (for details see [21] and [24]). The definition of C*-stability is more intuitive in the non-equivariant setting. **Definition 3.4**.: For a rank-one projection \(p\in\mathbb{K}(\ell^{2}\mathbb{N})\) in the algebra of compact operators on \(\ell^{2}\mathbb{N}\), the embedding \(i\colon A\to A\otimes\mathbb{K}(\ell^{2}\mathbb{N})\) given by \(i(a)=a\otimes p\) is called a _corner embedding_ of \(A\). A functor \(F\colon\mathfrak{C}^{*}\mathfrak{alg}\to\mathfrak{A}\) is called _C*-stable_ if any corner embedding induces an isomorphism \(F(A)\cong F\big{(}A\otimes\mathbb{K}(\ell^{2}\mathbb{N})\big{)}\). The appropriate generalization of C*-stability to the equivariant case is the following: **Definition 3.5**.: A functor \(F\colon G\text{-}\mathfrak{C}^{*}\mathfrak{alg}\to\mathfrak{A}\) is _C*-stable_ if the canonical embeddings \(\mathcal{H}_{1}\to\mathcal{H}_{1}\oplus\mathcal{H}_{2}\leftarrow\mathcal{H}_{2}\) of any non-zero \(G\)-Hilbert spaces induce isomorphisms \[F\big{(}A\otimes\mathbb{K}(\mathcal{H}_{1})\big{)}\xrightarrow{\cong}F\big{(} A\otimes\mathbb{K}(\mathcal{H}_{1}\oplus\mathcal{H}_{2})\big{)}\stackrel{{ \cong}}{{\leftarrow}}F\big{(}A\otimes\mathbb{K}(\mathcal{H}_{2})\big{)}.\] Now we are ready to give a universal category-theoretic definition of KK-theory. For technical reasons, we will restrict attention to the full subcategory \(G\text{-}\mathfrak{C}^{*}\mathfrak{sep}\subseteq G\text{-}\mathfrak{C}^{*} \mathfrak{alg}\) of _separable_\(G\)-C*-algebras: equivariant C*-algebras with a countable dense subset. **Theorem 3.6** (Higson [12, Theorem 4.5]).: _There exist an additive category \(\mathfrak{K}\mathfrak{K}^{G}\) and a universal split-exact C*-stable functor \(\operatorname{KK}^{G}\colon G\text{-}\mathfrak{C}^{*}\mathfrak{sep}\to \mathfrak{K}\mathfrak{K}^{G}\)._ In other words, any C*-stable and split-exact functor \(F\colon G\text{-}\mathfrak{C}^{*}\mathfrak{sep}\to\mathfrak{A}\) factors uniquely through \(\mathfrak{K}\mathfrak{K}^{G}\). 
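The motivating instance of this universal property is topological K-theory: in the non-equivariant case, \(\mathrm{K}_{0}\) is split exact and C*-stable, and hence factors (through an essentially unique functor \(\overline{\mathrm{K}_{0}}\)) as \[\mathrm{K}_{0}\colon\mathfrak{C}^{*}\mathfrak{sep}\xrightarrow{\ \operatorname{KK}\ }\mathfrak{K}\mathfrak{K}\xrightarrow{\ \overline{\mathrm{K}_{0}}\ }\mathfrak{A}\mathfrak{b}.\] This factorization underlies the isomorphism (3.1) below. 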
_Remark 3.7_.: Of course, the existence and the universal property define \(\mathfrak{K}\mathfrak{K}^{G}\) up to equivalence of categories. We will not need the original, admittedly more practical, definition of \(\mathfrak{K}\mathfrak{K}^{G}\). We only state that the objects of \(\mathfrak{K}\mathfrak{K}^{G}\) are the same as those of \(G\text{-}\mathfrak{C}^{*}\mathfrak{sep}\), namely the separable \(G\)-C*-algebras, and the morphisms are described concretely as the \(G\)-equivariant Kasparov groups \(\mathfrak{K}\mathfrak{K}^{G}(A,B)=\operatorname{KK}^{G}_{0}(A,B)\) for \(A,B\in G\text{-}\mathfrak{C}^{*}\mathfrak{sep}\). The composition is given by the so-called _Kasparov product_. We direct the reader interested in learning more about KK-theory to textbook sources like [2] or the original paper by Genadi Kasparov [15]. ### Triangulated structure As already stated, the category \(\mathfrak{K}\mathfrak{K}^{G}\) is additive. The coproduct is given by the direct sum of \(G\)-C*-algebras. Consider the functor \[\Sigma\colon\mathfrak{K}\mathfrak{K}^{G}\to\mathfrak{K}\mathfrak{K}^{G} \qquad A\mapsto C_{0}(\mathbb{R})\otimes A.\] As a consequence of a remarkable theorem by Raoul Bott, \(\operatorname{KK}^{G}\) (and thus any C*-stable and split exact functor) satisfies Bott periodicity; that is, in \(\mathfrak{K}\mathfrak{K}^{G}\) there are natural isomorphisms \(\Sigma^{2}(A)\cong A\) for all \(A\in\mathfrak{K}\mathfrak{K}^{G}\). Therefore \(\Sigma\) is an automorphism up to a natural isomorphism, and thus \(\mathfrak{K}\mathfrak{K}^{G}\) is stable. We refer to \(\Sigma\) as the suspension. Let now \(A\to B\to C\) be an extension of \(G\)-C*-algebras. This extension is called _cp-split_ if there is a \(G\)-equivariant, completely positive (see [5]), contractive section \(C\to B\). In analogy to topological spaces, the _cone_ of a morphism \(A\xrightarrow{f}B\) between \(G\)-C*-algebras is defined as \[\operatorname{cone}(f):=\{(a,b)\in A\times C_{0}((0,1],B)\mid f(a)=b(1)\}.\] For every cp-split extension \(A\to B\to C\), with \(A,B,C\) separable \(G\)-C*-algebras, there is a unique \(G\)-equivariant map \(\Sigma C\to A\) and an isomorphism \(A\xrightarrow{\cong}\operatorname{cone}(B\to C)\) in \(\mathfrak{K}\mathfrak{K}^{G}\) identifying the resulting diagram \(\Sigma C\to A\to B\to C\) with the mapping cone triangle of \(B\to C\). The diagram \(\Sigma C\to A\to B\to C\) is called the \(G\)-equivariant _extension triangle_ of the cp-split extension \(A\to B\to C\). Now, declare a \(4\)-term diagram in \(\mathfrak{K}\mathfrak{K}^{G}\) to be an exact triangle if it is isomorphic to the extension triangle of some cp-split extension. This way \(\mathfrak{K}\mathfrak{K}^{G}\) becomes a triangulated category. However, we are faced with a notation problem. As constructed, mapping cone triangles have the form \[\Sigma C\to\operatorname{cone}(f)\to B\xrightarrow{f}C,\] so the arrows point in the opposite direction to the established conventions of triangulated categories. This is explained by the fact that the functor \(X\mapsto C_{0}(X)\) from locally compact spaces to C*-algebras is contravariant. Nevertheless, this is not really a problem, as in general \(\mathfrak{T}^{\operatorname{op}}\) inherits a triangulated structure from \(\mathfrak{T}\) canonically, and moreover \(\Sigma\cong\Sigma^{-1}\) in our case. More details on the triangulated structure of \(\mathfrak{K}\mathfrak{K}^{G}\) can be found in [18]. _Remark 3.8_.: \(\mathfrak{K}\mathfrak{K}^{G}\) only has countable coproducts. 
So, in the following subsections, when we talk about localizing subcategories, we really mean localizing\({}_{\aleph_{1}}\) subcategories as in [6]; that is, triangulated subcategories closed under countable coproducts. ### The bootstrap class example We start with the observation that morphism sets in \(\mathfrak{K}\mathfrak{K}^{G}\) are closely related to K-theory. For a compact group \(G\), denote by \(\operatorname{K}_{0}^{G}\) the \(G\)-equivariant topological K-theory. Generalizing the Atiyah-Segal \(G\)-equivariant vector bundle K-cohomology of topological spaces, for any \(G\)-C*-algebra \(A\) there is a natural isomorphism [2] \[\mathfrak{K}\mathfrak{K}^{G}(\mathbb{C},A)=\operatorname{KK}_{0}^{G}( \mathbb{C},A)\cong\operatorname{K}_{0}^{G}(A). \tag{3.1}\] Thus, a natural challenge of noncommutative topology is to compute KK groups using K-theoretic invariants. Probably the most famous result in this context is the Universal Coefficient Theorem by Rosenberg and Schochet [23]. We will describe this theorem through relative homological algebra. Following (2.1), we define **Definition 3.9**.: \[\operatorname{KK}_{n}^{G}(A,B):=\operatorname{KK}_{0}^{G}(A,\Sigma^{n}(B))\] Therefore, also \(\mathrm{K}^{G}_{n}(B):=\mathrm{K}^{G}_{0}(\Sigma^{n}(B))\) by (3.1). Since \(\Sigma^{2}\cong\mathrm{Id}\), we get a \(\mathbb{Z}/2\)-graded abelian theory in both cases. For the rest of this subsection, we assume that \(G\) is trivial. So, the situation is non-equivariant, and we simply write \(\mathfrak{K}\mathfrak{K}\) for the universal split-exact, C*-stable category for separable C*-algebras. To proceed, we need to restrict to a smaller subcategory of C*-algebras in \(\mathfrak{K}\mathfrak{K}\). **Definition 3.10**.: The _bootstrap class_\(\mathfrak{B}\subset\mathfrak{K}\mathfrak{K}\) is the localizing triangulated subcategory of \(\mathfrak{K}\mathfrak{K}\) generated by the object \(\mathbb{C}\in\mathfrak{K}\mathfrak{K}\); that is, \(\mathfrak{B}=\langle\mathbb{C}\rangle\). An equivalent way to characterize the bootstrap class is as the class of all separable C*-algebras that are isomorphic in \(\mathfrak{K}\mathfrak{K}\) to commutative C*-algebras. The class is large, and most separable C*-algebras that an operator algebraist encounters in daily work are in fact in \(\mathfrak{B}\). Now, fix the generator, the complex numbers, as the single object \(\mathbb{C}\in\mathfrak{B}\) and consider the \(\mathbb{Z}/2\)-graded representable functor \[\mathrm{KK}_{*}(\mathbb{C},B)=\big{(}\mathrm{KK}_{0}(\mathbb{C},B),\mathrm{KK }_{1}(\mathbb{C},B)\big{)}\cong\big{(}\mathrm{K}_{0}(B),\mathrm{K}_{1}(B) \big{)}=\mathrm{K}_{*}(B)\] into \(\mathfrak{A}\mathfrak{b}_{\mathrm{c}}^{\mathbb{Z}/2}\), the category of \(\mathbb{Z}/2\)-graded countable abelian groups with degree-preserving homomorphisms. Here the countability condition comes from the fact that we are only considering separable C*-algebras. In the notation of Subsection 2.3, we took \(\mathfrak{C}=\{\mathbb{C}\}\), and thus \(\mathfrak{Mod}_{\mathrm{c}}^{\mathbb{Z}/2}(\mathfrak{C}^{\mathrm{op}})\cong \mathfrak{A}\mathfrak{b}_{\mathrm{c}}^{\mathbb{Z}/2}\) by definition. Therefore, we are left with the K-theory functor \[\mathrm{K}_{*}\colon\mathfrak{B}\longrightarrow\mathfrak{A}\mathfrak{b}_{ \mathrm{c}}^{\mathbb{Z}/2}\qquad A\mapsto\mathrm{KK}_{*}(\mathbb{C},A)\] which is the universal stable \(\ker\mathrm{K}_{*}\)-exact functor. 
Therefore, by Theorem 2.18, the relative derived functors can be computed using honest derived functors in the hereditary abelian category \(\mathfrak{A}\mathfrak{b}_{\mathrm{c}}^{\mathbb{Z}/2}\). So, by Theorem 2.13 and Remark 2.19, we derive the celebrated Universal Coefficient Theorem: **Theorem 3.11** (Rosenberg-Schochet [23]).: _Let \(A\) be a separable \(\mathrm{C}^{*}\)-algebra with \(A\in\mathfrak{B}\). Then there is a short exact sequence of \(\mathbb{Z}/2\)-graded abelian groups_ \[0\to\mathrm{Ext}^{1}\big{(}\mathrm{K}_{*+1}(A),\mathrm{K}_{*}(B)\big{)} \to\mathfrak{K}\mathfrak{K}_{*}(A,B)\to\mathrm{Hom}\big{(}\mathrm{K}_{*}(A), \mathrm{K}_{*}(B)\big{)}\to 0\] _for every \(B\in\mathfrak{K}\mathfrak{K}\)._ _Remark 3.12_.: The converse of the theorem is also true; that is, \(A\in\mathfrak{B}\) if the short exact sequence exists for every \(B\in\mathfrak{K}\mathfrak{K}\). This is sometimes also used to define the bootstrap class. The Universal Coefficient Theorem is very useful, as it allows the computation of KK groups using K-theory for C*-algebras in the bootstrap class. This is widely used for classification programs of C*-algebras or even of different triangulated subcategories, as in [7, 10, 22]. ### Actions of finite groups Now we will give an example of the case when the category ring of \(\mathfrak{C}\) is not hereditary and some computation is needed to pin down the universal invariant. Throughout this subsection, let \(G\) be a finite group. As before, looking at the whole of \(\mathfrak{K}\mathfrak{K}^{G}\) is far too complicated, so we will restrict our attention to a smaller subclass of \(G\)-C*-algebras. The correct equivariant bootstrap class is defined as follows [9]. **Definition 3.13**.: A \(G\)-C*-algebra \(A\) is called _elementary_ if it is of the form \[\operatorname{Ind}_{H}^{G}\mathbb{M}_{n}\mathbb{C}=\{G\xrightarrow{f}\mathbb{M} _{n}\mathbb{C}\mid hf(xh)=f(x)\text{ for any }x\in G\text{ and }h\in H\}\] with the \(G\)-action \((gf)(x):=f(g^{-1}x)\), \(g,x\in G,\) for some subgroup \(H\subseteq G\) and some action by automorphisms of \(H\) on the \(n\times n\) matrix algebra \(\mathbb{M}_{n}\mathbb{C}.\) **Definition 3.14**.: The \(G\)_-equivariant bootstrap class_\(\mathfrak{B}^{G}\subset\mathfrak{K}\mathfrak{K}^{G}\) is the localizing triangulated subcategory generated by the elementary \(G\)-C*-algebras in \(\mathfrak{K}\mathfrak{K}^{G},\) that is, \[\mathfrak{B}^{G}=\langle\operatorname{Ind}_{H}^{G}\mathbb{M}_{n}\mathbb{C} \mid\text{ all }H\subseteq G\text{ and all actions on }\mathbb{M}_{n}\mathbb{C}\rangle.\] Contrary to \(\mathfrak{B},\) the equivariant bootstrap class is strictly larger than the class of \(G\)-C*-algebras isomorphic to commutative \(G\)-C*-algebras in \(\mathfrak{K}\mathfrak{K}^{G}\). The latter class is too restrictive in the equivariant setting, as it is not even thick as a subcategory. On the other hand, \(\mathfrak{B}^{G}\) as defined above is fairly large, as it consists of all \(G\)-C*-algebras that are isomorphic in \(\mathfrak{K}\mathfrak{K}^{G}\) to a \(G\)-action on a type I C*-algebra (for the definition of type I see, for example, [2] or [5]). By the Skolem-Noether theorem, each automorphism of a complex matrix algebra is inner; thus \(\operatorname{Aut}(\mathbb{M}_{n}\mathbb{C})\cong\operatorname{GL}_{n}( \mathbb{C})/\mathbb{C}\mathrm{I}_{n}\). 
Therefore, every action of \(H\subseteq G\) on \(\mathbb{M}_{n}\mathbb{C}\) by automorphisms comes from some \(n\)-dimensional projective representation \[H\to\operatorname{GL}_{n}(\mathbb{C})/\mathbb{C}\mathrm{I}_{n},\] and these are in turn classified by cohomology classes in \(\mathrm{H}^{2}(H,\mathrm{U}(1))\)[14]. Two actions of \(H\) on \(\mathbb{M}_{n}\mathbb{C}\) are isomorphic in \(\mathfrak{K}\mathfrak{K}^{H}\) if and only if they belong to the same class in \(\mathrm{H}^{2}(H,\mathrm{U}(1)).\) For a finite group, the second cohomology is also finite [14], so there is a finite choice (one per cohomology class of a projective representation) of elementary C*-algebras that generate \(\mathfrak{B}^{G}.\) #### 3.3.1. Actions of finite cyclic groups Now we further assume that \(G=C_{k},\) the cyclic group of order \(k\). Then it is well known that there are no non-trivial projective representations of the subgroups \(H\subseteq C_{k}\)[14], and thus the algebras \(\operatorname{Ind}_{H}^{C_{k}}\mathbb{C}=C(C_{k}/H)\) are enough to generate \(\mathfrak{B}^{C_{k}}.\) _Remark 3.15_.: In general, for a subgroup \(H\subseteq G\), the construction that assigns the \(G\)-C*-algebra \(\operatorname{Ind}_{H}^{G}A\) to an \(H\)-C*-algebra \(A\) is functorial. It is the left adjoint of the functor \(\operatorname{Res}_{H}^{G}\colon\mathfrak{K}\mathfrak{K}^{G}\to\mathfrak{K} \mathfrak{K}^{H},\) which restricts a \(G\)-action to \(H\). So, when computing the Yoneda functors represented by the generators of \(\mathfrak{B}^{C_{k}}\), by (3.1) we get equivariant K-theory: \[\operatorname{KK}^{G}(C(G/H),A)=\operatorname{KK}^{G}(\operatorname{Ind}_{H}^ {G}\mathbb{C},A)\cong\operatorname{KK}^{H}(\mathbb{C},\operatorname{Res}_{H}^ {G}A)\cong\operatorname{K}_{0}^{H}(\operatorname{Res}_{H}^{G}A).\] Following Subsection 2.3, to apply the machinery of relative homological algebra, we want to compute the category ring \(R^{C_{k}}\) of the full subcategory \(\mathfrak{C}^{C_{k}}\subset\mathfrak{K}\mathfrak{K}^{C_{k}}\) on the objects \[\{C(C_{k}/H)\mid H\subseteq C_{k}\}.\] As the knowledgeable reader might have noticed, this looks like the domain of Mackey theory (for Mackey functors see [3] or the shorter guide [25]). This is indeed the case, and the following computation falls completely under Ivo Dell'Ambrogio's work [8]. Unpacking [8, Theorem 4.9] and adapting it to ring-theoretic conventions, we find that the category ring \(R^{C_{k}}\) of \(\mathfrak{C}^{C_{k}}\) is generated by the arrows of the form \[r_{L}^{H} :C(C_{k}/H)\to C(C_{k}/L),\] \[i_{L}^{H} :C(C_{k}/L)\to C(C_{k}/H),\] \[c_{g}^{H} :C(C_{k}/H)\to C(C_{k}/H),\] \[m_{\chi}^{H} :C(C_{k}/H)\to C(C_{k}/H),\] for all \(L\subseteq H\subseteq C_{k}\), \(g\in C_{k}\), and complex group representations \(\chi\in\operatorname{Rep}(H)\) of \(H\). These are called _restriction, induction, conjugation_, and _multiplication_, respectively, and are subject to the relations that follow. The first six are the well-known relations from Mackey theory. Let \(H\subseteq C_{k}\). (i) \(r_{H}^{H}=i_{H}^{H}=1_{C(C_{k}/H)}\); (ii) \(c_{h}^{H}=1_{C(C_{k}/H)}\) if \(h\in H\); (iii) \(r_{L}^{K}\circ r_{K}^{H}=r_{L}^{H}\) and \(i_{K}^{H}\circ i_{L}^{K}=i_{L}^{H}\) for \(L\subseteq K\subseteq H\); (iv) \(c_{g}^{H}\circ c_{h}^{H}=c_{hg}^{H}\) for any \(g,h\in C_{k}\); (v) \(c_{g}^{K}\circ r_{K}^{H}=r_{K}^{H}\circ c_{g}^{H}\) and \(i_{K}^{H}\circ c_{g}^{K}=c_{g}^{H}\circ i_{K}^{H}\) for \(K\subseteq H\); (vi) \(r_{K}^{H}\circ i_{L}^{H}=\sum_{g\in[L\setminus H/K]}i_{L\cap K}^{K}\circ c_{g} ^{L\cap K}\circ r_{L\cap K}^{L}\) for \(L,K\subseteq H\). 
Then come the relations pertaining to the multiplication, addition, restriction, and conjugation of representations. Let \(\chi\) be a complex group representation of \(H\). (vii) \(m_{\chi}^{H}\circ m_{\psi}^{H}=m_{\chi\psi}^{H}\), \(m_{\chi}^{H}+m_{\psi}^{H}=m_{\chi+\psi}^{H}\), and \(m_{\tau}^{H}=1_{C(C_{k}/H)}\) for every complex representation \(\psi\) of \(H\) and the trivial representation \(\tau\) of \(H\); (viii) \(m_{\chi\left|{}_{L}\right.}^{L}\circ r_{L}^{H}=r_{L}^{H}\circ m_{\chi}^{H}\) for \(L\subseteq H\); (ix) \(c_{g}^{H}\circ m_{\chi}^{H}=m_{\chi}^{H}\circ c_{g}^{H}\). And finally, we have the so-called _Frobenius isomorphisms_. These reflect the ring structure on Mackey functors. (x) \(m_{\chi}^{H}\circ i_{L}^{H}=i_{L}^{H}\circ m_{\chi\left|{}_{L}\right.}^{L}\) for \(L\subseteq H\); (xi) \(i_{L}^{H}\circ m_{\chi}^{L}\circ r_{L}^{H}=m_{\operatorname{ind}_{L}^{H}\chi}^ {H}\) for \(L\subseteq H\), where \(\operatorname{ind}_{L}^{H}\chi\) denotes the induced representation of \(\chi\) to \(H\). #### 3.3.2. Actions of the group \(C_{4}\) The first cyclic group with non-trivial subgroup structure is \(C_{4}=\{e,a,a^{2},a^{3}\}\). There is a single non-trivial subgroup \(\langle a^{2}\rangle\subseteq C_{4}\), and thus three generators \(C(C_{4}/\{e\})=C(C_{4})\), \(C(C_{4}/\langle a^{2}\rangle)\), and \(C(C_{4}/C_{4})=\mathbb{C}\) for the equivariant bootstrap class \(\mathfrak{B}^{C_{4}}\). \(C_{4}\) has three non-trivial \(1\)-dimensional complex representations \(\chi_{\mathrm{i}}^{C_{4}}\), \(\chi_{-1}^{C_{4}}=(\chi_{\mathrm{i}}^{C_{4}})^{2}\), and \(\chi_{-\mathrm{i}}^{C_{4}}=(\chi_{\mathrm{i}}^{C_{4}})^{3}\). Here the subscript indicates the image of the generator \(a\in C_{4}\) in \(\mathbb{C}\). \(\chi_{\mathrm{i}}^{C_{4}}\) and \(\chi_{-\mathrm{i}}^{C_{4}}\) restrict to the only non-trivial representation of the subgroup \(\langle a^{2}\rangle\), which we denote by \(\chi^{\langle a^{2}\rangle}\). By relations (i) through (iv) and relation (vii), the eleven generators of the category ring \(R^{C_{4}}\) are \[\Gamma=\big{\{}1_{C(C_{4})},\ c_{a}^{\{e\}},\ i_{\{e\}}^{\langle a^{2}\rangle},\ r_{\{e\}}^{\langle a^{2}\rangle},\ 1_{C(C_{4}/\langle a^{2}\rangle)},\ c_{a}^{\langle a^{2}\rangle},\ m ^{\langle a^{2}\rangle},\ i_{\langle a^{2}\rangle}^{C_{4}},\ r_{\langle a^{2} \rangle}^{C_{4}},\ 1_{\mathbb{C}},\ m_{\mathrm{i}}^{C_{4}}\big{\}}.\] These generators act between the three objects \(C(C_{4})\), \(C(C_{4}/\langle a^{2}\rangle)\), and \(\mathbb{C}\), with sources and targets as in the general list above. Since \(R^{C_{4}}\) is a category ring, by definition we have that (0) for all \(x,y\in\Gamma\), if \(x\) and \(y\) are not composable in \(\mathfrak{C}\), then \(xy=0\). Relations (ii), (iv), and (vii) say that (1) \((c_{a}^{\{e\}})^{4}=1_{C(C_{4})}\), \((c_{a}^{\langle a^{2}\rangle})^{2}=(m^{\langle a^{2}\rangle})^{2}=1_{C(C_{4}/ \langle a^{2}\rangle)}\), \((m_{\rm i}^{C_{4}})^{4}=1_{\mathbb{C}}\). The commutation relations (v), (viii), (ix), and (x), together with (i), (ii), and (vii), imply that 
(2) \(c_{a}^{\{e\}}r_{\{e\}}^{\langle a^{2}\rangle}=r_{\{e\}}^{\langle a^{2}\rangle}c_{a}^{\langle a^{2}\rangle}\), \(i_{\{e\}}^{\langle a^{2}\rangle}c_{a}^{\{e\}}=c_{a}^{\langle a^{2}\rangle}i_{\{e\}}^{\langle a^{2}\rangle}\), \(c_{a}^{\langle a^{2}\rangle}r_{\langle a^{2}\rangle}^{C_{4}}=r_{\langle a^{2}\rangle}^{C_{4}}\), \(i_{\langle a^{2}\rangle}^{C_{4}}c_{a}^{\langle a^{2}\rangle}=i_{\langle a^{2}\rangle}^{C_{4}}\), \(r_{\{e\}}^{\langle a^{2}\rangle}m^{\langle a^{2}\rangle}=r_{\{e\}}^{\langle a^{2}\rangle}\), \(m^{\langle a^{2}\rangle}i_{\{e\}}^{\langle a^{2}\rangle}=i_{\{e\}}^{\langle a^{2}\rangle}\), \(m^{\langle a^{2}\rangle}r_{\langle a^{2}\rangle}^{C_{4}}=r_{\langle a^{2}\rangle}^{C_{4}}m_{\mathrm{i}}^{C_{4}}\), \(m_{\mathrm{i}}^{C_{4}}i_{\langle a^{2}\rangle}^{C_{4}}=i_{\langle a^{2}\rangle}^{C_{4}}m^{\langle a^{2}\rangle}\), \(c_{a}^{\langle a^{2}\rangle}m^{\langle a^{2}\rangle}=m^{\langle a^{2}\rangle}c_{a}^{\langle a^{2}\rangle}\). There are two classes of double cosets in \([\{e\}\backslash\langle a^{2}\rangle/\{e\}]\) and in \([\langle a^{2}\rangle\backslash C_{4}/\langle a^{2}\rangle]\), with sets of representatives \(\{e,a^{2}\}\) and \(\{e,a\}\), respectively. Therefore, relation (vi) yields (3) \(r_{\{e\}}^{\langle a^{2}\rangle}i_{\{e\}}^{\langle a^{2}\rangle}=1_{C(C_{4})}+(c_{a}^{\{e\}})^{2}\), \(r_{\langle a^{2}\rangle}^{C_{4}}i_{\langle a^{2}\rangle}^{C_{4}}=1_{C(C_{4}/\langle a^{2}\rangle)}+c_{a}^{\langle a^{2}\rangle}\). Finally, to use the last relation (xi), one needs to identify the induced representations of the trivial representations, \(\operatorname{ind}_{\{e\}}^{\langle a^{2}\rangle}\tau^{\{e\}}\) and \(\operatorname{ind}_{\langle a^{2}\rangle}^{C_{4}}\tau^{\langle a^{2}\rangle}\), and of \(\operatorname{ind}_{\langle a^{2}\rangle}^{C_{4}}\chi^{\langle a^{2}\rangle}\). After completing this exercise in linear algebra (for instance, by comparing characters), one finds that \(\operatorname{ind}_{\{e\}}^{\langle a^{2}\rangle}\tau^{\{e\}}=\tau^{\langle a^{2}\rangle}+\chi^{\langle a^{2}\rangle}\), \(\operatorname{ind}_{\langle a^{2}\rangle}^{C_{4}}\tau^{\langle a^{2}\rangle}=\tau^{C_{4}}+(\chi_{\rm i}^{C_{4}})^{2}\), and \(\operatorname{ind}_{\langle a^{2}\rangle}^{C_{4}}\chi^{\langle a^{2}\rangle}=\chi_{\rm i}^{C_{4}}+(\chi_{\rm i}^{C_{4}})^{3}\). So, by relation (xi) and the additive part of (vii), we have (4) \(i_{\{e\}}^{\langle a^{2}\rangle}r_{\{e\}}^{\langle a^{2}\rangle}=1_{C(C_{4}/\langle a^{2}\rangle)}+m^{\langle a^{2}\rangle}\), \(i_{\langle a^{2}\rangle}^{C_{4}}r_{\langle a^{2}\rangle}^{C_{4}}=1_{\mathbb{C}}+(m_{\rm i}^{C_{4}})^{2}\), \(i_{\langle a^{2}\rangle}^{C_{4}}m^{\langle a^{2}\rangle}r_{\langle a^{2}\rangle}^{C_{4}}=m_{\rm i}^{C_{4}}+(m_{\rm i}^{C_{4}})^{3}\). The generators \(\Gamma\) and the relations (0)-(4) define the ring \(R^{C_{4}}\). **Corollary 3.16**.: _The functor_ \[\operatorname{k}_{*}^{C_{4}}\colon\mathfrak{B}^{C_{4}}\to\mathfrak{Mod}_{\rm c}^{\mathbb{Z}/2}(R^{C_{4}}),\qquad A\mapsto\{\operatorname{K}_{\epsilon}^{H}(\operatorname{Res}_{H}^{C_{4}}A)\}_{\epsilon\in\mathbb{Z}/2}^{H\subseteq C_{4}}\] _into the abelian category of \(\mathbb{Z}/2\)-graded countable right modules over the ring \(R^{C_{4}}\) is the universal stable \(\ker\operatorname{k}_{*}^{C_{4}}\)-exact functor._ Proof.: By Remark 3.15, \[\operatorname{K}_{\epsilon}^{H}(\operatorname{Res}_{H}^{C_{4}}A)\cong \operatorname{KK}_{\epsilon}^{C_{4}}(C(C_{4}/H),A).\] By [8, Theorem 4.9] and the computations in this subsection, \(R^{C_{4}}\) is the category ring of \(\mathfrak{C}^{C_{4}}\); thus we arrive at the result as explained in Subsection 2.3. The ring \(R^{C_{4}}\) is not hereditary, so there is no Universal Coefficient Theorem. 
However, as a special case of the general theory, there is still a spectral sequence that relates the relative derived functors on \(\mathfrak{B}^{C_{4}}\) to the derived functors on \(\mathfrak{Mod}_{\rm c}^{\mathbb{Z}/2}(R^{C_{4}})\)[8]. This can be used for classification purposes. Another direction is to localize the ring in question at different subsets to arrive at a hereditary situation, but this is beyond the scope of the current exposition. #### 3.3.3. Outlook The situation gets more complicated when the group \(G\) has non-trivial projective representations. In this case, the equivariant bootstrap class has generators given by \(G\)-C*-algebras induced from subgroup actions on matrix algebras of dimension bigger than one. Mackey-like relations still arise; however, in addition, one has to take into account the non-trivial \(2\)-cocycles in \(\mathrm{H}^{2}(H,\mathrm{U}(1))\) for \(H\subseteq G\).
2305.12615
Global Finite-Energy Solutions of the Compressible Euler-Poisson Equations for General Pressure Laws with Spherical Symmetry
We are concerned with global finite-energy solutions of the three-dimensional compressible Euler-Poisson equations with gravitational potential and general pressure law, especially including the constitutive equation of white dwarf stars. We construct global finite-energy solutions of the Cauchy problem for the Euler-Poisson equations with large initial data of spherical symmetry as the inviscid limit of the solutions of the corresponding Cauchy problem for the Navier-Stokes-Poisson equations. The strong convergence of the vanishing viscosity solutions is achieved through entropy analysis, uniform estimates in $L^p$, and a more general compensated compactness framework via several new ingredients. A key estimate is first established for the integrability of the density over unbounded domains independent of the viscosity coefficient. Then a special entropy pair is carefully designed by solving a Goursat problem for the entropy equation such that a higher integrability of the velocity is established, which is a crucial step. Moreover, the weak entropy kernel for the general pressure law and its fractional derivatives of the required order near vacuum ($\rho=0$) and far-field ($\rho=\infty$) are carefully analyzed. Owing to the generality of the pressure law, only the $W^{-1,p}_{{\rm loc}}$-compactness of weak entropy dissipation measures with $p\in [1,2)$ can be obtained; this is rescued by the equi-integrability of weak entropy pairs which can be established by the estimates obtained above so that the div-curl lemma still applies. Finally, based on the above analysis of weak entropy pairs, the $L^p$ compensated compactness framework for the compressible Euler equations with general pressure law is established. This new compensated compactness framework and the techniques developed in this paper should be useful for solving further nonlinear problems with similar features.
Gui-Qiang G. Chen, Feimin Huang, Tianhong Li, Weiqiang Wang, Yong Wang
2023-05-22T00:45:00Z
http://arxiv.org/abs/2305.12615v2
# Global finite-energy solutions of the compressible Euler-Poisson equations for general pressure laws with spherical symmetry ###### Abstract We are concerned with global finite-energy solutions of the three-dimensional compressible Euler-Poisson equations with _gravitational potential_ and _general pressure law_, especially including the constitutive equation of _white dwarf stars_. In this paper, we construct global finite-energy solutions with spherical symmetry of the Cauchy problem for the Euler-Poisson equations as the inviscid limit of the corresponding compressible Navier-Stokes-Poisson equations. The strong convergence of the vanishing viscosity solutions is achieved through entropy analysis, uniform estimates in \(L^{p}\), and a more general compensated compactness framework via several new main ingredients. A key estimate is first established for the integrability of the density over unbounded domains independent of the vanishing viscosity coefficient. Then a special entropy pair is carefully designed via solving a Goursat problem for the entropy equation such that a higher integrability of the velocity is established, which is a crucial step. Moreover, the weak entropy kernel for the general pressure law and its fractional derivatives of the required order near vacuum (\(\rho=0\)) and far-field (\(\rho=\infty\)) are carefully analyzed. Owing to the generality of the pressure law, only the \(W^{-1,p}_{\rm loc}\)-compactness of weak entropy dissipation measures with \(p\in[1,2)\) can be obtained; this is rescued by the equi-integrability of weak entropy pairs which can be established by the estimates obtained above, so that the div-curl lemma still applies. Finally, based on the above analysis of weak entropy pairs, the \(L^{p}\) compensated compactness framework for the compressible Euler equations with general pressure law is established. This new compensated compactness framework and the techniques developed in this paper should be useful for solving further nonlinear problems with similar features. Key words and phrases: Euler-Poisson equations, white dwarf stars, finite-energy solutions, general pressure law, spherical symmetry, entropy analysis, \(L^{p}\) estimates, compensated compactness framework, Goursat problem. ### 1. Introduction We are concerned with global finite-energy solutions of the three-dimensional (3-D) compressible Euler-Poisson equations (CEPEs) that take the form: \[\begin{cases}\partial_{t}\rho+\nabla\cdot\mathcal{M}=0,\\ \partial_{t}\mathcal{M}+\nabla\cdot\Big{(}\frac{\mathcal{M}\otimes\mathcal{M}} {\rho}\Big{)}+\nabla P+\rho\nabla\Phi=0,\\ \Delta\Phi=k_{g}\rho,\end{cases} \tag{1.1}\] for \((t,\mathbf{x}):=(t,x_{1},x_{2},x_{3})\in\mathbb{R}_{+}\times\mathbb{R}^{3}:=(0, \infty)\times\mathbb{R}^{3}\). The system is used to model the motion of compressible gaseous stars under a self-consistent gravitational field (_cf._[6]), where \(\rho\) is the density, \(P=P(\rho)\) is the pressure, \(\mathcal{M}\in\mathbb{R}^{3}\) is the momentum, \(\Phi\) represents the gravitational potential of gaseous stars when \(k_{g}>0\), \(\nabla=(\partial_{x_{1}},\partial_{x_{2}},\partial_{x_{3}})\), and \(\Delta=\partial_{x_{1}x_{1}}+\partial_{x_{2}x_{2}}+\partial_{x_{3}x_{3}}\). Without loss of generality by scaling, we take \(k_{g}=1\) throughout this paper. The constitutive pressure-density relation \(P(\rho)\) depends on the type of gaseous star. 
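Before specifying the pressure law, we record for orientation the standard solution formula for the potential: under the far-field condition (1.9) imposed below, the Poisson equation in (1.1) (with \(k_{g}=1\)) is solved by the Newtonian potential \[\Phi(t,\mathbf{x})=-\frac{1}{4\pi}\int_{\mathbb{R}^{3}}\frac{\rho(t,\mathbf{y})}{|\mathbf{x}-\mathbf{y}|}\,\mathrm{d}\mathbf{y},\] so that \(\Phi\) is determined by the density at each time. 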
For simplicity, the class of polytropic gases, _i.e._, \[P(\rho)=\kappa\rho^{\gamma}\qquad\text{ for }\kappa>0\text{ and }\gamma\in(1,3), \tag{1.2}\] has been widely investigated in mathematics. From the point of view of astronomy, the constitutive pressure \(P(\rho)\) for certain gaseous stars is not of the polytropic form. For example, the pressure law of a white dwarf star takes the following form (_cf._[6, 64]): \[P(\rho)=\mathcal{C}_{1}\int_{0}^{\mathcal{C}_{2}\rho^{\frac{1}{3}}}\frac{s^{4 }}{\sqrt{\mathcal{C}_{3}+s^{2}}}\,\mathrm{d}s\qquad\text{ for }\rho>0, \tag{1.3}\] where \(\mathcal{C}_{1},\mathcal{C}_{2}\) and \(\mathcal{C}_{3}\) are positive constants. It can be checked that \(P(\rho)\cong\kappa_{1}\rho^{\frac{5}{3}}\) as \(\rho\to 0\) and \(P(\rho)\cong\kappa_{2}\rho^{\frac{4}{3}}\) as \(\rho\to\infty\) for some positive constants \(\kappa_{1}\) and \(\kappa_{2}\); indeed, the integrand in (1.3) behaves like \(\frac{s^{4}}{\sqrt{\mathcal{C}_{3}}}\) for small \(s\) and like \(s^{3}\) for large \(s\). In this paper, we consider a general pressure law in which any pressure function \(P(\rho)\) satisfies the following conditions: * The pressure function \(P(\rho)\in C^{1}([0,\infty))\cap C^{4}(\mathbb{R}_{+})\) and satisfies the hyperbolic and genuinely nonlinear conditions: \[P^{\prime}(\rho)>0,\quad 2P^{\prime}(\rho)+\rho P^{\prime\prime}(\rho)>0 \qquad\text{ for }\rho>0.\] (1.4) * There exists a constant \(\rho_{*}>0\) such that \[P(\rho)=\kappa_{1}\rho^{\gamma_{1}}\big{(}1+\mathcal{P}_{1}(\rho)\big{)} \qquad\text{ for }\rho\in[0,\rho_{*}),\] (1.5) with some constants \(\gamma_{1}\in(1,3)\) and \(\kappa_{1}>0\), and a function \(\mathcal{P}_{1}(\rho)\in C^{4}(\mathbb{R}_{+})\) satisfying that \(|\mathcal{P}_{1}^{(j)}(\rho)|\leq C_{*}\rho^{\gamma_{1}-1-j}\) for \(\rho\in(0,\rho_{*})\) and \(j=0,\cdots,4\), where \(C_{*}>0\) is a constant depending only on \(\rho_{*}\). * There exists a constant \(\rho^{*}>\rho_{*}>0\) such that \[P(\rho)=\kappa_{2}\rho^{\gamma_{2}}\big{(}1+\mathcal{P}_{2}(\rho)\big{)} \qquad\text{ for }\rho\in[\rho^{*},\infty),\] (1.6) with some constants \(\gamma_{2}\in(\frac{6}{5},\gamma_{1}]\) and \(\kappa_{2}>0\), and a function \(\mathcal{P}_{2}(\rho)\in C^{4}(\mathbb{R}_{+})\) satisfying that \(|\mathcal{P}_{2}^{(j)}(\rho)|\leq C^{*}\rho^{-\epsilon-j}\) for \(\rho\in[\rho^{*},\infty)\) and \(j=0,\cdots,4\), where \(\epsilon>0\), and \(C^{*}>0\) is a constant depending only on \(\rho^{*}\). It is direct to see that the polytropic gases in (1.2) satisfy assumptions (1.4)-(1.6). Moreover, the white dwarf star (1.3) is also included with \[\gamma_{1}=\frac{5}{3},\quad\kappa_{1}=\frac{1}{5\sqrt{\mathcal{C}_{3}}} \mathcal{C}_{1}\mathcal{C}_{2}^{5},\quad\gamma_{2}=\frac{4}{3},\quad\kappa_{2 }=\frac{1}{4}\mathcal{C}_{1}\mathcal{C}_{2}^{4},\quad\epsilon=\frac{2}{3}. \tag{1.7}\] The restriction \(\gamma_{2}>\frac{6}{5}\) is necessary to ensure the global existence of finite-energy solutions with finite total mass. Such a condition is also needed for the existence of the Lane-Emden solutions; see [6, 45]. We consider the Cauchy problem of (1.1) with the initial data: \[(\rho,\mathcal{M})(0,\mathbf{x})=(\rho_{0},\mathcal{M}_{0})(\mathbf{x})\, \to\,(0,\mathbf{0})\qquad\text{ as }|\mathbf{x}|\to\infty \tag{1.8}\] subject to the far-field condition: \[\Phi(t,\mathbf{x})\,\to\,0\qquad\text{ as }|\mathbf{x}|\to\infty. \tag{1.9}\] The global existence of solutions of the Cauchy problem (1.1) and (1.8)-(1.9) is a longstanding open problem. Many efforts have been made for the polytropic gas case (1.2). 
Considerable progress has been made on the smooth or special solutions under some restrictions on the initial data. Among the most famous solutions of CEPEs (1.1) are the Lane-Emden steady solutions (_cf._[45]), which describe spherically symmetric gaseous stars in equilibrium and minimize the energy among all possible configurations (_cf._[44]). There exist expanding solutions for the non-steady CEPEs (1.1). Hadzic-Jang [32] proved the nonlinear stability of the affine solution (which is linearly expanding) under small spherically symmetric perturbations for \(\gamma=\frac{4}{3}\), while the stability problem for \(\gamma\neq\frac{4}{3}\) is still widely open. A class of linearly expanding solutions for \(\gamma=1+\frac{1}{k}\), \(k\in\mathbb{N}\backslash\{1\}\), or \(\gamma\in(1,\frac{14}{13})\) was further constructed in [33]. For \(1<\gamma\leq\frac{4}{3}\), concentration (collapse) phenomena may happen. Indeed, for \(\gamma=\frac{4}{3}\), there exists a homologous concentration solution; see [26, 28, 53]. More recently, Guo-Hadzic-Jang [29] first observed a continued concentration solution for \(1<\gamma<\frac{4}{3}\); see also [35]. A class of smooth radially symmetric self-similar solutions exhibiting gravitational collapse for \(1\leq\gamma<\frac{4}{3}\) can be found in [30, 31]. We refer to [49, 52] for the local well-posedness of smooth solutions. Owing to the strong nonlinearity and hyperbolicity, the smooth solutions of (1.1) with (1.2) may break down in finite time, especially when the initial data are large (_cf._[15, 53]). Therefore, weak solutions have to be considered for large initial data. For gaseous stars surrounding a solid ball, Makino [54] obtained the local existence of weak solutions for \(\gamma\in(1,\frac{5}{3}]\) with spherical symmetry; also see Xiao [67] for global weak solutions with a class of initial data. For this case, the possible singularity at the origin is prevented since the domain under consideration is the exterior of a ball. Luo-Smoller [48] proved the conditional stability of rotating and non-rotating white dwarfs and rotating supermassive stars; see also Rein [59] for the conditional nonlinear stability of the Lane-Emden steady solutions. Another fundamental question is whether global solutions can be constructed via the vanishing viscosity limit of the solutions of the compressible Navier-Stokes-Poisson equations (CNSPEs): \[\begin{cases}\partial_{t}\rho+\nabla\cdot\mathcal{M}=0,\\ \partial_{t}\mathcal{M}+\nabla\cdot\Big{(}\frac{\mathcal{M}\otimes\mathcal{ M}}{\rho}\Big{)}+\nabla P+\rho\nabla\Phi=\varepsilon\nabla\cdot\Big{(}\mu(\rho)D \Big{(}\frac{\mathcal{M}}{\rho}\Big{)}\Big{)}+\varepsilon\nabla\Big{(}\lambda (\rho)\nabla\cdot\Big{(}\frac{\mathcal{M}}{\rho}\Big{)}\Big{)},\\ \Delta\Phi=\rho,\end{cases} \tag{1.10}\] where \(D(\frac{\mathcal{M}}{\rho})=\frac{1}{2}\big{(}\nabla(\frac{\mathcal{M}}{\rho}) +(\nabla(\frac{\mathcal{M}}{\rho}))^{\top}\big{)}\) is the stress tensor, the Lamé (shear and bulk) viscosity coefficients \(\mu(\rho)\) and \(\lambda(\rho)\) depend on the density (and may vanish on the vacuum) and satisfy \[\mu(\rho)\geq 0,\quad\mu(\rho)+3\lambda(\rho)\geq 0\qquad\text{ for }\rho\geq 0,\] and the parameter \(\varepsilon>0\) is the inverse of the Reynolds number. Formally, as \(\varepsilon\to 0\), the sequence of solutions of CNSPEs (1.10) converges to a corresponding solution of CEPEs (1.1). However, a rigorous proof has been one of the most challenging problems in mathematical fluid dynamics; see Chen-Feldman [8] and Dafermos [20]. 
The limit problem with vanishing physical viscosity dates back to the pioneering paper by Stokes [63]. Most of the known results concern the inviscid limit from the compressible Navier-Stokes to the Euler equations in the polytropic gas case (1.2). The first rigorous proof of the vanishing viscosity limit from the Navier-Stokes to the Euler equations was provided by Gilbarg [27], in which he established the existence and inviscid limit of the Navier-Stokes shock layers. For the case of large data, due to the lack of a uniform \(L^{\infty}\) estimate, the \(L^{\infty}\) compensated compactness framework [21, 22, 23, 34, 46, 47] fails to work directly in the inviscid limit of the compressible Navier-Stokes equations. An \(L^{p}\) compensated compactness framework was first studied by LeFloch-Westdickenberg [41] for the isentropic Euler equations in the case \(\gamma\in(1,\frac{5}{3})\) in (1.2), and was further developed by Chen-Perepelitsa [12] to all \(\gamma>1\) for (1.2) with a simplified proof; see also [16] for spherically symmetric solutions of the M-D isentropic Euler equations. We also refer to [61, 62] for the 1-D case of asymptotically isothermal gases, _i.e._, \(\gamma_{2}=1\) in (1.6). More recently, Chen-He-Wang-Yuan [9] established both the strong inviscid limit of CNSPEs (1.10) and the global existence of spherically symmetric solutions of CEPEs (1.1) with large data for polytropic gases (1.2). The main purpose of this paper is to establish the global existence of spherically symmetric finite-energy solutions of (1.1) with the general pressure law (1.4)-(1.6): \[\rho(t,\mathbf{x})=\rho(t,r),\quad\mathcal{M}(t,\mathbf{x})=m(t,r)\frac{\mathbf{x}}{r}, \quad\Phi(t,\mathbf{x})=\Phi(t,r)\qquad\text{ for }r=|\mathbf{x}|, \tag{1.11}\] subject to the initial condition: \[(\rho,\mathcal{M})(0,\mathbf{x})=(\rho_{0},\mathcal{M}_{0})(\mathbf{x})=(\rho _{0}(r),m_{0}(r)\frac{\mathbf{x}}{r})\,\to\,(0,\mathbf{0})\qquad\text{ as }r\to\infty, \tag{1.12}\] and the asymptotic boundary condition: \[\Phi(t,\mathbf{x})=\Phi(t,r)\,\to\,0\qquad\text{ as }r\to\infty. \tag{1.13}\] Systems (1.1) and (1.10) for spherically symmetric solutions take the following respective forms: \[\begin{cases}\rho_{t}+m_{r}+\frac{2}{r}m=0,\\ m_{t}+\Big{(}\frac{m^{2}}{\rho}+P(\rho)\Big{)}_{r}+\frac{2}{r} \frac{m^{2}}{\rho}+\rho\Phi_{r}=0,\\ \Phi_{rr}+\frac{2}{r}\Phi_{r}=\rho,\end{cases} \tag{1.14}\] and \[\begin{cases}\rho_{t}+m_{r}+\frac{2}{r}m=0,\\ m_{t}+\Big{(}\frac{m^{2}}{\rho}+P(\rho)\Big{)}_{r}+\frac{2}{r} \frac{m^{2}}{\rho}+\rho\Phi_{r}=\varepsilon\Big{(}(\mu(\rho)+\lambda(\rho)) \Big{(}\big{(}\frac{m}{\rho}\big{)}_{r}+\frac{2}{r}\frac{m}{\rho}\Big{)} \Big{)}_{r}-\frac{2\varepsilon}{r}\mu(\rho)_{r}\frac{m}{\rho},\\ \Phi_{rr}+\frac{2}{r}\Phi_{r}=\rho.\end{cases} \tag{1.15}\] The study of spherically symmetric solutions is motivated by many important physical problems such as stellar dynamics, including gaseous stars and supernova formation [6, 58, 66]. An important question is how the waves behave as they move radially inward near the origin, especially under the self-gravitational force for gaseous stars. The spherically symmetric solutions of the compressible Euler equations may blow up near the origin [19, 42, 55, 66] at a certain time in some situations. Considering the effect of gravitation, a fundamental problem for CEPEs (1.1) is whether a concentration (delta-measure) is formed at the origin. 
This problem was answered in [9] for polytropic gases (1.2): when the initial total-energy is finite, no delta measure is formed for the density at the origin in the following two cases: (i) \(\gamma>\frac{4}{3}\); (ii) \(\gamma\in(\frac{6}{5},\frac{4}{3}]\) and the total mass is less than a critical mass. In this paper, we establish the global existence of finite-energy solutions of the Cauchy problem (1.1) and (1.12)-(1.13) with spherical symmetry as the inviscid limit of global weak solutions of CNSPEs (1.10) with the general pressure law (1.4)-(1.6), especially including the white dwarf star (1.3). The \(L^{p}\) compensated compactness framework for general pressure is also established. Moreover, it is proved that no delta measure is formed for the density at the origin in the limit, and the critical mass for the white dwarf star is the same as the Chandrasekhar limit for the polytropic gas (1.2) with \(\gamma=\frac{4}{3}\). The precise statements of the main results are given in §2. To achieve these, the main strategy is to develop entropy analysis, uniform estimates in \(L^{p}\), and a more general compensated compactness framework to prove that there exists a strongly convergent subsequence of solutions of CNSPEs (1.10) and show that the limit is a finite-energy weak solution of CEPEs (1.1) with general pressure law. This consists of the following three steps: * Establish the uniform \(L^{p}\) estimates of the solutions of CNSPEs (1.10) independent of \(\varepsilon\) for some \(p>1\); * Show the compactness of the weak entropy dissipation measures; * Prove that the associated Young measure \(\nu_{(t,r)}\) is a delta measure almost everywhere, which implies that a subsequence of solutions of CNSPEs (1.10) converges strongly to a global finite-energy solution of CEPEs (1.1). The generality of the pressure \(P(\rho)\) causes essential difficulties in the analysis of all of the above steps. We now describe these difficulties and show how they can be overcome: (i) The crucial step in the \(L^{p}\) estimates is to show that \(\rho|u|^{3}\) (\(u:=\frac{m}{\rho}\) is the velocity) is uniformly bounded in \(L^{1}_{\rm loc}\). This estimate might be obtained by constructing an appropriate entropy \(\hat{\eta}\), which is a solution in the variables \((\rho,u)\) of the entropy equation: \[\eta_{\rho\rho}-\frac{P^{\prime}(\rho)}{\rho^{2}}\eta_{uu}=0, \tag{1.16}\] with corresponding entropy flux \(\hat{q}\). If \((\rho,u)\) is a solution of (1.15), any entropy-entropy flux pair (entropy pair, for short) \((\hat{\eta},\hat{q})\) satisfies \[(\hat{\eta}r^{2})_{t}+(\hat{q}r^{2})_{r}+2r\,(-\hat{q}+\rho u\hat{\eta}_{\rho} +\rho u^{2}\hat{\eta}_{m})=\varepsilon\,r^{2}\big{(}(\rho u_{r})_{r}+2\rho \big{(}\frac{u}{r}\big{)}_{r}\big{)}\hat{\eta}_{m}-\rho\int_{a}^{r}\rho\,z^{2 }{\rm d}z\,\hat{\eta}_{m};\] see (5.68) below. For the polytropic gas case (1.2), there is an explicit formula for the entropy kernel \(\chi(\rho,u)\) so that \(\chi*\psi\) is an entropy, where \(*\) denotes the convolution and \(\psi(s)\) is any smooth function. By choosing \(\psi(s)=\frac{1}{2}s|s|\) as in [9], the corresponding entropy flux \(\hat{q}\) satisfies \(\hat{q}\geq c_{0}\rho|u|^{3}\) and \(-\hat{q}+\rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}\leq 0\). Then the uniform bound of \(\rho|u|^{3}r^{2}\) in \(L^{1}_{\rm loc}\) follows (_cf._[9]). However, there is no explicit formula for the entropy kernel \(\chi\) for the general pressure satisfying (1.4)-(1.6). 
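For comparison, we recall the closed form of the weak entropy kernel that is available in the polytropic case (1.2) (here \([y]_{+}:=\max\{y,0\}\), and \(M_{\lambda}>0\) is a normalizing constant): \[\chi(\rho,u-s)=M_{\lambda}\big{[}\rho^{2\theta}-(u-s)^{2}\big{]}_{+}^{\lambda},\qquad\theta=\frac{\gamma-1}{2},\quad\lambda=\frac{3-\gamma}{2(\gamma-1)},\] so that the weak entropies are exactly \(\eta^{\psi}(\rho,u)=\int_{\mathbb{R}}\chi(\rho,u-s)\psi(s)\,{\rm d}s\). No analogous formula is available under the general assumptions (1.4)-(1.6). 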
Even for the special entropy pair generated by \(\psi(s)=\frac{1}{2}s|s|\), it is difficult to prove that \(\hat{q}\geq c_{0}\rho|u|^{3}\) and \(-\hat{q}+\rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}\leq 0\), due to the lack of an explicit formula for the entropy kernel \(\chi\). Hence, the above approach does not apply directly, so we have to seek a new method to establish the uniform local integrability of \(\rho|u|^{3}\). One of the novelties of this paper is that a special entropy \(\hat{\eta}\) is constructed by solving a Goursat problem for the entropy equation (1.16) in the domain \(|u|\leq k(\rho):=\int_{0}^{\rho}\sqrt{P^{\prime}(y)}/y\,{\rm d}y\), so that \(\hat{\eta}\) is chosen as the mechanical energy \(\eta^{*}\) (see (2.13)) when \(u\geq k(\rho)\) and as \(-\eta^{*}\) when \(u\leq-k(\rho)\). The boundary condition for the Goursat problem is given on the characteristic curves \(u\pm k(\rho)=0\). One advantage of such a special entropy pair \((\hat{\eta},\hat{q})\) is that \(\hat{q}\geq c_{0}\rho|u|^{3}\) for \(|u|\geq k(\rho)\), and \(|\hat{q}|\leq C\rho^{\gamma_{2}+1}\) for large \(\rho\) when \(|u|\leq k(\rho)\), via a careful analysis of the Goursat problem; see Lemma 5.8 for details. Moreover, \(-\hat{q}+\rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}\) vanishes for \(|u|\geq k(\rho)\). Similarly, \(|-\hat{q}+\rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}|\leq C\rho^{\gamma_{ 2}+1}\) for large \(\rho\) when \(|u|\leq k(\rho)\). To show that \(\rho|u|^{3}\) is uniformly bounded in \(L^{1}_{\rm loc}\), it remains to prove that \[\int_{0}^{T}\int_{d}^{\infty}\rho^{\gamma_{2}+1}\,r{\rm d}r{\rm d}t \tag{1.17}\] is uniformly bounded for any \(T>0\) and \(d>0\). It should be noted that the local integrability \(\int_{0}^{T}\int_{d}^{D}\rho^{\gamma_{2}+1}\,{\rm d}r{\rm d}t\leq C\) was obtained in [9], but it is not yet enough to obtain the uniform \(L^{1}_{\rm loc}\) estimate for \(\rho|u|^{3}\). Fortunately, we can obtain an even stronger estimate than (1.17), _i.e._, \[\int_{0}^{T}\int_{d}^{\infty}\rho^{\gamma_{2}+1}\,r^{2}{\rm d}r{\rm d}t\leq C, \tag{1.18}\] by an elaborate analysis; see Lemma 5.6 and Corollary 5.7 for details. (ii) For the polytropic gas case in (1.2), Chen-Perepelitsa [12, 13] and Chen-He-Wang-Yuan [9] proved the \(H^{-1}_{\rm loc}\)-compactness of the weak entropy dissipation measures via the explicit formula of the weak entropy kernel \(\chi\) by convolution with test functions of compact support, which also implies that the entropy pairs \((\eta,q)\) are in \(L^{r}_{\rm loc}\), \(r>2\). However, it is not clear how the \(H^{-1}_{\rm loc}\)-compactness for the general pressure satisfying (1.4)-(1.6) can be shown by using the expansions of the weak entropy kernel established in [10, 11]. Motivated by [62], we instead show the \(W^{-1,p}_{\rm loc}\)-compactness for \(1\leq p<2\), so that an improved div-curl lemma (_cf_. [18]) applies, which leads to the commutation identity for the entropy pairs. In fact, we can show that the entropy flux function \(q\) is bounded by \(\rho^{\frac{\gamma_{2}+1}{2}}\) for large \(\rho\) (see (4.81)) by a careful analysis of the expansion of the entropy pair, so that \(q\in L^{2}_{\rm loc}\). Then the interpolation compactness yields the \(W^{-1,p}\)-compactness for \(1\leq p<2\); see Lemma 7.1 for details. 
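For later reference, the commutation identity produced by the div-curl lemma takes the standard form: for any two weak entropy pairs \((\eta_{1},q_{1})\) and \((\eta_{2},q_{2})\), the Young measure \(\nu=\nu_{(t,r)}\) generated by the viscous solutions satisfies \[\langle\nu,\eta_{1}q_{2}-\eta_{2}q_{1}\rangle=\langle\nu,\eta_{1}\rangle\langle\nu,q_{2}\rangle-\langle\nu,\eta_{2}\rangle\langle\nu,q_{1}\rangle\qquad\text{ for a.e. }(t,r),\] and the equi-integrability of the weak entropy pairs mentioned above is what justifies this identity despite the weaker \(W^{-1,p}_{\rm loc}\)-compactness. 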
(iii) The argument for the reduction of the associated Young measure \(\nu_{(t,r)}(\rho,u)\) introduced in [9, 12, 13] for the polytropic gas case in (1.2) can be roughly stated as follows: Show first that every connected subset of the support of the Young measure is a bounded interval; then use the \(L^{\infty}\) reduction technique introduced in [7, 21, 23, 46] for Young measures with bounded support to show that the Young measure is either a delta measure or supported on the vacuum line. This method essentially relies on the explicit formula of the weak entropy kernel \(\chi\). For the general pressure law satisfying (1.4)-(1.6), the above method does not apply directly, since it is difficult to show that every connected subset of the support of the Young measure is a bounded interval. Motivated by [10, 11, 46, 61, 62], we carefully analyze the singularities of \(\partial^{\lambda_{1}+1}\chi\) with \(\lambda_{1}=\frac{3-\gamma_{1}}{2(\gamma_{1}-1)}\) for large \(\rho\) (note that \(\lambda_{1}\) is in general not an integer, e.g., \(\lambda_{1}=1\) only if \(\gamma_{1}=\frac{5}{3}\), so that \(\partial^{\lambda_{1}+1}\) denotes a fractional derivative) and fully exploit the property: \((\rho^{\gamma_{2}+1},\rho|u|^{3})\in L^{1}({\rm d}\nu_{(t,r)})\), so that the \(\partial^{\lambda_{1}+1}\)-derivatives can be applied in the commutation relation; see Lemmas 4.11-4.14 for details. Then we prove that the Young measure is either a delta measure or supported on the vacuum line by similar arguments as in [10, 11, 46, 62]. This new compensated compactness framework and the techniques developed in this paper should be useful for solving further nonlinear problems with similar features.

Finally, we remark that there are some related results on CNSPEs (1.10) and the compressible Euler equations. For weak solutions of CNSPEs (1.10), we refer to [25, 36, 38, 39] for constant viscosity, and to [24, 68] for density-dependent viscosity. Recently, Luo-Xin-Zeng [49, 50, 51] proved the large-time stability of the Lane-Emden solution for \(\gamma\in(\frac{4}{3},2)\). We also refer to the BD entropy developed in [2, 3, 4, 5], which provides a new estimate for the gradient of the density. For the compressible Euler equations, we refer to [7, 14, 37, 42, 60] and the references cited therein.

The rest of this paper is organized as follows: In §2, the finite-energy solutions of the Cauchy problem (1.1) and (1.8)-(1.9) for CEPEs are introduced, and the main theorems of this paper are given. In §3, some elementary quantities and basic properties of the pressure and the related internal energy are provided, and some remarks on \(M_{\rm c}\) are also given. The entropy analysis for weak entropy pairs for the general pressure satisfying (1.4)-(1.6) is presented in §4; in particular, a special entropy pair is constructed by solving a Goursat problem for the entropy equation (2.14). In §5, a free boundary problem (5.1)-(5.6) for (1.15) is analyzed, and some uniform estimates of solutions are derived, including the basic energy estimate, the BD-type entropy estimate, and the higher integrabilities of the density and the velocity. In §6, the global existence of weak solutions of CNSPEs (1.10) is established, and some uniform \(L^{p}\) estimates in Theorem 2.1 are also obtained. In §7, we prove the \(W^{-1,p}_{\rm loc}\)-compactness of the entropy dissipation measures for the weak solutions of (1.15) and complete the proof of Theorem 2.1. In §8, the \(L^{p}\)-compensated compactness framework for the general pressure law (1.4)-(1.6) (Theorem 2.2) is established, which leads to the proof of Theorem 2.3 by taking the inviscid limit of weak solutions of CNSPEs (1.10) in §9.
Appendix A is devoted to the presentation of both the sharp Sobolev inequality used in §5 and some variants of Gronwall's inequality that are used in the proofs of several estimates in §4.

**Notations:** Throughout this paper, we denote by \(C^{\alpha}(\Omega),L^{p}(\Omega),W^{k,p}(\Omega)\), and \(H^{k}(\Omega)\) the standard Hölder and Sobolev spaces on a domain \(\Omega\) for \(\alpha\in(0,1)\) and \(p\in[1,\infty]\). \(C^{k}_{0}(\Omega)\) represents the space of continuously differentiable functions up to the \(k\)th order with compact support over \(\Omega\), and \(\mathcal{D}(\Omega):=C^{\infty}_{0}(\Omega)\). We also use \(L^{p}(I;r^{2}{\rm d}r)\) or \(L^{p}([0,T)\times I;r^{2}{\rm d}r{\rm d}t)\) for an open interval \(I\subset\mathbb{R}_{+}\) with measure \(r^{2}{\rm d}r\) or \(r^{2}{\rm d}r{\rm d}t\), respectively, and \(L^{p}_{\rm loc}([0,\infty);r^{2}{\rm d}r)\) to represent \(L^{p}([0,R];r^{2}{\rm d}r)\) for any fixed \(R>0\).

## 2. Mathematical Problem and Main Theorems

The spherically symmetric initial data function \((\rho_{0},\mathcal{M}_{0})({\bf x})\) given in (1.12) is assumed to have both finite initial total-energy:
\[E_{0}:=\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}_{0}}{\sqrt{\rho_{0}}}\Big{|}^{2}+\rho_{0}e(\rho_{0})\Big{)}({\bf x})\,{\rm d}{\bf x}=\omega_{3}\int_{0}^{\infty}\Big{(}\frac{1}{2}\frac{m_{0}^{2}}{\rho_{0}}+\rho_{0}e(\rho_{0})\Big{)}(r)\,r^{2}{\rm d}r<\infty, \tag{2.1}\]
and finite initial total-mass:
\[M:=\int_{\mathbb{R}^{3}}\rho_{0}({\bf x})\,{\rm d}{\bf x}=\omega_{3}\int_{0}^{\infty}\rho_{0}(r)\,r^{2}{\rm d}r<\infty, \tag{2.2}\]
where the internal energy \(e(\rho)\) is related to the pressure by
\[e^{\prime}(\rho)=\frac{P(\rho)}{\rho^{2}},\qquad e(0)=0, \tag{2.3}\]
and \(\omega_{n}:=\frac{2\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\) denotes the surface area of the unit sphere in \(\mathbb{R}^{n}\). The initial potential \(\Phi_{0}({\bf x})\) is determined by
\[\Delta\Phi_{0}({\bf x})=\rho_{0}({\bf x}),\qquad\lim_{|{\bf x}|\to\infty}\Phi_{0}({\bf x})=0. \tag{2.4}\]
For \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3}]\), we define the critical mass \(M_{\rm c}(\gamma_{2})\) as follows:

(i) When \(\gamma_{2}=\frac{4}{3}\),
\[M_{\rm c}:=M_{\rm ch}, \tag{2.5}\]
where \(M_{\rm ch}\) is the Chandrasekhar limit, that is, the total mass of the Lane-Emden steady solution \((\rho_{s}(|{\bf x}|),0)\) for \(P(\rho)=\kappa_{2}\rho^{\frac{4}{3}}\): \(\rho_{s}(|{\bf x}|)\) has compact support and is determined by the equations:
\[\nabla_{\bf x}P(\rho_{s}(|{\bf x}|))+\rho_{s}(|{\bf x}|)\nabla_{\bf x}\Phi({\bf x})=0,\qquad\Delta_{\bf x}\Phi({\bf x})=\rho_{s}(|{\bf x}|),\qquad P(\rho_{s}(|{\bf x}|))=\kappa_{2}(\rho_{s}(|{\bf x}|))^{\frac{4}{3}},\]
with the center density \(\rho_{s}(0)=\varrho\). It is well known that \(M_{\rm ch}\) is a uniform constant with respect to the center density \(\varrho\) (_cf._ [6]).

(ii) When \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3})\),
\[M_{\rm c}:=\sup_{\beta>0}M_{\rm c}(\beta) \tag{2.6}\]
with
\[(4-3\gamma_{2})\Big{(}\frac{B_{\beta}}{3(\gamma_{2}-1)}\Big{)}^{-\frac{3(\gamma_{2}-1)}{4-3\gamma_{2}}}M_{\rm c}(\beta)^{-\frac{5\gamma_{2}-6}{4-3\gamma_{2}}}-\omega_{3}^{-1}\beta M_{\rm c}(\beta)=E_{0}, \tag{2.7}\]
\[B_{\beta}:=\frac{2}{3}\omega_{4}^{-\frac{2}{3}}\omega_{3}^{\frac{4-3\gamma_{2}}{3(\gamma_{2}-1)}}(C_{\rm max}(\beta))^{\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}},\quad C_{\rm max}(\beta):=\sup_{\rho\geq 0}\big{(}\rho^{\gamma_{2}-1}(\beta+e(\rho))^{-1}\big{)}^{\frac{1}{5\gamma_{2}-6}}>0.
\tag{2.8}\]
It is clear from (2.6)-(2.8) that \(M_{\rm c}(\beta)\) is well-defined for \(\beta>0\) and \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3})\). Some useful properties of \(M_{\rm c}:=\sup_{\beta>0}M_{\rm c}(\beta)\) will be presented in Proposition 3.3 below. We also point out that \(M_{\rm c}\) in (2.5) is strictly larger than the one obtained in [9, (2.8)] for \(\gamma_{2}=\frac{4}{3}\) (_cf._ [17]).

For the spherically symmetric initial data \((\rho_{0},m_{0},\Phi_{0})(r)\) imposed in (1.11)-(1.13) satisfying (2.1)-(2.2), using similar arguments as in [9, Appendix A], we can construct a sequence of approximate initial data functions \((\rho_{0}^{\varepsilon},m_{0}^{\varepsilon},\Phi_{0}^{\varepsilon})(r)\) satisfying
\[\int_{0}^{\infty}\rho_{0}^{\varepsilon}(r)\,r^{2}{\rm d}r=\frac{M}{\omega_{3}},\qquad\Phi_{0r}^{\varepsilon}=\frac{1}{r^{2}}\int_{0}^{r}\rho_{0}^{\varepsilon}(z)\,z^{2}{\rm d}z,\]
\[E_{0}^{\varepsilon}:=\omega_{3}\int_{0}^{\infty}\Big{(}\frac{1}{2}\Big{|}\frac{m_{0}^{\varepsilon}}{\sqrt{\rho_{0}^{\varepsilon}}}\Big{|}^{2}+\rho_{0}^{\varepsilon}e(\rho_{0}^{\varepsilon})\Big{)}\,r^{2}{\rm d}r\leq C(E_{0}+1)<\infty, \tag{2.9}\]
\[E_{1}^{\varepsilon}:=\varepsilon^{2}\int_{0}^{\infty}\big{|}\partial_{r}\sqrt{\rho_{0}^{\varepsilon}(r)}\big{|}^{2}\,r^{2}{\rm d}r\leq C\varepsilon(M+1)<\infty.\]
Moreover, as \(\varepsilon\to 0\), \((E_{0}^{\varepsilon},E_{1}^{\varepsilon})\to(E_{0},0)\) and
\[(\rho_{0}^{\varepsilon},\rho_{0}^{\varepsilon}u_{0}^{\varepsilon})(r)\to(\rho_{0},\rho_{0}u_{0})(r)\qquad\text{in }L^{\tilde{q}}([0,\infty);r^{2}{\rm d}r)\times L^{1}([0,\infty);r^{2}{\rm d}r),\]
\[\Phi_{0r}^{\varepsilon}\to\Phi_{0r}\qquad\text{in }L^{2}([0,\infty);r^{2}{\rm d}r),\]
where \(\tilde{q}\in\{1,\gamma_{2}\}\). Furthermore, there exists \(\varepsilon_{0}\in(0,1]\) such that, for any \(\varepsilon\in(0,\varepsilon_{0}]\),
\[M<M_{\rm c}^{\varepsilon}\qquad\text{for }\gamma_{2}\in(\frac{6}{5},\frac{4}{3}], \tag{2.10}\]
where \(M_{\rm c}^{\varepsilon}\) is defined in (2.5)-(2.8) by replacing \(E_{0}\) by \(E_{0}^{\varepsilon}\).

Now we introduce the weak entropy pairs of the 1-D isentropic Euler system (_cf._ [10, 40]):
\[\begin{cases}\rho_{t}+m_{r}=0,\\ m_{t}+\big{(}\frac{m^{2}}{\rho}+P(\rho)\big{)}_{r}=0.\end{cases} \tag{2.11}\]
A pair of functions \((\eta(\rho,m),q(\rho,m))\) is called an entropy pair of the 1-D Euler system (2.11) if
\[\nabla q(\rho,m)=\nabla\eta(\rho,m)\,\nabla\Big{(}m,\,\frac{m^{2}}{\rho}+P(\rho)\Big{)}^{\top}. \tag{2.12}\]
Moreover, \(\eta(\rho,m)\) is called a weak entropy if \(\eta(\rho,m)|_{\rho=0}=0\), and a convex entropy if \(\nabla^{2}\eta(\rho,m)\geq 0\). The mechanical energy and energy flux pair is defined as
\[\eta^{*}(\rho,m)=\frac{1}{2}\frac{m^{2}}{\rho}+\rho e(\rho),\qquad q^{*}(\rho,m)=\frac{1}{2}\frac{m^{3}}{\rho^{2}}+m(\rho e(\rho))^{\prime}, \tag{2.13}\]
which is a convex weak entropy pair. From (2.12), any entropy satisfies
\[\eta_{\rho\rho}-\frac{P^{\prime}(\rho)}{\rho^{2}}\eta_{uu}=0 \tag{2.14}\]
with \(u=\frac{m}{\rho}\). It is known from [10, 11, 46, 47] that any regular weak entropy can be generated by the convolution of a smooth function \(\psi(s)\) with a fundamental solution \(\chi(\rho,u,s)\) of the entropy equation (2.14), _i.e._,
\[\eta^{\psi}(\rho,u)=\int_{\mathbb{R}}\chi(\rho,u,s)\psi(s)\,\mathrm{d}s. \tag{2.15}\]
The corresponding entropy flux is generated from the flux kernel \(\sigma(\rho,u,s)\) (see (4.56)), _i.e._,
\[q^{\psi}(\rho,u)=\int_{\mathbb{R}}\sigma(\rho,u,s)\psi(s)\,\mathrm{d}s.
\tag{2.16}\]
We first consider the Cauchy problem of CNSPEs (1.10) with the approximate initial data:
\[(\rho,\mathcal{M},\Phi)|_{t=0}=(\rho_{0}^{\varepsilon},\mathcal{M}_{0}^{\varepsilon},\Phi_{0}^{\varepsilon})(\mathbf{x}):=(\rho_{0}^{\varepsilon}(r),m_{0}^{\varepsilon}(r)\frac{\mathbf{x}}{r},\Phi_{0}^{\varepsilon}(r)), \tag{2.17}\]
subject to the far-field condition:
\[\Phi^{\varepsilon}(t,\mathbf{x})\longrightarrow 0\qquad\text{as }|\mathbf{x}|\rightarrow\infty. \tag{2.18}\]
For concreteness, we take \(\varepsilon\in(0,1]\) and the viscosity coefficients \((\mu,\lambda)=(\rho,0)\) in (1.10).

**Definition 2.1**.: _A triple \((\rho^{\varepsilon},\mathcal{M}^{\varepsilon},\Phi^{\varepsilon})(t,\mathbf{x})\) is said to be a weak solution of the Cauchy problem (1.10) and (2.17) if_

* \(\rho^{\varepsilon}(t,\mathbf{x})\geq 0\) _and_ \((\mathcal{M}^{\varepsilon},\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}})(t,\mathbf{x})=\mathbf{0}\) _a.e. on the vacuum states_ \(\{(t,\mathbf{x})\,:\,\rho^{\varepsilon}(t,\mathbf{x})=0\}\)_,_
\[\rho^{\varepsilon}\in L^{\infty}(0,T;L^{\gamma_{2}}(\mathbb{R}^{3})),\quad\nabla\sqrt{\rho^{\varepsilon}}\in L^{\infty}(0,T;L^{2}(\mathbb{R}^{3})),\]
\[\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\in L^{\infty}(0,T;L^{2}(\mathbb{R}^{3})),\quad\Phi^{\varepsilon}\in L^{\infty}(0,T;L^{6}(\mathbb{R}^{3})),\quad\nabla\Phi^{\varepsilon}\in L^{\infty}(0,T;L^{2}(\mathbb{R}^{3})).\]
* _For any_ \(t_{2}\geq t_{1}\geq 0\) _and any_ \(\zeta(t,\mathbf{x})\in C^{1}_{0}([0,\infty)\times\mathbb{R}^{3})\)_, the mass equation_ \((1.10)_{1}\) _holds in the sense_:
\[\int_{\mathbb{R}^{3}}(\rho^{\varepsilon}\zeta)(t_{2},\mathbf{x})\,\mathrm{d}\mathbf{x}-\int_{\mathbb{R}^{3}}(\rho^{\varepsilon}\zeta)(t_{1},\mathbf{x})\,\mathrm{d}\mathbf{x}=\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{3}}(\rho^{\varepsilon}\zeta_{t}+\mathcal{M}^{\varepsilon}\cdot\nabla\zeta)(t,\mathbf{x})\,\mathrm{d}\mathbf{x}\mathrm{d}t.\]
* _For any_ \(\Psi=(\Psi_{1},\Psi_{2},\Psi_{3})(t,\mathbf{x})\in(C^{2}_{0}([0,\infty)\times\mathbb{R}^{3}))^{3}\)_, the momentum equations_ \((1.10)_{2}\) _hold in the sense_:
\[\int_{\mathbb{R}^{4}_{+}}\Big{(}\mathcal{M}^{\varepsilon}\cdot\Psi_{t}+\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot\big{(}\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot\nabla\big{)}\Psi+P(\rho^{\varepsilon})\nabla\cdot\Psi\Big{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t+\int_{\mathbb{R}^{3}}\mathcal{M}_{0}^{\varepsilon}(\mathbf{x})\cdot\Psi(0,\mathbf{x})\,\mathrm{d}\mathbf{x}\]
\[=-\varepsilon\int_{\mathbb{R}^{4}_{+}}\Big{(}\frac{1}{2}\mathcal{M}^{\varepsilon}\cdot\big{(}\Delta\Psi+\nabla(\nabla\cdot\Psi)\big{)}+\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot\big{(}\nabla\sqrt{\rho^{\varepsilon}}\cdot\nabla\big{)}\Psi\Big{)}\,\mathrm{d}\mathbf{x}\mathrm{d}t\]
\[\quad-\varepsilon\int_{\mathbb{R}^{4}_{+}}\nabla\sqrt{\rho^{\varepsilon}}\cdot\big{(}\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot\nabla\big{)}\Psi\,\mathrm{d}\mathbf{x}\mathrm{d}t+\int_{\mathbb{R}^{4}_{+}}\big{(}\rho^{\varepsilon}\nabla\Phi^{\varepsilon}\cdot\Psi\big{)}(t,\mathbf{x})\,\mathrm{d}\mathbf{x}\mathrm{d}t.\]
* _For any_ \(t\geq 0\) _and_ \(\xi(\mathbf{x})\in C^{1}_{0}(\mathbb{R}^{3})\)_,_
\[\int_{\mathbb{R}^{3}}\nabla\Phi^{\varepsilon}(t,\mathbf{x})\cdot\nabla\xi(\mathbf{x})\,\mathrm{d}\mathbf{x}=-\int_{\mathbb{R}^{3}}\rho^{\varepsilon}(t,\mathbf{x})\xi(\mathbf{x})\,\mathrm{d}\mathbf{x}.\]

Then we have
**Theorem 2.1** (Global existence of spherically symmetric solutions for CNSPEs).: _Assume that the initial data function \((\rho_{0}^{\varepsilon},\mathcal{M}_{0}^{\varepsilon},\Phi_{0}^{\varepsilon})(\mathbf{x})\) is given in (2.17)-(2.18) with \((\rho_{0}^{\varepsilon},m_{0}^{\varepsilon},\Phi_{0}^{\varepsilon})(r)\) satisfying (2.9)-(2.10). Then, for each fixed \(\varepsilon\in(0,1]\), there exists a global weak solution \((\rho^{\varepsilon},\mathcal{M}^{\varepsilon},\Phi^{\varepsilon})(t,\mathbf{x})\) of the Cauchy problem (1.10) and (2.17)-(2.18) in the sense of_ Definition 2.1 _with the following spherical symmetry form_:
\[(\rho^{\varepsilon},\mathcal{M}^{\varepsilon},\Phi^{\varepsilon})(t,{\bf x})=(\rho^{\varepsilon}(t,r),m^{\varepsilon}(t,r)\frac{{\bf x}}{r},\Phi^{\varepsilon}(t,r))\qquad\text{for }r=|{\bf x}|, \tag{2.19}\]
_such that_
\[\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\Big{|}^{2}+\rho^{\varepsilon}e(\rho^{\varepsilon})-\frac{1}{2}|\nabla\Phi^{\varepsilon}|^{2}\Big{)}\,{\rm d}{\bf x}\leq\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}^{\varepsilon}_{0}}{\sqrt{\rho^{\varepsilon}_{0}}}\Big{|}^{2}+\rho^{\varepsilon}_{0}e(\rho^{\varepsilon}_{0})-\frac{1}{2}|\nabla\Phi^{\varepsilon}_{0}|^{2}\Big{)}\,{\rm d}{\bf x}\quad\text{for }t\geq 0. \tag{2.20}\]
_Furthermore, for \((\rho^{\varepsilon},m^{\varepsilon},\Phi^{\varepsilon})(t,r)\), there exists a measurable function \(u^{\varepsilon}(t,r)\) with_
\[u^{\varepsilon}(t,r):=\frac{m^{\varepsilon}(t,r)}{\rho^{\varepsilon}(t,r)}\qquad\text{a.e. on }\big{\{}(t,r)\,:\,\rho^{\varepsilon}(t,r)\neq 0\big{\}},\]
_and \(u^{\varepsilon}(t,r):=0\) a.e. on \(\big{\{}(t,r)\,:\,\rho^{\varepsilon}(t,r)=0\text{ or }r=0\big{\}}\) such that \(m^{\varepsilon}(t,r)=(\rho^{\varepsilon}u^{\varepsilon})(t,r)\) a.e. on \(\mathbb{R}^{2}_{+}\). In addition, the following properties hold_:
\[\text{(i)}\,\,\int_{0}^{\infty}\rho^{\varepsilon}(t,r)\,r^{2}{\rm d}r=\int_{0}^{\infty}\rho^{\varepsilon}_{0}(r)\,r^{2}{\rm d}r=\frac{M}{\omega_{3}}\qquad\text{for }t\geq 0, \tag{2.21}\]
\[\text{(ii)}\,\,\int_{0}^{\infty}\eta^{*}(\rho^{\varepsilon},m^{\varepsilon})(t,r)\,r^{2}{\rm d}r+\varepsilon\int_{\mathbb{R}^{2}_{+}}(\rho^{\varepsilon}|u^{\varepsilon}|^{2})(t,r)\,r^{2}{\rm d}r{\rm d}t+\|\Phi^{\varepsilon}\|_{L^{6}(\mathbb{R}^{3})}+\|\nabla\Phi^{\varepsilon}\|_{L^{2}(\mathbb{R}^{3})}\]
\[\qquad+\int_{0}^{\infty}\Big{(}\int_{0}^{r}\rho^{\varepsilon}(t,z)\,z^{2}{\rm d}z\Big{)}\rho^{\varepsilon}(t,r)\,r{\rm d}r\leq C(M,E_{0})\qquad\text{for }t\geq 0, \tag{2.22}\]
\[\text{(iii)}\,\,\sup_{t\in[0,T]}\varepsilon^{2}\int_{0}^{\infty}\big{|}\big{(}\sqrt{\rho^{\varepsilon}}\big{)}_{r}\big{|}^{2}\ r^{2}{\rm d}r+\varepsilon\int_{0}^{T}\int_{0}^{\infty}\frac{P^{\prime}(\rho^{\varepsilon})}{\rho^{\varepsilon}}\,|\rho^{\varepsilon}_{r}|^{2}\ r^{2}{\rm d}r{\rm d}t\leq C(M,E_{0},T), \tag{2.23}\]
\[\text{(iv)}\,\,\int_{0}^{T}\int_{d}^{D}\rho^{\varepsilon}\,|u^{\varepsilon}|^{3}\ r^{2}{\rm d}r{\rm d}t\leq C(d,D,M,E_{0},T), \tag{2.24}\]
\[\text{(v)}\,\,\int_{0}^{T}\int_{d}^{\infty}(\rho^{\varepsilon})^{\gamma_{2}+1}\,r^{2}{\rm d}r{\rm d}t\leq C(d,M,E_{0},T), \tag{2.25}\]
_for any \(T\in\mathbb{R}_{+}\) and interval \([d,D]\Subset(0,\infty)\), where \(C(M,E_{0})\), \(C(M,E_{0},T)\), and \(C(d,D,M,E_{0},T)\) are universal positive constants independent of \(\varepsilon\).
In addition, for \(\varepsilon\in(0,1]\),_
\[\partial_{t}\eta^{\psi}(\rho^{\varepsilon},m^{\varepsilon})+\partial_{r}q^{\psi}(\rho^{\varepsilon},m^{\varepsilon})\quad\text{ is compact in }W^{-1,p}_{\rm loc}\left(\mathbb{R}^{2}_{+}\right) \tag{2.26}\]
_for any \(p\in[1,2)\), where \(\psi(s)\) is any smooth function with compact support on \(\mathbb{R}\)._

Now we introduce the notion of finite-energy weak solutions of CEPEs (1.1).

**Definition 2.2**.: _A measurable vector function \((\rho,\mathcal{M},\Phi)\) is said to be a finite-energy solution of the Cauchy problem (1.1) and (1.8)-(1.9) provided that_

* \(\rho(t,{\bf x})\geq 0\) _a.e., and_ \((\mathcal{M},\frac{\mathcal{M}}{\sqrt{\rho}})(t,{\bf x})={\bf 0}\) _a.e. on the vacuum states_ \(\{(t,{\bf x})\,:\,\rho(t,{\bf x})=0\}\)_._
* _For a.e._ \(t>0\)_, the total energy is finite_:
\[\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}}{\sqrt{\rho}}\Big{|}^{2}+\rho e(\rho)\Big{)}(t,{\bf x})\,{\rm d}{\bf x}+\int_{\mathbb{R}^{3}}|\nabla\Phi(t,{\bf x})|^{2}\,{\rm d}{\bf x}<\infty. \tag{2.27}\]
* _For any_ \(\zeta(t,{\bf x})\in C^{1}_{0}([0,\infty)\times\mathbb{R}^{3})\)_,_
\[\int_{\mathbb{R}^{4}_{+}}(\rho\zeta_{t}+\mathcal{M}\cdot\nabla\zeta)\,{\rm d}{\bf x}{\rm d}t+\int_{\mathbb{R}^{3}}(\rho_{0}\zeta)(0,{\bf x})\,{\rm d}{\bf x}=0. \tag{2.28}\]
* _For all_ \(\Psi(t,\mathbf{x})=(\Psi_{1},\Psi_{2},\Psi_{3})(t,\mathbf{x})\in(C^{1}_{0}([0,\infty)\times\mathbb{R}^{3}))^{3}\)_,_
\[\int_{\mathbb{R}^{4}_{+}}\left(\mathcal{M}\cdot\partial_{t}\Psi+\frac{\mathcal{M}}{\sqrt{\rho}}\cdot(\frac{\mathcal{M}}{\sqrt{\rho}}\cdot\nabla)\Psi+P(\rho)\,\nabla\cdot\Psi\right)\mathrm{d}\mathbf{x}\mathrm{d}t+\int_{\mathbb{R}^{3}}\mathcal{M}_{0}(\mathbf{x})\cdot\Psi(0,\mathbf{x})\,\mathrm{d}\mathbf{x}\]
\[=\int_{\mathbb{R}^{4}_{+}}\left(\rho\nabla\Phi\cdot\Psi\right)(t,\mathbf{x})\,\mathrm{d}\mathbf{x}\mathrm{d}t. \tag{2.29}\]
* _For all_ \(\xi(\mathbf{x})\in C^{1}_{0}(\mathbb{R}^{3})\)_,_
\[\int_{\mathbb{R}^{3}}\nabla\Phi(t,\mathbf{x})\cdot\nabla\xi(\mathbf{x})\,\mathrm{d}\mathbf{x}=-\int_{\mathbb{R}^{3}}\rho(t,\mathbf{x})\xi(\mathbf{x})\,\mathrm{d}\mathbf{x}\qquad\text{for a.e. }t\geq 0. \tag{2.30}\]

To establish the strong convergence of the inviscid limit \(\varepsilon\to 0\) of the solutions \((\rho^{\varepsilon},\mathcal{M}^{\varepsilon},\Phi^{\varepsilon})(t,\mathbf{x})\) of CNSPEs (1.10) obtained in Theorem 2.1, we develop the following \(L^{p}\) compensated compactness framework for the 1-D Euler equations (2.11) with the general pressure law (1.4)-(1.6), in which the restriction \(\gamma_{2}\in(\frac{6}{5},\gamma_{1}]\) in (1.6) can be relaxed to \(\gamma_{2}\in(1,\gamma_{1}]\).

**Theorem 2.2** (\(L^{p}\) compensated compactness framework).: _Let \((\rho^{\varepsilon},m^{\varepsilon})(t,r)=(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})(t,r)\) be a sequence of measurable functions with \(\rho^{\varepsilon}\geq 0\) a.e.
on \(\mathbb{R}^{2}_{+}\) satisfying the following two conditions_:

* _For any_ \(T>0\) _and_ \(K\Subset\mathbb{R}_{+}\)_, there exists_ \(C(K,T)>0\) _independent of_ \(\varepsilon\) _such that_
\[\int_{0}^{T}\int_{K}\left((\rho^{\varepsilon})^{\gamma_{2}+1}+\rho^{\varepsilon}|u^{\varepsilon}|^{3}\right)\mathrm{d}r\mathrm{d}t\leq C(K,T).\]
* _For any entropy pair_ \((\eta^{\psi},q^{\psi})\) _defined in (2.15)-(2.16) with any smooth function_ \(\psi(s)\) _of compact support on_ \(\mathbb{R}\)_,_
\[\partial_{t}\eta^{\psi}(\rho^{\varepsilon},m^{\varepsilon})+\partial_{r}q^{\psi}(\rho^{\varepsilon},m^{\varepsilon})\qquad\text{is compact in }W^{-1,1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}).\]

_Then there exists a subsequence \((\)still denoted\()\) \((\rho^{\varepsilon},m^{\varepsilon})(t,r)\) and a vector function \((\rho,m)(t,r)\) such that, as \(\varepsilon\to 0\),_
\[\begin{array}{ll}&\rho^{\varepsilon}(t,r)\to\rho(t,r)\text{ in }L^{q_{1}}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\qquad\text{ for }q_{1}\in[1,\gamma_{2}+1),\\ &m^{\varepsilon}(t,r)\to m(t,r)\text{ in }L^{q_{2}}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\qquad\text{ for }q_{2}\in[1,\frac{3(\gamma_{2}+1)}{\gamma_{2}+3}),\end{array} \tag{2.31}\]
_where \(L^{p}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) represents \(L^{p}([0,T]\times K)\) for any \(T>0\) and compact set \(K\Subset\mathbb{R}_{+}\)._

Now we are ready to state our main theorem.

**Theorem 2.3** (Global existence of finite-energy solutions).: _Let the pressure function \(P(\rho)\) satisfy (1.4)-(1.6), and let the spherically symmetric initial data \((\rho_{0},\mathcal{M}_{0},\Phi_{0})(\mathbf{x})\) be given in (1.12)-(1.13) with \((\rho_{0},m_{0},\Phi_{0})(r)\) satisfying (2.1)-(2.2) and (2.4). Assume that \(\gamma_{2}>\frac{4}{3}\), or \(M<M_{\mathrm{c}}\) when \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3}]\). Then there exists a global finite-energy solution \((\rho,\mathcal{M},\Phi)(t,\mathbf{x})\) of (1.1) and (1.12)-(1.13) with spherical symmetry form (1.11) in the sense of_ Definition 2.2_._

**Remark 2.1**.: _For the steady gaseous star problem, there is no white dwarf star if the total mass is larger than the so-called Chandrasekhar limit when \(\gamma\in(\frac{6}{5},\frac{4}{3}]\); see [6]._ Theorem 2.3 _requires a similar restriction on the total mass when \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3}]\) for non-steady gaseous stars. Moreover, in view of (2.5), for the non-steady white dwarf star, the critical mass is exactly the Chandrasekhar limit in the case that \(P(\rho)=\kappa_{2}\rho^{\frac{4}{3}}\). It would be interesting to analyze whether the critical mass defined in (2.6)-(2.8) for \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3})\) is optimal._

**Remark 2.2**.: Theorem 2.3 _can be extended to the 3-D compressible Euler equations, i.e., (1.1) with \(\Phi=0\). Moreover, the inviscid limit from the compressible Navier-Stokes equations to the compressible Euler equations with far-field vacuum can also be justified._

**Remark 2.3**.: Theorem 2.3 _also holds for the plasma case, i.e., \(k_{g}=-1\) in (1.1), by a similar proof. In this case, the restriction \(M<M_{\rm c}\) can be removed, and the condition \(\gamma_{2}>\frac{6}{5}\) can be relaxed to \(\gamma_{2}>1\) if the additional assumption \(\rho_{0}\in L^{\frac{6}{5}}(\mathbb{R}^{3})\) is imposed. We omit the proof in this paper for brevity and, instead, refer the reader to [9] for the details._

## 3. Properties of the General Pressure Law and Related Internal Energy
In this section, we present some useful estimates involving the general pressure \(P(\rho)\) with (1.4)-(1.6) and the corresponding internal energy \(e(\rho)\), which are used in the subsequent development. Denote by \(c(\rho):=\sqrt{P^{\prime}(\rho)}\) the speed of sound, and
\[k(\rho):=\int_{0}^{\rho}\frac{\sqrt{P^{\prime}(y)}}{y}\,\mathrm{d}y. \tag{3.1}\]
By direct calculation, we can obtain the following asymptotic behaviors of \(P(\rho)\), \(e(\rho)\), and \(k(\rho)\).

**Lemma 3.1**.: _Assume that \(\rho_{*}\) given in (1.5) is small enough and \(\rho^{*}\) given in (1.6) is large enough such that the following estimates hold_:

* _When_ \(\rho\in(0,\rho_{*}]\)_,_
\[\begin{cases}\underline{\kappa}_{1}\rho^{\gamma_{1}}\leq P(\rho)\leq\bar{\kappa}_{1}\rho^{\gamma_{1}},\\ \underline{\kappa}_{1}\gamma_{1}\rho^{\gamma_{1}-1}\leq P^{\prime}(\rho)\leq\bar{\kappa}_{1}\gamma_{1}\rho^{\gamma_{1}-1},\\ \underline{\kappa}_{1}\gamma_{1}(\gamma_{1}-1)\rho^{\gamma_{1}-2}\leq P^{\prime\prime}(\rho)\leq\bar{\kappa}_{1}\gamma_{1}(\gamma_{1}-1)\rho^{\gamma_{1}-2},\end{cases} \tag{3.2}\]
_and when_ \(\rho\in[\rho^{*},\infty)\)_,_
\[\begin{cases}\underline{\kappa}_{2}\rho^{\gamma_{2}}\leq P(\rho)\leq\bar{\kappa}_{2}\rho^{\gamma_{2}},\\ \underline{\kappa}_{2}\gamma_{2}\rho^{\gamma_{2}-1}\leq P^{\prime}(\rho)\leq\bar{\kappa}_{2}\gamma_{2}\rho^{\gamma_{2}-1},\\ \underline{\kappa}_{2}\gamma_{2}(\gamma_{2}-1)\rho^{\gamma_{2}-2}\leq P^{\prime\prime}(\rho)\leq\bar{\kappa}_{2}\gamma_{2}(\gamma_{2}-1)\rho^{\gamma_{2}-2},\end{cases} \tag{3.3}\]
_where we have denoted_ \(\underline{\kappa}_{i}:=(1-\mathfrak{a}_{0})\kappa_{i}\) _and_ \(\bar{\kappa}_{i}:=(1+\mathfrak{a}_{0})\kappa_{i}\) _with_ \(\mathfrak{a}_{0}=\frac{3-\gamma_{1}}{2(\gamma_{1}+1)}\) _and_ \(i=1,2\)_._
* _For_ \(e(\rho)\) _and_ \(k(\rho)\)_, there exists_ \(C>0\) _depending on_ \((\gamma_{1},\gamma_{2},\kappa_{1},\kappa_{2},\rho_{*},\rho^{*})\) _such that_
\[C^{-1}\rho^{\gamma_{1}-1}\leq e(\rho)\leq C\rho^{\gamma_{1}-1},\quad C^{-1}\rho^{\gamma_{1}-2}\leq e^{\prime}(\rho)\leq C\rho^{\gamma_{1}-2}\quad\text{ for }\rho\in(0,\rho_{*}], \tag{3.4}\]
\[C^{-1}\rho^{\gamma_{2}-1}\leq e(\rho)\leq C\rho^{\gamma_{2}-1},\quad C^{-1}\rho^{\gamma_{2}-2}\leq e^{\prime}(\rho)\leq C\rho^{\gamma_{2}-2}\quad\text{ for }\rho\in[\rho^{*},\infty), \tag{3.5}\]
_and, for_ \(i=0,1\)_,_
\[C^{-1}\rho^{\theta_{1}-i}\leq k^{(i)}(\rho)\leq C\rho^{\theta_{1}-i},\ C^{-1}\rho^{\theta_{1}-2}\leq|k^{\prime\prime}(\rho)|\leq C\rho^{\theta_{1}-2}\ \text{ for }\rho\in(0,\rho_{*}], \tag{3.6}\]
\[C^{-1}\rho^{\theta_{2}-i}\leq k^{(i)}(\rho)\leq C\rho^{\theta_{2}-i},\ C^{-1}\rho^{\theta_{2}-2}\leq|k^{\prime\prime}(\rho)|\leq C\rho^{\theta_{2}-2}\ \text{ for }\rho\in[\rho^{*},\infty), \tag{3.7}\]
_where_ \(\theta_{1}=\frac{\gamma_{1}-1}{2}\) _and_ \(\theta_{2}=\frac{\gamma_{2}-1}{2}\)_._

It follows from (3.2)-(3.3) that
\[\Big{(}\frac{(3\gamma_{1}-1)(\gamma_{1}-1)}{2(5+\gamma_{1})}\Big{)}2P^{\prime}(\rho)\leq\rho P^{\prime\prime}(\rho)\leq\Big{(}\frac{(5+\gamma_{1})(\gamma_{1}-1)}{2(3\gamma_{1}-1)}\Big{)}2P^{\prime}(\rho)<2P^{\prime}(\rho) \tag{3.8}\]
for \(\rho\in(0,\rho_{*}]\cup[\rho^{*},\infty)\). For later use, we denote
\[\nu=1-\frac{(3\gamma_{1}-1)(\gamma_{1}-1)}{2(5+\gamma_{1})}=\frac{3(3-\gamma_{1})(\gamma_{1}+1)}{2(5+\gamma_{1})}<1, \tag{3.9}\]
\[d(\rho):=2+\frac{\rho k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}.
\tag{3.10}\]
Then it follows from (3.8) that
\[0<\Big{|}\frac{\rho k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\Big{|}=1-\frac{\rho P^{\prime\prime}(\rho)}{2P^{\prime}(\rho)}\leq\nu<1\qquad\text{for }\rho\in(0,\rho_{*}]\cup[\rho^{*},\infty). \tag{3.11}\]
Motivated by [62], we have

**Lemma 3.2**.: \(0<d(\rho)\leq C\) _for all \(\rho>0\), and_
\[\big{|}d(\rho)-(1+\theta_{2})\big{|}\leq C\rho^{-\epsilon}\qquad\text{for }\rho\gg 1. \tag{3.12}\]

**Proof.** It follows from (1.4) that \(d(\rho)=1+\frac{\rho P^{\prime\prime}(\rho)}{2P^{\prime}(\rho)}>0.\) Moreover, by (3.11), it is direct to see that \(d(\rho)\) is bounded. Using (1.6), we see that, for \(\rho\geq\rho^{*}\),
\[\begin{cases}P^{\prime}(\rho)=\gamma_{2}\kappa_{2}\rho^{\gamma_{2}-1}\big{(}1+\mathcal{P}_{2}(\rho)+\rho\mathcal{P}_{2}^{\prime}(\rho)\big{)},\\ P^{\prime\prime}(\rho)=\gamma_{2}(\gamma_{2}-1)\kappa_{2}\rho^{\gamma_{2}-2}\big{(}1+\mathcal{P}_{2}(\rho)+3\rho\mathcal{P}_{2}^{\prime}(\rho)+\rho^{2}\mathcal{P}_{2}^{\prime\prime}(\rho)\big{)}.\end{cases}\]
Then, for \(\rho\geq\max\{\rho^{*},(8C^{*})^{1/\epsilon}\}\),
\[\big{|}d(\rho)-(1+\theta_{2})\big{|}=\Big{|}\frac{\rho P^{\prime\prime}(\rho)}{2P^{\prime}(\rho)}-\theta_{2}\Big{|}=\Big{|}\frac{\theta_{2}\big{(}2\rho\mathcal{P}_{2}^{\prime}(\rho)+\rho^{2}\mathcal{P}_{2}^{\prime\prime}(\rho)\big{)}}{1+\mathcal{P}_{2}(\rho)+3\rho\mathcal{P}_{2}^{\prime}(\rho)+\rho^{2}\mathcal{P}_{2}^{\prime\prime}(\rho)}\Big{|}\leq C(\theta_{2},C^{*})\rho^{-\epsilon},\]
where we have used that \(|\mathcal{P}_{2}^{(j)}(\rho)|\leq C^{*}\rho^{-\epsilon-j}\) for \(j\in\{0,1,2\}\) in the last inequality. \(\square\)

Hereafter, for simplicity of notation, we assume that (3.12) holds for \(\rho\geq\rho^{*}\). Furthermore, using (1.6) and \(e^{\prime}(\rho)=\frac{P(\rho)}{\rho^{2}}\), we obtain
\[e(\rho)=\frac{\kappa_{2}}{\gamma_{2}-1}\big{(}\rho^{\gamma_{2}-1}-(\rho^{*})^{\gamma_{2}-1}\big{)}+\kappa_{2}\int_{\rho^{*}}^{\rho}s^{\gamma_{2}-2}\mathcal{P}_{2}(s)\,\mathrm{d}s+\int_{0}^{\rho^{*}}\frac{P(s)}{s^{2}}\,\mathrm{d}s\qquad\text{ for }\rho\geq\rho^{*}, \tag{3.13}\]
which, with \(e(0)=0\) and \(|\mathcal{P}_{2}(\rho)|\leq C^{*}\rho^{-\epsilon}\), yields that, for any parameter \(\beta>0\),
\[\lim_{\rho\to 0}\rho^{\frac{\gamma_{2}-1}{5\gamma_{2}-6}}(\beta+e(\rho))^{-\frac{1}{5\gamma_{2}-6}}=0,\qquad\lim_{\rho\to\infty}\rho^{\frac{\gamma_{2}-1}{5\gamma_{2}-6}}(\beta+e(\rho))^{-\frac{1}{5\gamma_{2}-6}}=\Big{(}\frac{\kappa_{2}}{\gamma_{2}-1}\Big{)}^{-\frac{1}{5\gamma_{2}-6}}.\]
Then we see that
\[C_{\max}(\beta):=\sup_{\rho\geq 0}\rho^{\frac{\gamma_{2}-1}{5\gamma_{2}-6}}(\beta+e(\rho))^{-\frac{1}{5\gamma_{2}-6}}\in\Big{[}\Big{(}\frac{\kappa_{2}}{\gamma_{2}-1}\Big{)}^{-\frac{1}{5\gamma_{2}-6}},\,\infty\Big{)}, \tag{3.14}\]
\[\rho^{\frac{6(\gamma_{2}-1)}{5\gamma_{2}-6}}(\beta\rho+\rho e(\rho))^{-\frac{1}{5\gamma_{2}-6}}\leq C_{\max}(\beta)\rho\qquad\text{for }\rho>0. \tag{3.15}\]
With a careful analysis of \(C_{\max}(\beta)\), we obtain some estimates on \(M_{\rm c}\) defined in (2.6).

**Proposition 3.3**.: _Let \(h(\rho)=P(\rho)\rho^{-1}-(\gamma_{2}-1)e(\rho)\), and let \(\widetilde{M}_{\rm c}\) be the critical mass obtained in [9, (2.8)] for polytropic gases in (1.2) with \(\gamma\in(\frac{6}{5},\frac{4}{3})\). Then \(M_{\rm c}\) defined in (2.6)-(2.8) satisfies that \(M_{\rm c}\leq\widetilde{M}_{\rm c}\); in particular, \(M_{\rm c}<\widetilde{M}_{\rm c}\) when \(h^{\prime}(\rho)>0\).
For example,_
\[P_{\delta}(\rho):=\int_{0}^{\rho^{\frac{1}{3}}}\frac{s^{4}}{\sqrt{\delta+s^{2+\epsilon_{0}}}}\,\mathrm{d}s\qquad\text{for }\delta>0\text{ and }\epsilon_{0}\in(0,\frac{4}{5}) \tag{3.16}\]
_satisfies conditions (1.4)-(1.6). If \(M_{\rm c}(\delta)\) is the critical mass defined in (2.6)-(2.8) for the pressure \(P_{\delta}(\rho)\), then \(M_{\rm c}(\delta)<\widetilde{M}_{\rm c}\) for any \(\delta>0\)._

**Proof.** For \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3})\), it follows from (2.6)-(2.8) and (3.14) that, for any fixed \(\beta>0\),
\[M_{\rm c}(\beta)=\Big{(}\frac{2}{9(\gamma_{2}-1)}(C_{\max}(\beta))^{\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}}\omega_{3}^{-\frac{4-3\gamma_{2}}{3(\gamma_{2}-1)}}\omega_{4}^{-\frac{2}{3}}\Big{)}^{-\frac{3(\gamma_{2}-1)}{5\gamma_{2}-6}}\Big{(}\frac{E_{0}+\omega_{3}^{-1}\beta M_{\rm c}(\beta)}{4-3\gamma_{2}}\Big{)}^{-\frac{4-3\gamma_{2}}{5\gamma_{2}-6}}\]
\[\leq\Big{(}\frac{2}{9(\gamma_{2}-1)}\Big{(}\frac{\kappa_{2}}{\gamma_{2}-1}\Big{)}^{-\frac{1}{3(\gamma_{2}-1)}}\omega_{3}^{-\frac{4-3\gamma_{2}}{3(\gamma_{2}-1)}}\omega_{4}^{-\frac{2}{3}}\Big{)}^{-\frac{3(\gamma_{2}-1)}{5\gamma_{2}-6}}\Big{(}\frac{E_{0}}{4-3\gamma_{2}}\Big{)}^{-\frac{4-3\gamma_{2}}{5\gamma_{2}-6}}=\widetilde{M}_{\rm c}, \tag{3.17}\]
which yields \(M_{\rm c}\leq\widetilde{M}_{\rm c}\).

Let \(g(\rho):=\rho^{\frac{\gamma_{2}-1}{5\gamma_{2}-6}}(\beta+e(\rho))^{-\frac{1}{5\gamma_{2}-6}}\). Then \(C_{\max}(\beta)=\sup_{\rho\geq 0}g(\rho)\). Since \(e^{\prime}(\rho)=\frac{P(\rho)}{\rho^{2}}\), a direct calculation shows that
\[g^{\prime}(\rho)=\frac{1}{5\gamma_{2}-6}\rho^{\frac{5-4\gamma_{2}}{5\gamma_{2}-6}}\big{(}\beta+e(\rho)\big{)}^{-\frac{5(\gamma_{2}-1)}{5\gamma_{2}-6}}\big{(}(\gamma_{2}-1)\beta-h(\rho)\big{)}. \tag{3.18}\]
If \(h^{\prime}(\rho)>0\) for all \(\rho>0\), then \(h(\rho)\geq h(0)=0\). Let \(K_{0}:=\sup_{\rho>0}h(\rho)>0\). For \(\beta\) small enough such that \(0<\beta<\frac{K_{0}}{\gamma_{2}-1}\), there exists a unique point \(\rho_{\beta}>0\) such that \(g^{\prime}(\rho_{\beta})=0\), _i.e._,
\[h(\rho_{\beta})=(\gamma_{2}-1)\beta, \tag{3.19}\]
and \(C_{\max}(\beta)=g(\rho_{\beta})=(\gamma_{2}-1)^{\frac{1}{5\gamma_{2}-6}}\big{(}P(\rho_{\beta})\rho_{\beta}^{-\gamma_{2}}\big{)}^{-\frac{1}{5\gamma_{2}-6}}.\) Moreover, it follows from (3.19) that \(\lim_{\beta\to 0+}\rho_{\beta}=0\).
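It is instructive to compare with the borderline polytropic case: for \(P(\rho)=\kappa_{2}\rho^{\gamma_{2}}\) in (1.2), one has \(e(\rho)=\frac{\kappa_{2}}{\gamma_{2}-1}\rho^{\gamma_{2}-1}\) from \(e^{\prime}(\rho)=P(\rho)/\rho^{2}\), so that
\[h(\rho)=P(\rho)\rho^{-1}-(\gamma_{2}-1)e(\rho)=\kappa_{2}\rho^{\gamma_{2}-1}-\kappa_{2}\rho^{\gamma_{2}-1}\equiv 0.\]
In that case, \(g^{\prime}(\rho)>0\) for all \(\rho>0\) by (3.18), the supremum \(C_{\max}(\beta)\) is attained only in the limit \(\rho\to\infty\) with value \(\big{(}\frac{\kappa_{2}}{\gamma_{2}-1}\big{)}^{-\frac{1}{5\gamma_{2}-6}}\) for every \(\beta>0\), and the inequality in (3.17) is saturated as \(\beta\to 0+\). Thus, the condition \(h^{\prime}(\rho)>0\) quantifies the deviation of \(P(\rho)\) from the exact polytropic law.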
Thus, we see from (1.5) that
\[\lim_{\beta\to 0+}C_{\max}(\beta)=(\gamma_{2}-1)^{\frac{1}{5\gamma_{2}-6}}\lim_{\rho_{\beta}\to 0}\big{(}P(\rho_{\beta})\rho_{\beta}^{-\gamma_{2}}\big{)}^{-\frac{1}{5\gamma_{2}-6}}=\infty,\]
which implies that \(M_{\rm c}(\beta)\) cannot approach \(\widetilde{M}_{\rm c}\) as \(\beta\to 0+\). Combining this with the strict inequality \(C_{\max}(\beta)=g(\rho_{\beta})>\big{(}\frac{\kappa_{2}}{\gamma_{2}-1}\big{)}^{-\frac{1}{5\gamma_{2}-6}}\) for each fixed \(\beta\in(0,\frac{K_{0}}{\gamma_{2}-1})\) and with (3.17), we conclude that \(M_{\rm c}=\sup_{\beta>0}M_{\rm c}(\beta)<\widetilde{M}_{\rm c}\) when \(h^{\prime}(\rho)>0\). Finally, for the pressure \(P_{\delta}(\rho)\) defined in (3.16), noticing that \(h^{\prime}(\rho)=\rho^{-2}\big{(}\rho P^{\prime}(\rho)-\gamma_{2}P(\rho)\big{)}\), a direct calculation shows that \(h^{\prime}(\rho)>0\) for any \(\delta>0\), so that \(M_{\rm c}(\delta)<\widetilde{M}_{\rm c}\). \(\square\)

## 4. Entropy Analysis

In this section, we analyze the weak entropy pairs of (2.11) for the general pressure law (1.4)-(1.6). We first construct the special entropy pair \((\hat{\eta},\hat{q})\) described in §1: we require that \(\hat{\eta}\) coincide with \(\eta^{*}\) when \(u\geq k(\rho)\) and with \(-\eta^{*}\) when \(u\leq-k(\rho)\), and, in the intermediate region, solve the Goursat problem:
\[\begin{cases}\hat{\eta}_{\rho\rho}-\frac{P^{\prime}(\rho)}{\rho^{2}}\hat{\eta}_{uu}=0&\text{for }|u|<k(\rho),\\ \hat{\eta}|_{u=\pm k(\rho)}=\pm\eta^{*}|_{u=\pm k(\rho)}; \end{cases} \tag{4.2}\]
see Fig. 4.1. Set
\[V_{1}=\frac{1}{2k^{\prime}(\rho)}(\eta_{\rho}+k^{\prime}(\rho)\eta_{u}),\qquad V_{2}=\frac{1}{2k^{\prime}(\rho)}(\eta_{\rho}-k^{\prime}(\rho)\eta_{u}). \tag{4.3}\]
Then (4.2) can be rewritten as
\[\begin{cases}\frac{\partial V_{1}}{\partial\rho}-k^{\prime}(\rho)\frac{\partial V_{1}}{\partial u}=-\frac{k^{\prime\prime}(\rho)}{2k^{\prime}(\rho)}(V_{1}+V_{2}),\\ \frac{\partial V_{2}}{\partial\rho}+k^{\prime}(\rho)\frac{\partial V_{2}}{\partial u}=-\frac{k^{\prime\prime}(\rho)}{2k^{\prime}(\rho)}(V_{1}+V_{2}).\end{cases} \tag{4.4}\]
The corresponding characteristic boundary conditions become
\[\begin{cases}V_{1}|_{u=\pm k(\rho)}=\pm\frac{1}{2k^{\prime}(\rho)}\big{(}\frac{1}{2}u^{2}+e(\rho)+\rho e^{\prime}(\rho)\big{)}\pm\frac{1}{2}\rho u,\\ V_{2}|_{u=\pm k(\rho)}=\pm\frac{1}{2k^{\prime}(\rho)}\big{(}\frac{1}{2}u^{2}+e(\rho)+\rho e^{\prime}(\rho)\big{)}\mp\frac{1}{2}\rho u.\end{cases} \tag{4.5}\]
Since \(\frac{k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\) has a singularity at the vacuum \(\rho=0\), the Goursat problem (4.4)-(4.5) is singular, which requires a careful analysis. It follows from (4.4) that there exist two characteristic curves originating from the origin \(O(0,0)\) in the \((\rho,u)\)-plane:
\[\ell_{+}:=\{(\rho,u)\,:\,u=k(\rho)\},\qquad\ell_{-}:=\{(\rho,u)\,:\,u=-k(\rho)\}. \tag{4.6}\]
For any given point \(O_{1}(\rho_{0},u_{0})\) with \(u_{0}=0\), we can draw two backward characteristic curves \(\ell_{0}^{\pm}\) through \(O_{1}(\rho_{0},u_{0})\); see Fig. 4.1. Let \(O_{2}(\rho_{0}^{+},u_{0}^{+})\) be the intersection point of \(\ell_{0}^{+}\) and \(\ell_{+}\), and let \(O_{3}(\rho_{0}^{-},u_{0}^{-})\) be the intersection point of \(\ell_{0}^{-}\) and \(\ell_{-}\). Let \(\Sigma\) be the region surrounded by the arc \(\widehat{OO_{2}O_{1}O_{3}}\), and let \(\overline{\Sigma}\) be the closure of \(\Sigma\).
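As a concrete illustration, in the polytropic case (1.2) with \(P(\rho)=\kappa\rho^{\gamma}\), a direct computation from (3.1) gives
\[k(\rho)=\int_{0}^{\rho}\frac{\sqrt{\kappa\gamma}\,y^{\frac{\gamma-1}{2}}}{y}\,\mathrm{d}y=\frac{2\sqrt{\kappa\gamma}}{\gamma-1}\,\rho^{\theta},\qquad\theta=\frac{\gamma-1}{2},\]
so that \(\ell_{\pm}\) are the explicit curves \(u=\pm\frac{2\sqrt{\kappa\gamma}}{\gamma-1}\rho^{\theta}\). For the general pressure (1.4)-(1.6), Lemma 3.1 shows that \(k(\rho)\) behaves like \(\rho^{\theta_{1}}\) near the vacuum and like \(\rho^{\theta_{2}}\) for large \(\rho\), which is all that the analysis below uses.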
Figure 4.1. The schematic diagram of the characteristic curves of (4.4).

**Lemma 4.1**.: _The Goursat problem (4.2) admits a unique solution \(\hat{\eta}\in C^{2}(\mathbb{R}_{+}\times\mathbb{R})\) such that_

* \(|\hat{\eta}(\rho,u)|\leq C(\rho|u|^{2}+\rho^{\gamma(\rho)})\) _for_ \((\rho,u)\in\mathbb{R}_{+}\times\mathbb{R}\)_, where_ \(\gamma(\rho)=\gamma_{1}\) _if_ \(\rho\in[0,\rho_{*}]\) _and_ \(\gamma(\rho)=\gamma_{2}\) _if_ \(\rho\in(\rho_{*},\infty)\)_._
* _If_ \(\hat{\eta}\) _is regarded as a function of_ \((\rho,u)\)_,_
\[|\hat{\eta}_{\rho}(\rho,u)|\leq C(|u|^{2}+\rho^{2\theta(\rho)}),\quad|\hat{\eta}_{u}(\rho,u)|\leq C(\rho|u|+\rho^{\theta(\rho)+1})\qquad\mbox{for }(\rho,u)\in\mathbb{R}_{+}\times\mathbb{R},\]
_and, if_ \(\hat{\eta}\) _is regarded as a function of_ \((\rho,m)\)_,_
\[|\hat{\eta}_{\rho}(\rho,m)|\leq C(|u|^{2}+\rho^{2\theta(\rho)}),\quad|\hat{\eta}_{m}(\rho,m)|\leq C(|u|+\rho^{\theta(\rho)})\qquad\mbox{for }(\rho,m)\in\mathbb{R}_{+}\times\mathbb{R},\]
_where_ \(\theta(\rho):=\frac{\gamma(\rho)-1}{2}\)_._
* _If_ \(\hat{\eta}_{m}\) _is regarded as a function of_ \((\rho,u)\)_,_
\[|\hat{\eta}_{m\rho}(\rho,u)|\leq C\rho^{\theta(\rho)-1},\quad|\hat{\eta}_{mu}(\rho,u)|\leq C,\]
_and, if_ \(\hat{\eta}_{m}\) _is regarded as a function of_ \((\rho,m)\)_,_
\[|\hat{\eta}_{m\rho}(\rho,m)|\leq C\rho^{\theta(\rho)-1},\quad|\hat{\eta}_{mm}(\rho,m)|\leq C\rho^{-1}.\]
* _If_ \(\hat{q}\) _is the corresponding entropy flux determined by (2.12), then_ \(\hat{q}\in C^{2}(\mathbb{R}_{+}\times\mathbb{R})\) _and_
\[\hat{q}(\rho,u)=\frac{1}{2}\rho|u|^{3}\pm\rho u(e(\rho)+\rho e^{\prime}(\rho))\qquad\mbox{for }\pm u\geq k(\rho),\]
\[|\hat{q}(\rho,u)|\leq C\rho^{\gamma(\rho)+\theta(\rho)}\qquad\mbox{for }|u|<k(\rho),\]
\[\hat{q}(\rho,u)\geq\frac{1}{2}\rho|u|^{3}\qquad\mbox{for }|u|\geq k(\rho),\]
\[|\hat{q}-u\hat{\eta}|\leq C(\rho^{\gamma(\rho)}|u|+\rho^{\gamma(\rho)+\theta(\rho)})\qquad\mbox{for }(\rho,u)\in\mathbb{R}_{+}\times\mathbb{R}.\]

**Proof.** To prove that (4.2) has a unique \(C^{2}\)-solution \(\hat{\eta}\) in \(\mathbb{R}_{+}\times\mathbb{R}\), it suffices to prove that (4.4)-(4.5) admits a unique \(C^{1}\)-solution \((V_{1},V_{2})\) in \(\Sigma\) for any given point \(O_{1}(\rho_{0},u_{0})\). We use the Picard iteration and divide the proof into six steps.

1. For any point \(A(\rho,u)\in\Sigma\), we can draw two backward characteristic curves through \(A\):
\[\begin{split}\ell_{1}&:=\big{\{}(s,\,u^{(1)}(s))\,:\,u^{(1)}(s)=-k(s)+u+k(\rho),0<s\leq\rho\big{\}},\\ \ell_{2}&:=\big{\{}(s,\,u^{(2)}(s))\,:\,u^{(2)}(s)=-k(\rho)+u+k(s),0<s\leq\rho\big{\}}.\end{split} \tag{4.7}\]
Let \(B(\rho_{1},u_{1})\) be the intersection point of \(\ell_{+}\) and \(\ell_{1}\), and let \(C(\rho_{2},u_{2})\) be the intersection point of \(\ell_{-}\) and \(\ell_{2}\). It follows from (4.6)-(4.7) that
\[u_{1}=k(\rho_{1})=\frac{k(\rho)+u}{2},\qquad u_{2}=-k(\rho_{2})=\frac{u-k(\rho)}{2}. \tag{4.8}\]
Using (4.4) and integrating \(V_{1}\) and \(V_{2}\) along the characteristic curves \(\ell_{1}\) and \(\ell_{2}\) respectively, we have
\[V_{i}(\rho,u)=V_{i}(\rho_{i},u_{i})-\int_{\rho_{i}}^{\rho}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\,\sum_{j=1}^{2}V_{j}(s,u^{(i)}(s))\,\mathrm{d}s\qquad\mbox{for }i=1,2. \tag{4.9}\]
Denote \(V_{i}^{(0)}(\rho,u):=V_{i}(\rho_{i},u_{i})\).
It follows from (4.5) and (4.8) that
\[V_{i}^{(0)}(\rho,u)=(-1)^{i+1}\Big{(}\frac{1}{2}\rho_{i}k(\rho_{i})+\frac{1}{4}\frac{k^{2}(\rho_{i})}{k^{\prime}(\rho_{i})}+\frac{e(\rho_{i})+\rho_{i}e^{\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{)}\qquad\mbox{for }i=1,2. \tag{4.10}\]
We define the iteration scheme:
\[V_{i}^{(n+1)}(\rho,u):=V_{i}(\rho_{i},u_{i})-\int_{\rho_{i}}^{\rho}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\,\sum_{j=1}^{2}V_{j}^{(n)}(s,u^{(i)}(s))\,\mathrm{d}s\qquad\mbox{for }i=1,2. \tag{4.11}\]
Then we obtain two sequences \(\{V_{i}^{(n)}\}_{n=0}^{\infty}\) for \(i=1,2\). We now prove that \(\{V_{i}^{(n)}\}_{n=0}^{\infty}\) are uniformly convergent in \(\overline{\Sigma}\), which is equivalent to proving that
\[V_{i}^{(0)}(\rho,u)+\sum_{n=1}^{\infty}\big{(}V_{i}^{(n)}-V_{i}^{(n-1)}\big{)}(\rho,u),\quad i=1,2, \tag{4.12}\]
are uniformly convergent in \(\overline{\Sigma}\).

From Lemma 3.1 and (4.10), we know that \(V_{i}^{(0)},i=1,2\), are continuous in \(\overline{\Sigma}\) and there exists a constant \(C_{1}>0\) depending only on \(\rho_{*}\) and \(\rho^{*}\) such that
\[|V_{i}^{(0)}(\rho,u)|\leq\begin{cases}C_{1}\rho_{i}^{1+\theta_{1}}&\text{ for }\rho_{i}\leq\rho_{*},\\ C_{1}\rho_{i}^{1+\theta_{2}}&\text{ for }\rho_{i}\geq\rho_{*},\end{cases}\qquad i=1,2,\]
which, with the fact that \(\rho_{i}\leq\rho\), yields that
\[|V_{i}^{(0)}(\rho,u)|\leq\begin{cases}C_{1}\rho^{1+\theta_{1}}&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{1}\rho^{1+M_{1}}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{1}\rho^{1+\theta_{2}}&\text{ for }\rho\geq\rho^{*},\end{cases}\qquad i=1,2, \tag{4.13}\]
where \(\tilde{C}_{1}\geq C_{1}(\rho_{*})^{\theta_{2}-M_{1}}\), \(\hat{C}_{1}\geq C_{1}\), and \(M_{1}\) are positive constants to be chosen later. It follows from (3.9) and (3.11) that there exist a constant \(\nu<1\) and a constant \(C_{0}\gg 1\) depending on \(\rho_{*}\) and \(\rho^{*}\) such that
\[\Big{|}\frac{k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\Big{|}\leq\begin{cases}\nu\rho^{-1}&\text{ for }0<\rho\leq\rho_{*}\text{ and }\rho\geq\rho^{*},\\ C_{0}\rho^{-1}&\text{ for }\rho_{*}<\rho<\rho^{*}.\end{cases} \tag{4.14}\]
For the estimate of \(|V_{i}^{(1)}-V_{i}^{(0)}|\), we divide it into six cases:

_Case 1._ \(\rho_{i}\leq\rho\leq\rho_{*}\): It follows from (4.11) and (4.13)-(4.14) that
\[\big{|}(V_{i}^{(1)}-V_{i}^{(0)})(\rho,u)\big{|}\leq\int_{\rho_{i}}^{\rho}C_{1}\nu\,s^{\theta_{1}}\,\mathrm{d}s\leq C_{1}\rho^{1+\theta_{1}}\,\varpi_{1}, \tag{4.15}\]
where \(\varpi_{1}:=\frac{\nu}{1+\theta_{1}}\in(0,1)\).

_Case 2._ \(\rho_{i}\leq\rho_{*}\leq\rho\leq\rho^{*}\): Then
\[\big{|}(V_{i}^{(1)}-V_{i}^{(0)})(\rho,u)\big{|}\leq\Big{(}\int_{\rho_{i}}^{\rho_{*}}+\int_{\rho_{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\sum_{j=1}^{2}\big{|}V_{j}^{(0)}(s,u^{(i)}(s))\big{|}\,\mathrm{d}s\]
\[\leq\tilde{C}_{1}\big{(}\rho^{1+M_{1}}-(\rho_{*})^{1+M_{1}}\big{)}\varpi_{M_{1}}+C_{1}(\rho_{*})^{1+\theta_{1}}\varpi_{1}\leq\tilde{C}_{1}\rho^{1+M_{1}}\,\varpi_{M_{1}}, \tag{4.16}\]
where \(\varpi_{M_{1}}:=\frac{C_{0}}{1+M_{1}}\) and, in the last inequality of (4.16), we have chosen
\[\tilde{C}_{1}\geq C_{1}(\rho_{*})^{\theta_{1}-M_{1}}\varpi_{1}(\varpi_{M_{1}})^{-1}. \tag{4.17}\]

_Case 3._ \(\rho_{*}\leq\rho_{i}\leq\rho\leq\rho^{*}\): It is direct to see that
\[\big{|}(V_{i}^{(1)}-V_{i}^{(0)})(\rho,u)\big{|}\leq\int_{\rho_{i}}^{\rho}\tilde{C}_{1}C_{0}\,s^{M_{1}}\,\mathrm{d}s\leq\tilde{C}_{1}\rho^{1+M_{1}}\,\varpi_{M_{1}}.
\tag{4.18}\]

_Case 4._ \(\rho_{i}\leq\rho_{*}<\rho^{*}\leq\rho\): Then
\[\big{|}(V_{i}^{(1)}-V_{i}^{(0)})(\rho,u)\big{|}\leq\Big{(}\int_{\rho_{i}}^{\rho_{*}}+\int_{\rho_{*}}^{\rho^{*}}+\int_{\rho^{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\sum_{j=1}^{2}\big{|}V_{j}^{(0)}(s,u^{(i)}(s))\big{|}\,\mathrm{d}s\leq\hat{C}_{1}\rho^{1+\theta_{2}}\,\varpi_{2},\]
with \(\varpi_{2}:=\frac{\nu}{1+\theta_{2}}\), by the same computation as in the chain below with \(k=0\); _Cases 5-6_ (\(\rho_{*}\leq\rho_{i}\leq\rho^{*}\leq\rho\) and \(\rho^{*}\leq\rho_{i}\leq\rho\)) are estimated similarly. To continue the iteration, suppose inductively that, for some \(k\geq 1\) and \(i=1,2\),
\[\big{|}(V_{i}^{(k)}-V_{i}^{(k-1)})(\rho,u)\big{|}\leq\begin{cases}C_{1}\rho^{1+\theta_{1}}\,\varpi_{1}^{k}&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{1}\rho^{1+M_{1}}\,\varpi_{M_{1}}^{k}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{1}\rho^{1+\theta_{2}}\,\varpi_{2}^{k}&\text{ for }\rho\geq\rho^{*}.\end{cases}\]
Then, for the case \(\rho_{i}\leq\rho_{*}<\rho^{*}\leq\rho\) (the other cases being similar),
\[\big{|}(V_{i}^{(k+1)}-V_{i}^{(k)})(\rho,u)\big{|}\leq\Big{(}\int_{\rho_{i}}^{\rho_{*}}+\int_{\rho_{*}}^{\rho^{*}}+\int_{\rho^{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\big{|}V_{j}^{(k)}(s,u^{(i)}(s))-V_{j}^{(k-1)}(s,u^{(i)}(s))\big{|}\,\mathrm{d}s\]
\[\leq\int_{\rho_{i}}^{\rho_{*}}\nu C_{1}\varpi_{1}^{k}s^{\theta_{1}}\,\mathrm{d}s+\int_{\rho_{*}}^{\rho^{*}}C_{0}\tilde{C}_{1}\varpi_{M_{1}}^{k}s^{M_{1}}\,\mathrm{d}s+\int_{\rho^{*}}^{\rho}\nu\hat{C}_{1}\varpi_{2}^{k}s^{\theta_{2}}\,\mathrm{d}s\]
\[\leq\hat{C}_{1}\big{(}\rho^{1+\theta_{2}}-(\rho^{*})^{1+\theta_{2}}\big{)}\varpi_{2}^{k+1}+\tilde{C}_{1}\big{(}(\rho^{*})^{1+M_{1}}-(\rho_{*})^{1+M_{1}}\big{)}\varpi_{M_{1}}^{k+1}+C_{1}(\rho_{*})^{1+\theta_{1}}\varpi_{1}^{k+1}\leq\hat{C}_{1}\rho^{1+\theta_{2}}\varpi_{2}^{k+1},\]
where we have chosen \(\tilde{C}_{1}\) and \(\hat{C}_{1}\) such that
\[\tilde{C}_{1}\geq C_{1}(\rho_{*})^{\theta_{1}-M_{1}}\big{(}\varpi_{1}\varpi_{M_{1}}^{-1}\big{)}^{k+1},\qquad\hat{C}_{1}\geq\tilde{C}_{1}(\rho^{*})^{M_{1}-\theta_{2}}\big{(}\varpi_{M_{1}}\varpi_{2}^{-1}\big{)}^{k+1}. \tag{4.25}\]
Therefore, under assumption (4.25), we obtain
\[\left|(V_{i}^{(k+1)}-V_{i}^{(k)})(\rho,u)\right|\leq\begin{cases}C_{1}\rho^{1+\theta_{1}}\,\varpi_{1}^{k+1}&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{1}\rho^{1+M_{1}}\,\varpi_{M_{1}}^{k+1}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{1}\rho^{1+\theta_{2}}\,\varpi_{2}^{k+1}&\text{ for }\rho\geq\rho^{*},\end{cases}\qquad i=1,2.\]
Recalling that \(\theta_{1}\geq\theta_{2}\), we can take \(C_{0}\) and \(M_{1}\) large enough such that
\[0<\varpi_{1}\leq\varpi_{M_{1}}\leq\varpi_{2}<1. \tag{4.26}\]
Combining (4.13) and (4.23)-(4.26), and taking
\[\varpi_{M_{1}}=\varpi_{2},\qquad\tilde{C}_{1}=C_{1}(\rho_{*})^{\theta_{2}-M_{1}},\qquad\hat{C}_{1}=\tilde{C}_{1}(\rho^{*})^{M_{1}-\theta_{2}}, \tag{4.27}\]
by induction, we conclude that, for any \(n\geq 1\),
\[\left|(V_{i}^{(n)}-V_{i}^{(n-1)})(\rho,u)\right|\leq\begin{cases}C_{1}\rho^{1+\theta_{1}}\,\varpi_{1}^{n}&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{1}\rho^{1+M_{1}}\,\varpi_{M_{1}}^{n}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{1}\rho^{1+\theta_{2}}\,\varpi_{2}^{n}&\text{ for }\rho\geq\rho^{*},\end{cases}\qquad i=1,2. \tag{4.28}\]
Noting (4.26) and \(\rho\leq\rho_{0}\) for \((\rho,u)\in\overline{\Sigma}\), we have proved that the two series in (4.12), \(i=1,2\), are uniformly convergent in \(\overline{\Sigma}\), so that the sequence \(\{(V_{1}^{(n)},V_{2}^{(n)})\}\) is uniformly convergent in \(\overline{\Sigma}\). Let \((V_{1},V_{2})\) be the limit function of the sequence \((V_{1}^{(n)},V_{2}^{(n)})\). By the continuity and the uniform convergence of \((V_{1}^{(n)},V_{2}^{(n)})\), \((V_{1},V_{2})\) is continuous in \(\overline{\Sigma}\). Taking the limit \(n\to\infty\) in (4.11), we conclude that \((V_{1},V_{2})\) is a continuous solution of (4.9).

2.
It follows from (4.13) and (4.28) that, for \((\rho,u)\in\{\rho\geq 0,\,|u|\leq k(\rho)\}\) and \(i=1,2\), \[\left|V_{i}(\rho,u)\right|\leq|V_{i}^{(0)}(\rho,u)|+\sum_{n=1}^{\infty}\left| (V_{i}^{(n)}-V_{i}^{(n-1)})(\rho,u)\right|\leq\begin{cases}C\rho^{1+\theta_{1} }&\text{ for }\rho\leq\rho_{*},\\ C\rho^{1+M_{1}}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ C\rho^{1+\theta_{2}}&\text{ for }\rho\geq\rho^{*}.\end{cases} \tag{4.29}\] On the other hand, we see from (4.3) that, for \(|u|\leq k(\rho)\), \[\begin{split}|\hat{\eta}_{\rho}(\rho,u)|&=\left|k^{\prime }(\rho)(V_{1}(\rho,u)+V_{2}(\rho,u))\right|\leq C\rho^{\gamma(\rho)-1},\\ |\hat{\eta}_{u}(\rho,u)|&=\left|V_{1}(\rho,u)-V_{2}(\rho,u) \right|\leq C\rho^{1+\theta(\rho)}.\end{split} \tag{4.30}\] Hence, for \(|u|\leq k(\rho)\), it holds that \[|\hat{\eta}(\rho,u)|\leq\int_{\bar{\rho}}^{\rho}|\hat{\eta}_{\rho}(s,u)|\, \mathrm{d}s+|\hat{\eta}(\bar{\rho},u)|\leq C\rho^{\gamma(\rho)},\quad|\hat{ \eta}_{m}(\rho,m)|=|\rho^{-1}\hat{\eta}_{u}(\rho,u)|\leq C\rho^{\theta(\rho)}, \tag{4.31}\] where \((\bar{\rho},u)\) is the point satisfying \(k(\bar{\rho})=|u|\), and we have used the boundary data in (4.2). 3. We now show that \(V_{1}\) and \(V_{2}\) have continuous first-order derivatives with respect to \((\rho,u)\). Using (4.7)-(4.8) and Lemma 3.1, we have \[\frac{\partial u^{(i)}(s)}{\partial u}=1,\qquad\left|\frac{\partial\rho_{i}}{ \partial u}\right|=\frac{1}{2|k^{\prime}(\rho_{i})|}\leq C_{2}\rho_{i}^{1- \theta(\rho_{i})}. \tag{4.32}\] Applying \(\partial_{u}\) to (4.11) and using (4.32) yield that for \(i=1,2\), \[\frac{\partial V_{i}^{(n+1)}}{\partial u}(\rho,u)=\frac{\partial V_{i}(\rho_{i},u_{i})}{\partial u}+\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})} \,\sum_{j=1}^{2}V_{j}^{(n)}(\rho_{i},u_{i})\,\frac{\partial\rho_{i}}{\partial u} -\int_{\rho_{i}}^{\rho}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\,\sum_{j=1}^{2} \frac{\partial V_{j}^{(n)}}{\partial u}(s,u^{(i)}(s))\,\mathrm{d}s. \tag{4.33}\] It follows from (4.10), (4.32), Lemma 3.1, and a direct calculation that, for \(i=1,2\), \[\Big{|}\frac{\partial V_{i}^{(0)}}{\partial u}(\rho,u)\Big{|}=\Big{|}\frac{ \mathrm{d}V_{i}(\rho_{i},u_{i})}{\mathrm{d}\rho_{i}}\frac{\partial\rho_{i}}{ \partial u}\Big{|}\leq C_{2}\rho_{i}\leq\begin{cases}\bar{C}_{2}\rho&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{2}\rho^{1+M_{2}}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{2}\rho&\text{ for }\rho\geq\rho^{*},\end{cases} \tag{4.34}\] where \(C_{2}\) is chosen to be a common, fixed, and large enough constant in (4.32) and (4.34) depending only on \(\rho_{*}\) and \(\rho^{*}\), and \(\bar{C}_{2}\geq C_{2},\tilde{C}_{2}\geq C_{2}\), and \(M_{2}\) are some large positive constants to be chosen later. 
For the estimate of \(\big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\big{|}\), we divide it into six cases:

_Case 1._ \(\rho_{i}\leq\rho\leq\rho_{*}\): It follows from (4.13)-(4.14) and (4.33)-(4.34) that
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|} \leq\int_{\rho_{i}}^{\rho}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\Big{(}\frac{\partial V_{j}^{(0)}}{\partial u}\Big{)}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\sum_{j=1}^{2}|V_{j}^{(0)}(\rho_{i},u_{i})|\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\]
\[\leq\int_{\rho_{i}}^{\rho}\frac{\nu}{2s}\,(2\bar{C}_{2}s)\,\mathrm{d}s+\frac{\nu}{2\rho_{i}}\,(2C_{1}\rho_{i}^{1+\theta_{1}})\,(C_{2}\rho_{i}^{1-\theta_{1}})\]
\[=\bar{C}_{2}\nu\rho-\bar{C}_{2}\nu\rho_{i}+C_{1}C_{2}\nu\rho_{i}\leq\bar{C}_{2}\nu\rho, \tag{4.35}\]
where, in the last inequality of (4.35), we have chosen
\[\bar{C}_{2}\geq C_{1}C_{2}. \tag{4.36}\]

_Case 2._ \(\rho_{i}\leq\rho_{*}\leq\rho\leq\rho^{*}\): Then, similarly, we have
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|}\]
\[\leq\Big{(}\int_{\rho_{i}}^{\rho_{*}}+\int_{\rho_{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\Big{(}\frac{\partial V_{j}^{(0)}}{\partial u}\Big{)}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\sum_{j=1}^{2}|V_{j}^{(0)}(\rho_{i},u_{i})|\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\]
\[\leq\tilde{C}_{2}\big{(}\rho^{1+M_{2}}-(\rho_{*})^{1+M_{2}}\big{)}\varpi_{M_{2}}+\bar{C}_{2}\nu(\rho_{*}-\rho_{i})+C_{1}C_{2}\nu\rho_{i}\leq\tilde{C}_{2}\rho^{1+M_{2}}\,\varpi_{M_{2}}, \tag{4.37}\]
where \(\varpi_{M_{2}}:=\frac{C_{0}}{1+M_{2}}\) and, in the last inequality of (4.37), we have used (4.36) and chosen
\[\tilde{C}_{2}\geq\bar{C}_{2}(\rho_{*})^{-M_{2}}\,\nu\varpi_{M_{2}}^{-1}. \tag{4.38}\]

_Case 3._ \(\rho_{*}\leq\rho_{i}\leq\rho\leq\rho^{*}\): It follows that
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|} \leq\int_{\rho_{i}}^{\rho}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\Big{(}\frac{\partial V_{j}^{(0)}}{\partial u}\Big{)}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\sum_{j=1}^{2}\Big{|}V_{j}^{(0)}(\rho_{i},u_{i})\Big{|}\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\]
\[\leq\tilde{C}_{2}\big{(}\rho^{1+M_{2}}-\rho_{i}^{1+M_{2}}\big{)}\varpi_{M_{2}}+\tilde{C}_{1}C_{2}C_{0}\rho_{i}^{1+M_{1}-\theta_{2}}\leq\tilde{C}_{2}\rho^{1+M_{2}}\,\varpi_{M_{2}}, \tag{4.39}\]
where, in the last inequality of (4.39), we have chosen
\[M_{1}\geq M_{2}+\theta_{2},\qquad\tilde{C}_{2}\geq\tilde{C}_{1}C_{2}(1+M_{2})(\rho^{*})^{M_{1}-M_{2}-\theta_{2}}. \tag{4.40}\]

_Case 4._ \(\rho_{i}\leq\rho_{*}<\rho^{*}\leq\rho\):
For this case, similarly, we have
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|}\]
\[\leq\Big{(}\int_{\rho_{i}}^{\rho_{*}}+\int_{\rho_{*}}^{\rho^{*}}+\int_{\rho^{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\Big{(}\frac{\partial V_{j}^{(0)}}{\partial u}\Big{)}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\sum_{j=1}^{2}|V_{j}^{(0)}(\rho_{i},u_{i})|\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\]
\[\leq\hat{C}_{2}(\rho-\rho^{*})\nu+\tilde{C}_{2}\big{(}(\rho^{*})^{1+M_{2}}-(\rho_{*})^{1+M_{2}}\big{)}\varpi_{M_{2}}+\bar{C}_{2}(\rho_{*}-\rho_{i})\nu+C_{1}C_{2}\rho_{i}\nu\leq\hat{C}_{2}\rho\,\nu, \tag{4.41}\]
where, in the last inequality of (4.41), we have used (4.36) and (4.38) and chosen
\[\hat{C}_{2}\geq\tilde{C}_{2}(\rho^{*})^{M_{2}}\,\nu^{-1}\varpi_{M_{2}}. \tag{4.42}\]

_Case 5._ \(\rho_{*}\leq\rho_{i}\leq\rho^{*}\leq\rho\): Then
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|}\]
\[\leq\Big{(}\int_{\rho_{i}}^{\rho^{*}}+\int_{\rho^{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\Big{(}\frac{\partial V_{j}^{(0)}}{\partial u}\Big{)}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\sum_{j=1}^{2}|V_{j}^{(0)}(\rho_{i},u_{i})|\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\]
\[\leq\hat{C}_{2}(\rho-\rho^{*})\nu+\tilde{C}_{2}\big{(}(\rho^{*})^{1+M_{2}}-\rho_{i}^{1+M_{2}}\big{)}\varpi_{M_{2}}+\tilde{C}_{1}C_{2}C_{0}\rho_{i}^{1+M_{1}-\theta_{2}}\leq\hat{C}_{2}\rho\,\nu, \tag{4.43}\]
where we have used (4.40) and (4.42) in the last inequality of (4.43).

_Case 6._ \(\rho^{*}\leq\rho_{i}\leq\rho\): It follows similarly that
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|} \leq\int_{\rho_{i}}^{\rho}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\Big{(}\frac{\partial V_{j}^{(0)}}{\partial u}\Big{)}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\sum_{j=1}^{2}|V_{j}^{(0)}(\rho_{i},u_{i})|\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\]
\[\leq\hat{C}_{2}\rho\nu-\hat{C}_{2}\rho_{i}\nu+\hat{C}_{1}C_{2}\rho_{i}\nu\leq\hat{C}_{2}\rho\,\nu, \tag{4.44}\]
where, in the last inequality of (4.44), we have chosen
\[\hat{C}_{2}\geq\hat{C}_{1}C_{2}. \tag{4.45}\]
Combining (4.35)-(4.45), we conclude that, for \(i=1,2\),
\[\Big{|}\frac{\partial(V_{i}^{(1)}-V_{i}^{(0)})}{\partial u}(\rho,u)\Big{|}\leq\begin{cases}\bar{C}_{2}\rho\,\nu&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{2}\rho^{1+M_{2}}\,\varpi_{M_{2}}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{2}\rho\,\nu&\text{ for }\rho\geq\rho^{*},\end{cases} \tag{4.46}\]
provided (4.36), (4.38), (4.40), (4.42), and (4.45) hold.
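As a concrete illustration of the contraction mechanism (using the definition of \(\nu\) in (3.9) and of \(\varpi_{1}\) in (4.15)), in the case \(\gamma_{1}=\frac{5}{3}\) one has
\[\nu=1-\frac{(3\gamma_{1}-1)(\gamma_{1}-1)}{2(5+\gamma_{1})}=1-\frac{4\cdot\frac{2}{3}}{\frac{40}{3}}=\frac{4}{5},\qquad\varpi_{1}=\frac{\nu}{1+\theta_{1}}=\frac{4/5}{4/3}=\frac{3}{5},\]
so each iteration step gains a definite geometric factor, which is what makes the series (4.12) and its differentiated analogues converge.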
To proceed by induction, we make the induction hypothesis for \(n=k\): For \(i=1,2\),

\[\Big{|}\frac{\partial(V_{i}^{(k)}-V_{i}^{(k-1)})}{\partial u}(\rho,u)\Big{|}\leq\begin{cases}\bar{C}_{2}\rho\,\nu^{k}&\text{ for }\rho\leq\rho_{*},\\ \tilde{C}_{2}\rho^{1+M_{2}}\,\varpi_{M_{2}}^{k}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{2}\rho\,\nu^{k}&\text{ for }\rho\geq\rho^{*}.\end{cases} \tag{4.47}\]

To estimate \(|\frac{\partial(V_{i}^{(k+1)}-V_{i}^{(k)})}{\partial u}(\rho,u)|\), it suffices to consider the case \(\rho_{i}\leq\rho_{*}<\rho^{*}\leq\rho\) for simplicity of presentation, since the other cases can be estimated by similar arguments to those in (4.35)-(4.45). In fact, for the case \(\rho_{i}\leq\rho_{*}<\rho^{*}\leq\rho\), it follows from (4.28) and (4.47) that

\[\Big{|}\frac{\partial(V_{i}^{(k+1)}-V_{i}^{(k)})}{\partial u}(\rho,u)\Big{|}\]
\[\leq\Big{(}\int_{\rho_{i}}^{\rho_{*}}+\int_{\rho_{*}}^{\rho^{*}}+\int_{\rho^{*}}^{\rho}\Big{)}\Big{|}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\Big{|}\,\sum_{j=1}^{2}\Big{|}\frac{\partial(V_{j}^{(k)}-V_{j}^{(k-1)})}{\partial u}(s,u^{(i)}(s))\Big{|}\,\mathrm{d}s\]
\[\quad+\Big{|}\frac{k^{\prime\prime}(\rho_{i})}{2k^{\prime}(\rho_{i})}\Big{|}\,\Big{|}\frac{\partial\rho_{i}}{\partial u}\Big{|}\,\sum_{j=1}^{2}|(V_{j}^{(k)}-V_{j}^{(k-1)})(\rho_{i},u_{i})|\]
\[\leq\int_{\rho_{i}}^{\rho_{*}}\bar{C}_{2}\nu^{k+1}\,\mathrm{d}s+\int_{\rho_{*}}^{\rho^{*}}C_{0}\tilde{C}_{2}\varpi_{M_{2}}^{k}s^{M_{2}}\,\mathrm{d}s+\int_{\rho^{*}}^{\rho}\hat{C}_{2}\nu^{k+1}\,\mathrm{d}s+C_{2}\nu\rho_{i}^{-\theta_{1}}\,C_{1}\nu^{k}\rho_{i}^{1+\theta_{1}}\]
\[\leq\hat{C}_{2}(\rho-\rho^{*})\,\nu^{k+1}+\tilde{C}_{2}\big{(}(\rho^{*})^{1+M_{2}}-(\rho_{*})^{1+M_{2}}\big{)}\varpi_{M_{2}}^{k+1}+\bar{C}_{2}(\rho_{*}-\rho_{i})\nu^{k+1}+C_{1}C_{2}\rho_{i}\nu^{k+1}\]
\[\leq\hat{C}_{2}\rho\,\nu^{k+1},\]

where we have chosen

\[M_{1}\geq M_{2}+\theta_{2},\qquad\bar{C}_{2}\geq C_{1}C_{2}, \tag{4.48}\]
\[\tilde{C}_{2}\geq\max\Big{\{}\tilde{C}_{1}C_{2}(1+M_{2})(\rho^{*})^{M_{1}-M_{2}-\theta_{2}}\Big{(}\frac{1+M_{2}}{1+M_{1}}\Big{)}^{k},\,\bar{C}_{2}(\rho_{*})^{-M_{2}}\big{(}\nu\varpi_{M_{2}}^{-1}\big{)}^{k+1}\Big{\}},\]
\[\hat{C}_{2}\geq\max\Big{\{}\hat{C}_{1}C_{2},\,\tilde{C}_{2}(\rho^{*})^{M_{2}}\big{(}\nu^{-1}\varpi_{M_{2}}\big{)}^{k+1}\Big{\}}.\]

Thus, under assumption (4.48), we conclude that, for \(i=1,2\),

\[\Big{|}\frac{\partial(V_{i}^{(k+1)}-V_{i}^{(k)})}{\partial u}(\rho,u)\Big{|}\leq\begin{cases}\bar{C}_{2}\rho\,\nu^{k+1}&\text{for }\rho\leq\rho_{*},\\ \tilde{C}_{2}\rho^{1+M_{2}}\,\varpi_{M_{2}}^{k+1}&\text{for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{2}\rho\,\nu^{k+1}&\text{for }\rho\geq\rho^{*}.\end{cases}\]

Combining (4.27) with (4.46)-(4.48) and taking

\[\varpi_{M_{2}}=\nu,\qquad\bar{C}_{2}=\max\{C_{2},\,C_{1}C_{2}\},\]
\[\tilde{C}_{2}=\max\big{\{}\bar{C}_{2}\rho_{*}^{-M_{2}},\tilde{C}_{1}C_{2}(1+M_{2})(\rho^{*})^{M_{1}-M_{2}-\theta_{2}}\big{\}},\qquad\hat{C}_{2}=\max\big{\{}\tilde{C}_{2}(\rho^{*})^{M_{2}},\hat{C}_{1}C_{2}\big{\}},\]

we have proved that, for any \(n\geq 1\) and \(i=1,2\),

\[\Big{|}\frac{\partial(V_{i}^{(n)}-V_{i}^{(n-1)})}{\partial u}(\rho,u)\Big{|}\leq\begin{cases}\bar{C}_{2}\rho\,\nu^{n}&\text{for }\rho\leq\rho_{*},\\ \tilde{C}_{2}\rho^{1+M_{2}}\,\nu^{n}&\text{for }\rho_{*}\leq\rho\leq\rho^{*},\\ \hat{C}_{2}\rho\,\nu^{n}&\text{for }\rho\geq\rho^{*}.\end{cases} \tag{4.49}\]

Noting that \(\nu<1\) and \(\rho\leq\rho_{0}\) for \((\rho,u)\in\overline{\Sigma}\), we know that \(\big{\{}\frac{\partial V_{i}^{(n)}}{\partial
u}\big{\}}\) is uniformly convergent in \(\overline{\Sigma}\). It is direct to check that the limit function is \(\frac{\partial V_{i}}{\partial u}\). Due to the continuity and uniform convergence of \(\{\frac{\partial V_{i}^{(n)}}{\partial u}\}\), it is clear that \(\frac{\partial V_{i}}{\partial u}\) is continuous in \(\overline{\Sigma}\). On the other hand, it follows from (4.9) that

\[\begin{cases}\frac{\partial V_{1}^{(n)}}{\partial\rho}=k^{\prime}(\rho)\frac{\partial V_{1}^{(n)}}{\partial u}-\frac{k^{\prime\prime}(\rho)}{2k^{\prime}(\rho)}\big{(}V_{1}^{(n-1)}+V_{2}^{(n-1)}\big{)},\\ \frac{\partial V_{2}^{(n)}}{\partial\rho}=-k^{\prime}(\rho)\frac{\partial V_{2}^{(n)}}{\partial u}-\frac{k^{\prime\prime}(\rho)}{2k^{\prime}(\rho)}\big{(}V_{1}^{(n-1)}+V_{2}^{(n-1)}\big{)},\end{cases}\]

which, with (4.14), (4.28), and (4.49), yields that, for \(n\geq 1\) and \(i=1,2\),

\[\Big{|}\frac{\partial(V_{i}^{(n)}-V_{i}^{(n-1)})}{\partial\rho}(\rho,u)\Big{|} \leq k^{\prime}(\rho)\Big{|}\frac{\partial(V_{i}^{(n)}-V_{i}^{(n-1)})}{\partial u}(\rho,u)\Big{|}+\Big{|}\frac{k^{\prime\prime}(\rho)}{2k^{\prime}(\rho)}\Big{|}\sum_{j=1}^{2}|(V_{j}^{(n)}-V_{j}^{(n-1)})(\rho,u)|\]
\[\leq\begin{cases}C\rho^{\theta_{1}}\,\nu^{n}&\text{for }\rho\leq\rho_{*},\\ C\rho^{M_{1}}\,\nu^{n}&\text{for }\rho_{*}\leq\rho\leq\rho^{*},\\ C\rho^{\theta_{2}}\,\nu^{n}&\text{for }\rho\geq\rho^{*},\end{cases} \tag{4.50}\]

for some large constant \(C>0\). Thus, \(\frac{\partial V_{i}^{(n)}}{\partial\rho}\) converges uniformly to \(\frac{\partial V_{i}}{\partial\rho}\) in \(\overline{\Sigma}\). It is clear that \(\frac{\partial V_{i}}{\partial\rho}\) is continuous. Therefore, \((V_{1}(\rho,u),V_{2}(\rho,u))\) is a \(C^{1}\)-solution of the Goursat problem (4.4)-(4.5), which implies that \(\hat{\eta}\) is a \(C^{2}\)-solution of (4.2).

4. From (4.34) and (4.49), we obtain that, for \(i=1,2\),

\[\Big{|}\frac{\partial V_{i}}{\partial u}(\rho,u)\Big{|}\leq\Big{|}\frac{\partial V_{i}^{(0)}}{\partial u}(\rho,u)\Big{|}+\sum_{n=1}^{\infty}\Big{|}\frac{\partial(V_{i}^{(n)}-V_{i}^{(n-1)})}{\partial u}(\rho,u)\Big{|}\leq C\rho \tag{4.51}\]

for \(\rho\geq 0\) and \(|u|\leq k(\rho)\). Similarly, using (4.50), we see that, for \(i=1,2\),

\[\Big{|}\frac{\partial V_{i}}{\partial\rho}\Big{|}\leq C\rho^{\theta(\rho)}\qquad\text{for }\rho\geq 0\text{ and }|u|\leq k(\rho).\]

Therefore, for \(|u|\leq k(\rho)\), it follows from (4.3) and (4.51) that

\[|\hat{\eta}_{uu}(\rho,u)|=|\partial_{u}V_{1}(\rho,u)-\partial_{u}V_{2}(\rho,u)|\leq C\rho,\qquad|\hat{\eta}_{\rho u}(\rho,u)|=|k^{\prime}(\rho)(\partial_{u}V_{1}(\rho,u)+\partial_{u}V_{2}(\rho,u))|\leq C\rho^{\theta(\rho)}.\]

If \(\hat{\eta}_{m}\) is regarded as a function of \((\rho,u)\), we have

\[|\hat{\eta}_{m\rho}(\rho,u)|\leq C\rho^{\theta(\rho)-1},\quad|\hat{\eta}_{mu}(\rho,u)|\leq C\qquad\text{for }|u|\leq k(\rho).\]

If \(\hat{\eta}_{m}\) is regarded as a function of \((\rho,m)\), we see that, for \(|u|\leq k(\rho)\),

\[|\hat{\eta}_{m\rho}(\rho,m)|=\Big{|}\hat{\eta}_{m\rho}(\rho,u)-\frac{u}{\rho}\hat{\eta}_{mu}(\rho,u)\Big{|}\leq C\rho^{\theta(\rho)-1},\quad|\hat{\eta}_{mm}(\rho,m)|=|\rho^{-1}\hat{\eta}_{mu}(\rho,u)|\leq C\rho^{-1}.\]

5. We now prove the uniqueness of \(\hat{\eta}\), which is equivalent to the uniqueness of solutions of (4.4)-(4.5) in the class of \(C^{1}\)-solutions satisfying (4.29). Suppose that there exist two \(C^{1}\)-solutions \((V_{1},V_{2})\) and \((\tilde{V}_{1},\tilde{V}_{2})\) of (4.4)-(4.5) satisfying the uniform estimate (4.29).
Then it follows from (4.9) that

\[V_{i}(\rho,u)-\tilde{V}_{i}(\rho,u)=-\int_{\rho_{i}}^{\rho}\frac{k^{\prime\prime}(s)}{2k^{\prime}(s)}\,\sum_{j=1}^{2}\big{(}V_{j}(s,u^{(i)}(s))-\tilde{V}_{j}(s,u^{(i)}(s))\big{)}\,\mathrm{d}s\qquad\text{for }i=1,2. \tag{4.52}\]

Applying the uniform estimate (4.29) and similar arguments to those in (4.28)-(4.52) yields

\[\max_{\tiny\begin{array}{c}(\rho,u)\in\mathbb{R}_{+}\times\mathbb{R}\\ |u|\leq k(\rho)\end{array}}\big{|}V_{i}(\rho,u)-\tilde{V}_{i}(\rho,u)\big{|}\leq\begin{cases}C\rho^{1+\theta_{1}}\,\varpi_{1}^{n}&\text{ for }\rho\leq\rho_{*},\\ C\rho^{1+M_{1}}\,\varpi_{M_{1}}^{n}&\text{ for }\rho_{*}\leq\rho\leq\rho^{*},\\ C\rho^{1+\theta_{2}}\,\varpi_{2}^{n}&\text{ for }\rho\geq\rho^{*},\end{cases}\]

for any \(n\geq 0\), where \(C\gg 1\) is independent of \(n\). Letting \(n\to\infty\), we obtain that \(V_{i}(\rho,u)\equiv\tilde{V}_{i}(\rho,u)\) for \(|u|\leq k(\rho)\), which, with (4.3) and \(\hat{\eta}(0,u)\equiv 0\), yields the uniqueness of \(\hat{\eta}\).

6. We now estimate the entropy flux \(\hat{q}\). It follows from (2.12) that, for all entropy pairs,

\[q_{\rho}=u\eta_{\rho}+\rho k^{\prime}(\rho)^{2}\eta_{u},\qquad q_{u}=\rho\eta_{\rho}+u\eta_{u}. \tag{4.53}\]

Then there exists an entropy flux \(\hat{q}(\rho,u)\in C^{2}(\mathbb{R}_{+}\times\mathbb{R})\) corresponding to the special entropy \(\hat{\eta}\):

\[\hat{q}(\rho,u)=\frac{1}{2}\rho|u|^{3}\pm\rho u(e(\rho)+\rho e^{\prime}(\rho))\qquad\text{for }\pm u\geq k(\rho).\]

It follows from (4.30) and (4.53) that \(|\hat{q}_{\rho}(\rho,u)|=|u\hat{\eta}_{\rho}+\rho k^{\prime}(\rho)^{2}\hat{\eta}_{u}|\leq C\rho^{\gamma(\rho)+\theta(\rho)-1}\) for \(|u|\leq k(\rho)\), which implies

\[|\hat{q}(\rho,u)|=\Big{|}\int_{\bar{\rho}}^{\rho}\hat{q}_{\rho}(s,u)\,\mathrm{d}s+\hat{q}(\bar{\rho},u)\Big{|}\leq C\rho^{\gamma(\rho)+\theta(\rho)}\qquad\text{for }|u|\leq k(\rho), \tag{4.54}\]

where \((\bar{\rho},u)\) is the point satisfying \(k(\bar{\rho})=|u|\). For \(|u|\leq k(\rho)\), using (4.31) and (4.54), we have \(|\hat{q}-u\hat{\eta}|\leq|\hat{q}|+|u||\hat{\eta}|\leq C\rho^{\gamma(\rho)+\theta(\rho)}\). In the region \(\{(\rho,u)\,:\,|u|\geq k(\rho)\}\), it is direct to check that all the estimates in Lemma 4.1 hold by using (4.1). Therefore, the proof of Lemma 4.1 is now complete.

### Estimates of the weak entropy pairs

In order to show the compactness of the weak entropy dissipation measures below, we now derive some estimates of the weak entropy pairs. To achieve this, in view of (2.15)-(2.16), we need to analyze the entropy kernel and the entropy flux kernel, respectively.

The entropy kernel \(\chi=\chi(\rho,u,s)\) is a fundamental solution of the entropy equation (2.14):

\[\begin{cases}\chi_{\rho\rho}-\frac{P^{\prime}(\rho)}{\rho^{2}}\chi_{uu}=0,\\ \chi|_{\rho=0}=0,\quad\chi_{\rho}|_{\rho=0}=\delta_{u=s}.\end{cases} \tag{4.55}\]

As pointed out in [10], equation (4.55) is invariant under the Galilean transformation, which implies that \(\chi(\rho,u,s)=\chi(\rho,u-s,0)=\chi(\rho,0,s-u)\). For simplicity, we write it as \(\chi(\rho,u,s)=\chi(\rho,u-s)\) below when no confusion arises. The corresponding entropy flux kernel \(\sigma(\rho,u,s)\) satisfies the Cauchy problem for \(\sigma-u\chi\):

\[\begin{cases}(\sigma-u\chi)_{\rho\rho}-\frac{P^{\prime}(\rho)}{\rho^{2}}(\sigma-u\chi)_{uu}=\frac{P^{\prime\prime}(\rho)}{\rho}\chi_{u},\\ (\sigma-u\chi)|_{\rho=0}=0,\quad(\sigma-u\chi)_{\rho}|_{\rho=0}=0.\end{cases} \tag{4.56}\]

We recall from [10] that \(\sigma-u\chi\) is also Galilean invariant.
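For orientation (an illustrative aside, not part of the argument): for the pure polytropic pressure \(P(\rho)=\kappa\rho^{\gamma}\), the entropy kernel is the classical closed-form profile \(\chi(\rho,u)=[k(\rho)^{2}-u^{2}]_{+}^{\lambda}\) with \(\lambda=\frac{3-\gamma}{2(\gamma-1)}\) (up to a normalizing constant), which is exactly the leading profile \(G_{\lambda_{1}}\) in the expansion of Lemma 4.2 below. A quick finite-difference check that this profile solves (4.55) in the interior of its support; the parameter values are ours:

```python
# Check chi_rhorho - (P'(rho)/rho^2) chi_uu = 0 for chi = [k(rho)^2 - u^2]_+^lambda
# with P(rho) = kappa*rho^gamma, by central differences at interior points.
import numpy as np

kappa, gamma = 1.0, 5.0/3.0
lam = (3.0 - gamma) / (2.0 * (gamma - 1.0))
theta = (gamma - 1.0) / 2.0

def k(rho):                  # k(rho) = int_0^rho sqrt(P'(y))/y dy
    return np.sqrt(kappa * gamma) / theta * rho**theta

def chi(rho, u):             # leading kernel profile [k^2 - u^2]_+^lambda
    return np.maximum(k(rho)**2 - u**2, 0.0)**lam

rho0, h = 1.0, 1e-4
for u0 in [0.0, 0.3 * k(rho0), 0.6 * k(rho0)]:   # points with |u| < k(rho)
    chi_rr = (chi(rho0 + h, u0) - 2*chi(rho0, u0) + chi(rho0 - h, u0)) / h**2
    chi_uu = (chi(rho0, u0 + h) - 2*chi(rho0, u0) + chi(rho0, u0 - h)) / h**2
    residual = chi_rr - kappa * gamma * rho0**(gamma - 3.0) * chi_uu
    print(f"u0 = {u0:.3f}: residual = {residual:.2e}  (chi_uu = {chi_uu:.2e})")
```

The residual is at the level of the discretization error, while \(\chi_{uu}=O(1)\); for the general pressure law (1.4)-(1.6) this profile is only the leading term, and the corrections are exactly what Lemmas 4.2-4.3 quantify.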
From (1.4)-(1.6), \(P(\rho)\) satisfies all the conditions in [10, 11]. For later use, we introduce the definition of fractional derivatives (_cf._ [7, 10, 46]). For any real \(\alpha>0\), the fractional derivative \(\partial_{s}^{\alpha}f\) of a function \(f=f(s)\) is

\[\partial_{s}^{\alpha}f(s)=\Gamma(-\alpha)f*[s]_{+}^{-\alpha-1},\]

where \(\Gamma(x)\) is the Gamma function and the convolution should be understood in the sense of distributions. The following product rule holds for fractional derivatives:

\[\partial_{s}^{\alpha+1}(sg(s))=s\partial_{s}^{\alpha+1}g+(\alpha+1)\partial_{s}^{\alpha}g.\]

We now present two useful lemmas for the entropy kernel \(\chi(\rho,u)\) and the entropy flux kernel \(\sigma(\rho,u)\) when \(\rho\) is bounded.

**Lemma 4.2** ([10, Theorems 2.1-2.2]).: _The entropy kernel \(\chi(\rho,u)\) admits the expansion:_

\[\chi(\rho,u)=a_{1}(\rho)G_{\lambda_{1}}(\rho,u)+a_{2}(\rho)G_{\lambda_{1}+1}(\rho,u)+g_{1}(\rho,u)\qquad\text{for }\rho\in[0,\infty), \tag{4.57}\]

_where \(k(\rho)=\int_{0}^{\rho}\frac{\sqrt{P^{\prime}(y)}}{y}\,\mathrm{d}y\) and_

\[\begin{split}&G_{\lambda_{1}}(\rho,u)=[k(\rho)^{2}-u^{2}]_{+}^{\lambda_{1}},\qquad\lambda_{1}=\frac{3-\gamma_{1}}{2(\gamma_{1}-1)}>0,\\ &a_{1}(\rho)=M_{\lambda_{1}}k(\rho)^{-\lambda_{1}}k^{\prime}(\rho)^{-\frac{1}{2}}>0,\quad M_{\lambda_{1}}=\left(\frac{2\lambda_{1}}{\sqrt{2\lambda_{1}+1}}\int_{-1}^{1}(1-z^{2})^{\lambda_{1}}\,\mathrm{d}z\right)^{-1},\\ &a_{2}(\rho)=-\frac{1}{4(\lambda_{1}+1)}k(\rho)^{-\lambda_{1}-1}k^{\prime}(\rho)^{-\frac{1}{2}}\int_{0}^{\rho}k(s)^{\lambda_{1}}k^{\prime}(s)^{-\frac{1}{2}}a_{1}^{\prime\prime}(s)\,\mathrm{d}s.\end{split} \tag{4.58}\]

_Moreover, \(\operatorname{supp}\chi(\rho,u)\subset\{(\rho,u)\,:\,|u|\leq k(\rho)\}\), and \(\chi(\rho,u)>0\) in \(\{(\rho,u)\,:\,|u|<k(\rho)\}\). The remainder term \(g_{1}(\rho,\cdot)\) and its fractional derivative \(\partial_{u}^{\lambda_{1}+1}g_{1}(\rho,\cdot)\) are Hölder continuous. Furthermore, for any fixed \(\rho_{\max}>0\), there exists \(C(\rho_{\max})>0\) depending only on \(\rho_{\max}\) such that_

\[|g_{1}(\rho,u-s)|\leq C(\rho_{\max})[k(\rho)^{2}-(u-s)^{2}]_{+}^{\lambda_{1}+\alpha_{0}+1} \tag{4.59}\]

_for any \(0\leq\rho\leq\rho_{\max}\) and some \(\alpha_{0}\in(0,1)\). In addition, for any \(0\leq\rho\leq\rho_{\max}\),_

\[|a_{1}(\rho)|+\rho^{1-2\theta_{1}}|a_{1}^{\prime}(\rho)|+\rho^{2-2\theta_{1}}|a_{1}^{\prime\prime}(\rho)|+|a_{2}(\rho)|+\rho|a_{2}^{\prime}(\rho)|+\rho^{2}|a_{2}^{\prime\prime}(\rho)|\leq C(\rho_{\max}). \tag{4.60}\]

**Proof.** Since (4.57)-(4.59) have been derived in [10, Theorem 2.2], it suffices to prove (4.60). From (3.6), we find that \(|a_{1}(\rho)|\leq C(\rho_{\max})\) for \(0\leq\rho\leq\rho_{\max}\). For \(|a_{1}^{\prime}(\rho)|\), a direct calculation shows that

\[a_{1}^{\prime}(\rho)=-\lambda_{1}M_{\lambda_{1}}k(\rho)^{-\lambda_{1}-1}k^{\prime}(\rho)^{\frac{1}{2}}-\frac{1}{2}M_{\lambda_{1}}k(\rho)^{-\lambda_{1}}k^{\prime}(\rho)^{-\frac{3}{2}}k^{\prime\prime}(\rho).\]

It follows from (1.5) that \(k(\rho)=C_{1}\rho^{\theta_{1}}\big{(}1+O(\rho^{2\theta_{1}})\big{)}\) for \(\rho\in[0,\rho_{\max}]\) and some constant \(C_{1}>0\) that may depend on \(\kappa_{1}\) and \(\gamma_{1}\). Then, by direct calculation, we observe that the term involving \(\rho^{-1}\) in \(a_{1}^{\prime}(\rho)\) vanishes so that \(|a_{1}^{\prime}(\rho)|\leq C(\rho_{\max})\rho^{2\theta_{1}-1}\). Similarly, we obtain that \(|a_{1}^{\prime\prime}(\rho)|\leq C(\rho_{\max})\rho^{2\theta_{1}-2}\).
Finally, using the formula for \(a_{2}(\rho)\) in (4.58), we can obtain the estimates for \(a_{2}\) in (4.60) by a direct calculation. This completes the proof.

**Lemma 4.3** ([10, Theorem 2.3]).: _The entropy flux kernel \(\sigma(\rho,u)\) admits the expansion_

\[(\sigma-u\chi)(\rho,u)=-u\big{(}b_{1}(\rho)G_{\lambda_{1}}(\rho,u)+b_{2}(\rho)G_{\lambda_{1}+1}(\rho,u)\big{)}+g_{2}(\rho,u)\qquad\text{for }\rho\in[0,\infty),\]

_where_

\[\begin{split}b_{1}(\rho)&=M_{\lambda_{1}}\rho k(\rho)^{-\lambda_{1}-1}k^{\prime}(\rho)^{\frac{1}{2}}>0,\\ b_{2}(\rho)&=-\frac{1}{4(\lambda_{1}+1)}\rho k^{\prime}(\rho)^{\frac{1}{2}}k(\rho)^{-(\lambda_{1}+2)}\int_{0}^{\rho}k(s)^{\lambda_{1}}k^{\prime}(s)^{-\frac{1}{2}}a_{1}^{\prime\prime}(s)\,\mathrm{d}s\\ &\quad-\frac{1}{4(\lambda_{1}+1)}k(\rho)^{-(\lambda_{1}+2)}k^{\prime}(\rho)^{-\frac{1}{2}}\int_{0}^{\rho}k(s)^{\lambda_{1}+1}k^{\prime}(s)^{-\frac{1}{2}}b_{1}^{\prime\prime}(s)\,\mathrm{d}s\\ &\quad+\frac{1}{4(\lambda_{1}+1)}k(\rho)^{-(\lambda_{1}+2)}k^{\prime}(\rho)^{-\frac{1}{2}}\int_{0}^{\rho}sk(s)^{\lambda_{1}}k^{\prime}(s)^{\frac{1}{2}}a_{1}^{\prime\prime}(s)\,\mathrm{d}s.\end{split} \tag{4.61}\]

_The remainder term \(g_{2}(\rho,\cdot)\) and its fractional derivative \(\partial_{u}^{\lambda_{1}+1}g_{2}(\rho,\cdot)\) are Hölder continuous. Moreover, for any fixed \(\rho_{\max}>0\), there exists \(C(\rho_{\max})>0\) depending only on \(\rho_{\max}\) such that_

\[|g_{2}(\rho,u-s)|\leq C(\rho_{\max})[k(\rho)^{2}-(u-s)^{2}]_{+}^{\lambda_{1}+\alpha_{0}+1}\]

_for any \(0\leq\rho\leq\rho_{\max}\) and some \(\alpha_{0}\in(0,1)\). Furthermore, similar to the proof of (4.60), for any \(0\leq\rho\leq\rho_{\max}\),_

\[|b_{1}(\rho)|+\rho^{1-2\theta_{1}}|b_{1}^{\prime}(\rho)|+\rho^{2-2\theta_{1}}|b_{1}^{\prime\prime}(\rho)|+|b_{2}(\rho)|+\rho|b_{2}^{\prime}(\rho)|+\rho^{2}|b_{2}^{\prime\prime}(\rho)|\leq C(\rho_{\max}). \tag{4.62}\]

**Remark 4.1**.: _In [10, Theorem 2.2], it is proved that \(a_{2}(\rho)\) and \(b_{2}(\rho)\) satisfy \(|a_{2}(\rho)|+|b_{2}(\rho)|\leq C\rho k(\rho)^{-2}\) for the pressure law given in [10, (2.1)]. In this paper, we have improved them to (4.60) and (4.62) under conditions (1.4)-(1.6)._

For later use, we recall a useful representation formula for \(\chi(\rho,u)\).

**Lemma 4.4** (First representation formula, [62, Lemma 3.4]).: _Given any \((\rho,u)\) with \(|u|\leq k(\rho)\) and \(0\leq\rho_{0}<\rho\),_

\[\begin{split}\chi(\rho,u)&=\frac{1}{2(\rho-\rho_{0})k^{\prime}(\rho)}\int_{\rho_{0}}^{\rho}k^{\prime}(s)\,\tilde{d}(s)\big{(}\chi(s,u+k(\rho)-k(s))+\chi(s,u-k(\rho)+k(s))\big{)}\,\mathrm{d}s\\ &\quad-\frac{1}{2(\rho-\rho_{0})k^{\prime}(\rho)}\int_{-(k(\rho)-k(\rho_{0}))}^{k(\rho)-k(\rho_{0})}\chi(\rho_{0},u-s)\,\mathrm{d}s,\end{split}\]

_where \(\tilde{d}(\rho):=2+(\rho-\rho_{0})\frac{k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\)._

**Remark 4.2**.: _In the statement of [62, Lemma 3.4], \(\rho_{0}\) is positive. However, the proof of [62, Lemma 3.4] is also valid for \(\rho_{0}=0\) without modification_; _see also [10, (3.38)]._

Given any \(\psi\in C_{0}^{2}(\mathbb{R})\), a regular weak entropy pair \((\eta^{\psi},\,q^{\psi})\) can be given by

\[\eta^{\psi}(\rho,u)=\int_{\mathbb{R}}\psi(s)\,\chi(\rho,u,s)\,\mathrm{d}s,\qquad q^{\psi}(\rho,u)=\int_{\mathbb{R}}\psi(s)\,\sigma(\rho,u,s)\,\mathrm{d}s.
\tag{4.63}\] It follows from (4.55) that \[\begin{cases}\eta^{\psi}_{\rho\rho}-k^{\prime}(\rho)^{2}\eta^{\psi}_{uu}=0,\\ \eta^{\psi}|_{\rho=0}=0,\quad\eta^{\psi}_{\rho}|_{\rho=0}=\psi(u).\end{cases} \tag{4.64}\] Using Lemmas 4.2-4.4, we can obtain the following lemma for the weak entropy pair \((\eta^{\psi},q^{\psi})\). **Lemma 4.5**.: _For any weak entropy \((\eta^{\psi},q^{\psi})\) defined in (4.63), there exists a constant \(C_{\psi}>0\) depending only on \(\rho^{*}\) and \(\psi\) such that, for all \(\rho\in[0,2\rho^{*}]\),_ \[|\eta^{\psi}(\rho,u)|+|q^{\psi}(\rho,u)|\leq C_{\psi}\rho.\] _If \(\eta^{\psi}\) is regarded as a function of \((\rho,m)\), then_ \[|\eta^{\psi}_{m}(\rho,m)|+|\rho\eta^{\psi}_{mm}(\rho,m)|\leq C_{\psi},\qquad| \eta^{\psi}_{\rho}(\rho,m)|\leq C_{\psi}(1+\rho^{\theta_{1}}).\] _Moreover, if \(\eta^{\psi}_{m}\) is regarded as a function of \((\rho,u)\), then_ \[|\eta^{\psi}_{mu}(\rho,u)|+|\rho^{1-\theta_{1}}\eta^{\psi}_{m\rho}(\rho,u)| \leq C_{\psi}.\] **Proof.** All the estimates can be found in [61, Lemma 3.8] or [62, Lemma 4.13] except the estimate of \(\eta^{\psi}_{\rho}(\rho,m)\). In fact, applying Lemma 4.4 to (4.64) and using \(d(\rho):=2+\dfrac{\rho k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\), we have \[\eta^{\psi}(\rho,u)= \,\dfrac{1}{2\rho k^{\prime}(\rho)}\int_{0}^{\rho}k^{\prime}(s) \,d(s)\,\eta^{\psi}(s,u+k(\rho)-k(s))\,\mathrm{d}s\] \[+\dfrac{1}{2\rho k^{\prime}(\rho)}\int_{0}^{\rho}k^{\prime}(s)\,d (s)\,\eta^{\psi}(s,u-k(\rho)+k(s))\,\mathrm{d}s:=I_{1}+I_{2}. \tag{4.65}\] We regard \(\eta^{\psi}\) as a function of \((\rho,m)\). Then we have \[\partial_{\rho}\eta^{\psi}(\rho,m)=\partial_{\rho}\eta^{\psi}(\rho,u)-\dfrac {u}{\rho}\partial_{u}\eta^{\psi}(\rho,u). \tag{4.66}\] Without loss of generality, we assume \(\operatorname{supp}\psi\subset[-L,L]\) for some \(L>0\). Then a direct calculation shows that \(\eta^{\psi}(\rho,u)=0\) if \(|u|\geq k(\rho)+L\). Noticing \(\eta^{\psi}_{u}(\rho,u)=\rho\eta^{\psi}_{m}(\rho,m)\), we have \[\Big{|}\dfrac{u}{\rho}\partial_{u}\eta^{\psi}(\rho,u)\Big{|}\leq|u|\,|\eta^{ \psi}_{m}(\rho,m)|\leq C_{\psi}(1+\rho^{\theta_{1}})\qquad\text{for }0\leq \rho\leq 2\rho^{*}. \tag{4.67}\] Thus, it suffices to calculate \(\partial_{\rho}\eta^{\psi}(\rho,u)\). It follows from (4.65) that \(\partial_{\rho}\eta^{\psi}(\rho,u)=\partial_{\rho}I_{1}+\partial_{\rho}I_{2}\). A direct calculation shows that \[\partial_{\rho}I_{1} =\dfrac{1}{2}\big{(}-\rho^{-2}(k^{\prime}(\rho))^{-1}-\rho^{-1}(k ^{\prime}(\rho))^{-2}k^{\prime\prime}(\rho)\big{)}\int_{0}^{\rho}k^{\prime}(s )\,d(s)\,\eta^{\psi}(s,u+k(\rho)-k(s))\,\mathrm{d}s\] \[+\dfrac{1}{2\rho}\int_{0}^{\rho}k^{\prime}(s)\,d(s)\,\eta^{\psi}_ {u}(s,u+k(\rho)-k(s))\,\mathrm{d}s+\dfrac{1}{2\rho}d(\rho)\eta^{\psi}(\rho,u). \tag{4.68}\] Using (3.6) and Lemma 3.2, we obtain that \[|\partial_{\rho}I_{1}|\leq C+C_{\psi}\rho^{\theta_{1}}+C_{\psi}\leq C_{\psi}(1 +\rho^{\theta_{1}})\qquad\text{ for }0\leq\rho\leq 2\rho^{*},\] which, with (4.68), yields \(|\partial_{\rho}I_{1}|\leq C_{\psi}(1+\rho^{\theta_{1}})\). Similarly, we obtain that \(|\partial_{\rho}I_{2}|\leq C_{\psi}(1+\rho^{\theta_{1}})\). Thus, we conclude that \(|\partial_{\rho}\eta^{\psi}(\rho,u)|\leq|\partial_{\rho}I_{1}|+|\partial_{\rho} I_{2}|\leq C_{\psi}(1+\rho^{\theta_{1}})\), which, with (4.66)-(4.67), implies that \(|\partial_{\rho}\eta^{\psi}(\rho,m)|\leq C_{\psi}(1+\rho^{\theta_{1}}).\) This completes the proof. 
\(\square\) We notice that all the above estimates for the weak entropy pairs in Lemmas 4.2-4.5 hold when the density is bounded. To establish the \(L^{p}\)-compensated compactness framework, we need the entropy pair estimates when the density is large, namely \(\rho\geq\rho^{*}\). From now on in this subsection, we use the representation formula of Lemma 4.4 to estimate \((\eta^{\psi},q^{\psi})\) in the large density region \(\rho\geq\rho^{*}\). **Lemma 4.6**.: _There exists a positive constant \(C>0\) depending only on \(\rho^{*}\) such that_ \[\|\chi(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\rho\qquad\text{for $\rho\geq\rho^{*}$}.\] **Proof.** For \(\rho\geq\rho^{*}\), \(\chi(\rho,u)\) satisfies \[\begin{cases}\chi_{\rho\rho}-k^{\prime}(\rho)^{2}\chi_{uu}=0,\\ \chi(\rho,u)|_{\rho=\rho^{*}}=\chi(\rho^{*},u),\quad\chi_{\rho}(\rho,u)|_{\rho= \rho^{*}}=\chi_{\rho}(\rho^{*},u).\end{cases}\] where \(\chi(\rho^{*},u)\) and \(\chi_{\rho}(\rho^{*},u)\) are given in Lemma 4.2. Then, applying Lemma 4.4, we obtain that, for \(\rho>\rho^{*}\), \[\|k^{\prime}(\rho)\chi(\rho,\cdot)\|_{L^{\infty}_{u}} \leq\frac{1}{\rho-\rho^{*}}\int_{\rho^{*}}^{\rho}d_{*}(s)\|k^{ \prime}(s)\chi(s,\cdot)\|_{L^{\infty}_{u}}\,\mathrm{d}s+\frac{1}{2(\rho-\rho^{ *})}\int_{-(k(\rho)-k(\rho^{*}))}^{k(\rho)-k(\rho^{*})}|\chi(\rho^{*},u-s)|\, \mathrm{d}s\] \[\leq\frac{1}{\rho-\rho^{*}}\int_{\rho^{*}}^{\rho}d_{*}(s)\|k^{ \prime}(s)\chi(s,\cdot)\|_{L^{\infty}_{u}}\,\mathrm{d}s+C\rho^{\theta_{2}-1},\] where \(d_{*}(\rho):=2+(\rho-\rho^{*})\frac{k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\). By a similar proof to that for Lemma A.3, we have \[\|k^{\prime}(\rho)\chi(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\rho^{\theta_{2}} \qquad\text{for $\rho\geq 2\rho^{*}$},\] which, with (3.7), yields that \(\|\chi(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\rho\) for \(\rho\geq 2\rho^{*}\). For \(\rho^{*}\leq\rho\leq 2\rho^{*}\), it follows from Lemma 4.2 that \(\|\chi(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\leq C\rho\). **Lemma 4.7**.: _Let \(\rho\geq\rho^{*}\) and \(\psi\in C^{2}_{0}(\mathbb{R})\). Then, in the \((\rho,u)\)-coordinates,_ \[|\eta^{\psi}(\rho,u)|+|\eta^{\psi}_{u}(\rho,u)|+|\eta^{\psi}_{uu}(\rho,u)|\leq C _{\psi}\rho,\quad|\eta^{\psi}_{\rho}(\rho,u)|+\rho^{1-\theta_{2}}|\eta^{\psi} _{\rho\rho}(\rho,u)|\leq C_{\psi}\rho^{\theta_{2}}.\] _In the \((\rho,m)\)-coordinates,_ \[|\eta^{\psi}_{\rho}(\rho,m)|+\rho^{\theta_{2}}|\eta^{\psi}_{m}(\rho,m)|+\rho^ {1+\theta_{2}}|\eta^{\psi}_{mm}(\rho,m)|\leq C_{\psi}\rho^{\theta_{2}}.\] _If we regard \(\eta^{\psi}_{m}(\rho,m)\) as a function of \((\rho,u)\), then_ \[|\eta^{\psi}_{mu}|+\rho^{1-\theta_{2}}|\eta^{\psi}_{m\rho}|\leq C_{\psi}.\] _All the above constants \(C_{\psi}>0\) depend only on \(\|\psi\|_{C^{2}}\) and \(\operatorname{supp}\psi\)._ **Proof.** We divide the proof into five steps. 1. Using (4.63) and Lemma 4.6, we obtain that, for \(\rho\geq\rho^{*}\), \[|\eta^{\psi}(\rho,u)|+|\eta^{\psi}_{u}(\rho,u)|+|\eta^{\psi}_{uu}(\rho,u)|\leq \|\chi(\rho,\cdot)\|_{L^{\infty}(\mathbb{R})}\|(\psi,\psi^{\prime},\psi^{\prime \prime})\|_{L^{1}(\mathbb{R})}\leq C_{\psi}\rho.\] (4.69) 2. For the estimate of \(\eta^{\psi}_{\rho}(\rho,u)\), the proof is similar to Lemma 4.5. 
Indeed, \(\eta^{\psi}\) satisfies

\[\begin{cases}\eta^{\psi}_{\rho\rho}-k^{\prime}(\rho)^{2}\eta^{\psi}_{uu}=0,\\ \eta^{\psi}(\rho,u)|_{\rho=\rho^{*}}=\eta^{\psi}(\rho^{*},u),\quad\eta^{\psi}_{\rho}(\rho,u)|_{\rho=\rho^{*}}=\eta^{\psi}_{\rho}(\rho^{*},u).\end{cases} \tag{4.70}\]

It follows from (4.70) and Lemma 4.4 that

\[\begin{split}\eta^{\psi}(\rho,u)&=\frac{1}{2(\rho-\rho^{*})k^{\prime}(\rho)}\int_{\rho^{*}}^{\rho}d_{*}(s)k^{\prime}(s)\big{(}\eta^{\psi}(s,u+k(\rho)-k(s))+\eta^{\psi}(s,u-k(\rho)+k(s))\big{)}\,\mathrm{d}s\\ &\quad-\frac{1}{2(\rho-\rho^{*})k^{\prime}(\rho)}\int_{-(k(\rho)-k(\rho^{*}))}^{k(\rho)-k(\rho^{*})}\eta^{\psi}(\rho^{*},u-s)\,\mathrm{d}s,\end{split} \tag{4.71}\]

where \(d_{*}(\rho)=2+(\rho-\rho^{*})\frac{k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\) and \(0<d_{*}(\rho)\leq 3\) for \(\rho\geq\rho^{*}\) from (3.11). Then, following similar arguments to those in the proof of Lemma 4.5, we can obtain that \(|\eta^{\psi}_{\rho}(\rho,u)|\leq C_{\psi}\rho^{\theta_{2}}\) for \(\rho\geq 2\rho^{*}\). Moreover, from Lemma 4.5, \(|\eta^{\psi}_{\rho}(\rho,u)|\leq C_{\psi}\leq C_{\psi}\rho^{\theta_{2}}\) for \(\rho\in[\rho^{*},2\rho^{*}]\) so that

\[|\eta^{\psi}_{\rho}(\rho,u)|\leq C_{\psi}\rho^{\theta_{2}}\qquad\text{for }\rho\geq\rho^{*}. \tag{4.72}\]

3. For \(\eta^{\psi}_{\rho\rho}(\rho,u)\), it follows from (4.69)-(4.70) that

\[|\eta^{\psi}_{\rho\rho}(\rho,u)|\leq|k^{\prime}(\rho)^{2}|\,|\eta^{\psi}_{uu}(\rho,u)|\leq C_{\psi}\rho^{2\theta_{2}-1}=C_{\psi}\rho^{\gamma_{2}-2}\qquad\text{for }\rho\geq\rho^{*}.\]

4. In the \((\rho,m)\)-coordinates, it is clear that

\[\eta^{\psi}_{m}(\rho,m)=\rho^{-1}\eta^{\psi}_{u}(\rho,u),\quad\eta^{\psi}_{mm}(\rho,m)=\rho^{-2}\eta^{\psi}_{uu}(\rho,u),\quad\eta^{\psi}_{\rho}(\rho,m)=\eta^{\psi}_{\rho}(\rho,u)-\frac{m}{\rho^{2}}\eta^{\psi}_{u}(\rho,u).\]

On the other hand, if \(\eta^{\psi}_{m}\) is regarded as a function of \((\rho,u)\), it is direct to obtain

\[\eta^{\psi}_{mu}(\rho,u)=\partial_{u}\big{(}\rho^{-1}\eta^{\psi}_{u}(\rho,u)\big{)}=\rho^{-1}\eta^{\psi}_{uu}(\rho,u).\]

Thus, using (4.69) and (4.72),

\[|\eta^{\psi}_{m}(\rho,m)|+|\eta^{\psi}_{mu}(\rho,u)|+\rho|\eta^{\psi}_{mm}(\rho,m)|\leq C_{\psi},\]
\[|\eta^{\psi}_{\rho}(\rho,m)|\leq|\eta^{\psi}_{\rho}(\rho,u)|+\frac{|m|}{\rho^{2}}\big{|}\eta^{\psi}_{u}(\rho,u)\big{|}\leq C_{\psi}\rho^{\theta_{2}}+C_{\psi}(L+k(\rho))\leq C_{\psi}\rho^{\theta_{2}},\]

where we have used that \(\operatorname{supp}\psi\subset[-L,L]\) and \(\eta^{\psi}_{u}(\rho,u)=\rho\eta^{\psi}_{m}(\rho,m)\).

5. For the estimates of \(\eta^{\psi}_{m\rho}(\rho,u)=\partial_{\rho}\eta^{\psi}_{m}(\rho,u)\), it follows from (4.71) that

\[\begin{split}\eta^{\psi}_{m}(\rho,m)=\frac{1}{\rho}\eta^{\psi}_{u}(\rho,u)&=\frac{1}{2\rho(\rho-\rho^{*})k^{\prime}(\rho)}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta^{\psi}_{u}(s,u+k(\rho)-k(s))\,\mathrm{d}s\\ &\qquad+\frac{1}{2\rho(\rho-\rho^{*})k^{\prime}(\rho)}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta^{\psi}_{u}(s,u-k(\rho)+k(s))\,\mathrm{d}s\\ &\qquad-\frac{1}{2\rho(\rho-\rho^{*})k^{\prime}(\rho)}\int_{-(k(\rho)-k(\rho^{*}))}^{k(\rho)-k(\rho^{*})}\eta^{\psi}_{u}(\rho^{*},u-s)\,\mathrm{d}s\\ &:=J_{1}+J_{2}+J_{3}, \end{split}\tag{4.73}\]

and \(\partial_{\rho}\eta^{\psi}_{m}(\rho,u)=\partial_{\rho}J_{1}+\partial_{\rho}J_{2}+\partial_{\rho}J_{3}\).
A direct calculation shows that

\[\begin{split}\partial_{\rho}J_{1}&=\partial_{\rho}\Big{(}\frac{1}{2\rho(\rho-\rho^{*})k^{\prime}(\rho)}\Big{)}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta^{\psi}_{u}(s,u+k(\rho)-k(s))\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho(\rho-\rho^{*})}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta^{\psi}_{uu}(s,u+k(\rho)-k(s))\,\mathrm{d}s+\frac{1}{2\rho(\rho-\rho^{*})}d_{*}(\rho)\,\eta^{\psi}_{u}(\rho,u)\\ &:=J_{1,1}+J_{1,2}+J_{1,3}, \end{split}\tag{4.74}\]

\[\begin{split}\partial_{\rho}J_{2}&=\partial_{\rho}\Big{(}\frac{1}{2\rho(\rho-\rho^{*})k^{\prime}(\rho)}\Big{)}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta^{\psi}_{u}(s,u-k(\rho)+k(s))\,\mathrm{d}s\\ &\quad-\frac{1}{2\rho(\rho-\rho^{*})}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta^{\psi}_{uu}(s,u-k(\rho)+k(s))\,\mathrm{d}s+\frac{1}{2\rho(\rho-\rho^{*})}d_{*}(\rho)\eta^{\psi}_{u}(\rho,u)\\ &:=J_{2,1}+J_{2,2}+J_{2,3}. \end{split}\tag{4.75}\]

Clearly, we have

\[\Big{|}\partial_{\rho}\Big{(}\frac{1}{2\rho(\rho-\rho^{*})k^{\prime}(\rho)}\Big{)}\Big{|}=\frac{1}{2}\Big{|}\frac{2\rho-\rho^{*}}{\rho^{2}\,(\rho-\rho^{*})^{2}\,k^{\prime}(\rho)}+\frac{k^{\prime\prime}(\rho)}{\rho\,(\rho-\rho^{*})\,k^{\prime}(\rho)^{2}}\Big{|}\leq\frac{C}{(\rho-\rho^{*})^{2}\rho^{\theta_{2}}},\]

which, with (4.69) and \(0<d_{*}(\rho)\leq 3\) for \(\rho\geq\rho^{*}\), yields

\[\begin{split}|J_{1,1}+J_{2,1}|&\leq\Big{|}\partial_{\rho}\Big{(}\frac{1}{2\rho\,(\rho-\rho^{*})\,k^{\prime}(\rho)}\Big{)}\int_{\rho^{*}}^{\rho}k^{\prime}(s)\,d_{*}(s)\,\eta_{u}^{\psi}(s,u+k(\rho)-k(s))\,\mathrm{d}s\Big{|}\\ &\quad+\Big{|}\partial_{\rho}\Big{(}\frac{1}{2\rho\,(\rho-\rho^{*})\,k^{\prime}(\rho)}\Big{)}\int_{\rho^{*}}^{\rho}k^{\prime}(s)\,d_{*}(s)\,\eta_{u}^{\psi}(s,u-k(\rho)+k(s))\,\mathrm{d}s\Big{|}\\ &\leq\frac{C_{\psi}}{(\rho-\rho^{*})^{2}\,\rho^{\theta_{2}}}\int_{\rho^{*}}^{\rho}s^{\theta_{2}}\,\mathrm{d}s\leq\frac{C_{\psi}}{\rho-\rho^{*}}.\end{split} \tag{4.76}\]

It follows from (4.69) and \(0<d_{*}(\rho)\leq 3\) for \(\rho\geq\rho^{*}\) that

\[\begin{split}|J_{1,2}|+|J_{2,2}|&=\Big{|}\frac{1}{2\rho(\rho-\rho^{*})}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta_{uu}^{\psi}(s,u+k(\rho)-k(s))\,\mathrm{d}s\Big{|}\\ &\quad+\Big{|}\frac{1}{2\rho(\rho-\rho^{*})}\int_{\rho^{*}}^{\rho}d_{*}(s)\,k^{\prime}(s)\,\eta_{uu}^{\psi}(s,u-k(\rho)+k(s))\,\mathrm{d}s\Big{|}\\ &\leq\frac{C_{\psi}}{\rho(\rho-\rho^{*})}\int_{\rho^{*}}^{\rho}s^{\theta_{2}}\,\mathrm{d}s\leq\frac{C_{\psi}}{\rho(\rho-\rho^{*})}\rho^{\theta_{2}}(\rho-\rho^{*})\leq C_{\psi}\rho^{\theta_{2}-1}.\end{split} \tag{4.77}\]

For \(J_{1,3}+J_{2,3}\), it is direct to see that

\[|J_{1,3}+J_{2,3}|\leq\Big{|}\frac{1}{\rho(\rho-\rho^{*})}\,d_{*}(\rho)\,\eta_{u}^{\psi}(\rho,u)\Big{|}\leq\frac{C_{\psi}}{\rho-\rho^{*}}. \tag{4.78}\]

For \(\partial_{\rho}J_{3}\), we notice that

\[\begin{split}\partial_{\rho}J_{3}&=-\partial_{\rho}\Big{(}\frac{1}{2\rho\,(\rho-\rho^{*})\,k^{\prime}(\rho)}\Big{)}\int_{-(k(\rho)-k(\rho^{*}))}^{k(\rho)-k(\rho^{*})}\eta_{u}^{\psi}(\rho^{*},u-s)\,\mathrm{d}s\\ &\quad-\frac{1}{2\rho\,(\rho-\rho^{*})}\,\big{(}\eta_{u}^{\psi}(\rho^{*},u-k(\rho)+k(\rho^{*}))+\eta_{u}^{\psi}(\rho^{*},u+k(\rho)-k(\rho^{*}))\big{)},\end{split}\]

which, with \(0<\theta_{2}\leq 1\), yields

\[|\partial_{\rho}J_{3}|\leq\frac{C_{\psi}\rho^{*}}{(\rho-\rho^{*})^{2}}\,|k(\rho)-k(\rho^{*})|+\frac{C_{\psi}\rho^{*}}{\rho\,(\rho-\rho^{*})}\leq\frac{C_{\psi}}{\rho-\rho^{*}}(1+\rho^{\theta_{2}-1})\leq\frac{C_{\psi}}{\rho-\rho^{*}}.
\tag{4.79}\]

Combining (4.74)-(4.79) with (4.73) yields that \(|\eta_{m\rho}^{\psi}(\rho,u)|\leq C_{\psi}\rho^{\theta_{2}-1}\) for \(\rho\geq 2\rho^{*}\). For \(\rho^{*}\leq\rho\leq 2\rho^{*}\), it follows from Lemma 4.5 that \(|\eta_{m\rho}^{\psi}(\rho,u)|\leq C_{\psi}\rho^{\theta_{1}-1}\leq C_{\psi}\). Thus, we obtain \(|\eta_{m\rho}^{\psi}(\rho,u)|\leq C_{\psi}\rho^{\theta_{2}-1}\) for \(\rho\geq\rho^{*}\).

We now estimate \(q^{\psi}\) for \(\rho\geq\rho^{*}\). It follows from (4.56) that \(h:=\sigma-u\chi\) satisfies

\[\begin{cases}h_{\rho\rho}-k^{\prime}(\rho)^{2}h_{uu}=\frac{P^{\prime\prime}(\rho)}{\rho}\chi_{u},\\ h(\rho^{*},u)=(\sigma-u\chi)(\rho^{*},u),\quad h_{\rho}(\rho^{*},u)=(\sigma-u\chi)_{\rho}(\rho^{*},u),\end{cases}\]

where \((\sigma-u\chi)(\rho^{*},u)\) and \((\sigma-u\chi)_{\rho}(\rho^{*},u)\) are given by Lemma 4.3. Similar to Lemma 4.4, we have the following representation formula for \(h\).

**Lemma 4.8** (Second representation formula, [62, Lemmas 3.4 and 3.9]).: _For any \((\rho,u)\) with \(|u|\leq k(\rho)\) and \(\rho>\rho^{*}\),_

\[\begin{split}h(\rho,u)=&\,\frac{1}{2(\rho-\rho^{*})k^{\prime}(\rho)}\int_{\rho^{*}}^{\rho}k^{\prime}(s)d_{*}(s)\big{(}h(s,u+k(\rho)-k(s))+h(s,u-k(\rho)+k(s))\big{)}\,\mathrm{d}s\\ &+\frac{1}{2(\rho-\rho^{*})k^{\prime}(\rho)}\int_{\rho^{*}}^{\rho}(s-\rho^{*})\frac{P^{\prime\prime}(s)}{s}\big{(}\chi(s,u+k(\rho)-k(s))-\chi(s,u-k(\rho)+k(s))\big{)}\,\mathrm{d}s\\ &-\frac{1}{2(\rho-\rho^{*})k^{\prime}(\rho)}\int_{-(k(\rho)-k(\rho^{*}))}^{k(\rho)-k(\rho^{*})}h(\rho^{*},u-s)\,\mathrm{d}s,\end{split} \tag{4.80}\]

_where \(d_{*}(\rho)=2+(\rho-\rho^{*})\frac{k^{\prime\prime}(\rho)}{k^{\prime}(\rho)}\)._

**Lemma 4.9**.: _There exists a constant \(C>0\) depending only on \(\rho^{*}\) such that_

\[\|(\sigma-u\chi)(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\rho^{1+\theta_{2}}\qquad\text{for }\rho\geq\rho^{*}.\]

**Proof.** It follows from (3.3), (4.80), and Lemma 4.6 that

\[\begin{split}\|k^{\prime}(\rho)h(\rho,\cdot)\|_{L^{\infty}_{u}}&\leq\frac{1}{\rho-\rho^{*}}\int_{\rho^{*}}^{\rho}d_{*}(s)\|k^{\prime}(s)h(s,\cdot)\|_{L^{\infty}_{u}}\,\mathrm{d}s+\frac{C}{\rho-\rho^{*}}\int_{\rho^{*}}^{\rho}s^{\gamma_{2}-1}\,\mathrm{d}s+C\rho^{\theta_{2}-1}\\ &\leq\frac{1}{\rho-\rho^{*}}\int_{\rho^{*}}^{\rho}d_{*}(s)\|k^{\prime}(s)h(s,\cdot)\|_{L^{\infty}_{u}}\,\mathrm{d}s+C\rho^{2\theta_{2}},\end{split}\]

which, with (3.7) and a similar proof to that for Lemma A.3, yields

\[\|k^{\prime}(\rho)h(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\rho^{2\theta_{2}}\,\Longrightarrow\,\|h(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\rho^{1+\theta_{2}}\qquad\text{for }\rho\geq 2\rho^{*}.\]

For \(\rho^{*}\leq\rho\leq 2\rho^{*}\), it follows from Lemma 4.3 that \(\|h(\rho,\cdot)\|_{L^{\infty}_{u}}\leq C\leq C\rho^{1+\theta_{2}}\).

**Lemma 4.10**.: _For \(\rho\geq\rho^{*}\) and \(\psi\in C^{2}_{0}(\mathbb{R})\),_

\[|q^{\psi}(\rho,u)|\leq C_{\psi}\rho^{1+\theta_{2}}. \tag{4.81}\]

**Proof.** Recall that

\[q^{\psi}(\rho,u)=\int_{\mathbb{R}}\big{(}\sigma(\rho,u,s)-u\chi(\rho,u-s)\big{)}\psi(s)\,\mathrm{d}s+u\int_{\mathbb{R}}\chi(\rho,u-s)\psi(s)\,\mathrm{d}s:=h^{\psi}(\rho,u)+u\,\eta^{\psi}(\rho,u). \tag{4.82}\]

It follows from Lemma 4.9 that

\[|h^{\psi}(\rho,u)|\leq C\|(\sigma-u\chi)(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\|\psi\|_{L^{1}(\mathbb{R})}\leq C_{\psi}\rho^{1+\theta_{2}}.
\tag{4.83}\]

Since there exists \(L>0\) such that \(\operatorname{supp}\psi\subset[-L,L]\), it follows from Lemma 4.7 that \(|u\eta^{\psi}(\rho,u)|\leq(k(\rho)+L)|\eta^{\psi}(\rho,u)|\leq C_{\psi}\rho^{1+\theta_{2}}\) for \(\rho\geq\rho^{*}\), which, with (4.82)-(4.83), yields (4.81).

### Singularities of the entropy kernel and the entropy flux kernel

As indicated in [10, 61, 62], understanding the singularities of the entropy kernel and the entropy flux kernel is essential for the reduction of the Young measure. This requires detailed estimates of the singularities of both kernels. The arguments in this subsection are similar to [62, §6]; the main difference is that a more subtle Gronwall inequality (see Lemma A.3) is needed to obtain the desired estimates of the singularities.

**Lemma 4.11**.: _For \(\rho\geq\rho^{*}\), the coefficient functions \(a_{1}(\rho)\) and \(a_{2}(\rho)\) and the remainder term \(g_{1}(\rho,u)\) in Lemma 4.2 satisfy_

\[|a_{1}(\rho)|+\rho^{\theta_{2}}|a_{2}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2\theta_{1}}},\qquad\|g_{1}(\rho,u)\|_{L^{\infty}_{u}(\mathbb{R})}\leq\begin{cases}C\rho&\text{ if }\theta_{2}<\theta_{1},\\ C\rho\ln\rho&\text{ if }\theta_{2}=\theta_{1}.\end{cases}\]

\[\|k^{\prime}(\rho)g_{1}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq\begin{cases}C\rho^{\theta_{2}}&\text{if }\theta_{2}<\theta_{1},\\ C\rho^{\theta_{2}}\ln\rho&\text{if }\theta_{2}=\theta_{1},\end{cases}\]

which, with (3.7), yields that, for \(\rho\geq\rho^{*}\),

\[\|g_{1}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq\begin{cases}C\rho&\text{ if }\theta_{2}<\theta_{1},\\ C\rho\ln\rho&\text{ if }\theta_{2}=\theta_{1}.\end{cases}\]

3. Applying \(\partial_{u}\) to (4.87), we have

\[\begin{split}k^{\prime}(\rho)\partial_{u}g_{1}(\rho,u)&=\frac{1}{2\rho}\int_{0}^{\rho}d(s)k^{\prime}(s)\Big{(}\partial_{u}g_{1}(s,u+k(\rho)-k(s))+\partial_{u}g_{1}(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}sA(s)k(s)^{-1}\Big{(}f_{\lambda_{1}+1}(\frac{u+k(\rho)-k(s)}{k(s)})-f_{\lambda_{1}+1}(\frac{u-k(\rho)+k(s)}{k(s)})\Big{)}\,\mathrm{d}s.\end{split} \tag{4.89}\]

Since \(|f_{\lambda_{1}+1}(s)|\leq 1\), by similar arguments as in Step 2, we can obtain

\[\|\partial_{u}g_{1}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq C\rho\qquad\text{for }\rho\geq\rho^{*}.\]

4. Applying the fractional derivative \(\partial_{u}^{\lambda_{1}}\) to (4.89), we have

\[\begin{split}&k^{\prime}(\rho)\partial_{u}^{\lambda_{1}+1}g_{1}(\rho,u)\\ &=\frac{1}{2\rho}\int_{0}^{\rho}d(s)k^{\prime}(s)\Big{(}(\partial_{u}^{\lambda_{1}+1}g_{1})(s,u+k(\rho)-k(s))+(\partial_{u}^{\lambda_{1}+1}g_{1})(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}sA(s)k(s)^{-1-\lambda_{1}}\Big{(}(\partial_{u}^{\lambda_{1}}f_{\lambda_{1}+1})(\frac{u+k(\rho)-k(s)}{k(s)})-(\partial_{u}^{\lambda_{1}}f_{\lambda_{1}+1})(\frac{u-k(\rho)+k(s)}{k(s)})\Big{)}\,\mathrm{d}s,\end{split}\]

where we have taken into account the homogeneity of the fractional derivative in the last term.
Using the Fourier transform relation as in [46, (I.26)-(I.27)], we can obtain

\[\big{|}\mathscr{F}\big{(}(\partial_{u}^{\lambda_{1}}f_{\lambda_{1}+1})(u)\big{)}(\xi)\big{|}=C_{\lambda_{1}+1}|\xi|^{-\frac{3}{2}}\,|J_{\lambda_{1}+\frac{3}{2}}(|\xi|)|\leq\frac{\tilde{C}_{\lambda_{1}+1}}{1+\xi^{2}}\]

for some positive constants \(C_{\lambda_{1}+1}\) and \(\tilde{C}_{\lambda_{1}+1}\) depending only on \(\lambda_{1}+1\), where we have used the asymptotics of the Bessel functions of the first kind \(J_{\lambda_{1}+\frac{3}{2}}(|\xi|)\) to obtain the final inequality. Since \((1+|\xi|^{2})^{-1}\) is integrable, applying the Fourier inversion theorem, we see that \((\partial_{u}^{\lambda_{1}}f_{\lambda_{1}+1})(u)\) is uniformly bounded. Hence, by similar arguments as in Step 2, we have

\[\|\big{(}\partial_{u}^{\lambda_{1}+1}g_{1}\big{)}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq C\rho\qquad\text{for }\rho\geq\rho^{*}.\]

5. By Lemma 4.2, \(\alpha_{0}\in(0,1)\) is the Hölder exponent of \(\partial_{u}^{\lambda_{1}+1}g_{1}(\rho,\cdot)\). Then, applying the fractional derivative \(\partial_{u}^{\lambda_{1}+\alpha_{0}}\) to (4.89), we have

\[\begin{split}&k^{\prime}(\rho)\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{1}(\rho,u)\\ &=\frac{1}{2\rho}\int_{0}^{\rho}d(s)k^{\prime}(s)\Big{(}(\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{1})(s,u+k(\rho)-k(s))+(\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{1})(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}sA(s)k(s)^{-1-\lambda_{1}-\alpha_{0}}(\partial_{u}^{\lambda_{1}+\alpha_{0}}f_{\lambda_{1}+1})(\frac{u+k(\rho)-k(s)}{k(s)})\,\mathrm{d}s\\ &\quad-\frac{1}{2\rho}\int_{0}^{\rho}sA(s)k(s)^{-1-\lambda_{1}-\alpha_{0}}(\partial_{u}^{\lambda_{1}+\alpha_{0}}f_{\lambda_{1}+1})(\frac{u-k(\rho)+k(s)}{k(s)})\,\mathrm{d}s.\end{split}\]

Noting

\[\big{|}\mathscr{F}\big{(}(\partial_{u}^{\lambda_{1}+\alpha_{0}}f_{\lambda_{1}+1})(u)\big{)}(\xi)\big{|}=C_{\lambda_{1}+1}|\xi|^{-\frac{3}{2}+\alpha_{0}}\big{|}J_{\lambda_{1}+\frac{3}{2}}(|\xi|)\big{|}\leq\frac{\tilde{C}_{\lambda_{1}+1}}{1+\xi^{2-\alpha_{0}}},\]

and using the Fourier inversion theorem, we find that \((\partial_{u}^{\lambda_{1}+\alpha_{0}}f_{\lambda_{1}+1})(u)\) is uniformly bounded. By similar arguments as in Step 2, we obtain that \(\|\big{(}\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{1}\big{)}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq C\rho\) for \(\rho\geq\rho^{*}\). This completes the proof of Lemma 4.11.
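The homogeneity and product-rule manipulations used in Steps 4-5 can be sanity-checked on monomials, for which the Riemann-Liouville fractional derivative is explicit: \(\partial_{s}^{\mu}s_{+}^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\beta-\mu+1)}s_{+}^{\beta-\mu}\). The snippet below (illustrative only; the test values are ours) verifies the product rule \(\partial_{s}^{\alpha+1}(sg)=s\,\partial_{s}^{\alpha+1}g+(\alpha+1)\partial_{s}^{\alpha}g\) from §4.2 in this way:

```python
# Verify the fractional-derivative product rule on monomials g(s) = s_+^beta,
# using the explicit Riemann-Liouville formula. Illustrative sanity check only.
from math import gamma

def frac_deriv_monomial(beta, mu, s):
    # d^mu s_+^beta = Gamma(beta+1)/Gamma(beta-mu+1) * s^{beta-mu} for s > 0
    return gamma(beta + 1.0) / gamma(beta - mu + 1.0) * s**(beta - mu)

alpha, beta, s = 1.25, 3.5, 0.7          # arbitrary test values, s > 0
lhs = frac_deriv_monomial(beta + 1.0, alpha + 1.0, s)          # d^{a+1}(s * g)
rhs = s * frac_deriv_monomial(beta, alpha + 1.0, s) \
      + (alpha + 1.0) * frac_deriv_monomial(beta, alpha, s)    # product rule
print(lhs, rhs, abs(lhs - rhs))          # the two sides agree up to rounding
```

The agreement reflects the Gamma-function identity \(\frac{\Gamma(\beta+2)}{\Gamma(\beta-\alpha+1)}=\frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha)}+(\alpha+1)\frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}\), which is the monomial form of the product rule.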
From Lemmas 4.2 and 4.11, we conclude:

**Corollary 4.12**.: \(\chi(\rho,\cdot)\) _is Hölder continuous and_

\[\|\chi(\rho,\cdot)\|_{C^{\tilde{\alpha}}_{u}}\leq C(1+\rho|\ln\rho|)\qquad\text{for }\tilde{\alpha}\in(0,\min\{\lambda_{1},1\}]\text{ and }\rho\geq 0.\]

**Lemma 4.13**.: _For \(\rho\geq\rho^{*}\), the coefficient functions \(b_{1}(\rho)\) and \(b_{2}(\rho)\) and the remainder term \(g_{2}(\rho,u)\) in Lemma 4.3 satisfy_

\[|b_{1}(\rho)|+\rho^{\theta_{2}}|b_{2}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2\theta_{1}}},\qquad\|g_{2}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq\begin{cases}C\rho^{1+\theta_{2}}&\text{ if }\theta_{2}<\theta_{1},\\ C\rho^{1+\theta_{2}}\ln\rho&\text{ if }\theta_{2}=\theta_{1},\end{cases}\]

\[\|\partial_{u}g_{2}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}+\|\big{(}\partial_{u}^{\lambda_{1}+1}g_{2}\big{)}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}+\|\big{(}\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{2}\big{)}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq C\rho^{1+\theta_{2}},\]

_where \(\alpha_{0}\in(0,1)\) is the Hölder exponent._

**Proof.** We divide the proof into five steps.

1. It follows from (4.62) that

\[|b_{1}(\rho)|+\rho^{1-2\theta_{1}}|b_{1}^{\prime}(\rho)|+\rho^{2-2\theta_{1}}|b_{1}^{\prime\prime}(\rho)|+|b_{2}(\rho)|+\rho|b_{2}^{\prime}(\rho)|+\rho^{2}|b_{2}^{\prime\prime}(\rho)|\leq C\qquad\text{for }\rho\in[0,\rho^{*}]. \tag{4.90}\]

From (4.61) and (3.7), we have

\[|b_{1}(\rho)|+|\rho b_{1}^{\prime}(\rho)|+|\rho^{2}b_{1}^{\prime\prime}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2\theta_{1}}}\qquad\text{for }\rho\geq\rho^{*}. \tag{4.91}\]

Using (4.84)-(4.85) and (4.90)-(4.91), we obtain that, for \(\rho\geq\rho^{*}\),

\[\Big{|}\int_{0}^{\rho}k(s)^{\lambda_{1}}k^{\prime}(s)^{-\frac{1}{2}}a_{1}^{\prime\prime}(s)\,\mathrm{d}s\Big{|}\leq C\int_{0}^{\rho^{*}}s^{-1+\theta_{1}}\,\mathrm{d}s+C\int_{\rho^{*}}^{\rho}s^{-1-\theta_{2}}\,\mathrm{d}s\leq C,\]
\[\Big{|}\int_{0}^{\rho}k(s)^{\lambda_{1}+1}k^{\prime}(s)^{-\frac{1}{2}}b_{1}^{\prime\prime}(s)\,\mathrm{d}s\Big{|}+\Big{|}\int_{0}^{\rho}sk(s)^{\lambda_{1}}k^{\prime}(s)^{\frac{1}{2}}a_{1}^{\prime\prime}(s)\,\mathrm{d}s\Big{|}\leq C\ln\rho,\]

which, with (4.61), yields that \(|b_{2}(\rho)|\leq C\rho^{\frac{1}{2}-\theta_{2}(1+\frac{1}{2\theta_{1}})}\) for \(\rho\geq\rho^{*}\). Moreover, by calculating the derivatives explicitly, we obtain

\[\rho|b_{2}^{\prime}(\rho)|+\rho^{2}|b_{2}^{\prime\prime}(\rho)|\leq C\rho^{\frac{1}{2}-\theta_{2}(1+\frac{1}{2\theta_{1}})}\qquad\text{for }\rho\geq\rho^{*}. \tag{4.92}\]

2. For the remainder term \(g_{2}(\rho,u)\), we recall from [10, Proof of Theorem 2.2] that \(g_{2}\) satisfies

\[\begin{cases}\partial_{\rho\rho}g_{2}(\rho,u)-k^{\prime}(\rho)^{2}\partial_{uu}g_{2}(\rho,u)=ub_{2}^{\prime\prime}(\rho)k(\rho)^{2\lambda_{1}+2}f_{\lambda_{1}+1}(\frac{u}{k(\rho)})+\frac{P^{\prime\prime}(\rho)}{\rho}\partial_{u}g_{1}(\rho,u),\\ g_{2}(0,u)=0,\quad\partial_{\rho}g_{2}(0,u)=0,\end{cases}\]

where \(f_{\lambda_{1}}(y)=[1-y^{2}]_{+}^{\lambda_{1}}\).
Similar to the arguments for Lemma 4.8, we obtain

\[\begin{split}k^{\prime}(\rho)g_{2}(\rho,u)&=\frac{1}{2\rho}\int_{0}^{\rho}d(s)k^{\prime}(s)\Big{(}g_{2}(s,u+k(\rho)-k(s))+g_{2}(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}sb_{2}^{\prime\prime}(s)k(s)^{2\lambda_{1}+2}\Big{(}\int_{u-k(\rho)+k(s)}^{u+k(\rho)-k(s)}yf_{\lambda_{1}+1}(\frac{y}{k(s)})\,\mathrm{d}y\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}P^{\prime\prime}(s)\Big{(}g_{1}(s,u+k(\rho)-k(s))-g_{1}(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s, \end{split}\tag{4.93}\]

which yields

\[\|k^{\prime}(\rho)g_{2}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq\frac{1}{\rho}\int_{0}^{\rho}d(s)\|k^{\prime}(s)g_{2}(s,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\,\mathrm{d}s+\frac{C}{\rho}\int_{0}^{\rho}s|b_{2}^{\prime\prime}(s)|k(s)^{2\lambda_{1}+4}\,\mathrm{d}s+\frac{C}{\rho}\int_{0}^{\rho}P^{\prime\prime}(s)\|g_{1}(s,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\,\mathrm{d}s\]

\[\|\partial_{u}g_{2}(\rho,\cdot)\|_{L^{\infty}_{u}(\mathbb{R})}\leq C\rho^{1+\theta_{2}}\qquad\text{for }\rho\geq\rho^{*}.\]

5. Applying \(\partial_{u}^{\lambda_{1}+\alpha_{0}}\) to (4.97), we have

\[\begin{split}&k^{\prime}(\rho)\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{2}(\rho,u)\\ &=\frac{1}{2\rho}\int_{0}^{\rho}d(s)k^{\prime}(s)\Big{(}(\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{2})(s,u+k(\rho)-k(s))+(\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{2})(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}sb_{2}^{\prime\prime}(s)k(s)^{\lambda_{1}+3-\alpha_{0}}\Big{(}\widetilde{f}^{(\lambda_{1}+\alpha_{0})}(\frac{u+k(\rho)-k(s)}{k(s)})-\widetilde{f}^{(\lambda_{1}+\alpha_{0})}(\frac{u-k(\rho)+k(s)}{k(s)})\Big{)}\,\mathrm{d}s\\ &\quad+\frac{1}{2\rho}\int_{0}^{\rho}P^{\prime\prime}(s)\Big{(}(\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{1})(s,u+k(\rho)-k(s))-(\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{1})(s,u-k(\rho)+k(s))\Big{)}\,\mathrm{d}s.\end{split}\]

Noting that \(\widetilde{f}^{(\lambda_{1}+\alpha_{0})}(s)\) is uniformly bounded, by similar arguments as in Step 2, we have

\[\|\partial_{u}^{\lambda_{1}+1+\alpha_{0}}g_{2}(\rho,\cdot)\|_{L_{u}^{\infty}(\mathbb{R})}\leq C\rho^{1+\theta_{2}}\qquad\text{for }\rho\geq\rho^{*}.\]

This completes the proof.

The following lemma provides the explicit singularities of \(\chi(\rho,u-s)\) and \((\sigma-u\chi)(\rho,u-s)\).

**Lemma 4.14**.: _The fractional derivatives \(\partial_{u}^{\lambda_{1}+1}\chi\) and \(\partial_{u}^{\lambda_{1}+1}(\sigma-u\chi)\) admit the expansions_:

\[\begin{split}\partial_{s}^{\lambda_{1}+1}\chi(\rho,u-s)&=\sum_{\pm}\Big{(}A_{1,\pm}(\rho)\,\delta(s-u\pm k(\rho))+A_{2,\pm}(\rho)\,H(s-u\pm k(\rho))\Big{)}\\ &\quad+\sum_{\pm}\Big{(}A_{3,\pm}(\rho)\,PV(s-u\pm k(\rho))+A_{4,\pm}(\rho)\,Ci(s-u\pm k(\rho))\Big{)}\\ &\quad+r_{\chi}(\rho,u-s), \end{split}\tag{4.98}\]

\[\begin{split}\partial_{s}^{\lambda_{1}+1}(\sigma-u\chi)(\rho,u-s)&=\sum_{\pm}(s-u)\Big{(}B_{1,\pm}(\rho)\,\delta(s-u\pm k(\rho))+B_{2,\pm}(\rho)\,H(s-u\pm k(\rho))\Big{)}\\ &\quad+\sum_{\pm}(s-u)\Big{(}B_{3,\pm}(\rho)\,PV(s-u\pm k(\rho))+B_{4,\pm}(\rho)\,Ci(s-u\pm k(\rho))\Big{)}\\ &\quad+r_{\sigma}(\rho,u-s), \end{split}\tag{4.99}\]

_where \(\delta\) is the Dirac measure, \(H\) is the Heaviside function, \(PV\) is the principal value distribution, and \(Ci\) is the cosine integral_:

\[Ci(s):=-\int_{|s|}^{\infty}\frac{\cos y}{y}\,\mathrm{d}y=\log|s|+\int_{0}^{|s|}\frac{\cos y-1}{y}\,\mathrm{d}y+C_{0}\qquad\text{for }s\in\mathbb{R}\]

_for some constant \(C_{0}>0\). The remainder terms \(r_{\chi}\) and \(r_{\sigma}\) are Hölder continuous functions.
Moreover, there exists a positive constant \(C=C(\gamma_{1},\gamma_{2},\rho_{*},\rho^{*})\) such that, for \(\rho\geq\rho^{*}\),_

\[\sum_{j=1,\pm}^{4}|A_{j,\pm}(\rho)|+\sum_{j=1,\pm}^{6}|B_{j,\pm}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2}},\quad\|r_{\chi}(\rho,\cdot)\|_{C^{\alpha_{1}}(\mathbb{R})}\leq C\rho,\quad\|r_{\sigma}(\rho,\cdot)\|_{C^{\alpha_{1}}(\mathbb{R})}\leq C\rho^{1+\theta_{2}},\]

_where \(\alpha_{1}\in(0,\alpha_{0}]\) is the common Hölder exponent of \(r_{\chi}\) and \(r_{\sigma}\)._

**Proof.** From [62, Lemma 6.4], we obtain (4.98)-(4.99), where the coefficients are given by

\[\begin{split}A_{1,\pm}(\rho)&=a_{1}(\rho)k(\rho)^{\lambda_{1}}A_{1}^{\lambda_{1}},\qquad A_{2,\pm}(\rho)=\pm a_{1}(\rho)k(\rho)^{\lambda_{1}-1}A_{3}^{\lambda_{1}}+a_{2}(\rho)k(\rho)^{\lambda_{1}+1}A_{1}^{\lambda_{1}+1},\\ A_{3,\pm}(\rho)&=\pm a_{1}(\rho)k(\rho)^{\lambda_{1}}A_{2}^{\lambda_{1}},\qquad A_{4,\pm}(\rho)=\pm a_{1}(\rho)k(\rho)^{\lambda_{1}-1}A_{4}^{\lambda_{1}}\pm a_{2}(\rho)k(\rho)^{\lambda_{1}+1}A_{2}^{\lambda_{1}+1},\end{split}\]

\[\begin{split}r_{\chi}(\rho,u-s)&=a_{1}(\rho)k(\rho)^{\lambda_{1}-1}\tilde{q}(\frac{s-u}{k(\rho)})+a_{2}(\rho)k(\rho)^{\lambda_{1}+1}\tilde{r}(\frac{s-u}{k(\rho)})-A_{4}^{\lambda_{1}}k(\rho)^{\lambda_{1}-1}(\log k(\rho))^{2}\\ &\quad+\partial_{s}^{\lambda_{1}+1}g_{1}(\rho,u-s),\end{split}\]

where \(A_{i}^{\lambda_{1}}\in\mathbb{C}\) for \(i=1,\cdots,4\) are constants depending only on \(\lambda_{1}\), and \(\tilde{r}\) and \(\tilde{q}\) are uniformly bounded Hölder continuous functions. Thus, using Lemma 4.11, we see that, for \(\rho\geq\rho^{*}\),

\[|A_{i,\pm}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2\theta_{1}}}\rho^{\theta_{2}(\frac{1}{2\theta_{1}}-\frac{1}{2})}\leq C\rho^{\frac{1}{2}-\frac{1}{2}\theta_{2}}\qquad\text{for }i=1,3,\]
\[|A_{j,\pm}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{3\theta_{2}}{2}}+C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2\theta_{1}}-\theta_{2}}\rho^{\theta_{2}(\frac{1}{2\theta_{1}}+\frac{1}{2})}\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2}}\qquad\text{for }j=2,4,\]
\[\|r_{\chi}(\rho,\cdot)\|_{C^{\alpha_{1}}(\mathbb{R})}:=\|r_{\chi}(\rho,\cdot)\|_{L^{\infty}(\mathbb{R})}+[r_{\chi}(\rho,\cdot)]_{C^{\alpha_{1}}(\mathbb{R})}\leq C\big{(}\rho^{\frac{1}{2}-\frac{\theta_{2}}{2}}+\rho^{\frac{\theta_{2}}{2\theta_{1}}-\frac{3}{2}\theta_{2}}|\ln\rho|^{2}+\rho\big{)}\leq C\rho.\]

Similarly, we have

\[\begin{split}B_{1,\pm}(\rho)&=b_{1}(\rho)k(\rho)^{\lambda_{1}}A_{1}^{\lambda_{1}},\qquad B_{2,\pm}(\rho)=\pm b_{1}(\rho)k(\rho)^{\lambda_{1}-1}A_{3}^{\lambda_{1}}+b_{2}(\rho)k(\rho)^{\lambda_{1}+1}A_{1}^{\lambda_{1}+1},\\ B_{3,\pm}(\rho)&=\pm b_{1}(\rho)k(\rho)^{\lambda_{1}}A_{2}^{\lambda_{1}},\qquad B_{4,\pm}(\rho)=\pm b_{1}(\rho)k(\rho)^{\lambda_{1}-1}A_{4}^{\lambda_{1}}\pm b_{2}(\rho)k(\rho)^{\lambda_{1}+1}A_{2}^{\lambda_{1}+1},\\ B_{5,\pm}(\rho)&=(\lambda_{1}+1)b_{1}(\rho)k(\rho)^{\lambda_{1}}A_{1}^{\lambda_{1}},\qquad B_{6,\pm}(\rho)=\pm(\lambda_{1}+1)b_{1}(\rho)k(\rho)^{\lambda_{1}}A_{2}^{\lambda_{1}},\end{split}\]

\[\begin{split}r_{\sigma}(\rho,u-s)&=(s-u)\Big{(}b_{1}(\rho)k(\rho)^{\lambda_{1}-1}\big{(}-A_{4}^{\lambda_{1}}(\log k(\rho))^{2}+\tilde{q}(\frac{s-u}{k(\rho)})\big{)}+b_{2}(\rho)k(\rho)^{\lambda_{1}+1}\tilde{r}(\frac{s-u}{k(\rho)})\Big{)}\\ &\quad+(\lambda_{1}+1)\Big{(}b_{1}(\rho)k(\rho)^{\lambda_{1}}\tilde{r}(\frac{s-u}{k(\rho)})+b_{2}(\rho)k(\rho)^{\lambda_{1}+2}\tilde{\ell}(\frac{s-u}{k(\rho)})\Big{)}+\partial_{s}^{\lambda_{1}+1}g_{2}(\rho,u-s),\end{split}\]

where \(\tilde{\ell}\) is also a uniformly bounded Hölder continuous function.
Using Lemma 4.13, we conclude that, for \(\rho\geq\rho^{*}\),

\[|B_{i,\pm}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2\theta_{1}}}\rho^{\theta_{2}(\frac{1}{2\theta_{1}}-\frac{1}{2})}\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2}}\qquad\text{for }i=1,3,5,6,\]
\[|B_{j,\pm}(\rho)|\leq C\rho^{\frac{1}{2}-\frac{3\theta_{2}}{2}}+C\rho^{\frac{1}{2}-(\frac{1}{2\theta_{1}}+1)\theta_{2}}\rho^{\theta_{2}(\frac{1}{2\theta_{1}}+\frac{1}{2})}\leq C\rho^{\frac{1}{2}-\frac{\theta_{2}}{2}}\qquad\text{for }j=2,4,\]
\[\|r_{\sigma}(\rho,\cdot)\|_{C^{\alpha_{1}}(\mathbb{R})}:=\|r_{\sigma}(\rho,\cdot)\|_{L^{\infty}(\mathbb{R})}+[r_{\sigma}(\rho,\cdot)]_{C^{\alpha_{1}}(\mathbb{R})}\leq C\big{(}\rho^{\frac{1}{2}-\frac{\theta_{2}}{2}}|\ln\rho|^{2}+\rho^{1+\theta_{2}}\big{)}\leq C\rho^{1+\theta_{2}}.\]

This completes the proof.

## 5. Uniform Estimates of Approximate Solutions

As in [9], we construct the approximate solutions via the following approximate free boundary problem for CNSPEs:

\[\begin{cases}\rho_{t}+(\rho u)_{r}+\frac{2}{r}\rho u=0,\\ (\rho u)_{t}+(\rho u^{2}+P(\rho))_{r}+\frac{2}{r}\rho u^{2}+\frac{\rho}{r^{2}}\int_{a}^{r}\rho(t,y)\,y^{2}\mathrm{d}y=\varepsilon\Big{(}\rho(u_{r}+\frac{2}{r}u)\Big{)}_{r}-\frac{2\varepsilon}{r}\rho_{r}u,\end{cases} \tag{5.1}\]

for \((t,r)\in\Omega_{T}\) with

\[\Omega_{T}=\{(t,r)\in[0,\infty)\times\mathbb{R}\,:\,a\leq r\leq b(t),\;0\leq t\leq T\}, \tag{5.2}\]

where \(\{r=b(t)\,:\,0\leq t\leq T\}\) is a free boundary determined by

\[b^{\prime}(t)=u(t,b(t))\quad\text{for }t>0,\qquad b(0)=b, \tag{5.3}\]

and \(a=b^{-1}\) with \(b\gg 1\). On the free boundary \(r=b(t)\), we impose the stress-free boundary condition:

\[\big{(}P(\rho)-\varepsilon\rho(u_{r}+\frac{2}{r}u)\big{)}(t,b(t))=0\qquad\text{for }t>0. \tag{5.4}\]

On the fixed boundary \(r=a=b^{-1}\), we impose the Dirichlet boundary condition:

\[u(t,r)|_{r=a}=0\qquad\text{for }t>0. \tag{5.5}\]

The initial condition is

\[(\rho,\rho u)|_{t=0}=(\rho_{0}^{\varepsilon,b},\rho_{0}^{\varepsilon,b}u_{0}^{\varepsilon,b})\qquad\text{for }r\in[a,b]. \tag{5.6}\]

### Basic estimates

Denote

\[E_{0}^{\varepsilon,b}:=\omega_{3}\int_{a}^{b}\rho_{0}^{\varepsilon,b}\Big{(}\frac{1}{2}\big{|}u_{0}^{\varepsilon,b}\big{|}^{2}+e(\rho_{0}^{\varepsilon,b})\Big{)}\,r^{2}\mathrm{d}r,\qquad E_{1}^{\varepsilon,b}:=\omega_{3}\varepsilon^{2}\int_{a}^{b}\Big{|}\big{(}\sqrt{\rho_{0}^{\varepsilon,b}}\big{)}_{r}\Big{|}^{2}\,r^{2}\mathrm{d}r.\]

For given total energy \(E_{0}^{\varepsilon,b}>0\), the critical mass \(M_{\mathrm{c}}^{\varepsilon,b}\) is defined in (2.5)-(2.8) by replacing \(E_{0}\) with \(E_{0}^{\varepsilon,b}\). For the approximate initial data \((\rho_{0}^{\varepsilon},m_{0}^{\varepsilon})\) imposed in (2.17) satisfying (2.9)-(2.10), using similar arguments to those in [9, Appendix A], we can construct a sequence of smooth functions \((\rho_{0}^{\varepsilon,b},u_{0}^{\varepsilon,b})\) defined on \([a,b]\), which is compatible with the boundary conditions (5.4)-(5.5), such that:

1. There exists a constant \(C_{\varepsilon,b}>0\) depending on \((\varepsilon,b)\) so that, for all \(\varepsilon\in(0,1]\) and \(b>1\),

\[0<C_{\varepsilon,b}^{-1}\leq\rho_{0}^{\varepsilon,b}(r)\leq C_{\varepsilon,b}<\infty. \tag{5.7}\]

2.
For all \(\varepsilon\in(0,1]\) and \(b>1\),

\[\int_{a}^{b}\rho_{0}^{\varepsilon,b}(r)\,r^{2}\mathrm{d}r=\frac{M}{\omega_{3}},\qquad E_{0}^{\varepsilon,b}\leq C(1+E_{0}),\qquad E_{1}^{\varepsilon,b}\leq C(1+M)\varepsilon, \tag{5.8}\]
\[\rho_{0}^{\varepsilon,b}(b)\cong b^{-(3-\alpha)}\qquad\text{with }\alpha:=\min\{\frac{1}{2},\frac{3(\gamma_{1}-1)}{\gamma_{1}}\}. \tag{5.9}\]

3. For each fixed \(\varepsilon\in(0,1]\), as \(b\to\infty\), \((E_{0}^{\varepsilon,b},E_{1}^{\varepsilon,b})\to(E_{0}^{\varepsilon},E_{1}^{\varepsilon})\) and

\[(\rho_{0}^{\varepsilon,b},\rho_{0}^{\varepsilon,b}u_{0}^{\varepsilon,b})\longrightarrow(\rho_{0}^{\varepsilon},m_{0}^{\varepsilon})\qquad\text{in }L^{\tilde{q}}([a,b];r^{2}dr)\times L^{1}([a,b];r^{2}dr)\text{ with }\tilde{q}\in\{1,\gamma_{2}\}. \tag{5.10}\]

4. For each fixed \(\varepsilon\in(0,\varepsilon_{0}]\), there exists a large constant \(\mathcal{B}(\varepsilon)>0\) such that

\[M<M_{\mathrm{c}}^{\varepsilon,b}\qquad\text{for }b\geq\mathcal{B}(\varepsilon)\text{ and }\gamma_{2}\in(\frac{6}{5},\frac{4}{3}], \tag{5.11}\]

where \(M_{\mathrm{c}}^{\varepsilon,b}\) is defined in (2.5)-(2.8) by replacing \(E_{0}\) with \(E_{0}^{\varepsilon,b}\).

We point out that (5.9) is important for us to close the BD-type entropy estimate in Lemma 5.4 and to obtain the higher integrability of the density in Lemma 5.6 below.

Once the free boundary problem (5.1)-(5.6) is solved, we define the potential function \(\Phi\) to be the solution of the Poisson equation:

\[\Delta\Phi=\rho\mathbf{I}_{\Omega_{t}},\qquad\lim_{|\mathbf{x}|\to\infty}\Phi(\mathbf{x})=0,\]

with \(\Omega_{t}:=\{\mathbf{x}\in\mathbb{R}^{3}\,:\,a\leq|\mathbf{x}|\leq b(t)\}\), where \(\rho\) has been extended to be zero outside \(\Omega_{t}\). In fact, we can show that \(\Phi(t,\mathbf{x})=\Phi(t,r)\) with

\[\Phi_{r}(t,r)=\left\{\begin{aligned} & 0&&\text{for }0\leq r\leq a,\\ &\frac{1}{r^{2}}\int_{a}^{r}\rho(t,y)\,y^{2}\mathrm{d}y&&\text{for }a\leq r\leq b(t),\\ &\frac{M}{\omega_{3}}\frac{1}{r^{2}}&&\text{for }r\geq b(t),\end{aligned}\right. \tag{5.12}\]

so that \(\Phi(t,r)\) can be recovered by integrating (5.12).

In this section, the parameters \((\varepsilon,b)\) are fixed with \(\varepsilon\in(0,\varepsilon_{0}]\) and \(b\geq\max\{\rho_{*}^{-\frac{\gamma_{1}}{3}},\mathcal{B}(\varepsilon)\}\) such that (5.11) holds and \(\rho_{0}^{\varepsilon,b}(b)\leq\rho_{*}\). The global existence of smooth solutions of the approximate problem (5.1)-(5.6), whose initial data satisfy (5.7)-(5.11) and whose pressure satisfies (1.4)-(1.6), can be obtained by similar arguments to those in [24, §3] with \(\gamma_{2}\in(\frac{4}{3},\infty)\), or with \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3}]\) and \(M<M_{\mathrm{c}}^{\varepsilon,b}(\gamma_{2})\), so the details are omitted here for simplicity. Noting that the upper and lower bounds of \(\rho^{\varepsilon,b}\) in [24] depend on the parameters \((\varepsilon,b)\), we now establish some uniform estimates, independent of \(b\), so that we can take the limit \(b\to\infty\) to obtain the global weak solutions of problem (1.10) and (2.17)-(2.18) in §6 below as approximate solutions of problem (1.1) and (1.12)-(1.13). Throughout this section, we drop the superscripts in both the approximate solutions \((\rho^{\varepsilon,b},u^{\varepsilon,b})(r)\) and the approximate initial data \((\rho_{0}^{\varepsilon,b},u_{0}^{\varepsilon,b})\) for simplicity.

For smooth solutions, it is convenient to analyze (5.1)-(5.6) in the Lagrangian coordinates.
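Before carrying out the change of variables, we record a quick consistency check of (5.12): in radial symmetry,

\[\Delta\Phi=\frac{1}{r^{2}}\big{(}r^{2}\Phi_{r}\big{)}_{r},\qquad r^{2}\Phi_{r}(t,r)=\int_{a}^{r}\rho(t,y)\,y^{2}\,\mathrm{d}y\ \Longrightarrow\ \big{(}r^{2}\Phi_{r}\big{)}_{r}=\rho(t,r)\,r^{2}\qquad\text{for }a\leq r\leq b(t),\]

so that \(\Delta\Phi=\rho\mathbf{I}_{\Omega_{t}}\) holds, while for \(r\geq b(t)\) the enclosed mass equals \(\frac{M}{\omega_{3}}\) by conservation of mass, which gives the third line of (5.12) and the decay \(\Phi(\mathbf{x})\to 0\) as \(|\mathbf{x}|\to\infty\).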
It follows from (5.3) that

\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{a}^{b(t)}\rho(t,r)\,r^{2}\mathrm{d}r=(\rho u)(t,b(t))b(t)^{2}-\int_{a}^{b(t)}(\rho ur^{2})_{r}(t,r)\,\mathrm{d}r=0,\]

which implies

\[\int_{a}^{b(t)}\rho(t,r)\,r^{2}\mathrm{d}r=\int_{a}^{b}\rho_{0}(r)\,r^{2}\mathrm{d}r=\frac{M}{\omega_{3}}\qquad\text{for all }t\geq 0. \tag{5.13}\]

For \(r\in[a,b(t)]\) and \(t\in[0,T]\), the Lagrangian coordinates \((\tau,x)\) are defined by

\[\tau=t,\qquad x(t,r)=\int_{a}^{r}\rho(t,y)\,y^{2}\mathrm{d}y,\]

which transforms \([0,T]\times[a,b(t)]\) into the fixed domain \([0,T]\times[0,\frac{M}{\omega_{3}}]\). By direct calculation, we see that \(\nabla_{(t,r)}x=(-\rho ur^{2},\rho r^{2})\), \(\nabla_{(t,r)}\tau=(1,0)\), \(\nabla_{(\tau,x)}r=(u,\rho^{-1}r^{-2})\), and \(\nabla_{(\tau,x)}t=(1,0)\). In the Lagrangian coordinates, the initial-boundary value problem (5.1)-(5.6) becomes

\[\begin{cases}\rho_{\tau}+\rho^{2}(r^{2}u)_{x}=0,\\ u_{\tau}+r^{2}P_{x}=-\frac{x}{r^{2}}+\varepsilon r^{2}(\rho^{2}(r^{2}u)_{x})_{x}-2\varepsilon r\rho_{x}u\end{cases} \tag{5.14}\]

for \((\tau,x)\in[0,T]\times[0,\frac{M}{\omega_{3}}]\), and

\[u(\tau,0)=0,\quad(P-\varepsilon\rho^{2}(r^{2}u)_{x})(\tau,\frac{M}{\omega_{3}})=0\qquad\text{for }\tau\in[0,T], \tag{5.15}\]

where \(r=r(\tau,x)\) is defined by \(\frac{\mathrm{d}}{\mathrm{d}\tau}r(\tau,x)=u(\tau,x)\) for \((\tau,x)\in[0,T]\times[0,\frac{M}{\omega_{3}}]\), and the fixed boundary \(x=\frac{M}{\omega_{3}}\) corresponds to the free boundary \(b(\tau)=r(\tau,\frac{M}{\omega_{3}})\) in the Eulerian coordinates.

**Lemma 5.1** (Basic energy estimate).: _The smooth solution \((\rho,u)(t,r)\) of problem (5.1)-(5.6) satisfies_

\[\int_{a}^{b(t)}\Big{(}\frac{1}{2}\rho u^{2}+\rho e(\rho)\Big{)}\,r^{2}\mathrm{d}r-\frac{1}{2}\int_{a}^{\infty}\frac{1}{r^{2}}\Big{(}\int_{a}^{r}\rho(t,z)\,z^{2}\mathrm{d}z\Big{)}^{2}\,\mathrm{d}r\]
\[\qquad+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\Big{(}\rho u_{r}^{2}+2\frac{\rho u^{2}}{r^{2}}\Big{)}\,r^{2}\mathrm{d}r\mathrm{d}s+2\varepsilon\int_{0}^{t}(\rho u^{2})(s,b(s))b(s)\,\mathrm{d}s\]
\[=\int_{a}^{b}\Big{(}\frac{1}{2}\rho_{0}u_{0}^{2}+\rho_{0}e(\rho_{0})\Big{)}\,r^{2}\mathrm{d}r-\frac{1}{2}\int_{a}^{\infty}\frac{1}{r^{2}}\Big{(}\int_{a}^{r}\rho_{0}(z)\,z^{2}\mathrm{d}z\Big{)}^{2}\,\mathrm{d}r,\]

_where \(\rho(t,r)\) has been understood to be \(0\) for \(r\in[0,a]\cup(b(t),\infty)\) in the second term of the left-hand side_ (LHS) _and the second term of the right-hand side_ (RHS)_. In particular, there exists a positive constant \(C(E_{0},M)\) depending only on the total initial energy \(E_{0}\) and the initial mass \(M\) such that the following estimates hold in the two separate cases:_

_Case 1._ \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3}]\) _and_ \(M<M_{\mathrm{c}}^{\varepsilon,b}\): _Then_

\[\int_{a}^{b(t)}\rho\big{(}\frac{1}{2}u^{2}+e(\rho)\big{)}\,r^{2}\mathrm{d}r+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\rho\Big{(}u_{r}^{2}+\frac{2u^{2}}{r^{2}}\Big{)}(s,r)\,r^{2}\mathrm{d}r\mathrm{d}s+2\varepsilon\int_{0}^{t}(\rho u^{2})(s,b(s))b(s)\,\mathrm{d}s\leq C(E_{0},M). \tag{5.16}\]

_Case 2._ \(\gamma_{2}>\frac{4}{3}\): _Then_

\[\int_{a}^{b(t)}\rho\big{(}\frac{1}{2}u^{2}+e(\rho)\big{)}\,r^{2}\mathrm{d}r+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\rho\Big{(}u_{r}^{2}+\frac{2u^{2}}{r^{2}}\Big{)}(s,r)\,r^{2}\mathrm{d}r\mathrm{d}s+2\varepsilon\int_{0}^{t}(\rho u^{2})(s,b(s))b(s)\,\mathrm{d}s\leq C(E_{0},M). \tag{5.17}\]

**Proof.** We divide the proof into three steps.

1.
Using (2.3) and similar calculations as in the proof [9, Lemma 3.1], we have \[\int_{a}^{b(t)}\rho\Big{(}\frac{1}{2}u^{2}+e(\rho)\Big{)}\,r^{2} \mathrm{d}r-\int_{a}^{b(t)}\Big{(}\int_{a}^{r}\rho(t,z)\,z^{2}\mathrm{d}z\Big{)} \rho\,r\mathrm{d}r\] \[\quad+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\Big{(}\rho u_{r}^{2} +2\rho\frac{u^{2}}{r^{2}}\Big{)}\,r^{2}\mathrm{d}r\mathrm{d}s+2\varepsilon\int _{0}^{t}(\rho u^{2})(s,b(s))\,b(s)\mathrm{d}s\] (5.18) \[=\int_{a}^{b}\rho_{0}\Big{(}\frac{1}{2}u_{0}^{2}+e\left(\rho_{0} \right)\Big{)}\,r^{2}\mathrm{d}r-\int_{a}^{b}\Big{(}\int_{a}^{r}\rho_{0}(z)z^{ 2}\,\mathrm{d}z\Big{)}\rho_{0}(r)\,r\mathrm{d}r.\] 2. We now control the second term on the LHS of (5.18) and the second term on the RHS of (5.18) to close the estimates. By similar calculations as in [9, Lemma 3.1], one can obtain \[\int_{a}^{b(t)}\Big{(}\int_{a}^{r}\rho\,z^{2}\mathrm{d}z\Big{)} \rho\,r\mathrm{d}r=\frac{1}{2\omega_{3}}\|\nabla\Phi\|_{L^{2}(\mathbb{R}^{3})} ^{2}=\frac{1}{2}\int_{a}^{\infty}\frac{1}{r^{2}}\Big{(}\int_{a}^{r}\rho\,z^{2} \mathrm{d}z\Big{)}^{2}\,\mathrm{d}r, \tag{5.19}\] where we have understood \(\rho\) to be zero for \(r\in[0,a)\cup(b(t),\infty)\) in (5.19). 3. Now we use the internal energy to control the gravitational potential term. First, we obtain from (3.13) that there exist two constant \(C_{1},C_{2}>0\) depending only on \(\rho^{*}\) such that \[|\rho e(\rho)-\frac{\kappa_{2}\rho^{\gamma_{2}}}{\gamma_{2}-1}| \leq C_{1}\rho^{\max\{\gamma_{2}-\epsilon,0\}}\ \ \text{for}\ \rho\geq\rho^{*},\ \ \ \ \ \ \ \ |\rho e(\rho)-\frac{\kappa_{2}\rho^{\gamma_{2}}}{\gamma_{2}-1}|\leq C_{2}\rho^{ \gamma_{2}}\ \ \text{for}\ \rho\leq\rho^{*}.\] Thus, we have \[\Big{|}\int_{a}^{b(t)}\left(\rho e(\rho)-\frac{\kappa_{2}}{\gamma _{2}-1}\rho^{\gamma_{2}}\right)r^{2}\mathrm{d}r\Big{|}\] \[=\int_{\rho(t,r)\geq K}\Big{|}\rho e(\rho)-\frac{\kappa_{2}}{ \gamma_{2}-1}\int_{a}^{b(t)}\rho^{\gamma_{2}}\Big{|}\,r^{2}\mathrm{d}r+\int_{ \rho(t,r)\leq K}\Big{|}\rho e(\rho)-\frac{\kappa_{2}}{\gamma_{2}-1}\int_{a}^{ b(t)}\rho^{\gamma_{2}}\Big{|}\,r^{2}\mathrm{d}r\] \[\leq C_{1}K^{-\min\{\gamma_{2},\epsilon\}}\int_{a}^{b(t)}\rho^{ \gamma_{2}}\,r^{2}\mathrm{d}r+C_{2}\omega_{3}^{-1}K^{\gamma_{2}-1}\,M, \tag{5.20}\] where \(K>\rho^{*}\) is some large constant to be chosen later. Multiplying (5.11) by \(\Phi\) and integrating by parts yield \[\|\nabla\Phi\|_{L^{2}(\mathbb{R}^{3})}^{2}\leq\|\Phi\|_{L^{6}(\mathbb{R}^{3})} \|\rho\|_{L^{\frac{6}{6}}(\Omega_{t})}\leq\sqrt{A_{3}}\|\nabla\Phi\|_{L^{2}( \mathbb{R}^{3})}\|\rho\|_{L^{\frac{6}{6}}(\Omega_{t})}, \tag{5.21}\] where we have used the positive constant \(A_{3}:=\frac{4}{3}\omega_{4}^{-\frac{2}{3}}>0\) that is the sharp constant for the Sobolev inequality in \(\mathbb{R}^{3}\) (see Lemma A.1). 
Then it follows from (5.19) and (5.21) that \[\int_{a}^{b(t)}\Big{(}\int_{a}^{r}\rho\,z^{2}\mathrm{d}z\Big{)} \rho\,r\mathrm{d}r=\frac{1}{2\omega_{3}}\|\nabla\Phi\|_{L^{2}(\mathbb{R}^{3})} ^{2}\leq\frac{2}{3\omega_{3}}\omega_{4}^{-\frac{2}{3}}\|\rho\|_{L^{\frac{6}{5} }(\Omega_{t})}^{2}\] \[\leq\frac{2}{3\omega_{3}}\omega_{4}^{-\frac{2}{3}}\Big{(}\int_{ \Omega_{t}}\rho^{\frac{6(\gamma_{2}-1)}{5\gamma_{2}-6}}\big{(}\beta\rho+\rho e (\rho)\big{)}^{-\frac{1}{5\gamma_{2}-6}}\,\mathrm{d}\mathbf{x}\Big{)}^{\frac{ 5\gamma_{2}-6}{3(\gamma_{2}-1)}}\Big{(}\int_{\Omega_{t}}\big{(}\beta\rho+\rho e (\rho)\big{)}\,\mathrm{d}\mathbf{x}\Big{)}^{\frac{1}{3(\gamma_{2}-1)}}\] \[\leq\frac{2}{3}\omega_{4}^{-\frac{2}{3}}\omega_{3}^{\frac{4-3 \gamma_{2}}{3(\gamma_{2}-1)}}\Big{(}\int_{\Omega_{t}}C_{\max}(\beta)\rho\, \mathrm{d}\mathbf{x}\Big{)}^{\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}}\Big{(} \int_{a}^{b(t)}\big{(}\beta\rho+\rho e(\rho)\big{)}\,r^{2}\mathrm{d}r\Big{)}^ {\frac{1}{3(\gamma_{2}-1)}}\] \[=B_{\beta}M^{\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}}\Big{(}\int_{a }^{b(t)}\big{(}\beta\rho+\rho e(\rho)\big{)}\,r^{2}\mathrm{d}r\Big{)}^{\frac{ 1}{3(\gamma_{2}-1)}}, \tag{5.22}\] where \(B_{\beta}\) is the constant defined in (2.8). When \(\gamma_{2}>\frac{4}{3}\), _i.e._, \(\frac{1}{3(\gamma_{2}-1)}<1\), it follows from (5.22) by taking \(\beta=1\) that \[\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r-\int_{a}^{b(t)} \Big{(}\int_{a}^{r}\rho\,z^{2}\mathrm{d}z\Big{)}\rho\,r\mathrm{d}r\] \[\geq\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r-B_{1}M^{\frac{5 \gamma_{2}-6}{3(\gamma_{2}-1)}}\Big{(}\big{(}\omega_{3}^{-1}M\big{)}^{\frac{1 }{3(\gamma_{2}-1)}}+\Big{(}\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r\Big{)} ^{\frac{1}{3(\gamma_{2}-1)}}\Big{)}\] \[\geq\frac{1}{2}\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r-C(M), \tag{5.23}\] which, with (5.18), yields (5.17). When \(\gamma_{2}=\frac{4}{3}\), _i.e._, \(\frac{1}{3(\gamma_{2}-1)}=1\). It has been proved in [17, Theorem 3.1] that there exists an optimal constant \(C_{\min}=6\kappa_{2}M_{\mathrm{ch}}^{-\frac{2}{3}}\) such that \[\int_{a}^{b(t)}\Big{(}\int_{a}^{r}\rho\,z^{2}\mathrm{d}z\Big{)} \rho\,r\mathrm{d}r=\frac{1}{2\omega_{3}}\|\nabla\Phi\|_{L^{2}(\mathbb{R}^{3})} ^{2}\leq\frac{C_{\min}}{2\omega_{3}}\|\rho\|_{L^{1}(\Omega_{t})}^{\frac{2}{3}} \|\rho\|_{L^{\frac{4}{3}}(\Omega_{t})}^{\frac{4}{3}}=\frac{C_{\min}}{2}M^{ \frac{2}{3}}\int_{a}^{b(t)}\rho^{\frac{4}{3}}\,r^{2}\mathrm{d}r, \tag{5.24}\] which, with (5.20), yields \[\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r-\int_{a}^{b(t)} \Big{(}\int_{a}^{r}\rho\,z^{2}\mathrm{d}z\Big{)}\rho\,r\mathrm{d}r\] \[\geq\Big{(}3\kappa_{2}-\frac{C_{\min}}{2}M^{\frac{2}{3}}-C_{1}K^{ -\min\{\gamma_{2},\epsilon\}}\Big{)}\int_{a}^{b(t)}\rho^{\frac{4}{3}}\,r^{2} \mathrm{d}r-C(M,K). \tag{5.25}\] Since \(M<M_{\mathrm{ch}}\), we can always choose \(K>\rho^{*}\) large enough such that \[3\kappa_{2}-\frac{C_{\min}}{2}M^{\frac{2}{3}}-C_{1}K^{-\min\{\gamma_{2}, \epsilon\}}>0.\] Then one can deduce (5.16) for \(\gamma_{2}=\frac{4}{3}\) from (5.18), (5.25), and the fact that \(\rho^{\gamma_{2}}\geq C\rho e(\rho)\). 
When \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3})\), we define \[F(s;\beta)=s-B_{\beta}M^{\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}}\left(\omega_{3} ^{-1}\beta M+s\right)^{\frac{1}{3(\gamma_{2}-1)}}\qquad\text{ for $s\geq 0$ and any fixed $\beta>0$.}\] A direct calculation shows that \[\left\{\frac{\mathrm{d}F(s;\beta)}{\mathrm{d}s}=1-\frac{1}{3(\gamma_{2}-1)}B_{ \beta}M^{\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}}\big{(}\omega_{3}^{-1}\beta M+s \big{)}^{\frac{4-3\gamma_{2}}{3(\gamma_{2}-1)}},\right.\] \[\left.\frac{\mathrm{d}^{2}F(s;\beta)}{\mathrm{d}s^{2}}=-\frac{4-3 \gamma_{2}}{9(\gamma_{2}-1)^{2}}B_{\beta}M^{\frac{5\gamma_{2}-6}{3(\gamma_{2}- 1)}}\big{(}\omega_{3}^{-1}\beta M+s\big{)}^{\frac{7-6\gamma_{2}}{3(\gamma_{2}- 1)}},\right.\] which yields that \(\frac{\mathrm{d}^{2}F(s;\beta)}{\mathrm{d}s^{2}}<0\) for \(s>0\) since \(\gamma_{2}<\frac{4}{3}\). Thus, \(F(s;\beta)\) is concave with respect to \(s>0\). We denote \[s_{*}(\beta)=\Big{(}\frac{B_{\beta}}{3(\gamma_{2}-1)}\Big{)}^{-\frac{3(\gamma _{2}-1)}{4-3\gamma_{2}}}M^{-\frac{5\gamma_{2}-6}{4-3\gamma_{2}}}-\omega_{3}^{- 1}\beta M, \tag{5.26}\] which is the critical point of \(F(s)\) satisfying \(\frac{\mathrm{d}F(s;\beta)}{\mathrm{d}s}(s_{*}(\beta))=0\). The maximum of \(F(s;\beta)\) with respect to \(s>0\) is \[F(s_{*}(\beta);\beta)=(4-3\gamma_{2})\Big{(}\frac{B_{\beta}}{3(\gamma_{2}-1)} \Big{)}^{-\frac{3(\gamma_{2}-1)}{4-3\gamma_{2}}}M^{-\frac{5\gamma_{2}-6}{4-3 \gamma_{2}}}-\omega_{3}^{-1}\beta M. \tag{5.27}\] It follows from the definition of \(M_{\mathrm{c}}^{\varepsilon,b}\) that, if \(M<M_{\mathrm{c}}^{\varepsilon,b}\), there exists \(\beta_{0}>0\) such that \(M<M_{\mathrm{c}}^{\varepsilon,b}(\beta_{0})\). Then, from (5.26)-(5.27), we have \[F(s_{*}(\beta_{0});\beta_{0})>E_{0}^{\varepsilon,b}, \tag{5.28}\] \[s_{*}(\beta_{0})>\Big{(}\frac{B_{\beta_{0}}}{3(\gamma_{2}-1)} \Big{)}^{-\frac{3(\gamma_{2}-1)}{4-3\gamma_{2}}}\left(M_{\mathrm{c}}^{ \varepsilon,b}(\beta_{0})\right)^{-\frac{5\gamma_{2}-6}{4-3\gamma_{2}}}-\omega _{3}^{-1}\beta_{0}M_{\mathrm{c}}^{\varepsilon,b}(\beta_{0})\] \[\qquad\qquad=\frac{1}{4-3\gamma_{2}}\big{(}E_{0}^{\varepsilon,b} +\omega_{3}^{-1}\beta_{0}M_{\mathrm{c}}^{\varepsilon,b}(\beta_{0})\big{)}- \omega_{3}^{-1}\beta_{0}M_{\mathrm{c}}^{\varepsilon,b}(\beta_{0})>E_{0}^{ \varepsilon,b}, \tag{5.29}\] where we have used that \(\frac{1}{4-3\gamma_{2}}>\frac{5}{2}>1\) for \(\gamma_{2}\in(\frac{6}{5},\frac{4}{3})\). Then, combining (5.18) and (5.22) with (5.28)-(5.29), we obtain \[F(\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r;\beta_{0})\leq E_{0}^{ \varepsilon,b}<F(s_{*}(\beta_{0});\beta_{0}),\ \ \ \ \int_{a}^{b}\big{(}\rho_{0}e(\rho_{0})\big{)}(r)\,r^{2}\mathrm{d}r\leq E_{0}^{ \varepsilon,b}<s_{*}(\beta_{0}). \tag{5.30}\] Hence, due to the continuity of \(\int_{a}^{b(t)}\big{(}\rho e(\rho)\big{)}(t,r)\,r^{2}\mathrm{d}r\) with respect to \(t\), the strict inequality: \[\int_{a}^{b(t)}\big{(}\rho e(\rho)\big{)}(t,r)\,r^{2}\mathrm{d}r<s_{*}(\beta_{ 0}) \tag{5.31}\] must hold. Otherwise, there exists \(t_{0}>0\) such that \(\int_{a}^{b(t_{0})}\big{(}\rho e(\rho)\big{)}(t_{0},r)\,r^{2}\mathrm{d}r=s_{*}( \beta_{0})\), which yields \[F(\int_{a}^{b(t_{0})}(\rho e(\rho))(t_{0},r)\,r^{2}\mathrm{d}r;\beta_{0})=F(s_{ *}(\beta_{0});\beta_{0})>E_{0}^{\varepsilon,b}.\] This contradicts (5.30). Thus, we prove (5.31) under condition (5.11). 
Therefore, under condition (5.11), it follows from (5.26) and (5.31) that \[F(\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r;\beta_{0})\] \[\geq\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r-B_{\beta_{0}}M^ {\frac{5\gamma_{2}-6}{3(\gamma_{2}-1)}}\big{(}s_{*}(\beta_{0})+\omega_{3}^{-1} \beta_{0}M\big{)}^{\frac{4-3\gamma_{2}}{3(\gamma_{2}-1)}}\Big{(}\int_{a}^{b(t)} \rho e(\rho)\,r^{2}\mathrm{d}r+\omega_{3}^{-1}\beta_{0}M\Big{)}\] \[=(4-3\gamma_{2})\int_{a}^{b(t)}\rho e(\rho)\,r^{2}\mathrm{d}r-3( \gamma_{2}-1)\omega_{3}^{-1}\beta_{0}M^{\frac{5}{3}}. \tag{5.32}\] Combining (5.18) and (5.25) with (5.32), we conclude (5.16). **Corollary 5.2**.: _Under the assumptions of_ Lemma 5.1 _and noting (3.5),_ \[\int_{a}^{b(t)}\rho^{\gamma_{2}}(t,r)\,r^{2}\mathrm{d}r\leq C\int_{a}^{b(t)}(\rho+ \rho e(\rho))(t,r)\,r^{2}\mathrm{d}r<C(M,E_{0})\qquad\text{ for }t\geq 0.\] **Corollary 5.3**.: _Under the assumptions of_ Lemma 5.1_, it follows from (5.12), (5.16)-(5.17), and (5.19) that_ \[\big{|}r^{2}\Phi_{r}(t,r)\big{|}\leq\frac{M}{\omega_{3}}\qquad \text{ for }(t,r)\in[0,\infty)\times[0,\infty),\] \[\int_{a}^{b(t)}\Big{(}\int_{a}^{r}\rho(t,y)\,y^{2}\mathrm{d}y \Big{)}\rho(t,r)\,r\mathrm{d}r+\|\Phi(t)\|_{L^{6}(\mathbb{R}^{3})}+\|\nabla \Phi(t)\|_{L^{2}(\mathbb{R}^{3})}\leq C(M,E_{0})\qquad\text{ for }t\geq 0.\] For later use, we analyze the boundary value of density \(\rho\). Using \(\eqref{eq:p_1}\) and (5.15), we have \[\rho_{\tau}(\tau,\frac{M}{\omega_{3}})=-\frac{1}{\varepsilon}\,P(\tau,\frac{M} {\omega_{3}})\leq 0, \tag{5.33}\] which yields that \(\rho(\tau,\frac{M}{\omega_{3}})\leq\rho_{0}(\frac{M}{\omega_{3}})\). In the Eulerian coordinates, it is equivalent to \[\rho(t,b(t))\leq\rho_{0}(b). \tag{5.34}\] Moreover, noting (5.8) and \(b\geq(\rho_{*})^{-\gamma_{1}/3}\), we see that \(\rho(t,b(t))\leq\rho_{0}(b)\leq\rho_{*}\) for all \(t\geq 0\). From \(\eqref{eq:p_1}\) and (5.33), there exists a positive constant \(\tilde{C}\) depending only on \((\gamma_{1},\kappa_{1})\) such that \[\rho_{\tau}(\tau,\frac{M}{\omega_{3}})=-\frac{1}{\varepsilon}\,P(\tau,\frac{M} {\omega_{3}})\geq-\frac{\tilde{C}}{\varepsilon}\big{(}\rho(\tau,\frac{M}{ \omega_{3}})\big{)}^{\gamma_{1}},\] which implies \[\rho(\tau,\frac{M}{\omega_{3}})\geq\rho_{0}(\frac{M}{\omega_{3}})\Big{(}1+ \frac{\tilde{C}(\gamma_{1}-1)}{\varepsilon}\big{(}\rho_{0}(\frac{M}{\omega_{3 }})\big{)}^{\gamma_{1}-1}\tau\Big{)}^{-\frac{1}{\gamma_{1}-1}}.\] Therefore, in the Eulerian coordinates, \[\rho(t,b(t))\geq\rho_{0}(b)\Big{(}1+\frac{\tilde{C}(\gamma_{1}-1)}{ \varepsilon}(\rho_{0}(b))^{\gamma_{1}-1}t\Big{)}^{-\frac{1}{\gamma_{1}-1}} \qquad\text{for }t\geq 0. \tag{5.35}\] **Lemma 5.4** (BD-type entropy estimate).: _Under the conditions of_ Lemma 5.1_, for any given \(T>0\),_ \[\varepsilon^{2}\int_{a}^{b(t)}\big{|}\,(\sqrt{\rho})_{r}\,\big{|} ^{2}\,r^{2}\mathrm{d}r+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\frac{P^{\prime}( \rho)}{\rho}|\rho_{r}|^{2}\,r^{2}\mathrm{d}r\mathrm{d}s+\frac{1}{3}\,P(\rho(t,b(t)))\,b(t)^{3}\] \[\quad+\frac{1}{3\varepsilon}\int_{0}^{t}\big{(}P(\rho)P^{\prime} (\rho)\big{)}(s,b(s))\,b(s)^{3}\,\mathrm{d}s\leq C(E_{0},M,T)\qquad\text{ for }t\in[0,T]. \tag{5.36}\] **Proof.** We divide the proof into three steps. 1. 
Using (2.3) and similar calculations as in the proof [9, Lemma 3.3], we have \[\int_{a}^{b(t)}\Big{(}\frac{1}{2}\big{(}u+\varepsilon\frac{\rho_ {r}}{\rho}\big{)}^{2}\rho+\rho e(\rho)\Big{)}\,r^{2}\mathrm{d}r-\int_{a}^{b(t)} \Big{(}\int_{a}^{r}\rho(t,y)\,y^{2}\mathrm{d}y\Big{)}\rho\,r\mathrm{d}r\] \[+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\frac{P^{\prime}(\rho)}{ \rho}\rho_{r}^{2}\,r^{2}\mathrm{d}r\mathrm{d}s+\frac{1}{3}P(\rho(t,b(t)))\,b (t)^{3}+\frac{1}{3\varepsilon}\int_{0}^{t}\big{(}P(\rho)P^{\prime}(\rho)\big{)} (s,b(s)\,b(s)^{3}\mathrm{d}s\] \[=\int_{a}^{b}\Big{(}\frac{1}{2}\big{(}u_{0}+\varepsilon\frac{\rho _{0,r}}{\rho}\big{)}^{2}+e(\rho_{0})\Big{)}\rho_{0}\,r^{2}\mathrm{d}r-\int_{a} ^{b}\Big{(}\int_{a}^{r}\rho_{0}(y)\,y^{2}\mathrm{d}y\Big{)}\rho_{0}(r)\,r \mathrm{d}r\] \[\quad+\frac{1}{3}P(\rho_{0}\,(b))b^{3}+\varepsilon\int_{0}^{t} \int_{a}^{b(s)}\rho^{2}\,r^{2}\mathrm{d}r\mathrm{d}s-\frac{M\varepsilon}{ \omega_{3}}\int_{0}^{t}\rho(s,b(s))\,\mathrm{d}s,\] which, with Lemma 5.1, yields \[\varepsilon^{2}\int_{a}^{b(t)}|\left(\sqrt{\rho}\right)_{r}|^{2}\,r^ {2}\mathrm{d}r+\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\frac{P^{\prime}(\rho)}{ \rho}|\rho_{r}|^{2}\,r^{2}\mathrm{d}r\mathrm{d}s\] \[\quad+\frac{1}{3}P(\rho(t,b(t)))\,b(t)^{3}+\frac{1}{3\varepsilon} \int_{0}^{t}\big{(}P(\rho)P^{\prime}(\rho)\big{)}(s,b(s))\,b(s)^{3}\mathrm{d}s\] \[\leq C(E_{0},M)+\frac{1}{3}P(\rho_{0}(b))b^{3}+\varepsilon\int_{0 }^{t}\int_{a}^{b(s)}\rho^{2}\,r^{2}\mathrm{d}r\mathrm{d}s-\frac{M\varepsilon}{ \omega_{3}}\int_{0}^{t}\rho(s,b(s))\,\mathrm{d}s. \tag{5.37}\] 2. For the second term on the RHS of (5.37), it follows from (5.8) and \(\eqref{eq:1}_{1}\) that \[\frac{1}{3}P(\rho_{0}(b))b^{3}\leq C. \tag{5.38}\] For the last term on the RHS of (5.37), using (5.34), we have \[\Big{|}\frac{M\varepsilon}{\omega_{3}}\int_{0}^{t}\rho(s,b(s))\, \mathrm{d}s\Big{|}\leq C(M)\rho_{0}(b)T\leq C(M,T). \tag{5.39}\] 3. To close the estimates, we need to control the third term on the RHS of (5.37), that is, \[\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\rho^{2}\,r^{2}\mathrm{d}r \mathrm{d}s=\frac{\varepsilon}{\omega_{3}}\int_{0}^{t}\|\rho(s,\cdot)\|_{L^{2 }(\Omega_{s})}^{2}\,\mathrm{d}s.\] We divide the estimate of the above term into the following two cases: Case 1. For \(\gamma_{2}\geq 2\), it follows from Corollary 5.2 that \[\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\rho^{2}\,r^{2}\mathrm{d}r \mathrm{d}s\leq\varepsilon\int_{0}^{t}\int_{a}^{b(s)}(\rho+\rho^{\gamma_{2}}) \,r^{2}\mathrm{d}r\mathrm{d}s\leq C(E_{0},M,T). \tag{5.40}\] Case 2. For \(\gamma_{2}\in(\frac{6}{5},2)\), then \(3\gamma_{2}>2\). A direct calculation shows that \[\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\rho^{2}\mathbf{I}_{\{\rho \leq 2\rho^{*}\}}\,r^{2}\mathrm{d}r\mathrm{d}s\leq 2\varepsilon\rho^{*}\int_{0}^{t} \int_{a}^{b(s)}\rho\,r^{2}\mathrm{d}r\mathrm{d}s\leq C(M,\rho^{*}). \tag{5.41}\] Denote \(\sqrt{F(\rho)}:=\int_{0}^{\rho}\sqrt{\frac{P^{\prime}(s)}{s}}\,\mathrm{d}s\). 
Then it follows from \(\eqref{eq:1}_{2}\) that \[\sqrt{F(\rho)}\geq\big{(}1-2^{-\frac{\gamma_{2}}{2}}\big{)}\frac{2 \sqrt{(1-\mathfrak{a}_{0})\kappa_{2}\gamma_{2}}}{\gamma_{2}}\rho^{\frac{\gamma _{2}}{2}}:=C(\gamma_{2})^{-\frac{\gamma_{2}}{2\vartheta}}\rho^{\frac{\gamma_{2} }{2}}\qquad\text{for }\rho\in[2\rho^{*},\infty),\] which, with Corollary 5.2, implies that, for \(\overline{\vartheta}=\frac{3(2-\gamma_{2})}{4}\), \[\|\rho\mathbf{I}_{\{\rho\geq 2\rho^{*}\}}\|_{L^{2}(\Omega_{t})}\leq\|\rho \mathbf{I}_{\{\rho\geq 2\rho^{*}\}}\|_{L^{3\gamma_{2}}(\Omega_{t})}^{ \overline{\vartheta}}\|\rho\mathbf{I}_{\{\rho\geq 2\rho^{*}\}}\|_{L^{\gamma_{2}}( \Omega_{t})}^{1-\overline{\vartheta}}\leq C(\gamma_{2})\|\sqrt{F(\rho)}\|_{L^ {6}(\Omega_{t})}^{\overline{2}\overline{\vartheta}}\|\rho\|_{L^{\gamma_{2}}( \Omega_{t})}^{1-\overline{\vartheta}}. \tag{5.42}\] For \(B_{R}(\mathbf{0})\subset\mathbb{R}^{3}\), the following Sobolev's inequality holds: \[\|f\|_{L^{6}(B_{R}(\mathbf{0}))}\leq C\big{(}\|\nabla f\|_{L^{2}(B_{R}( \mathbf{0}))}+R^{-1}\|f\|_{L^{2}(B_{R}(\mathbf{0}))}\big{)}. \tag{5.43}\] It follows from (5.13) and Corollary 5.2 that \[\frac{M}{\omega_{3}}=\int_{a}^{b(t)}\rho(t,r)\,r^{2}\mathrm{d}r \leq\Big{(}\int_{a}^{b(t)}\rho^{\gamma_{2}}\,r^{2}\mathrm{d}r\Big{)}^{\frac{1} {\gamma_{2}}}\Big{(}\int_{a}^{b(t)}r^{2}\mathrm{d}r\Big{)}^{1-\frac{1}{\gamma_ {2}}}\leq Cb(t)^{\frac{3(\gamma_{2}-1)}{\gamma_{2}}}\Big{(}\int_{a}^{b(t)}\rho ^{\gamma_{2}}\,r^{2}\mathrm{d}r\Big{)}^{\frac{1}{\gamma_{2}}},\] which yields \[b(t)^{-1}\leq CM^{-\frac{\gamma_{2}}{3(\gamma_{2}-1)}}\Big{(}\int_{a}^{b(t)} \rho^{\gamma_{2}}\,r^{2}\mathrm{d}r\Big{)}^{\frac{1}{3(\gamma_{2}-1)}}\leq C \left(M,E_{0}\right). \tag{5.44}\] Using \((\ref{3.2})_{2}\)-\((\ref{3.3})_{2}\) leads to \(F(\rho)\leq C(\rho+\rho^{\gamma_{2}})\), which, with (5.43)-(5.44) and Corollary 5.2, implies \[\big{\|}\sqrt{F(\rho)}\big{\|}_{L^{6}(\Omega_{t})} \leq C\,\Big{(}\big{\|}\nabla(\sqrt{F(\rho)})\big{\|}_{L^{2}( \Omega_{t})}+b(t)^{-1}\big{\|}\sqrt{F(\rho)}\big{\|}_{L^{2}(\Omega_{t})}\Big{)}\] \[\leq C\Big{(}\int_{a}^{b(t)}\frac{P^{\prime}(\rho)}{\rho}|\rho_{r }|^{2}\,r^{2}\mathrm{d}r\Big{)}^{\frac{1}{2}}+C(M,E_{0})\Big{(}\int_{a}^{b(t)} F(\rho)\,r^{2}\mathrm{d}r\mathrm{d}t\Big{)}^{\frac{1}{2}}\] \[\leq C(M,E_{0})\Big{(}1+\big{(}\int_{a}^{b(t)}\frac{P^{\prime}( \rho)}{\rho}|\rho_{r}|^{2}\,r^{2}\mathrm{d}r\big{)}^{\frac{1}{2}}\Big{)}. \tag{5.45}\] Substituting (5.45) into (5.42), we obtain \[\varepsilon\int_{0}^{t}\int_{a}^{b(s)}\rho^{2}r^{2}\mathbf{I}_{ \{\rho\geq 2\rho^{*}\}}\,\mathrm{d}r\mathrm{d}s \leq C(M,E_{0},T)\varepsilon\Big{(}1+\big{(}\int_{0}^{t}\int_{a}^ {b(s)}\frac{P^{\prime}(\rho)}{\rho}|\rho_{r}|^{2}\,r^{2}\mathrm{d}r\mathrm{d} s\big{)}^{\frac{2\overline{\partial}}{\gamma_{2}}}\Big{)}\] \[\leq C(M,E_{0},T)+\frac{\varepsilon}{2}\int_{0}^{t}\int_{a}^{b(s) }\frac{P^{\prime}(\rho)}{\rho}|\rho_{r}|^{2}\,r^{2}\mathrm{d}r\mathrm{d}s, \tag{5.46}\] where we have used \(\frac{2\overline{\partial}}{\gamma_{2}}\in(0,1)\) for \(\gamma_{2}>\frac{6}{5}\). Finally, substituting (5.38)-(5.41) and (5.46) into (5.37), we conclude (5.36). In order to take the limit: \(b\to\infty\), we need to make sure that domain \(\Omega_{T}\) can be expanded to \([0,T]\times\mathbb{R}_{+}\) for fixed \(\varepsilon>0\): \(\lim\limits_{b\to\infty}b(t)=\infty\). 
**Lemma 5.5** (Expanding of domain \(\Omega_{T}\)).: _Given \(T>0\) and \(\varepsilon\in(0,\varepsilon_{0}]\), there exists \(C_{1}(M,E_{0},T,\varepsilon)>0\) such that, if \(b\geq C_{1}(M,E_{0},T,\varepsilon)\),_ \[b(t)\geq\frac{1}{2}b\qquad\text{ for }t\in[0,T]. \tag{5.47}\] **Proof.** Noting \(b(0)=b\) and the continuity of \(b(t)\), we first make the _a prior_ assumption: \[b(t)\geq\frac{1}{2}b. \tag{5.48}\] Integrating (5.3) over \([0,t]\) yields \[b(t)=b+\int_{0}^{t}u(s,b(s))\,\mathrm{d}s. \tag{5.49}\] It follows from (5.35), (5.48), and Lemma 5.1 that \[\int_{0}^{t}|u(s,b(s))|\,\mathrm{d}s \leq\frac{C}{\sqrt{\varepsilon}}\Big{(}\int_{0}^{t}\varepsilon( \rho u^{2}r)(s,b(s))\,\mathrm{d}s\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{t} \frac{1}{\rho(s,b(s))b(s)}\,\mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq C(M,E_{0})\varepsilon^{-\frac{1}{2}}\Big{(}\int_{0}^{t}\frac {(1+\tilde{C}(\gamma_{1}-1)\varepsilon^{-1}\rho_{*}^{\gamma_{1}-1}s)^{\frac{1 }{\gamma_{1}-1}}}{\rho_{0}(b)b}\,\mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq C(M,E_{0},T,\rho_{*},\gamma_{1},\gamma_{2},\varepsilon)\rho_ {0}(b)^{-\frac{1}{2}}b^{-\frac{1}{2}}. \tag{5.50}\] We take \[C_{1}(M,E_{0},T,\varepsilon):=\max\{\rho_{*}^{-\frac{\gamma_{1}}{3}},(4C(M,E_ {0},T,\rho_{*},\gamma_{1},\gamma_{2},\varepsilon))^{\frac{2}{\alpha}},\mathcal{ B}(\varepsilon)\},\] which, with (5.8) and (5.50), implies that \[\Big{|}\int_{0}^{t}u(s,b(s))\,\mathrm{d}s\Big{|}\leq\int_{0}^{t}|u(s,b(s))|\, \mathrm{d}s\leq\frac{1}{4}b, \tag{5.51}\] provided that \(b\geq C_{1}(M,E_{0},T,\varepsilon)\). Combining (5.51) with (5.49), we have \[b(t)\geq\frac{3}{4}b. \tag{5.52}\] Thus, we have closed the _a priori_ assumption (5.48). Finally, using (5.52) and the continuity argument, we can conclude (5.47). ### Higher integrability of the density and the velocity As implied in [12], the higher integrability of the density and the velocity are important for the \(L^{p}\) compensated compactness framework. However, for the general pressure law, due to the lack of an explicit formula for the entropy kernel, for the special entropy pair \((\eta^{\psi},q^{\psi})\) by taking the test function \(\psi=\frac{1}{2}s|s|\) in (2.15)-(2.16), we can not obtain that \(q^{\psi}\gtrsim\rho|u|^{3}+\rho^{\gamma+\theta}\) in general. To derive the higher integrability of the velocity, we use the special entropy pair constructed in Lemma 4.1, at the cost of the higher integrability of the density over domain \([0,T]\times[d,b(t)]\) for some \(d>0\). Since \(b(t)\to\infty\) as \(b\to\infty\), we indeed need the higher integrability of the density on the unbounded domain. We point out that this is different from the case of [9] in which only the higher integrability on the bounded domain \([0,T]\times[d,D]\) for any given \(0<d<D<\infty\) is needed. **Lemma 5.6** (Higher integrability on the density).: _Let \((\rho,u)\) be a smooth solution of (5.1)-(5.6). Then, under the assumption of Lemma 5.1, for any given \(d>2a>0\),_ \[\int_{0}^{T}\int_{d}^{b(t)}\rho P(\rho)\,r^{2}\mathrm{d}r\mathrm{d}t\leq C(d,M, E_{0},T). \tag{5.53}\] **Proof.** Let \(\omega(r)\) be a smooth function with \(\mathrm{supp}\,\omega\subset(\frac{d}{2},\infty)\) and \(\omega(r)=1\) for \(r\in[d,\infty)\). 
Multiplying (5.1)\({}_{2}\) by \(w(y)y^{2}\), we have \[(y^{2}\rho u\omega)_{t}+(y^{2}\rho u^{2}\omega)_{y}+(y^{2}P(\rho )\omega)_{y}-\omega_{y}\big{(}y^{2}\rho u^{2}+y^{2}P(\rho)\big{)}+\rho\omega \int_{a}^{r}\rho\,z^{2}\mathrm{d}z\] \[=2yP(\rho)\omega+\varepsilon(y^{2}\rho u_{y}\omega)_{y}- \varepsilon\omega_{y}y^{2}\rho u_{y}-2\varepsilon\rho u\,\omega. \tag{5.54}\] Integrating (5.54) with respect to \(y\) from \(\frac{d}{2}\) to \(r\) and then multiplying the equation by \(\rho(t,r)\) yield \[r^{2}\rho(r)P(\rho)\omega(r)\] \[=-\rho\frac{\mathrm{d}}{\mathrm{d}t}\int_{\frac{d}{2}}^{r}\rho u \,\omega\,y^{2}\mathrm{d}y-r^{2}\rho^{2}u^{2}\omega(r)+\rho\int_{\frac{d}{2}} ^{r}\omega_{y}\rho u\,y^{2}\mathrm{d}y+\rho\int_{\frac{d}{2}}^{r}\omega_{y}P( \rho)\,y^{2}\mathrm{d}y+2\rho\int_{\frac{d}{2}}^{r}P(\rho)\,\omega\,y\mathrm{d}y\] \[\quad-\rho\int_{\frac{d}{2}}^{r}\rho\omega\Big{(}\int_{a}^{y} \rho\,z^{2}\mathrm{d}z\Big{)}\,\mathrm{d}r+\varepsilon r^{2}\rho^{2}u_{r} \omega(r)-\varepsilon\rho\int_{\frac{d}{2}}^{r}\omega_{y}\rho u_{y}\,y^{2} \mathrm{d}y-2\varepsilon\rho\int_{\frac{d}{2}}^{r}\rho u\,\omega\,\mathrm{d}y. \tag{5.55}\] Using (5.1)\({}_{1}\), we have \[\rho\frac{\mathrm{d}}{\mathrm{d}t}\int_{\frac{d}{2}}^{r}\rho u\omega\,y^{2} \mathrm{d}y=\Big{(}\rho\int_{\frac{d}{2}}^{r}\rho u\omega\,y^{2}\mathrm{d}y \Big{)}_{t}+\Big{(}\rho u\int_{\frac{d}{2}}^{r}\rho u\omega\,y^{2}\mathrm{d}y \Big{)}_{r}-\rho^{2}u^{2}\omega(r)r^{2}+\frac{2}{r}\rho u\int_{\frac{d}{2}}^{r }\rho u\omega\,y^{2}\mathrm{d}y,\] which, with (5.55), yields that \[r^{2}\rho(r)P(\rho)\omega(r)\] \[=-\Big{(}\rho\int_{\frac{d}{2}}^{r}\rho u\omega\,y^{2}\mathrm{d}y \Big{)}_{t}-\Big{(}\rho u\int_{\frac{d}{2}}^{r}\rho u\omega\,y^{2}\mathrm{d}y \Big{)}_{r}-\frac{2}{r}\rho u\int_{\frac{d}{2}}^{r}\rho u\omega\,y^{2}\mathrm{ d}y+\rho\int_{\frac{d}{2}}^{r}\omega_{y}\rho u^{2}\,y^{2}\mathrm{d}y\] \[\quad+\rho\int_{\frac{d}{2}}^{r}\omega_{y}P(\rho)\,y^{2}\mathrm{d} y+2\rho\int_{\frac{d}{2}}^{r}P(\rho)\omega\,y\mathrm{d}y+\varepsilon\rho^{2}u_{r} \omega(r)r^{2}-\varepsilon\rho\int_{\frac{d}{2}}^{r}\rho u_{y}\omega_{y}\,y^{2} \mathrm{d}y\] \[\quad-2\varepsilon\rho\int_{\frac{d}{2}}^{r}\rho u\omega\,\mathrm{d }y-\rho\int_{\frac{d}{2}}^{r}\rho\omega\,\Big{(}\int_{a}^{y}\rho\,z^{2} \mathrm{d}z\Big{)}\,\mathrm{d}y. \tag{5.56}\] Multiplying (5.56) by \(\omega(r)\) to see that \[r^{2}\rho(t,r)P(\rho(t,r))\omega^{2}(r)\] \[=-\Big{(}\rho\omega(r)\int_{\frac{d}{2}}^{r}\rho u\omega(y)\,y^{2} \mathrm{d}y\Big{)}_{t}-\Big{(}\rho u\omega(r)\int_{\frac{d}{2}}^{r}\rho u\omega( y)\,y^{2}\mathrm{d}y\Big{)}_{r}+\omega_{r}\rho u\int_{\frac{d}{2}}^{r}\rho u \omega\,y^{2}\mathrm{d}y\] \[\quad-\frac{2}{r}\rho u\omega(r)\int_{\frac{d}{2}}^{r}\rho u\omega \,y^{2}\mathrm{d}y+\rho\omega(r)\int_{\frac{d}{2}}^{r}\rho u\omega_{y}\,y^{2} \mathrm{d}y+\rho\omega(r)\int_{\frac{d}{2}}^{r}P(\rho)\,\omega_{y}\,y^{2} \mathrm{d}y\] \[\quad+2\rho\omega(r)\int_{\frac{d}{2}}^{r}P(\rho)\omega\,y\mathrm{ d}y-\rho\omega(r)\int_{\frac{d}{2}}^{r}\rho\omega\Big{(}\int_{a}^{y}\rho\,z^{2} \mathrm{d}z\Big{)}\,\mathrm{d}y-\varepsilon\rho\omega(r)\int_{\frac{d}{2}}^{r} \rho u_{y}\,\omega_{y}\,y^{2}\mathrm{d}y\] \[\quad-2\varepsilon\rho\omega(r)\int_{\frac{d}{2}}^{r}\rho u\, \omega\,\mathrm{d}y+\varepsilon r^{2}\rho^{2}u_{r}\omega^{2}(r):=\sum_{i=1}^{ 11}I_{i}. 
\tag{5.57}\] Using Lemma 5.1 and (5.13), we have \[\Big{|}\int_{\frac{d}{2}}^{r}\big{(}\rho u+P(\rho)\big{)}\omega(y )y^{2}+\varepsilon\rho u_{y}\omega_{y}\,y^{2}\mathrm{d}y\Big{|}\] \[\leq C\int_{a}^{b(t)}\big{(}\rho u^{2}+\rho+\rho^{\gamma_{2}} \big{)}\omega(y)\,y^{2}\mathrm{d}y+\varepsilon\int_{a}^{b(t)}\rho(u_{y}^{2}+1) |\omega_{y}|\,y^{2}\mathrm{d}y\leq C(M,E_{0},\|\omega\|_{C^{1}}),\] which yields \[\Big{|}\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}I_{i}\,\mathrm{d}r \mathrm{d}t\Big{|} \leq C(M,E_{0},T,\|\omega\|_{C^{1}})(d^{-2}+d^{-4})\qquad\text{ for }i=3,4,\cdots,10, \tag{5.58}\] \[\Big{|}\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}I_{1}\,\mathrm{d}r \mathrm{d}t\Big{|} =\Big{|}\int_{\frac{d}{2}}^{b(t)}\rho(T,r)\omega(r)\Big{(}\int_{ \frac{d}{2}}^{r}y^{2}\rho(T,y)u(T,y)\omega(y)\,\mathrm{d}y\Big{)}\mathrm{d}r \Big{|}\] \[\quad+\Big{|}\int_{\frac{d}{2}}^{b(t)}\rho(0,r)\omega(r)\Big{(} \int_{\frac{d}{2}}^{r}y^{2}\rho(0,y)u(0,y)\omega(y)\,\mathrm{d}y\Big{)} \mathrm{d}r\Big{|}\] \[\leq C(M,E_{0},\|\omega\|_{C^{1}})d^{-2}. \tag{5.59}\] For \(I_{2}\), using (5.8), (5.34), (5.51), and \(b\gg 1\), we have \[\Big{|}\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}I_{2}\,\mathrm{d}r \mathrm{d}t\Big{|} =\Big{|}\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\Big{(}\rho u\omega(r )\int_{\frac{d}{2}}^{r}\rho u\omega(y)\,y^{2}\mathrm{d}y\Big{)}_{r}\,\mathrm{d }r\mathrm{d}t\Big{|}\] \[\leq\Big{|}\int_{0}^{T}\rho u(t,b(t))\Big{(}\int_{\frac{d}{2}}^{b( t)}\rho u\omega(y)\,y^{2}\mathrm{d}y\Big{)}\,\mathrm{d}t\Big{|}\] \[\leq C(E_{0},M)b^{-3+\frac{1}{2}}b\leq C(E_{0},M). \tag{5.60}\] For \(I_{11}\), we obtain \[\Big{|}\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}I_{11}\,\mathrm{d}r \mathrm{d}t\Big{|} =\varepsilon\Big{|}\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{2}u_{ r}\omega^{2}\,r^{2}\mathrm{d}r\mathrm{d}t\Big{|}\] \[\leq\varepsilon\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{3} \omega^{2}\,r^{2}\mathrm{d}r\mathrm{d}t+C(M,E_{0},T,\|\omega\|_{C^{1}}). \tag{5.61}\] We divide the estimate of \(\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\varepsilon\rho^{3}\omega^{2}\,r^{2} \mathrm{d}r\mathrm{d}t\) into two cases: _Case 1._\(\gamma_{2}\in(\frac{6}{5},2)\): For any fixed \(t\in[0,T]\), denoting \(A(t):=\{r\in[\frac{d}{2},b(t)]\,:\,\rho(t,r)\geq\rho^{*}\}\), then it follows from (5.13) that \(|A(t)|\leq C(d,\rho^{*})M\). For any \(r\in A(t)\), let \(r_{0}\) be the closest point to \(r\) so that \(\rho(t,r_{0})=\rho^{*}\) with \(|r-r_{0}|\leq|A(t)|\leq C(d,\rho^{*})M\). Then, for any smooth function \(f(\rho)\), \[\sup_{r\in A(t)}f(\rho)\omega^{2}(r) \leq f(\rho(t,r_{0}))\omega^{2}(r_{0})+\Big{|}\int_{r_{0}}^{r} \partial_{y}\big{(}f(\rho(t,y))\omega^{2}(y)\big{)}\,\mathrm{d}y\Big{|}\] \[\leq C(\|\omega\|_{C^{1}})|f(\rho^{*})|+\int_{A(t)}\big{|} \partial_{y}\big{(}f(\rho(t,y))\omega^{2}(y)\big{)}\big{|}\,\mathrm{d}y.\] Recalling (3.3) and (3.5), we notice that \(P(\rho)\cong\rho^{\gamma_{2}}\) and \(e(\rho)\cong\rho^{\gamma_{2}-1}\) for any \(r\in A(t)\). 
Then \[\varepsilon\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{3}\omega^{2 }\,r^{2}\mathrm{d}r\mathrm{d}t\] \[=\varepsilon\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{3}\mathbf{ I}_{\{\rho\leq\rho^{*}\}}\omega^{2}\,r^{2}\mathrm{d}r\mathrm{d}t+\varepsilon \int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{3}\mathbf{I}_{\{\rho\geq\rho^{*}\} }\omega^{2}\,r^{2}\mathrm{d}r\mathrm{d}t\] \[\leq C(M,E_{0},\rho^{*},T)+C(M,E_{0})\,\varepsilon\int_{0}^{T} \Big{(}\int_{\frac{d}{2}}^{b(t)}\rho e(\rho)\,r^{2}dr\Big{)}\sup_{r\in A(t)} \Big{(}\frac{\rho^{2}}{e(\rho)}\omega^{2}\Big{)}\mathrm{d}t\] \[\leq C(M,E_{0},\rho^{*},T)+C(M,E_{0})\,\varepsilon\int_{0}^{T} \int_{A(t)}\Big{|}\Big{(}\frac{\rho^{2}}{e(\rho)}\omega^{2}\Big{)}_{r}(t,r) \Big{|}\,\mathrm{d}r\mathrm{d}t\] \[\leq C(M,E_{0})\,\varepsilon\int_{0}^{T}\int_{A(t)}\Big{(}\big{(} \frac{2\rho}{e(\rho)}-\frac{P(\rho)}{e(\rho)^{2}}\big{)}|\rho_{r}|\omega^{2}+ \frac{\rho^{2}}{e(\rho)}\omega|\omega_{r}|\Big{)}\,\mathrm{d}r\mathrm{d}t+C(M,E _{0},\rho^{*},T). \tag{5.62}\] A direct calculation shows that \[\int_{0}^{T}\int_{A(t)}\varepsilon\Big{(}\frac{2\rho}{e(\rho)}- \frac{P(\rho)}{e(\rho)^{2}}\Big{)}|\rho_{r}|\,\omega^{2}\,\mathrm{d}r\mathrm{d}t\] \[\leq\int_{0}^{T}\int_{A(t)}\varepsilon\frac{P^{\prime}(\rho)}{\rho }|\rho_{r}|^{2}\omega^{2}\,r^{2}\mathrm{d}r\mathrm{d}t+\int_{0}^{T}\int_{A(t)} \varepsilon\Big{(}\frac{2\rho}{e(\rho)}-\frac{P(\rho)}{e(\rho)^{2}}\Big{)}^{2 }\frac{\rho}{P^{\prime}(\rho)}\omega^{2}\,r^{-2}\,\mathrm{d}r\mathrm{d}t\] \[\leq C(M,E_{0},T)+\int_{0}^{T}\int_{A(t)}\varepsilon\rho^{6-3 \gamma_{2}}\omega^{2}\,r^{-2}\,\mathrm{d}r\mathrm{d}t\] \[\leq C(M,E_{0},T)+\varepsilon C(M,E_{0})^{-1}\int_{0}^{T}\int_{A(t )}\rho^{3}\omega^{2}\,r^{2}\,\mathrm{d}r\mathrm{d}t+C(M,E_{0})\varepsilon\int_ {0}^{T}\int_{A(t)}(r^{2})^{-\frac{3-\gamma_{2}}{\gamma_{2}-1}}\omega^{2}\, \mathrm{d}r\mathrm{d}t\] \[\leq C(M,E_{0},T)+\varepsilon C(M,E_{0})^{-1}\int_{0}^{T}\int_{ \frac{d}{2}}^{b(t)}\rho^{3}\omega^{2}\,r^{2}\mathrm{d}r\mathrm{d}t. \tag{5.63}\] \[\int_{0}^{T}\int_{A(t)}\varepsilon\frac{\rho^{2}}{e(\rho)}\omega| \omega_{r}|\,\mathrm{d}r\mathrm{d}t\leq\int_{0}^{T}\Big{(}\varepsilon\sup_{r \in A(t)}(\rho\omega)(t,r)\int_{A(t)}\frac{\rho}{e(\rho)}|\omega_{r}|\, \mathrm{d}r\Big{)}\,\mathrm{d}t\] \[\leq C(\rho^{*},M,\|\omega\|_{C^{1}})d^{-2}\int_{0}^{T}\Big{(} \varepsilon\sup_{r\in A(t)}(\rho\omega)(t,r)\Big{)}\,\mathrm{d}t\] \[\leq C(\rho^{*},M,\|\omega\|_{C^{1}},T)d^{-2}+C(\rho^{*},M,\| \omega\|_{C^{1}})d^{-2}\int_{0}^{T}\int_{A(t)}\varepsilon\Big{(}\frac{P^{ \prime}(\rho)}{\rho}|\rho_{r}|^{2}\omega+\rho|\omega_{r}|+\rho^{2-\gamma_{2}} \omega\Big{)}\,\mathrm{d}r\mathrm{d}t\] \[\leq C(\rho^{*},M,\|\omega\|_{C^{1}},T)d^{-2}+C(\rho^{*},M,E_{0},T,\|\omega\|_{C^ {1}})d^{-4}. \tag{5.64}\] Combining (5.62)-(5.64), we obtain that, for \(\gamma_{2}\in(\frac{6}{5},2)\), \[\varepsilon\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{3}\omega^{2}\,r^{2} \mathrm{d}r\mathrm{d}t\leq C(M,E_{0},\rho^{*},T,\|\omega\|_{C^{1}})+C(M,E_{0}, \rho^{*},T,\|\omega\|_{C^{1}})d^{-4}. \tag{5.65}\] _Case 2. \(\gamma_{2}\in[2,3)\)_: Using (5.13) and the same argument as for (5.64), we have \[\varepsilon\int_{0}^{T}\int_{\frac{d}{2}}^{b(t)}\rho^{3}\omega^{2 }\,r^{2}\mathrm{d}r\mathrm{d}t \leq C(M)\int_{0}^{T}\varepsilon\sup_{r\in[\frac{d}{2},b(t)]}( \rho^{2}\omega)(t,r)\,\mathrm{d}t\] \[\leq C(M,\rho^{*},\|\omega\|_{C^{1}},T)+C(M)\int_{0}^{T} \varepsilon\sup_{r\in A(t)}(\rho^{2}\omega)(t,r)\,\mathrm{d}t\] \[\leq C(M,\rho^{*},\|\omega\|_{C^{1}},T)+C(M,E_{0},\rho^{*},\| \omega\|_{C^{1}},T)d^{-2}. 
\tag{5.66}\] Finally, integrating (5.57) over \([0,T]\times[\frac{d}{2},b(t)]\) and using (5.58)-(5.61) and (5.65)-(5.66), we conclude (5.53). **Corollary 5.7**.: _It follows from (3.3) and Lemma 5.6 that_ \[\int_{0}^{T}\int_{d}^{b(t)}\rho^{\gamma_{2}+1}(t,r)\,r^{2}\mathrm{d}r\mathrm{d }t\leq C\int_{0}^{T}\int_{d}^{b(t)}\big{(}\rho+\rho P(\rho)\big{)}(t,r)\,r^{2} \mathrm{d}r\mathrm{d}t\leq C(d,M,E_{0},T). \tag{5.67}\] In order to use the \(L^{p}\) compensated compactness framework, we still need to obtain the higher integrability of the velocity (see [12]). With the help of Lemma 5.6, we use the special entropy pair constructed in Lemma 4.1 to achieve this. **Lemma 5.8** (Higher integrability of the velocity).: _Let \((\rho,u)\) be the smooth solution of (5.1)-(5.6). Then, under the assumption of Lemma 5.1,_ \[\int_{0}^{T}\int_{d}^{D}(\rho|u|^{3})(t,r)\,r^{2}\mathrm{d}r\mathrm{d}t\leq C (d,D,\rho^{*},M,E_{0},T)\qquad\text{for any $(d,D)\Subset[a,b(t)]$.}\] **Proof.** Considering \(\eqref{eq:1}_{1}\times\hat{\eta}_{\rho}r^{2}+\eqref{eq:1}_{2}\times\hat{ \eta}_{m}r^{2}\), we can obtain \[(\hat{\eta}r^{2})_{t}+(\hat{q}r^{2})_{r}+2r\big{(}-\hat{q}+\rho u\hat{\eta}_{ \rho}+\rho u^{2}\hat{\eta}_{m}\big{)}=\varepsilon\,r^{2}\Big{(}(\rho u_{r})_{r }+2\rho\big{(}\frac{u}{r}\big{)}_{r}\Big{)}\hat{\eta}_{m}-\rho\int_{a}^{r} \rho\,y^{2}\mathrm{d}y\,\hat{\eta}_{m}. \tag{5.68}\] Using (5.3), a direct calculation yields \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{r}^{b(t)}\hat{\eta}\,y^{2}\mathrm{d}y=(u \hat{\eta})(t,b(t))\,b(t)^{2}+\int_{r}^{b(t)}\partial_{t}\hat{\eta}(t,y)\,y^ {2}\mathrm{d}y. \tag{5.69}\] Integrating (5.68) over \([r,b(t))\) and using (5.69), we have \[\hat{q}(t,r)\,r^{2} =-\varepsilon\int_{r}^{b(t)}\hat{\eta}_{m}(t,y)(\rho u_{y}y^{2})_ {y}\,\mathrm{d}y+2\varepsilon\int_{r}^{b(t)}\hat{\eta}_{m}(t,y)\,\rho u\, \mathrm{d}y\] \[\quad+\Big{(}\int_{r}^{b(t)}\hat{\eta}(t,y)\,y^{2}\mathrm{d}y \Big{)}_{t}+(\hat{q}-u\hat{\eta})(t,b(t))\,b(t)^{2}\] \[\quad+2\int_{r}^{b(t)}\big{(}-\hat{q}+\rho u\hat{\eta}_{\rho}+ \rho u^{2}\hat{\eta}_{m}\big{)}\,y\mathrm{d}y+\int_{r}^{b(t)}\Big{(}\int_{a}^{ y}\rho\,z^{2}\mathrm{d}z\Big{)}\rho\,\hat{\eta}_{m}\,\mathrm{d}y. \tag{5.70}\] We now control the terms on the RHS of (5.70). For the third term on the RHS of (5.70), it follows from (5.8)\({}_{2}\), (5.34), (5.44), and Lemmas 4.1, 5.1, and 5.4-5.5 that \[\int_{0}^{T}\big{|}(\hat{q}-u\hat{\eta})(t,b(t))\big{|}b(t)^{2}\,\mathrm{d}t \leq C\int_{0}^{T}\Big{(}(\rho(t,b(t)))^{\gamma_{1}+\theta_{1}}+(\rho^{\gamma_{ 1}}|u|)(t,b(t))\Big{)}\,b(t)^{2}\,\mathrm{d}t\] \[\leq C(D,M,E_{0},T)+C\int_{0}^{T}\int_{d}^{D}\varepsilon\big{(}\rho|u |^{2}+\rho|u_{r}|^{2}+\rho^{\gamma(\rho)}\big{)}\,r^{2}\mathrm{d}r\mathrm{d}t\] \[\quad+C(D)\int_{0}^{T}\int_{d}^{b(t)}\varepsilon\rho|u_{y}|(|u_{y }|+\rho^{\theta(\rho)-1}|\rho_{y}|)\,y^{2}\mathrm{d}y\mathrm{d}r\mathrm{d}t\] \[\leq C(D,M,E_{0},T)+C\int_{0}^{T}\int_{d}^{D}\varepsilon\big{(} \rho|u|^{2}+\rho|u_{r}|^{2}+\rho^{\gamma(\rho)}\big{)}\,r^{2}\mathrm{d}r\mathrm{ d}t\] \[\quad+C(D)\int_{0}^{T}\int_{d}^{b(t)}\varepsilon\rho|u_{y}|^{2}\, y^{2}\mathrm{d}y\mathrm{d}t+C(D)\int_{0}^{T}\int_{d}^{b(t)}\varepsilon\rho^{ \gamma(\rho)-2}|\rho_{y}|^{2}\,y^{2}\mathrm{d}y\mathrm{d}t\leq C(D,M,E_{0},T). 
\tag{5.74}\] For the second term, third term, and sixth term on the RHS of (5.70), using (3.4)-(3.5) and Lemmas 4.1 and 5.1, we obtain \[\Big{|}\int_{0}^{T}\int_{d}^{D}\Big{(}2\varepsilon\int_{r}^{b(t)} \hat{\eta}_{m}(t,y)\rho u\,\mathrm{d}y\Big{)}\,\mathrm{d}r\mathrm{d}t\Big{|} \leq C(d)\int_{0}^{T}\int_{d}^{D}\int_{r}^{b(t)}\varepsilon\big{(} |u|+\rho^{\theta(\rho)}\big{)}\,\rho|u|\,y^{2}\mathrm{d}y\mathrm{d}r\mathrm{d}t\] \[\leq C(d,D)\int_{0}^{T}\int_{d}^{b(t)}\varepsilon\Big{(}\rho|u|^{ 2}+\rho+\rho e(\rho)\Big{)}\,y^{2}\mathrm{d}y\mathrm{d}t\] \[\leq C(d,D,M,E_{0},T), \tag{5.75}\] \[\Big{|}\int_{0}^{T}\int_{d}^{D}\Big{(}\int_{r}^{b(t)}\hat{\eta}(t,y) \,y^{2}\mathrm{d}y\Big{)}_{t}\,\mathrm{d}r\mathrm{d}t\Big{|} \leq\int_{d}^{D}\int_{r}^{b(t)}|\hat{\eta}(T,y)|\,y^{2}\mathrm{d}y \mathrm{d}r+\int_{d}^{D}\int_{r}^{b}|\hat{\eta}(0,y)|\,y^{2}\mathrm{d}y \mathrm{d}r\] \[\leq C\sup_{t\in[0,T]}\int_{d}^{D}\int_{r}^{b(t)}\big{(}\rho e( \rho)+\rho+\rho|u|^{2}\big{)}\,y^{2}\mathrm{d}y\mathrm{d}r\] \[\leq C(D,M,E_{0},T), \tag{5.76}\] \[\Big{|}\int_{0}^{T}\int_{d}^{D}\Big{(}\int_{r}^{b(t)}\Big{(}\int_ {a}^{y}\rho\,z^{2}\mathrm{d}z\Big{)}\rho\,\hat{\eta}_{m}\,\mathrm{d}y\Big{)} \,\mathrm{d}r\mathrm{d}t\Big{|} \leq\int_{0}^{T}\int_{d}^{D}\Big{|}\int_{r}^{b(t)}\Big{(}\int_{a} ^{y}\rho\,z^{2}\mathrm{d}z\Big{)}\rho\,\hat{\eta}_{m}\,\mathrm{d}y\Big{|}\, \mathrm{d}r\mathrm{d}t\] \[\leq C(d,D,M)\int_{0}^{T}\int_{d}^{b(t)}\rho\big{(}|u|+\rho^{ \gamma(\rho)}\big{)}\,r^{2}\mathrm{d}r\mathrm{d}t\] \[\leq C(d,D,M)\int_{0}^{T}\int_{d}^{b(t)}\big{(}\rho|u|^{2}+\rho+ \rho e(\rho)\big{)}\,r^{2}\mathrm{d}r\mathrm{d}t\] \[\leq C(d,D,M,E_{0},T). \tag{5.77}\] For the fifth term on the RHS of (5.70), we note from (3.6)-(3.7) and Lemma 4.1 that \[-\hat{q}+\rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}=0\qquad \text{ if }|u|\geq k(\rho), \tag{5.78}\] \[|-\hat{q}+\rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}|\leq C \rho^{\gamma(\rho)+\theta(\rho)}\leq C(\rho+\rho^{\gamma_{2}+\theta_{2}}) \qquad\text{ if }|u|\leq k(\rho). \tag{5.79}\] Then it follows from (5.78)-(5.79) and Corollary 5.7 that \[\int_{0}^{T}\int_{d}^{D}\Big{|}\int_{r}^{b(t)}\big{(}-\hat{q}+ \rho u\hat{\eta}_{\rho}+\rho u^{2}\hat{\eta}_{m}\big{)}\,y\mathrm{d}y\Big{|} \,\mathrm{d}r\mathrm{d}t\] \[\leq C(d,D)\int_{0}^{T}\int_{d}^{b(t)}\big{(}\rho+\rho^{\gamma_{2 }+\theta_{2}}\big{)}\,y^{2}\mathrm{d}y\mathrm{d}t\leq C(d,D,M,E_{0},T), \tag{5.80}\] where we have used \(\theta_{2}\in(0,1)\) since \(\gamma_{2}\in(\frac{6}{5},3)\). Combining (5.70)-(5.71), (5.74)-(5.77), and (5.80), we obtain that \(\int_{0}^{T}\int_{d}^{D}\hat{q}\,r^{2}\mathrm{d}r\mathrm{d}t\leq C(d,D,M,E_{0},T)\), which, along with (5.67) and Lemma 4.1, gives \[\int_{0}^{T}\int_{[d,D]\cap\{r:|u|\geq k(\rho)\}}\rho|u|^{3}\,r^{ 2}\mathrm{d}r\mathrm{d}t\] \[\leq 2\int_{0}^{T}\int_{[d,D]\cap\{r:|u|\geq k(\rho)\}}\hat{q}\,r^{ 2}\mathrm{d}r\mathrm{d}t=2\int_{0}^{T}\int_{d}^{D}\hat{q}\,r^{2}\mathrm{d}r \mathrm{d}t-2\int_{0}^{T}\int_{[d,D]\cap\{r:|u|<k(\rho)\}}\hat{q}\,r^{2} \mathrm{d}r\mathrm{d}t\] \[\leq C(d,D,M,E_{0},T)+C\int_{0}^{T}\int_{d}^{D}(\rho+\rho^{ \gamma_{2}+1})\,r^{2}\mathrm{d}r\mathrm{d}t\leq C(d,D,M,E_{0},T). \tag{5.81}\] On the other hand, we have \[\int_{0}^{T}\int_{[d,D]\cap\{r:\,|u|\leq k(\rho)\}}\rho|u|^{3}\,r ^{2}\mathrm{d}r\mathrm{d}t \leq C\int_{0}^{T}\int_{d}^{D}\rho^{\gamma(\rho)+\theta(\rho)}\,r ^{2}\mathrm{d}r\mathrm{d}t\] \[\leq C\int_{0}^{T}\int_{d}^{D}\big{(}\rho+\rho p(\rho)\big{)}\,r^{ 2}\mathrm{d}r\mathrm{d}t\leq C(M,E_{0},T). 
\tag{5.82}\] Combining (5.81) with (5.82), we obtain that \(\int_{0}^{T}\int_{d}^{D}\rho|u|^{3}\,r^{2}\mathrm{d}r\mathrm{d}t\leq C(d,D,M,E_ {0},T)\). This completes the proof of Lemma 5.8. ## 6. Existence of Global Weak Solutions of CNSPEs In this section, for fixed \(\varepsilon>0\), we take the limit: \(b\to\infty\) to obtain the global existence of solutions of the Cauchy problem for (1.10). Meanwhile, some uniform estimates in Theorem 2.1 are obtained. To take the limit, some careful attention is required, since the weak solutions may involve the vacuum. We use similar compactness arguments as in [9, 16] to handle the limit: \(b\to\infty\). Throughout this section, we denote the smooth solutions of (5.1)-(5.6) as \((\rho^{\varepsilon,b},u^{\varepsilon,b})\) for simplicity. First of all, we extend our solutions \((\rho^{\varepsilon,b},u^{\varepsilon,b})\) to be zero on \(([0,T]\times[0,\infty))\backslash\Omega_{T}\). It follows from Lemma 5.5 that \[\lim_{b\to\infty}\min_{t\in[0,T]}b(t)=\infty, \tag{6.1}\] which implies that domain \([0,T]\times[a,b(t)]\) expands to \([0,T]\times(0,\infty)\) as \(b\to\infty\). That is, for any set \(K\Subset(0,\infty)\), when \(b\gg 1,K\Subset(a,b(t))\) for all \(t\in[0,T]\). Now we define \[(\rho^{\varepsilon,b},\mathcal{M}^{\varepsilon,b},\Phi^{\varepsilon,b})(t, \mathbf{x}):=(\rho^{\varepsilon,b}(t,r),m^{\varepsilon,b}(t,r)\frac{\mathbf{x }}{r},\Phi^{\varepsilon,b}(t,r))\qquad\text{for }r=|\mathbf{x}|,\] where \(m^{\varepsilon,b}:=\rho^{\varepsilon,b}u^{\varepsilon,b}\). Then it is direct to check that the corresponding functions \((\rho^{\varepsilon,b},\mathcal{M}^{\varepsilon,b},\Phi^{\varepsilon,b})(t, \mathbf{x})\) are classical solutions of \[\begin{cases}\partial_{t}\rho^{\varepsilon,b}+\nabla\cdot\mathcal{M}^{ \varepsilon,b}=0,\\ \partial_{t}\mathcal{M}^{\varepsilon,b}+\nabla\cdot\Big{(}\frac{\mathcal{M}^{ \varepsilon,b}\otimes\mathcal{M}^{\varepsilon,b}}{\rho^{\varepsilon,b}}\Big{)} +\nabla P(\rho^{\varepsilon,b})+\rho^{\varepsilon,b}\nabla\Phi^{\varepsilon,b }=\varepsilon\nabla\cdot\Big{(}\rho^{\varepsilon,b}D\big{(}\frac{\mathcal{M}^{ \varepsilon,b}}{\rho^{\varepsilon,b}}\big{)}\Big{)},\\ \Delta\Phi^{\varepsilon,b}=\rho^{\varepsilon,b},\end{cases}\] for \((t,\mathbf{x})\in[0,\infty)\times\Omega_{t}\) with \(\mathcal{M}^{\varepsilon,b}|_{\partial B_{a}(\mathbf{0})}=0\). Based on the estimates obtained in SS5, by the same arguments as in [9, SS4], we have **Lemma 6.1**.: _For fixed \(\varepsilon>0\), as \(b\to\infty\) (up to a subsequence), there exists a vector function \((\rho^{\varepsilon},m^{\varepsilon})(t,r)\) such that_ * \((\sqrt{\rho^{\varepsilon,b}},\rho^{\varepsilon,b})\to(\sqrt{\rho^{\varepsilon} },\rho^{\varepsilon})\) _a.e. and strongly in_ \(C(0,T;L^{p}_{\mathrm{loc}})\) _for any_ \(p\in[1,\infty)\)_, where_ \(L^{p}_{\mathrm{loc}}\) _denotes_ \(L^{p}(K)\) _for any compact set_ \(K\Subset(0,\infty)\)_. In particular,_ \(\rho^{\varepsilon}\geq 0\) _a.e. on_ \(\mathbb{R}^{2}_{+}\)_._ * _The pressure function sequence_ \(P(\rho^{\varepsilon,b})\) _is uniformly bounded in_ \(L^{\infty}(0,T;L^{p}_{\mathrm{loc}}(\mathbb{R}))\) _for all_ \(p\in[1,\infty]\)_, and_ \[P(\rho^{\varepsilon,b})\longrightarrow P(\rho^{\varepsilon})\quad\text{ strongly in }L^{p}(0,T;L^{p}_{\mathrm{loc}}(\mathbb{R}))\qquad\text{for }p\in[1,\infty).\] * _The momentum function sequence_ \(m^{\varepsilon,b}\) _converges strongly in_ \(L^{2}(0,T;L^{p}_{\mathrm{loc}}(\mathbb{R}))\) _to_ \(m^{\varepsilon}(t,r)\) _for all_ \(p\in[1,\infty)\)_. 
In particular,_ \[m^{\varepsilon,b}(t,r)=(\rho^{\varepsilon,b}u^{\varepsilon,b})(t,r) \longrightarrow m^{\varepsilon}(t,r)\qquad\text{ a.e. in }[0,T]\times(0,\infty).\] * \(m^{\varepsilon}(t,r)=0\) _a.e. on_ \(\{(t,r)\,:\,\rho^{\varepsilon}(t,r)=0\}\)_. Furthermore, there exists a function_ \(u^{\varepsilon}(t,r)\) _such that_ \(m^{\varepsilon}(t,r)=\rho^{\varepsilon}(t,r)u^{\varepsilon}(t,r)\) _a.e.,_ \(u^{\varepsilon}(t,r)=0\) _a.e. on_ \(\{(t,r)\,:\,\rho^{\varepsilon}(t,r)=0\}\)_, and_ \[m^{\varepsilon},b\longrightarrow m^{\varepsilon}=\rho^{\varepsilon}u^{ \varepsilon}\qquad\text{ strongly in }L^{2}\left(0,T;L^{p}_{\mathrm{loc}}(\mathbb{R})\right)\text{ for }p\in[1,\infty),\] \[\frac{m^{\varepsilon,b}}{\sqrt{\rho^{\varepsilon,b}}} \longrightarrow\frac{m^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}=\sqrt{\rho^{ \varepsilon}}u^{\varepsilon}\qquad\text{ strongly in }L^{2}(0,T;L^{2}_{\mathrm{loc}}(\mathbb{R})).\] Let \((\rho^{\varepsilon},m^{\varepsilon})(t,r)\) be the limit function obtained above. Using (5.13), (6.1), Lemmas 5.1-5.6, 5.8, and 6.1, Corollary 5.2-5.3 and 5.7, Fatou's lemma, and the lower semicontinuity, we conclude the proof of (2.22)-(2.25). Now we show the convergence of potential functions \(\Phi^{\varepsilon,b}\). Using the similar arguments as in [9, Lemma 4.6], we have **Lemma 6.2**.: _For fixed \(\varepsilon>0\), there exists a function \(\Phi^{\varepsilon}(t,\mathbf{x})=\Phi^{\varepsilon}(t,r)\) such that, as \(b\to\infty\)\((\)up to a subsequence\()\),_ \[\Phi^{\varepsilon,b}\rightharpoonup\Phi^{\varepsilon}\qquad\text{ weak-star in $L^{\infty}(0,T;H^{1}_{\mathrm{loc}}(\mathbb{R}^{3}))$ and weakly in $L^{2}(0,T;H^{1}_{\mathrm{loc}}(\mathbb{R}^{3}))$}, \tag{6.2}\] \[\Phi^{\varepsilon,b}_{r}(t,r)r^{2}\,\to\,\Phi^{\varepsilon}_{r}(t,r)r^{2}=\int_{0}^{r}\rho^{\varepsilon}(t,y)\,y^{2}\mathrm{d}y\qquad\text{in $C_{\mathrm{loc}}([0,T]\times[0,\infty))$},\] (6.3) \[\|\Phi^{\varepsilon}(t)\|_{L^{6}(\mathbb{R}^{3})}+\|\nabla\Phi^{ \varepsilon}(t)\|_{L^{2}(\mathbb{R}^{3})}\leq C(M,E_{0})\qquad\text{for $t\geq 0$}. \tag{6.4}\] _Moreover, since \(\gamma_{2}>\frac{6}{5}\),_ \[\int_{0}^{\infty}\big{|}\big{(}\Phi^{\varepsilon,b}_{r}-\Phi^{\varepsilon}_{r }\big{)}(t,r)\big{|}^{2}\,r^{2}\mathrm{d}r\to 0\qquad\text{ as $b\to\infty$ ($up to a subsequence$)}. \tag{6.5}\] Using (6.5), Fatou's lemma, and Lemmas 5.1 and 6.1, we obtain the following energy inequality: \[\int_{0}^{\infty}\Big{(}\frac{1}{2}\Big{|}\frac{m^{\varepsilon}}{ \sqrt{\rho^{\varepsilon}}}\Big{|}^{2}+\rho^{\varepsilon}e(\rho^{\varepsilon}) \Big{)}(t,r)\,r^{2}\mathrm{d}r-\frac{1}{2}\int_{0}^{\infty}|\Phi^{\varepsilon }(t,r)|^{2}\,r^{2}\mathrm{d}r \tag{6.6}\] \[\leq\int_{0}^{\infty}\Big{(}\frac{1}{2}\Big{|}\frac{m_{0}^{ \varepsilon}}{\sqrt{\rho_{0}^{\varepsilon}}}\Big{|}^{2}+\rho_{0}^{ \varepsilon}e(\rho_{0}^{\varepsilon})\Big{)}(r)\,r^{2}\mathrm{d}r-\frac{1}{2} \int_{0}^{\infty}|\Phi_{0}^{\varepsilon}(r)|^{2}\,r^{2}\mathrm{d}r.\] We denote \[(\rho^{\varepsilon},\mathcal{M}^{\varepsilon},\Phi^{\varepsilon})(t,\mathbf{x }):=(\rho^{\varepsilon}(t,r),m^{\varepsilon}(t,r)\frac{\mathbf{x}}{r},\Phi^{ \varepsilon}(t,r)).\] Then (2.20) follows directly from (6.6). Moreover, we can prove that \((\rho^{\varepsilon},\mathcal{M}^{\varepsilon},\Phi^{\varepsilon})\) is a global weak solution of the Cauchy problem (1.10) and (2.17)-(2.18) in the sense of Definition 2.1. 
In fact, by the same arguments in [9, Remark 4.7 and Lemmas 4.9-4.11], we have **Lemma 6.3**.: _Let \(0\leq t_{1}<t_{2}\leq T\), and let \(\zeta(t,\mathbf{x})\in C^{1}_{0}([0,T]\times\mathbb{R}^{3})\) be any smooth function with compact support. Then_ \[\int_{\mathbb{R}^{3}}\rho^{\varepsilon}(t_{2},\mathbf{x})\zeta(t_{2},\mathbf{ x})\,\mathrm{d}\mathbf{x}=\int_{\mathbb{R}^{3}}\rho^{\varepsilon}(t_{1}, \mathbf{x})\zeta(t_{1},\mathbf{x})\,\mathrm{d}\mathbf{x}+\int_{t_{1}}^{t_{2}} \int_{\mathbb{R}^{3}}(\rho^{\varepsilon}\zeta_{t}+\mathcal{M}^{\varepsilon} \cdot\nabla\zeta)\,\mathrm{d}\mathbf{x}\mathrm{d}t. \tag{6.7}\] _Moreover, (2.21) holds, and the total mass is conserved\(:\)_ \[\int_{\mathbb{R}^{3}}\rho^{\varepsilon}(t,\mathbf{x})\,\mathrm{d}\mathbf{x}= \int_{\mathbb{R}^{3}}\rho^{\varepsilon}_{0}(\mathbf{x})\,\mathrm{d}\mathbf{x}= M\qquad\text{for $t\geq 0$}. \tag{6.8}\] **Lemma 6.4**.: _Let \(\Psi(t,\mathbf{x})\in(C^{2}_{0}([0,T]\times\mathbb{R}^{3}))^{3}\) be any smooth function with compact support so that \(\Psi(T,\mathbf{x})=0\). Then_ \[\int_{\mathbb{R}^{4}_{+}}\Big{\{}\mathcal{M}^{\varepsilon}\cdot \partial_{t}\Psi+\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}} \cdot\big{(}\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot \nabla\big{)}\Psi+P(\rho^{\varepsilon})\nabla\cdot\Psi-\rho^{\varepsilon} \nabla\Phi^{\varepsilon}\cdot\Psi\Big{\}}\,\mathrm{d}\mathbf{x}\mathrm{d}t+ \int_{\mathbb{R}^{3}}\mathcal{M}^{\varepsilon}_{0}(\mathbf{x})\cdot\Psi(0, \mathbf{x})\,\mathrm{d}\mathbf{x}\] \[=-\varepsilon\int_{\mathbb{R}^{4}_{+}}\Big{\{}\frac{1}{2} \mathcal{M}^{\varepsilon}\cdot\big{(}\Delta\Psi+\nabla(\nabla\cdot\Psi)\big{)} +\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot\big{(}\nabla \sqrt{\rho^{\varepsilon}}\cdot\nabla\big{)}\Psi+\nabla\sqrt{\rho^{\varepsilon}} \cdot\big{(}\frac{\mathcal{M}^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\cdot \nabla\big{)}\Psi\Big{\}}\,\mathrm{d}\mathbf{x}\mathrm{d}t\] \[=\sqrt{\varepsilon}\int_{\mathbb{R}^{4}_{+}}\sqrt{\rho^{ \varepsilon}}\Big{\{}V^{\varepsilon}\frac{\mathbf{x}\otimes\mathbf{x}}{r^{2}}+ \frac{\sqrt{\varepsilon}}{r}\frac{m^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}} \Big{(}I_{3\times 3}-\frac{\mathbf{x}\otimes\mathbf{x}}{r^{2}}\Big{)}\Big{\}}:\nabla \Psi\,\mathrm{d}\mathbf{x}\mathrm{d}t \tag{6.9}\] _with \(V^{\varepsilon}(t,\mathbf{x})\in L^{2}(0,T;L^{2}(\mathbb{R}^{3}))\) as a function satisfying_ \[\int_{0}^{T}\int_{\mathbb{R}^{3}}\left|V^{\varepsilon}(t,\mathbf{x})\right|^{2 }\,\,\mathrm{d}\mathbf{x}\mathrm{d}t\leq C(E_{0},M),\] _where \(C\left(E_{0},M\right)>0\) is a constant independent of \(T>0\)._ **Lemma 6.5**.: _It follows from (6.3) that \(\Phi^{\varepsilon}\) satisfies Poisson's equation in the classical sense except for the origin_:__ \[\Delta\Phi^{\varepsilon}=\rho^{\varepsilon}(t,{\bf x})\qquad\mbox{for }(t,{\bf x}) \times\mathbb{R}^{3}\backslash\{{\bf 0}\}.\] _Moreover, for any smooth function \(\xi({\bf x})\in C^{1}_{0}(\mathbb{R}^{3})\) with compact support,_ \[\int_{\mathbb{R}^{3}}\nabla\Phi^{\varepsilon}(t,{\bf x})\cdot\nabla\xi({\bf x} )\,{\rm d}{\bf x}=-\int_{\mathbb{R}^{3}}\rho^{\varepsilon}(t,{\bf x})\,\xi({ \bf x})\,{\rm d}{\bf x}\qquad\mbox{for }t\geq 0. \tag{6.10}\] ## 7. 
\(W^{-1,p}_{\rm loc}\)-Compactness of Weak Entropy Dissipation Measures In this section, using the estimates on the weak entropy pairs obtained in Lemmas 4.5, 4.7, and 4.10, we establish the compactness of weak dissipation measures \(\partial_{t}\eta^{\psi}(\rho^{\varepsilon},m^{\varepsilon})+\partial_{r}q^{ \psi}(\rho^{\varepsilon},m^{\varepsilon})\) for each weak entropy pairs \((\eta^{\psi},q^{\psi})\). Unfortunately, we fail to obtain the same \(H^{-1}_{\rm loc}\)-compactness as in [9, 16], since we only obtain that \(q^{\varepsilon}\) is uniformly bounded in \(L^{2}_{\rm loc}\) from Lemma 4.10 and Corollary 5.7. Instead, using similar arguments as in [9, Lemma 4.2], we can obtain the compactness in \(W^{-1,p}_{\rm loc}\) for any \(p\in[1,2)\). **Lemma 7.1** (Compactness of the entropy dissipation measures).: _Let \((\eta^{\psi},q^{\psi})\) be a weak entropy pair defined in (4.63) for any smooth compact supported function \(\psi(s)\) on \(\mathbb{R}\). Then, for \(\varepsilon\in(0,\varepsilon_{0}]\),_ \[\partial_{t}\eta^{\psi}(\rho^{\varepsilon},m^{\varepsilon})+\partial_{r}q^{ \psi}(\rho^{\varepsilon},m^{\varepsilon})\qquad\mbox{is compact in }W^{-1,p}_{\rm loc}( \mathbb{R}^{2}_{+})\quad\mbox{for any }p\in[1,2). \tag{7.1}\] **Proof.** To establish (7.1), we first need to study the equation: \(\partial_{t}\eta^{\psi}(\rho^{\varepsilon},m^{\varepsilon})+\partial_{r}q( \rho^{\varepsilon},m^{\varepsilon})\) in the distributional sense, which is more complicated than that in [12, 13]. For simplicity, we denote \((\eta^{\varepsilon,b},q^{\varepsilon,b})=(\eta^{\psi}(\rho^{\varepsilon,b},m^ {\varepsilon,b}),q^{\psi}(\rho^{\varepsilon,b},m^{\varepsilon,b}))\) and \((\eta^{\varepsilon},q^{\varepsilon})=(\eta^{\psi}(\rho^{\varepsilon},m^{ \varepsilon}),q^{\psi}(\rho^{\varepsilon},m^{\varepsilon}))\). Since the proof is long, we divide it into four steps. 1. Considering \(\eqref{eq:1}_{1}\times\eta^{\varepsilon,b}_{\rho}+\eqref{eq:1}_{2}\times\eta ^{\varepsilon,b}_{m}\), we obtain \[\partial_{t}\eta(\rho^{\varepsilon,b},m^{\varepsilon,b})+\partial_{r}q( \rho^{\varepsilon,b},m^{\varepsilon,b})+\frac{2}{r}m^{\varepsilon,\delta}( \eta^{\varepsilon,b}_{\rho}+u^{\varepsilon,b}\eta^{\varepsilon,b}_{m})\] \[=-\eta^{\varepsilon,b}_{m}\frac{\rho^{\varepsilon,b}}{r^{2}}\int_{0}^{r}\rho^{ \varepsilon,b}(t,y)\,y^{2}{\rm d}y+\varepsilon\eta^{\varepsilon,b}_{m}\Big{\{} \big{(}\rho^{\varepsilon,b}(u^{\varepsilon,b}_{r}+\frac{2}{r}u^{\varepsilon,b })\big{)}_{r}-\frac{2}{r}\rho^{\varepsilon,b}_{r}u^{\varepsilon,b}\Big{\}}, \tag{7.2}\] where \(\rho^{\varepsilon,b}\) is understood to be zero in domain \([0,T]\times[0,a)\) so that \(\int_{a}^{r}\rho^{\varepsilon,b}(t,z)\,z^{2}{\rm d}z\) can be written as \(\int_{0}^{r}\rho^{\varepsilon,b}(t,z)\,z^{2}{\rm d}z\) in the potential term. Let \(\phi(t,r)\in C^{\infty}_{0}\left(\mathbb{R}^{2}_{+}\right)\) and \(b\gg 1\) so that \({\rm supp}\,\phi(t,\cdot)\in(a,b(t))\). 
Multiplying (7.2) by \(\phi\) and integrating by parts yield \[\int_{\mathbb{R}^{2}_{+}}(\partial_{t}\eta^{\varepsilon,b}+ \partial_{r}q^{\varepsilon,b})\phi\,{\rm d}r{\rm d}t =-\int_{\mathbb{R}^{2}_{+}}\frac{2}{r}m^{\varepsilon,b}(\eta^{ \varepsilon,b}_{\rho}+u^{\varepsilon,b}\eta^{\varepsilon,b}_{m})\phi\,{\rm d }r{\rm d}t-\varepsilon\int_{\mathbb{R}^{2}_{+}}\rho^{\varepsilon,b}(\eta^{ \varepsilon,b}_{m})_{r}\big{(}u^{\varepsilon,b}_{r}+\frac{2}{r}u^{\varepsilon, b}\big{)}\phi\,{\rm d}r{\rm d}t\] \[\quad-\varepsilon\int_{\mathbb{R}^{2}_{+}}\rho^{\varepsilon,b} \eta^{\varepsilon,b}_{m}\big{(}u^{\varepsilon,b}_{r}+\frac{2}{r}u^{\varepsilon, b}\big{)}\phi_{r}\,{\rm d}r{\rm d}t-\varepsilon\int_{\mathbb{R}^{2}_{+}}\frac{2}{r}\eta^{ \varepsilon,b}_{m}\rho^{\varepsilon,b}_{r}u^{\varepsilon,b}\phi\,{\rm d}r{\rm d}t\] \[\quad-\int_{\mathbb{R}^{2}_{+}}\frac{\rho^{\varepsilon,b}}{r^{2}} \eta^{\varepsilon,b}_{m}\Big{(}\int_{0}^{r}\rho^{\varepsilon,b}(t,y)\,y^{2}{ \rm d}y\Big{)}\phi\,{\rm d}r{\rm d}t:=\sum_{i=1}^{5}I^{\varepsilon,b}_{i}. \tag{7.3}\] 2. From Lemmas 4.2 and 6.1, it is clear to see that \[\eta^{\varepsilon,b}\longrightarrow\eta^{\varepsilon}\qquad\mbox{a.e. in }\{(t,r)\,:\rho^{ \varepsilon}\neq 0\}\quad\mbox{as }b\to\infty. \tag{7.4}\] In \(\{(t,r)\,:\,\rho^{\varepsilon}(t,r)=0\}\), it follows from Lemmas 4.5 and 4.7 that \[|\eta^{\varepsilon,b}|\leq C\rho^{\varepsilon,b}\longrightarrow 0=\eta^{ \varepsilon}\qquad\mbox{as }b\to\infty. \tag{7.5}\] Combining (7.4)-(7.5), we obtain \[\eta^{\varepsilon,b}\longrightarrow\eta^{\varepsilon}\qquad\mbox{a.e. as }b\to\infty. \tag{7.6}\] Similarly, it follows from Lemmas 4.3, 4.10, and 6.1 that \[q^{\varepsilon,b}\longrightarrow q^{\varepsilon}\qquad\text{a.e. as $b\to\infty$}. \tag{7.7}\] For \(\gamma_{2}\in(1,3)\) and any subset \(K\Subset(0,\infty)\), it follows from Lemmas 4.5 and Corollary 5.7, 4.7, and 4.10 that \[\int_{0}^{T}\int_{K}\left(|\eta^{\varepsilon,b}|^{\gamma_{2}+1}+|q^{ \varepsilon,b}|^{2}\right)\mathrm{d}r\mathrm{d}t\leq C_{\psi}(K)\int_{0}^{T} \int_{K}\left(1+|\rho^{\varepsilon,b}|^{\gamma_{2}+1}\right)\mathrm{d}r \mathrm{d}t\leq C_{\psi}(K,M,E_{0},T),\] which implies that \((\eta^{\varepsilon,b},q^{\varepsilon,b})\) is uniformly bounded in \(L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\). This, with (7.6)-(7.7), yields that, up to a subsequence, \[(\eta^{\varepsilon,b},q^{\varepsilon,b})\rightharpoonup(\eta^{\varepsilon},q^ {\varepsilon})\qquad\text{in $L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})$ as $b\to\infty$}.\] Thus, for any \(\phi\in C^{1}_{0}(\mathbb{R}^{2}_{+})\), as \(b\to\infty\) (up to a subsequence), \[\int_{\mathbb{R}^{2}_{+}}\left(\partial_{t}\eta^{\varepsilon,b}+\partial_{r}q ^{\varepsilon,b}\right)\phi\,\mathrm{d}r\mathrm{d}t=-\int_{\mathbb{R}^{2}_{+} }\left(\eta^{\varepsilon,b}\partial_{t}\phi+q^{\varepsilon,b}\partial_{r} \phi\right)\mathrm{d}r\mathrm{d}t\longrightarrow-\int_{\mathbb{R}^{2}_{+}} \left(\eta^{\varepsilon}\partial_{t}\phi+q^{\varepsilon}\partial_{r}\phi \right)\mathrm{d}r\mathrm{d}t. \tag{7.8}\] Furthermore, \((\eta^{\varepsilon},q^{\varepsilon})\) is uniformly bounded in \(L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\), which implies that \[\partial_{t}\eta^{\varepsilon}+\partial_{r}q^{\varepsilon}\quad\text{ is uniformly bounded in $\varepsilon>0$ in $W^{-1,2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})$}. 
\tag{7.9}\] Notice that, since \(q^{\varepsilon,b}\) is only uniformly bounded in \(L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) in view of Lemma 4.10 and Corollary 5.7, we cannot conclude that \(\partial_{t}\eta^{\varepsilon}+\partial_{r}q^{\varepsilon}\) is uniformly bounded in \(\varepsilon>0\) in \(W^{-1,p}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) with \(p>2\), which is different from [9]. 3. For the terms on the RHS of (7.3), using Lemmas 4.5, 4.7, and 4.10, and similar calculations as in [9, Lemma 5.11], we obtain that \[\begin{array}{l} I_{1}^{\varepsilon,b}\longrightarrow-\int_{ \mathbb{R}^{2}_{+}}\frac{2}{r}m^{\varepsilon}(\eta^{\varepsilon}_{\rho}+u^{ \varepsilon}\eta^{\varepsilon}_{m})\phi\,\mathrm{d}r\mathrm{d}t\qquad\text{ up to a subsequence as $b\to\infty$},\\ \int_{0}^{T}\int_{K}\frac{2}{r}\big{|}m^{\varepsilon}(\eta^{ \varepsilon}_{\rho}+u^{\varepsilon}\eta^{\varepsilon}_{m})\big{|}^{\frac{7 }{6}}\,\mathrm{d}r\mathrm{d}t\leq C_{\psi}(K,M,E_{0},T),\end{array} \tag{7.10}\] and there exist local bounded Radon measures \((\mu^{\varepsilon}_{1},\mu^{\varepsilon}_{2},\mu^{\varepsilon}_{3})\) on \(\mathbb{R}^{2}_{+}\) such that, as \(b\to\infty\) (up to a subsequence), \[-(\varepsilon\rho^{\varepsilon,b}(\eta^{\varepsilon,b}_{m})_{r}(u^{ \varepsilon,b}_{r}+\frac{2}{r}u^{\varepsilon,b}),\,\frac{2\varepsilon}{r}\eta^ {\varepsilon,b}_{m}\rho^{\varepsilon,b}_{r}u^{\varepsilon,b},\,\kappa\eta^{ \varepsilon,b}_{m}\frac{\rho^{\varepsilon,b}}{r^{n-1}}\int_{0}^{r}\rho^{ \varepsilon,b}(t,z)\,z^{2}\mathrm{d}z)\rightharpoonup(\mu^{\varepsilon}_{1}, \mu^{\varepsilon}_{2},\mu^{\varepsilon}_{3}).\] In addition, for \(i=1,2,3\), \[\mu^{\varepsilon}_{i}((0,T)\times V)\leq C_{\psi}(K,T,E_{0})\qquad\text{ for any open subset $V\subset K$}. \tag{7.11}\] Then, up to a subsequence, we have \[I_{2}^{\varepsilon,b}+I_{4}^{\varepsilon,b}+I_{5}^{\varepsilon,b}\longrightarrow \langle\,\mu^{\varepsilon}_{1}+\mu^{\varepsilon}_{2}+\mu^{\varepsilon}_{3},\, \phi\rangle\qquad\text{ as $b\to\infty$}. \tag{7.12}\] Moreover, there exists a function \(f^{\varepsilon}\) such that, as \(b\to\infty\) (up to a subsequence), \[\begin{array}{l}-\sqrt{\varepsilon}\rho^{\varepsilon,b}\eta^{ \varepsilon,b}_{m}\big{(}u^{\varepsilon,b}_{r}+\frac{2}{r}u^{\varepsilon,b} \big{)}\rightharpoonup f^{\varepsilon}\qquad\text{ weakly in $L^{\frac{4}{3}}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})$},\\ \int_{0}^{T}\int_{K}|f^{\varepsilon}|^{\frac{4}{3}}\,\mathrm{d}r \mathrm{d}t\leq C_{\psi}(K,M,E_{0},T).\end{array} \tag{7.13}\] Then it follows from (7.13) that \[I_{3}^{\varepsilon,b}\longrightarrow\sqrt{\varepsilon}\int_{0}^{T}\int_{K}f^{ \varepsilon}\phi_{r}\,\mathrm{d}r\mathrm{d}t\qquad\text{ as $b\to\infty$ (up to a subsequence)}. \tag{7.14}\] 4. Taking \(b\to\infty\) (up to a subsequence) on both sides of (7.3), it follows from (7.8), (7.10), (7.12), and (7.14) that \[\partial_{t}\eta^{\varepsilon}+\partial_{r}q^{\varepsilon}=-\frac{2}{r}\rho^{ \varepsilon}u^{\varepsilon}(\eta_{\rho}^{\varepsilon}+u^{\varepsilon}\eta_{ m}^{\varepsilon})+\mu_{1}^{\varepsilon}+\mu_{2}^{\varepsilon}+\mu_{3}^{ \varepsilon}-\sqrt{\varepsilon}f_{r}^{\varepsilon}\] in the distributional sense. It follows from (7.10)-(7.11) that \[-\frac{2}{r}\rho^{\varepsilon}u^{\varepsilon}\left(\eta_{\rho}^{\varepsilon}+ u^{\varepsilon}\eta_{m}^{\varepsilon}\right)+\mu_{1}^{\varepsilon}+\mu_{2}^{ \varepsilon}+\mu_{3}^{\varepsilon} \tag{7.15}\] is a locally uniformly bounded Radon measure sequence. 
From (7.13), we know that \[\sqrt{\varepsilon}f_{r}^{\varepsilon}\longrightarrow 0\qquad\text{ in }W_{\rm loc}^{-1,\frac{4}{3}}(\mathbb{R}_{+}^{2})\text{ as } \varepsilon\to 0. \tag{7.16}\] Then it follows from (7.15)-(7.16) that the sequence: \[\partial_{t}\eta^{\varepsilon}+\partial_{r}q^{\varepsilon}\qquad\text{ is confined in a compact subset of }W_{\rm loc}^{-1,p_{2}}(\mathbb{R}_{+}^{2})\text{ for some }p_{2}\in(1,2), \tag{7.17}\] which also implies that \[\partial_{t}\eta^{\varepsilon}+\partial_{r}q^{\varepsilon}\qquad\text{is compact in }W_{\rm loc}^{-1,p}(\mathbb{R}_{+}^{2})\text{ with }1\leq p\leq p_{2}. \tag{7.18}\] On the other hand, the interpolation compactness theorem (_cf._[7, 21]) indicates that, for \(p_{2}>1,p_{1}\in(p_{2},\infty]\), and \(p_{0}\in[p_{2},p_{1})\), \[\big{(}\text{compact set of }W_{\rm loc}^{-1,p_{2}}(\mathbb{R}_{+}^{2})\big{)} \cap\big{(}\text{bounded set of }W_{\rm loc}^{-1,p_{1}}(\mathbb{R}_{+}^{2})\big{)} \subset\big{(}\text{compact set of }W_{\rm loc}^{-1,p_{0}}(\mathbb{R}_{+}^{2})\big{)},\] which is a generalization of Murat's lemma in [56, 65]. Combining this theorem for \(1<p_{2}<2\) and \(p_{1}=2\) with the facts in (7.9) and (7.17), we conclude that \[\partial_{t}\eta^{\varepsilon}+\partial_{r}q^{\varepsilon}\qquad\text{is compact in }W_{\rm loc}^{-1,p}(\mathbb{R}_{+}^{2})\text{ with }p_{2}\leq p<2. \tag{7.19}\] Combining (7.19) with (7.18), we conclude (7.1). ## 8. \(L^{p}\) Compensated Compactness Framework In this section, with the help of our understanding of the singularities of the entropy kernel and entropy flux kernel obtained in Lemma 4.14, we now establish the \(L^{p}\) compensated compactness framework and complete the proof of Theorem 2.2. The key ingredient is to prove the reduction of the Young measure. The arguments are similar to [61, SS4] and [62, SS7], based on [12, 10], so we only sketch the proof for self-containedness. We denote the upper half-plane by \[\mathbb{H}:=\{(\rho,u)\in\mathbb{R}^{2}\,:\,\rho>0\}\] and consider the following subset of continuous functions \[\overline{C}(\mathbb{H}):=\left\{\phi\in C(\overline{\mathbb{H}})\,:\,\, \begin{array}{l}\phi(\rho,u)\,\text{ is constant on vacuum states }\,\{\rho=0\}\,\text{ and }\\ \text{the map: }(\rho,u)\mapsto\lim_{s\to\infty}\phi(s\rho,su)\text{ belongs to }C(\mathbb{S}^{1}\cap\overline{\mathbb{H}})\end{array}\right\},\] where \(\mathbb{S}^{1}\subset\mathbb{R}^{2}\) is the unit circle. Since \(\overline{C}(\mathbb{H})\) is a complete sub-ring of the continuous functions on \(\mathbb{H}\) containing the constant functions, there exists a compactification \(\overline{\mathcal{H}}\) of \(\mathbb{H}\) such that \(C(\overline{\mathcal{H}})\) is isometrically isomorphic to \(\overline{C}(\mathbb{H})\) (_cf._[41]), written \(C(\overline{\mathcal{H}})\cong\overline{C}(\mathbb{H})\). The topology of \(\overline{\mathcal{H}}\) is the weak-star topology induced by \(C(\overline{\mathcal{H}})\), _i.e._, a sequence \(\{v_{n}\}_{n\in\mathbb{N}}\) in \(\overline{\mathcal{H}}\) converges to \(v\in\overline{\mathcal{H}}\) if \(|\varphi(v_{n})-\varphi(v)|\to 0\) for all \(\varphi\in C(\overline{\mathcal{H}})\), which is separable and metrizable (_cf._[41]). Denote by \(V\) the weak-star closure of vacuum states \(\{(\rho,u)\in\mathbb{R}^{2}\,:\,\rho=0\}\) and define \(\mathcal{H}:=\mathbb{H}\cup V\). In view of the functions that lie in \(\overline{C}(\mathbb{H})\), the topology of \(\overline{\mathcal{H}}\) does not distinguish points in \(V\). 
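As a simple illustration of the class \(\overline{C}(\mathbb{H})\), one may check directly that the function \(\phi(\rho,u)=\frac{\rho}{1+\sqrt{\rho^{2}+u^{2}}}\) belongs to it: \(\phi\) is continuous on \(\overline{\mathbb{H}}\), vanishes identically on the vacuum states \(\{\rho=0\}\), and \[\lim_{s\to\infty}\phi(s\rho,su)=\frac{\rho}{\sqrt{\rho^{2}+u^{2}}}\] defines a continuous function on \(\mathbb{S}^{1}\cap\overline{\mathbb{H}}\).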
Since \(\overline{\mathcal{H}}\) is homeomorphic to a compact metric space, we may apply the fundamental theorem of Young measures in Alberti-Müller [1, Theorem 2.4].
**Lemma 8.1** ([1, Theorem 2.4]).: _Given any sequence of measurable functions \((\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon}):\mathbb{R}^{2}_{+}\to\overline{\mathbb{H}}\), there exists a subsequence (still denoted) \((\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\) generating a Young measure \(\nu_{(t,r)}\in\operatorname{Prob}(\overline{\mathcal{H}})\) in the sense that, for any \(\phi\in\overline{C}(\mathbb{H})\),_ \[\phi(\rho^{\varepsilon}(t,r),u^{\varepsilon}(t,r))\,\stackrel{{*}}{{\rightharpoonup}}\,\int_{\overline{\mathcal{H}}}\iota(\phi)(\rho,u)\,\mathrm{d}\nu_{(t,r)}(\rho,u)\qquad\text{ in }L^{\infty}(\mathbb{R}^{2}_{+}),\] _where \(\iota:\overline{C}(\mathbb{H})\to C(\overline{\mathcal{H}})\) is an isometric isomorphism. Moreover, the sequence \((\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\) converges to \((\rho,\rho u):\,\mathbb{R}^{2}_{+}\to\overline{\mathcal{H}}\) a.e. if and only if_ \[\nu_{(t,r)}(\rho,u)=\delta_{(\rho(t,r),m(t,r))}\qquad\text{a.e. }(t,r)\in\mathbb{R}^{2}_{+},\] _in the phase coordinates \((\rho,m)\) with \(m=\rho u\)._
From now on, we often use the same letter \(\nu_{(t,r)}\) for an element of \(\big{(}\overline{C}(\mathbb{H})\big{)}^{*}\) or \(\big{(}C(\overline{\mathcal{H}})\big{)}^{*}\), and use the same letter for \(\iota(\phi)\) and \(\phi\) for simplicity, when no confusion arises. The following lemma shows that the Young measure \(\nu_{(t,r)}\), generated by the sequence of measurable approximate solutions \((\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\) satisfying the assumptions of Theorem 2.2, is only supported on the interior of \(\mathcal{H}\). Moreover, the Young measure \(\nu_{(t,r)}\) can be extended to a larger class of test functions than just \(\overline{C}(\mathbb{H})\). This is proved in [12, Proposition 5.1]; also see [41, Proposition 2.3].
**Lemma 8.2** ([12, Proposition 5.1]).: _The following statements hold_:__
1. _For the Young measure_ \(\nu_{(t,r)}\) _generated by the sequence of measurable approximate solutions_ \((\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\) _satisfying the assumptions in_ Theorem 2.2_,_ \[(t,r)\mapsto\int_{\mathbb{H}}(\rho^{\gamma_{2}+1}+\rho|u|^{3})\,\mathrm{d}\nu_{(t,r)}(\rho,u)\in L^{1}_{\operatorname{loc}}(\mathbb{R}^{2}_{+}).\]
2. _Let_ \(\phi(\rho,u)\) _be a function such that_ (a) \(\phi\) _is continuous on_ \(\overline{\mathbb{H}}\) _and_ \(\phi=0\) _on_ \(\partial\mathbb{H}\)_,_ (b) _there exists a constant_ \(\mathfrak{a}>0\) _such that_ \(\operatorname{supp}\phi\subset\{u+k(\rho)\geq-\mathfrak{a},u-k(\rho)\leq\mathfrak{a}\}\)_,_ (c) \(|\phi(\rho,u)|\leq\rho^{\beta(\gamma_{2}+1)}\) _for all_ \((\rho,u)\) _with large_ \(\rho\) _and some_ \(\beta\in(0,1)\)_. Then_ \(\phi\) _is_ \(\nu_{(t,r)}\)_-integrable for_ \((t,r)\in\mathbb{R}^{2}_{+}\) _a.e. and_ \[\phi(\rho^{\varepsilon}(t,r),u^{\varepsilon}(t,r))\rightharpoonup\int_{\mathbb{H}}\phi(\rho,u)\,\mathrm{d}\nu_{(t,r)}(\rho,u)\qquad\text{in }L^{1}_{\operatorname{loc}}(\mathbb{R}^{2}_{+}).\]
3. \(\nu_{(t,r)}\in\operatorname{Prob}(\mathcal{H})\) _for_ \((t,r)\in\mathbb{R}^{2}_{+}\) _a.e., that is,_ \(\nu_{(t,r)}\big{(}\overline{\mathcal{H}}\backslash(\mathbb{H}\cup V)\big{)}=0\)_._
We now prove the commutation relation.
Since we only have the \(W^{-1,p}_{\operatorname{loc}}\)-compactness of the entropy dissipation measures for \(p\in[1,2)\), the classical div-curl lemma in [56] fails to yield the commutation relation. Hence, we adopt an improved version of the div-curl lemma.
**Lemma 8.3** ([18, Theorem]).: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open bounded set, and let \(p,q\in(1,\infty)\) with \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\mathbf{v}^{\varepsilon}\) and \(\mathbf{w}^{\varepsilon}\) be sequences of vector fields such that_ \[\mathbf{v}^{\varepsilon}\rightharpoonup\mathbf{v}\ \text{ in }L^{p}(\Omega;\mathbb{R}^{n}),\quad\mathbf{w}^{\varepsilon}\rightharpoonup\mathbf{w}\ \text{ in }L^{q}(\Omega;\mathbb{R}^{n})\qquad\text{ as }\varepsilon\to 0.\] _Suppose that \(\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}\) is equi-integrable uniformly in \(\varepsilon\), and_ \[\operatorname{div}\mathbf{v}^{\varepsilon}\quad\text{ is }(\text{pre-})\text{compact in }W^{-1,1}(\Omega),\] \[\operatorname{curl}\mathbf{w}^{\varepsilon}\quad\text{ is }(\text{pre-})\text{compact in }W^{-1,1}(\Omega;\mathbb{R}^{n\times n}).\] _Then \(\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}\rightharpoonup\mathbf{v}\cdot\mathbf{w}\) in \(\mathcal{D}^{\prime}(\Omega)\)._
**Lemma 8.4** (Commutation relation).: _Let \(\{(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\}_{\varepsilon>0}\) be the measurable approximate solutions satisfying the assumptions of_ Theorem 2.2_, and let \(\nu_{(t,r)}\) be a Young measure generated by the family \(\{(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\}_{\varepsilon>0}\) in_ Lemma 8.2_. Then_ \[\overline{\chi(s_{1})\sigma(s_{2})-\chi(s_{2})\sigma(s_{1})}=\overline{\chi(s_{1})}\;\overline{\sigma(s_{2})}-\overline{\chi(s_{2})}\;\overline{\sigma(s_{1})} \tag{8.1}\] _for all \(s_{1},s_{2}\in\mathbb{R}\), where \(\overline{f}:=\int f\,\mathrm{d}\nu_{(t,r)}\), \(\chi(s_{i})=\chi(\cdot,\cdot-s_{i})\), and \(\sigma(s_{i})=\sigma(\cdot,\cdot-s_{i})\)._
**Proof.** For any \(\psi\in C_{0}^{2}(\mathbb{R})\), it follows from Lemmas 4.5, 4.7, and 4.10 that \[|\eta^{\psi}(\rho,m)|\leq C_{\psi}\rho,\quad|q^{\psi}(\rho,m)|\leq C_{\psi}\big{(}\rho+\rho^{1+\theta_{2}}\big{)}. \tag{8.2}\] It is clear that the support of \((\eta^{\psi},q^{\psi})\) is contained in \(\{k(\rho)+u\geq-L,\,u-k(\rho)\leq L\}\) for some \(L>0\) depending only on \(\mathrm{supp}\,\psi\). For any \(\psi_{1},\psi_{2}\in C_{0}^{2}(\mathbb{R})\), we consider the sequences of vector fields: \[\mathbf{v}^{\varepsilon}=(\eta^{\psi_{1}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon}),q^{\psi_{1}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})),\qquad\mathbf{w}^{\varepsilon}=(q^{\psi_{2}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon}),-\eta^{\psi_{2}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})).\] Noting \(\rho^{\varepsilon}\in L^{1+\gamma_{2}}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) and (8.2), we see that both \(\mathbf{v}^{\varepsilon}\) and \(\mathbf{w}^{\varepsilon}\) are uniformly bounded sequences in \(L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\).
Moreover, by Lemma 8.2 and the uniqueness of weak limits, we obtain \[\mathbf{v}^{\varepsilon}\rightharpoonup(\overline{\eta^{\psi_{1}}},\overline{q^{\psi_{1}}})\;\;\;\text{in}\;L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}),\qquad\mathbf{w}^{\varepsilon}\rightharpoonup(\overline{q^{\psi_{2}}},-\overline{\eta^{\psi_{2}}})\;\;\;\text{in}\;L^{2}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}).\] By direct calculation, we see that \[\mathrm{div}\,\mathbf{v}^{\varepsilon}=\partial_{t}\eta^{\psi_{1}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})+\partial_{r}q^{\psi_{1}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\quad\text{ is compact in }W^{-1,1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}),\] \[\mathrm{curl}\,\mathbf{w}^{\varepsilon}=\partial_{t}\eta^{\psi_{2}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})+\partial_{r}q^{\psi_{2}}(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\quad\text{is compact in }W^{-1,1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}).\] Using (8.2), we obtain that \(|\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}|\leq C\big{(}(\rho^{\varepsilon})^{2}+(\rho^{\varepsilon})^{2+\theta_{2}}\big{)}\) for \(\rho>0\) which, with (2.25), yields that \(\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}\in L^{\frac{1+\gamma_{2}}{2+\theta_{2}}}_{\mathrm{loc}}(\mathbb{R}^{2}_{+})\) uniformly in \(\varepsilon\). Thus, \(\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}\) is equi-integrable uniformly in \(\varepsilon\) since \(\frac{1+\gamma_{2}}{2+\theta_{2}}>1\). It follows from Lemma 8.3 that \[\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}\,\to\,\overline{\eta^{\psi_{1}}}\;\overline{q^{\psi_{2}}}-\overline{\eta^{\psi_{2}}}\;\overline{q^{\psi_{1}}}\qquad\text{in the sense of distributions in }\mathbb{R}^{2}_{+}. \tag{8.3}\] On the other hand, using (8.2) and Lemma 8.2, we find that \[\mathbf{v}^{\varepsilon}\cdot\mathbf{w}^{\varepsilon}\,\to\,\overline{\eta^{\psi_{1}}q^{\psi_{2}}-\eta^{\psi_{2}}q^{\psi_{1}}}\qquad\text{ in }L^{1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}),\] which, with (8.3), yields that \[\overline{\eta^{\psi_{1}}q^{\psi_{2}}-\eta^{\psi_{2}}q^{\psi_{1}}}=\overline{\eta^{\psi_{1}}}\;\overline{q^{\psi_{2}}}-\overline{\eta^{\psi_{2}}}\;\overline{q^{\psi_{1}}}. \tag{8.4}\] It follows from (8.4) and Fubini's theorem that \[\int_{\mathbb{R}}\int_{\mathbb{R}}\Big{(}\overline{\chi(s_{1})\sigma(s_{2})-\chi(s_{2})\sigma(s_{1})}-\overline{\chi(s_{1})}\;\overline{\sigma(s_{2})}+\overline{\chi(s_{2})}\;\overline{\sigma(s_{1})}\Big{)}\psi_{1}(s_{1})\psi_{2}(s_{2})\,\mathrm{d}s_{1}\mathrm{d}s_{2}=0.\] Since \(\psi_{1},\psi_{2}\in C_{0}^{2}(\mathbb{R})\) are arbitrary, we conclude \[\overline{\chi(s_{1})\sigma(s_{2})-\chi(s_{2})\sigma(s_{1})}=\overline{\chi(s_{1})}\;\overline{\sigma(s_{2})}-\overline{\chi(s_{2})}\;\overline{\sigma(s_{1})}\qquad\text{for any }s_{1},s_{2}\in\mathbb{R}.\] This completes the proof. \(\square\)
**Theorem 8.5** (Reduction of the Young measure).: _Let \(\nu_{(t,r)}\in\mathrm{Prob}(\mathcal{H})\) be the Young measure generated by sequence \(\{(\rho^{\varepsilon},\rho^{\varepsilon}u^{\varepsilon})\}_{\varepsilon>0}\) in_ Lemma 8.2_. Then either \(\nu_{(t,r)}\) is supported in \(V\) or the support of \(\nu_{(t,r)}\) is a single point in \(\mathbb{H}\)._
**Proof.** The proof is similar to [10, 46, 61, 62]. Since the estimates of the entropy kernel and entropy-flux kernel are different, we sketch the proof for the self-containedness of this paper.
Taking \(s_{1},s_{2},s_{3}\in\mathbb{R}\) and multiplying (8.1) by \(\overline{\chi(s_{3})}\), one obtains \[\overline{\chi(s_{3})}\,\overline{\chi(s_{1})\,\sigma(s_{2})-\chi(s_{2})\,\sigma(s_{1})}=\overline{\chi(s_{3})}\,\overline{\chi(s_{1})}\,\overline{\sigma(s_{2})}-\overline{\chi(s_{3})}\,\overline{\chi(s_{2})}\,\overline{\sigma(s_{1})}.\] Cyclically permuting the indices \(s_{j}\) and adding the resultant equations together, we have \[\overline{\chi(s_{1})}\,\overline{\chi(s_{2})\sigma(s_{3})-\chi(s_{3})\sigma(s_{2})}=\overline{\chi(s_{3})}\,\overline{\chi(s_{2})\sigma(s_{1})-\chi(s_{1})\sigma(s_{2})}-\overline{\chi(s_{2})}\,\overline{\chi(s_{3})\sigma(s_{1})-\chi(s_{1})\sigma(s_{3})}.\] Applying the fractional derivative operators \(P_{2}:=\partial_{s_{2}}^{\lambda_{1}+1}\) and \(P_{3}:=\partial_{s_{3}}^{\lambda_{1}+1}\) in the sense of distributions, we obtain \[\begin{split}&\overline{\chi(s_{1})}\,\overline{P_{2}\chi(s_{2})\,P_{3}\sigma(s_{3})-P_{3}\chi(s_{3})\,P_{2}\sigma(s_{2})}\\ &=\overline{P_{3}\chi(s_{3})}\,\overline{P_{2}\chi(s_{2})\,\sigma(s_{1})-\chi(s_{1})\,P_{2}\sigma(s_{2})}-\overline{P_{2}\chi(s_{2})}\,\overline{P_{3}\chi(s_{3})\,\sigma(s_{1})-\chi(s_{1})\,P_{3}\sigma(s_{3})},\end{split} \tag{8.5}\] where, for example, the distribution \(\overline{P_{2}\chi(s_{2})}\) is defined by \[\langle\overline{P_{2}\chi(s_{2})},\psi\rangle=-\int_{\mathbb{R}}\overline{\partial_{s_{2}}^{\lambda_{1}}\chi(s_{2})}\,\psi^{\prime}(s_{2})\,\mathrm{d}s_{2}\qquad\text{ for }\psi\in\mathcal{D}(\mathbb{R}).\] We take two different standard mollifying functions \(\phi_{2},\phi_{3}\in C_{0}^{\infty}(-1,1)\) such that \(\int_{\mathbb{R}}\phi_{j}(s_{j})\,\mathrm{d}s_{j}=1\) with \(\phi_{j}\geq 0\) for \(j=2,3\). For \(\tau>0\), denote \(\phi_{j}^{\tau}(s_{j}):=\frac{1}{\tau}\phi_{j}(\frac{s_{j}}{\tau})\). As indicated in [41], we can always choose \(\phi_{2}\) and \(\phi_{3}\) such that \[Y(\phi_{2},\phi_{3})=\int_{-\infty}^{\infty}\int_{-\infty}^{s_{2}}\left(\phi_{2}(s_{2})\phi_{3}(s_{3})-\phi_{2}(s_{3})\phi_{3}(s_{2})\right)\mathrm{d}s_{3}\mathrm{d}s_{2}>0. \tag{8.6}\] Multiplying (8.5) by \(\phi_{2}^{\tau}(s_{1}-s_{2})\phi_{3}^{\tau}(s_{1}-s_{3})\) and integrating the resultant equation with respect to \((s_{2},s_{3})\) yield \[\overline{\chi(s_{1})}\,\overline{P_{2}\chi_{2}^{\tau}\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,P_{2}\sigma_{2}^{\tau}}=\overline{P_{3}\chi_{3}^{\tau}}\,\overline{P_{2}\chi_{2}^{\tau}\,\sigma_{1}-\chi_{1}\,P_{2}\sigma_{2}^{\tau}}-\overline{P_{2}\chi_{2}^{\tau}}\,\overline{P_{3}\chi_{3}^{\tau}\,\sigma_{1}-\chi_{1}\,P_{3}\sigma_{3}^{\tau}}, \tag{8.7}\] where we have used the notation: \[\overline{P_{j}\chi_{j}^{\tau}}=\overline{P_{j}\chi_{j}}*\phi_{j}^{\tau}(s_{1})=\int_{\mathbb{R}}\overline{\partial_{s_{j}}^{\lambda_{1}}\chi(s_{j})}\frac{1}{\tau^{2}}\phi_{j}^{\prime}(\frac{s_{1}-s_{j}}{\tau})\,\mathrm{d}s_{j}\qquad\text{ for }j=2,3.\] Multiplying (8.7) by \(\psi(s_{1})\in\mathcal{D}(\mathbb{R})\), integrating the resultant equation with respect to \(s_{1}\), then taking limit \(\tau\to 0\) and applying Lemmas 8.8-8.9 below, we obtain \[Y(\phi_{2},\phi_{3})\int_{\mathcal{H}}Z(\rho)\sum_{\pm}(K^{\pm})^{2}\,\overline{\chi(u\pm k(\rho))}\,\psi(u\pm k(\rho))\,\mathrm{d}\nu_{(t,r)}(\rho,u)=0.
\tag{8.8}\] Noting that \(Z(\rho)>0\) for \(\rho>0\) from Lemma 8.7 below, \(Y(\phi_{2},\phi_{3})>0\) from (8.6), and \(\psi(s)\) is an arbitrary test function, we deduce from (8.8) that \[\int_{\mathcal{H}}Z(\rho)\,\overline{\chi(u\pm k(\rho))}\,\mathrm{d}\nu_{(t,r)}(\rho,u)=0. \tag{8.9}\] We define \(\mathbb{S}=\{s\in\mathbb{R}\,:\,\overline{\chi(s)}>0\}\). It follows from [61] that \(\mathbb{S}\) admits the representation: \[\mathbb{S}=\left\{s\in\mathbb{R}\,:\,u-k(\rho)<s<u+k(\rho)\ \text{ with }(\rho,u)\in\mathrm{supp}\,\nu_{(t,r)}\right\}.\] For the case \(\mathbb{S}=\emptyset\), it is clear that \(\overline{\chi(s)}=0\) for all \(s\in\mathbb{R}\) so that \(\mathrm{supp}\,\nu_{(t,r)}\subset V\) since \(\chi(s)>0\) for all \(\rho>0\) and \(s\in(u-k(\rho),u+k(\rho))\). For the case \(\mathbb{S}\neq\emptyset\), it follows from (8.15) below that \(s\mapsto\overline{\chi(s)}\) is a continuous map. Then \(\mathbb{S}\) is an open set so that \(\mathbb{S}\) is at most a countable union of open intervals. Thus, we may write \[\mathbb{S}=\bigcup_{k}(\zeta_{k},\,\xi_{k})\] for at most countably many numbers \(\zeta_{k}:=u_{k}-k(\rho_{k})\) and \(\xi_{k}:=u_{k}+k(\rho_{k})\) with \((\rho_{k},u_{k})\in\operatorname{supp}\nu_{(t,r)}\) in the extended real line \(\mathbb{R}\cup\{\pm\infty\}\) such that \(\zeta_{k}<\xi_{k}\leq\zeta_{k+1}\) for all \(k\). For later use, we denote the Riemann invariants \(z(\rho,u):=u-k(\rho)\) and \(w(\rho,u):=u+k(\rho)\). Thus, noting \(\operatorname{supp}\chi(s)=\{(\rho,u)\,:\,z(\rho,u)\leq s\leq w(\rho,u)\}\), we obtain \[\operatorname{supp}\nu_{(t,r)}\subset\bigcup_{k}\{(\rho,u)\in\mathbb{H}\,:\,\zeta_{k}\leq z(\rho,u)<w(\rho,u)\leq\xi_{k}\}\cup V.\] If \(\zeta_{k}\) and \(\xi_{k}\) are both finite, due to the fact that \(k(\rho)\) is a strictly monotone increasing and unbounded function of \(\rho\), it is clear that \(\{(\rho,u)\,:\,\zeta_{k}\leq z(\rho,u)\leq w(\rho,u)\leq\xi_{k}\}\) is bounded. Now we deduce from (8.9) that \[\operatorname{supp}\nu_{(t,r)}\cap\{(\rho,u)\in\mathbb{H}\,:\,\zeta_{k}<z(\rho,u)<w(\rho,u)<\xi_{k}\}=\emptyset\qquad\text{for all $k$}.\] Thus, the support of the measure \(\nu_{(t,r)}\) must be contained in the vacuum set \(V\) and at most a countable union of points \(P_{k}(\rho_{k},\,u_{k})\): \[\operatorname{supp}\nu_{(t,r)}\subset V\cup\big{(}\bigcup_{\{k:\zeta_{k},\xi_{k}\text{ are finite}\}}P_{k}(\rho_{k},u_{k})\big{)}.\] Therefore, we may write \[\nu_{(t,r)}=\nu_{V}+\sum_{k}\alpha_{k}\delta_{P_{k}} \tag{8.10}\] for some \(\alpha_{k}\in[0,1]\) with measure \(\nu_{V}\) supported on the vacuum set \(V\). For later use, we denote \[\chi(P_{k},s):=\chi(\rho_{k},u_{k},s),\quad\sigma(P_{k},s):=\sigma(\rho_{k},u_{k},s)\qquad\text{for $s\in\mathbb{R}$}.\] We claim that, if \(\chi(P_{k},s)>0\), then \(\chi(P_{k^{\prime}},s)=0\) for all \(k\neq k^{\prime}\). Indeed, recall that \(\operatorname{supp}\chi(s)=\{(\rho,u)\,:\,z(\rho,u)\leq s\leq w(\rho,u)\}\) and that \(\chi(\rho,u,s)>0\) if and only if \(z(\rho,u)<s<w(\rho,u)\). If \(\chi(P_{k},s)>0\), then \(\zeta_{k}<s<\xi_{k}\). If, in addition, \(\chi(P_{k^{\prime}},s)>0\) for some \(k\neq k^{\prime}\), it must hold that \(\zeta_{k^{\prime}}<s<\xi_{k^{\prime}}\). However, since \(\xi_{k-1}\leq\zeta_{k}<\xi_{k}\leq\zeta_{k+1}\), this is impossible for any \(P_{k^{\prime}}\) with \(k^{\prime}\neq k\).
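In particular, for \(s_{1},s_{2}\in(\zeta_{k},\xi_{k})\), only the atom at \(P_{k}\) contributes to the averages in (8.1): recalling that \(\chi\) and \(\sigma\) vanish on the vacuum states and that \(\sigma(\cdot,\cdot,s)\) is supported in the same set \(\{z(\rho,u)\leq s\leq w(\rho,u)\}\) as \(\chi(\cdot,\cdot,s)\), it follows from (8.10) and the claim above that \[\overline{\chi(s_{i})}=\alpha_{k}\chi(P_{k},s_{i}),\qquad\overline{\sigma(s_{i})}=\alpha_{k}\sigma(P_{k},s_{i})\qquad\text{for }i=1,2.\]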
Thus, taking \(s_{1},s_{2}\in\mathbb{R}\) such that \(\chi(P_{k},s_{1})\chi(P_{k},s_{2})>0\), we deduce from the commutation relation (8.1) and (8.10) that \[(\alpha_{k}-\alpha_{k}^{2})\big{(}\chi(P_{k},s_{1})\sigma(P_{k},s_{2})-\chi(P_{k},s_{2})\sigma(P_{k},s_{1})\big{)}=0.\] Now, choosing \(s_{1}\) and \(s_{2}\) such that the second factor in this expression is non-zero, we obtain that \(\alpha_{k}=0\) or \(1\) for all \(k\). This completes the proof.
Combining Theorem 8.5 with Lemma 8.1, we conclude that \((\rho^{\varepsilon},m^{\varepsilon})\) converges to \((\rho,m)\) almost everywhere. Moreover, noting that \(|m^{\varepsilon}|^{\frac{3(\gamma_{2}+1)}{\gamma_{2}+3}}\leq C\big{(}(\rho^{\varepsilon})^{\gamma_{2}+1}+\rho^{\varepsilon}|u^{\varepsilon}|^{3}\big{)}\), we have, for any \(T,d,D>0\), \[\int_{0}^{T}\int_{d}^{D}|m^{\varepsilon}|^{\frac{3(\gamma_{2}+1)}{\gamma_{2}+3}}\,\mathrm{d}r\mathrm{d}t\leq C\int_{0}^{T}\int_{d}^{D}\big{(}(\rho^{\varepsilon})^{\gamma_{2}+1}+\rho^{\varepsilon}|u^{\varepsilon}|^{3}\big{)}\,\mathrm{d}r\mathrm{d}t\leq C(d,D,T),\] which implies that \(m^{\varepsilon}\) is uniformly bounded in \(L_{\mathrm{loc}}^{\frac{3(\gamma_{2}+1)}{\gamma_{2}+3}}(\mathbb{R}_{+}^{2})\) with respect to \(\varepsilon\), so that (2.31) holds. Therefore, the proof of Theorem 2.2 is complete.
Now, we are going to prove the auxiliary lemmas, Lemmas 8.8-8.9, which are used in the proof of Theorem 8.5. We first recall two useful lemmas in [10, 41].
**Lemma 8.6** ([41, Lemmas 3.8-3.9]).: _Let \(\mathfrak{R}\in C^{\alpha}_{\mathrm{loc}}(\mathbb{R})\) be a Hölder continuous function for some \(\alpha\in(0,1)\) and \(g\in C^{\alpha}_{0}(\mathbb{R})\) be a Hölder continuous function with compact support. Assume \(L_{0}>2\) such that \(\operatorname{supp}g\subset B_{L_{0}-2}(0)\)._
* _For any pair of distributions_ \(T_{2},T_{3}\in\mathcal{D}^{\prime}(\mathbb{R})\) _from the following collection_:__ \[(T_{2},T_{3})=(\delta,Q_{3}),\ (PV,Q_{3}),\ (Q_{2},Q_{3})\] _with \(Q_{2},Q_{3}\in\{H,Ci,R\}\), there exists a constant \(C>0\) independent of \((\rho,u)\) such that_ \[\sup_{\tau\in(0,1)}\Big{|}\int_{-\infty}^{\infty}g(s_{1})\Big{\{}\big{(}T_{2}(s_{2}-u\pm k(\rho))T_{3}(s_{3}-u\pm k(\rho))-T_{2}(s_{3}-u\pm k(\rho))T_{3}(s_{2}-u\pm k(\rho))\big{)}*\phi_{2}^{\tau}*\phi_{3}^{\tau}\Big{\}}\,(s_{1})\,\mathrm{d}s_{1}\Big{|}\leq C\|g\|_{C^{\alpha}(\mathbb{R})}\big{(}1+\|\mathfrak{R}\|_{C^{\alpha}(\overline{B_{L_{0}}(0)})}\big{)}^{2}.\]
* _For any pair of distributions from_ \[(T_{2},T_{3})=(\delta,\delta),\ (PV,PV),\ (Q_{2},Q_{3}),\ (\delta,PV),\ (PV,Q_{3}),\] _with_ \(Q_{2},Q_{3}\in\{H,Ci,R\}\)_, there exists a constant_ \(C>0\) _independent of_ \((\rho,u)\) _such that_ \[\sup_{\tau\in(0,1)}\Big{|}\int_{-\infty}^{\infty}g(s_{1})\Big{\{}\big{(}(s_{2}-s_{3})T_{2}(s_{2}-u\pm k(\rho))T_{3}(s_{3}-u\pm k(\rho))\big{)}*\phi_{2}^{\tau}*\phi_{3}^{\tau}\Big{\}}(s_{1})\,\mathrm{d}s_{1}\Big{|}\leq C\|g\|_{C^{\alpha}(\mathbb{R})}\big{(}1+\|\mathfrak{R}\|_{C^{\alpha}(\overline{B_{L_{0}}(0)})}\big{)}^{2}.\]
Motivated by [10], it follows from Lemmas 4.2-4.3, (1.4), and a direct calculation that \[\begin{split}D(\rho):=&\,a_{1}(\rho)b_{1}(\rho)-2k(\rho)^{2}(a_{1}(\rho)b_{2}(\rho)-a_{2}(\rho)b_{1}(\rho))\\ =&\,\frac{M_{\lambda_{1}}^{2}}{2(\lambda_{1}+1)}k(\rho)^{-2\lambda_{1}}k^{\prime}(\rho)^{-2}\big{(}k^{\prime}(\rho)+(\rho k^{\prime}(\rho))^{\prime}\big{)}>0\qquad\text{for $\rho>0$}.\end{split} \tag{8.11}\]
**Lemma 8.7** ([10, Lemmas 4.2-4.3]).: _The mollified fractional
derivatives of the entropy kernel and the entropy flux kernel satisfy the following convergence properties_:__
* _When_ \(0\leq\rho<\infty\)_,_ \[P_{2}\chi_{2}^{\tau}P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}P_{2}\sigma_{2}^{\tau}\longrightarrow Y(\phi_{2},\phi_{3})Z(\rho)\sum_{\pm}(K^{\pm})^{2}\delta_{s_{1}=u\pm k(\rho)}\] _as_ \(\tau\to 0\) _weakly-star in measures in_ \(s_{1}\) _and locally uniformly in_ \((\rho,u)\)_, where_ \(Y(\phi_{2},\phi_{3})\) _satisfies (_8.6_),_ \(Z(\rho):=(\lambda_{1}+1)M_{\lambda_{1}}^{-2}k(\rho)^{2\lambda_{1}}D(\rho)>0\) _with_ \(D(\rho)\) _defined in (_8.11_), and_ \(K^{\pm}\neq 0\) _are some constants._
* _For_ \(j=2,3\)_,_ \(\chi_{1}\,P_{j}\sigma_{j}^{\tau}-\sigma_{1}\,P_{j}\chi_{j}^{\tau}\) _are Hölder continuous in_ \((\rho,u,s_{1})\)_, uniformly in_ \(\tau\)_, and there exists a Hölder continuous function_ \(X=X(\rho,u,s_{1})\)_, independent of the mollifying sequence_ \(\phi_{j}\)_, such that_ \[\chi(s_{1})P_{j}\sigma_{j}^{\tau}-P_{j}\chi_{j}^{\tau}\sigma(s_{1})\longrightarrow X(\rho,u,s_{1})\qquad\text{as $\tau\to 0$}\] _uniformly in_ \((\rho,u,s_{1})\) _on the sets on which_ \(\rho\) _is bounded._
**Lemma 8.8**.: _For any test function \(\psi\in\mathcal{D}(\mathbb{R})\),_ \[\begin{split}&\lim_{\tau\to 0}\int_{\mathbb{R}}\overline{\chi(s_{1})}\ \overline{P_{2}\chi_{2}^{\tau}P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}P_{2}\sigma_{2}^{\tau}}(s_{1})\psi(s_{1})\,\mathrm{d}s_{1}\\ &\quad=Y(\phi_{2},\phi_{3})\int_{\mathcal{H}}Z(\rho)\sum_{\pm}(K^{\pm})^{2}\,\overline{\chi(u\pm k(\rho))}\,\psi(u\pm k(\rho))\,\mathrm{d}\nu_{(t,r)}(\rho,u),\end{split} \tag{8.12}\] _where \(Y(\phi_{2},\phi_{3})\) is defined by (8.6) and \(Z(\rho)\) is given in_ Lemma 8.7_._
**Proof.** It follows from Lemma 8.7 that, when \(\rho\) is bounded, \[P_{2}\chi_{2}^{\tau}P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}P_{2}\sigma_{2}^{\tau}\to Y(\phi_{2},\phi_{3})Z(\rho)\sum_{\pm}\big{(}K^{\pm}\big{)}^{2}\,\delta_{s_{1}=u\pm k(\rho)}\qquad\text{as $\tau\to 0$}\] locally uniformly in \((\rho,u)\) and hence pointwise for all \((\rho,u)\). Therefore, we have \[\begin{split}&\lim_{\tau\to 0}\int_{-\infty}^{\infty}\overline{\chi(s_{1})}\langle\,\nu,\,(P_{2}\chi_{2}^{\tau}P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}P_{2}\sigma_{2}^{\tau})\mathbf{I}_{\{\rho\leq\rho^{*}\}}\rangle\psi(s_{1})\,\mathrm{d}s_{1}\\ &=\lim_{\tau\to 0}\langle\,\nu,\,\int_{-\infty}^{\infty}\overline{\chi(s_{1})}(P_{2}\chi_{2}^{\tau}P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}P_{2}\sigma_{2}^{\tau})\psi(s_{1})\,\mathrm{d}s_{1}\mathbf{I}_{\{\rho\leq\rho^{*}\}}\rangle\\ &=\langle\,\nu,\,Y(\phi_{2},\phi_{3})Z(\rho)\sum_{\pm}(K^{\pm})^{2}\overline{\chi(u\pm k(\rho))}\psi(u\pm k(\rho))\mathbf{I}_{\{\rho\leq\rho^{*}\}}\rangle\\ &=Y(\phi_{2},\phi_{3})\sum_{\pm}(K^{\pm})^{2}\langle\,\nu,\,Z(\rho)\overline{\chi(u\pm k(\rho))}\psi(u\pm k(\rho))\mathbf{I}_{\{\rho\leq\rho^{*}\}}\rangle.\end{split} \tag{8.13}\] For \(\rho\geq\rho^{*}\), we notice that \[P_{2}\chi_{2}^{\tau}\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,P_{2}\sigma_{2}^{\tau}=P_{2}\chi_{2}^{\tau}\,P_{3}(\sigma_{3}^{\tau}-u\chi_{3}^{\tau})-P_{3}\chi_{3}^{\tau}\,P_{2}(\sigma_{2}^{\tau}-u\chi_{2}^{\tau}).
\tag{8.14}\] Using Lemma 4.14, we see that (8.14) consists of a sum of terms of the form: \[A_{i,\pm}(\rho)B_{j,\pm}(\rho)(s_{3}-s_{2})T_{2}(s_{2}-u\pm k(\rho))T_{3}(s_{3}-u\pm k(\rho))\] with \(T_{2},T_{3}\in\{\delta,\mathrm{PV},H,\mathrm{Ci}\}\), the terms of the form: \[A_{i,\pm}(\rho)B_{j,\pm}(\rho)\big{(}T_{2}(s_{2}-u\pm k(\rho))T_{3}(s_{3}-u\pm k(\rho))-T_{2}(s_{3}-u\pm k(\rho))T_{3}(s_{2}-u\pm k(\rho))\big{)},\] and the terms of the form: \[A_{i,\pm}(\rho)B_{j,\pm}(\rho)(s_{3}-u)\big{(}T_{2}(s_{2}-u\pm k(\rho))T_{3}(s_{3}-u\pm k(\rho))-T_{2}(s_{3}-u\pm k(\rho))T_{3}(s_{2}-u\pm k(\rho))\big{)}\] with \(T_{2}\in\{\delta,H,\mathrm{PV},\mathrm{Ci},r_{\chi}\}\) and \(T_{3}\in\{H,\mathrm{Ci},r_{\sigma}\}\). We emphasize that, in the last two cases, when \(T_{2},T_{3}\in\{r_{\chi},r_{\sigma}\}\), \(A_{i,\pm}(\rho)B_{j,\pm}(\rho)\) or \(A_{i,\pm}(\rho)B_{j,\pm}(\rho)(s_{k}-u)\) should be replaced by \(1\).
Before applying Lemma 8.6, we now show that \(\overline{\chi(s)}\) is Hölder continuous. In fact, it follows from Corollary 4.12 and Lemma 8.2 that, for any \(s,s^{\prime}\in\mathbb{R}\) and \(\alpha\in(0,\min\{\lambda_{1},1\}]\), \[\frac{|\overline{\chi(s)}-\overline{\chi(s^{\prime})}|}{|s-s^{\prime}|^{\alpha}}\leq\int_{\mathcal{H}}\frac{|\chi(s)-\chi(s^{\prime})|}{|s-s^{\prime}|^{\alpha}}\,\mathrm{d}\nu_{(t,r)}\leq C\int_{\mathcal{H}}\big{(}1+\rho|\ln\rho|\big{)}\,\mathrm{d}\nu_{(t,r)}<\infty, \tag{8.15}\] which implies that \(\overline{\chi(s)}\) is Hölder continuous. Hence, using Lemma 8.6 and the fact that \(|s_{j}-u|\leq k(\rho)\) for \(j=2,3\), we obtain \[\Big{|}\int_{-\infty}^{\infty}\overline{\chi(s_{1})}(P_{2}\chi_{2}^{\tau}P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}P_{2}\sigma_{2}^{\tau})\psi(s_{1})\,\mathrm{d}s_{1}\mathbf{I}_{\{\rho\geq\rho^{*}\}}\Big{|}\] \[\leq C\max_{j,k,\pm}\Big{\{}|A_{j,\pm}k(\rho)|\big{(}|B_{k,\pm}|+\|r_{\sigma}(\rho,\cdot)\|_{C^{\alpha_{1}}(\overline{B_{L_{0}}})}\big{)},\,\|r_{\chi}(\rho,\cdot)\|_{C^{\alpha_{1}}(\overline{B_{L_{0}}})}\big{(}|B_{j,\pm}k(\rho)|+\|r_{\sigma}(\rho,\cdot)\|_{C^{\alpha_{1}}(\overline{B_{L_{0}}})}\big{)}\Big{\}}\] \[\leq C\big{(}1+\rho^{2+\theta_{2}}\big{)}=C\big{(}\rho^{\beta(\gamma_{2}+1)}+1\big{)}\qquad\text{for $\rho\geq\rho^{*}$},\] with \(L_{0}:=|\operatorname{supp}\psi|+2\) and \(\beta=\frac{\theta_{2}+2}{\gamma_{2}+1}\in(0,1)\). Thus, using Lemmas 8.2 and 8.7, and Lebesgue's dominated convergence theorem, we obtain \[\lim_{\tau\to 0}\int_{-\infty}^{\infty}\overline{\chi(s_{1})}\langle\,\nu_{(t,r)},\,(P_{2}\chi_{2}^{\tau}\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,P_{2}\sigma_{2}^{\tau})\mathbf{I}_{\{\rho\geq\rho^{*}\}}\rangle\,\psi(s_{1})\,\mathrm{d}s_{1}=Y(\phi_{2},\phi_{3})\sum_{\pm}(K^{\pm})^{2}\langle\nu_{(t,r)},\,Z(\rho)\,\overline{\chi(u\pm k(\rho))}\,\psi(u\pm k(\rho))\mathbf{I}_{\{\rho\geq\rho^{*}\}}\rangle,\] which, with (8.13), yields (8.12). This completes the proof.
**Lemma 8.9**.: _For any test function \(\psi\in\mathcal{D}(\mathbb{R})\),_ \[\lim_{\tau\to 0}\int_{\mathbb{R}}\left(\overline{P_{3}\chi_{3}^{\tau}}\,\,\overline{P_{2}\chi_{2}^{\tau}\,\sigma(s_{1})-\chi(s_{1})\,P_{2}\sigma_{2}^{\tau}}-\overline{P_{2}\chi_{2}^{\tau}}\,\,\overline{P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})-\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}}\right)\psi(s_{1})\,\mathrm{d}s_{1}=0. \tag{8.16}\]
**Proof.** Fix \((\rho,u)\in\mathbb{H}\).
It follows from Lemma 8.7 that \[\big{(}\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})\big{)}(\rho,u,s_{1})\longrightarrow X(\rho,u,s_{1})\qquad\text{uniformly in $s_{1}$ as $\tau\to 0$.}\] It is clear that \[\int_{\mathbb{R}}\overline{P_{2}\chi_{2}^{\tau}}\,\big{(}\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})\big{)}\,\psi(s_{1})\,\mathrm{d}s_{1}\] \[=\int_{\mathcal{H}}\int_{\mathbb{R}}(P_{2}\chi_{2}^{\tau})(\tilde{\rho},\tilde{u},s_{1})\,\big{(}\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})\big{)}(\rho,u,s_{1})\,\psi(s_{1})\,\mathrm{d}s_{1}\mathrm{d}\nu_{(t,r)}(\tilde{\rho},\tilde{u}).\] It follows from Lemma 4.14 that \(P_{j}\chi_{j}^{\tau},j=2,3\), are measures in \(s_{1}\) such that \(\|P_{j}\chi_{j}^{\tau}(\tilde{\rho},\tilde{u},\cdot)\|_{\mathfrak{R},\alpha}\leq C_{\alpha}\tilde{\rho}\) for large \(\tilde{\rho}\), where \(\|\mu\|_{\mathfrak{R},\alpha}=\sup\big{\{}|\langle\mu,f\rangle|\,:\,f\in C_{0}^{\alpha}(\mathbb{R})\ \text{and}\ \|f\|_{C^{\alpha}(\mathbb{R})}\leq 1\big{\}}\) with \(\alpha\in(0,1)\). Then we use Lemma 8.2 and Lebesgue's dominated convergence theorem to pass the limit inside the Young measure to obtain \[\int_{\mathbb{R}}\overline{P_{2}\chi_{2}^{\tau}}\,\big{(}\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})\big{)}\,\psi(s_{1})\,\mathrm{d}s_{1}\longrightarrow\int_{\mathcal{H}}\int_{\mathbb{R}}(P_{1}\chi)(\tilde{\rho},\tilde{u},s_{1})X(\rho,u,s_{1})\psi(s_{1})\,\mathrm{d}s_{1}\mathrm{d}\nu_{(t,r)}(\tilde{\rho},\tilde{u})\] pointwise in \((\rho,u)\) as \(\tau\to 0\). Now we are going to prove \[\Big{|}\int_{\mathbb{R}}\overline{P_{2}\chi_{2}^{\tau}}\,\big{(}\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})\big{)}\,\psi(s_{1})\,\mathrm{d}s_{1}\Big{|}\leq C\big{(}\rho^{\beta(\gamma_{2}+1)}+1\big{)} \tag{8.17}\] for some constants \(C>0\) and \(\beta\in(0,1]\), which are both independent of \(\tau\). Once (8.17) is proved, it follows from Lebesgue's dominated convergence theorem that \[\lim_{\tau\to 0}\int_{\mathbb{R}}\overline{P_{2}\chi_{2}^{\tau}}(s_{1})\,\overline{\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})}\,\psi(s_{1})\,\mathrm{d}s_{1}\] \[=\lim_{\tau\to 0}\int_{\mathcal{H}}\int_{\mathbb{R}}\overline{P_{2}\chi_{2}^{\tau}}(s_{1})\,\big{(}\chi(s_{1})\,P_{3}\sigma_{3}^{\tau}-P_{3}\chi_{3}^{\tau}\,\sigma(s_{1})\big{)}(\rho,u,s_{1})\,\psi(s_{1})\,\mathrm{d}s_{1}\mathrm{d}\nu_{(t,r)}(\rho,u)\] \[=\int_{\mathcal{H}}\int_{\mathcal{H}}\int_{\mathbb{R}}(P_{1}\chi)(\tilde{\rho},\tilde{u},s_{1})X(\rho,u,s_{1})\psi(s_{1})\,\mathrm{d}s_{1}\mathrm{d}\nu_{(t,r)}(\tilde{\rho},\tilde{u})\,\mathrm{d}\nu_{(t,r)}(\rho,u).\] Since \(X(\rho,u,s_{1})\) is independent of the choice of the mollifying functions \(\phi_{2}^{\tau}\) and \(\phi_{3}^{\tau}\) from Lemma 8.7, we may interchange the roles of \(s_{2}\) and \(s_{3}\) to conclude the proof of (8.16). To see the validity of (8.17), we begin by observing that, for \(j=2,3\), \(\overline{P_{j}\chi_{j}^{\tau}}(s_{1})\) and \(\psi(s_{1})\) are independent of \((\rho,u)\). Then it suffices to estimate the function: \[\chi(s_{1})\,P_{j}\sigma_{j}^{\tau}-P_{j}\chi_{j}^{\tau}\,\sigma(s_{1})=\chi(s_{1})\,P_{j}(\sigma_{j}^{\tau}-u\chi_{j}^{\tau})-(\sigma(s_{1})-u\chi(s_{1}))\,P_{j}\chi_{j}^{\tau}.
\tag{8.18}\] It follows from Lemmas 4.2-4.3 and 4.14 (also see [10, Proof of Lemma 4.2]) that the RHS of (8.18) can be decomposed as \[\text{RHS of (8.18)}=E^{1,\tau}+E^{2,\tau}+E^{3,\tau}+E^{4,\tau},\] where \(E^{1,\tau}:=E_{1}^{1,\tau}+E_{2}^{1,\tau}\) and \(E^{2,\tau}:=E_{1}^{2,\tau}+E_{2}^{2,\tau}\), with the terms \(E_{1}^{j,\tau}\) involving the delta measure and the terms \(E_{2}^{j,\tau}\) involving the principal value distribution for \(j=1,2\), \[\begin{split}E^{3,\tau}=&\sum_{\pm}\big{(}B_{1,\pm}(\rho)a_{1}(\rho)-A_{1,\pm}(\rho)b_{1}(\rho)\big{)}G_{\lambda_{1}}(s_{1})\big{(}(s_{j}-u)\delta(s_{j}-u\pm k(\rho))\big{)}*\phi_{j}^{\tau}\\ &+\sum_{\pm}\big{(}B_{1,\pm}(\rho)a_{2}(\rho)-A_{1,\pm}(\rho)b_{2}(\rho)\big{)}G_{\lambda_{1}+1}(s_{1})\big{(}(s_{j}-u)\delta(s_{j}-u\pm k(\rho))\big{)}*\phi_{j}^{\tau}\\ &+\sum_{\pm}B_{1,\pm}(\rho)g_{1}(s_{1})\big{(}(s_{j}-u)\delta(s_{j}-u\pm k(\rho))\big{)}*\phi_{j}^{\tau}\\ &+\sum_{\pm}\big{(}B_{3,\pm}(\rho)a_{1}(\rho)-A_{3,\pm}(\rho)b_{1}(\rho)\big{)}G_{\lambda_{1}}(s_{1})\big{(}(s_{j}-u)PV(s_{j}-u\pm k(\rho))\big{)}*\phi_{j}^{\tau}\\ &+\sum_{\pm}\big{(}B_{3,\pm}(\rho)a_{2}(\rho)-A_{3,\pm}(\rho)b_{2}(\rho)\big{)}G_{\lambda_{1}+1}(s_{1})\big{(}(s_{j}-u)PV(s_{j}-u\pm k(\rho))\big{)}*\phi_{j}^{\tau}\\ &+\sum_{\pm}B_{3,\pm}(\rho)g_{1}(s_{1})\big{(}(s_{j}-u)PV(s_{j}-u\pm k(\rho))\big{)}*\phi_{j}^{\tau}\\ :=&\,E_{1}^{3,\tau}+E_{2}^{3,\tau}+E_{3}^{3,\tau}+E_{4}^{3,\tau}+E_{5}^{3,\tau}+E_{6}^{3,\tau},\end{split}\] and \(E^{4,\tau}\) is the remainder term which consists of the mollification of continuous functions, where we have used the notation: \(G_{\lambda_{1}}(s_{1})=[k(\rho)^{2}-(u-s_{1})^{2}]^{\lambda_{1}}\), and \(g_{i}(s_{1})=g_{i}(\rho,u-s_{1})\) for \(i=1,2\).
We first demonstrate the uniform bound on the terms involving the delta measures. By direct calculation, we have \[\begin{split}&\delta(s_{j}-u+k(\rho))*\phi_{j}^{\tau}=\frac{1}{\tau}\phi_{j}(\frac{s_{1}-u+k(\rho)}{\tau}),\\ &\big{(}(s_{j}-s_{1})\delta(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}=-\frac{s_{1}-u+k(\rho)}{\tau}\phi_{j}(\frac{s_{1}-u+k(\rho)}{\tau}),\\ &\big{(}(s_{j}-u)\delta(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}\\ &=\big{(}(s_{j}-s_{1})\delta(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}+\big{(}(s_{1}-u)\delta(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}\\ &=(s_{1}-u)\tau^{-1}\phi_{j}(\frac{s_{1}-u+k(\rho)}{\tau})-\frac{s_{1}-u+k(\rho)}{\tau}\phi_{j}(\frac{s_{1}-u+k(\rho)}{\tau}).\end{split} \tag{8.19}\] Noting \[\begin{split}&G_{\lambda_{1}+1}(s_{1})=G_{\lambda_{1}}(s_{1})\,(k(\rho)-u+s_{1})\,(k(\rho)+u-s_{1}),\\ &|g_{i}(s_{1})|\leq\|\partial_{u}g_{i}(\rho,\cdot)\|_{L^{\infty}(\mathbb{R})}|s_{1}-u\pm k(\rho)|\qquad\text{ for }i=1,2,\end{split} \tag{8.20}\] using (8.19)-(8.20) and Lemmas 4.2-4.3 and 4.11-4.14, we obtain \[E_{1}^{3,\tau}=0,\qquad|E_{1}^{1,\tau}|+|E_{1}^{2,\tau}|+|E_{2}^{3,\tau}|+|E_{3}^{3,\tau}|\leq C_{\phi}(1+\rho^{\frac{3}{2}+\frac{\theta_{2}}{2}}). \tag{8.21}\] For the terms involving the principal value distribution, a direct calculation shows that \[|PV*\phi_{j}^{\tau}(x)|=\Big{|}\int_{0}^{\infty}\frac{\phi_{j}^{\tau}(x-y)-\phi_{j}^{\tau}(x+y)}{y}\,\mathrm{d}y\Big{|}=\frac{1}{\tau}\Big{|}\int_{0}^{\infty}\frac{1}{y}\big{(}\phi_{j}(\frac{x+y}{\tau})-\phi_{j}(\frac{x-y}{\tau})\big{)}\,\mathrm{d}y\Big{|}.\] If \(|x|\leq 2\tau\), we have \[|PV*\phi_{j}^{\tau}(x)|\leq\frac{1}{\tau}\int_{0}^{4\tau}\frac{1}{|y|}\big{|}\phi_{j}(\frac{x-y}{\tau})-\phi_{j}(\frac{y+x}{\tau})\big{|}\,\mathrm{d}y\leq C\frac{1}{\tau}\|\phi_{j}^{\prime}\|_{L^{\infty}}\leq\frac{C_{\phi}}{|x|}. \tag{8.22}\] On the other hand, if \(|x|\geq 2\tau\), we obtain \[|PV*\phi_{j}^{\tau}(x)|\leq\frac{2}{\tau}\int_{|x|-\tau}^{|x|+\tau}\frac{\|\phi_{j}\|_{L^{\infty}}}{|x|-\tau}\,\mathrm{d}y\leq\frac{C_{\phi}}{|x|}.
\tag{8.23}\] Notice that \[\begin{split}&\big{(}(s_{j}-s_{1})PV(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}\\ &\quad=\big{(}(s_{j}-u+k(\rho))PV(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}+\big{(}(u-k(\rho)-s_{1})PV(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau},\\ &\big{(}(s_{j}-u)PV(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}\\ &\quad=\big{(}(s_{j}-u+k(\rho))PV(s_{j}-u+k(\rho))\big{)}*\phi_{j}^{\tau}-k(\rho)PV(s_{j}-u+k(\rho))*\phi_{j}^{\tau},\end{split} \tag{8.24}\] which, with (8.20), (8.22)-(8.24), and Lemmas 4.2-4.3 and 4.11-4.14, yields \[E_{4}^{3,\tau}=0,\qquad|E_{2}^{1,\tau}|+|E_{2}^{2,\tau}|+|E_{5}^{3,\tau}|+|E_{6}^{3,\tau}|\leq C_{\phi}\big{(}1+\rho^{\frac{3}{2}+\frac{\theta_{2}}{2}}\big{)}. \tag{8.25}\] Combining (8.21) with (8.25) yields that there exists \(\beta_{1}=\frac{3+\theta_{2}}{2(\gamma_{2}+1)}\in(0,1)\) such that \[|E^{1,\tau}+E^{2,\tau}+E^{3,\tau}|\leq C_{\phi}(1+\rho^{\frac{3}{2}+\frac{\theta_{2}}{2}})\leq C_{\phi}(1+\rho^{\beta_{1}(\gamma_{2}+1)}). \tag{8.26}\] For \(E^{4,\tau}\), consisting of the mollification of continuous functions, direct calculations show that \[|E^{4,\tau}|\leq C_{\phi}\,\big{(}1+\rho^{2+\theta_{2}}|\ln\rho|\big{)}\leq C_{\phi}\,\big{(}1+\rho^{\beta_{2}(\gamma_{2}+1)}\big{)}, \tag{8.27}\] with \(\beta_{2}=\frac{4+3\theta_{2}}{2(\gamma_{2}+1)}\in(0,1)\). Combining (8.27) with (8.26), we conclude the proof of (8.17).
## 9. Existence of Global Finite-Energy Solutions of CEPEs
In this section, we complete the proof of Theorem 2.3. Since the proof is similar to [9], we sketch the proof for the self-containedness of this paper. We divide the proof into four steps.
1. Since \((\rho^{\varepsilon},m^{\varepsilon})(t,r)\) obtained in Theorem 2.1 satisfies all the assumptions of Theorem 2.2, it follows from Theorem 2.2 that there exists a vector function \((\rho,m)(t,r)\) such that, up to a subsequence as \(\varepsilon\to 0\), \[(\rho^{\varepsilon},m^{\varepsilon})\longrightarrow(\rho,m)\quad\text{a.e. }(t,r)\in\mathbb{R}_{+}^{2}, \tag{9.1}\] \[(\rho^{\varepsilon},m^{\varepsilon})\longrightarrow(\rho,m)\quad\text{in }L^{p_{1}}_{\text{loc}}(\mathbb{R}_{+}^{2})\times L^{p_{2}}_{\text{loc}}(\mathbb{R}_{+}^{2})\text{ for }p_{1}\in[1,\gamma_{2}+1)\text{ and }p_{2}\in[1,\frac{3(\gamma_{2}+1)}{\gamma_{2}+3}), \tag{9.2}\] where \(L^{p_{j}}_{\text{loc}}(\mathbb{R}_{+}^{2})\) represents \(L^{p_{j}}([0,T]\times K)\) for any \(T>0\) and \(K\Subset(0,\infty)\), \(j=1,2\). Noting (9.1) and \(\rho^{\varepsilon}\geq 0\) _a.e._ from Lemma 6.1, it is direct to show that \(\rho(t,r)\geq 0\) _a.e._ on \(\mathbb{R}_{+}^{2}\). Moreover, it follows from (2.22) that \(\sqrt{\rho^{\varepsilon}}u^{\varepsilon}r=\frac{m^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}r\) is uniformly bounded in \(L^{\infty}(0,T;L^{2}(\mathbb{R}))\). Then using Fatou's lemma yields \[\int_{0}^{T}\int_{0}^{\infty}\frac{|m(t,r)|^{2}}{\rho(t,r)}\,r^{2}\mathrm{d}r\mathrm{d}t\leq\liminf_{\varepsilon\to 0}\int_{0}^{T}\int_{0}^{\infty}\frac{|m^{\varepsilon}(t,r)|^{2}}{\rho^{\varepsilon}(t,r)}\,r^{2}\mathrm{d}r\mathrm{d}t<\infty.\] Thus, \(m(t,r)=0\) _a.e._ on \(\{(t,r)\,:\,\rho(t,r)=0\}\), and we can define the limit velocity \(u(t,r)\) as \[u(t,r)=\frac{m(t,r)}{\rho(t,r)}\qquad\text{a.e. on }\{(t,r)\,:\,\rho(t,r)\neq 0\},\] \[u(t,r)=0\qquad\text{a.e. on }\{(t,r)\,:\,\rho(t,r)=0\text{ or }r=0\}.\] Therefore, \(m(t,r)=\rho(t,r)u(t,r)\) _a.e._ on \(\mathbb{R}_{+}^{2}\).
Also, we can define \(\big{(}\frac{m}{\sqrt{\rho}}\big{)}(t,r):=\sqrt{\rho(t,r)}u(t,r)\), which is zero _a.e._ on \(\{(t,r)\,:\,\rho(t,r)=0\}\). Moreover, using (2.24) and Fatou's lemma, we obtain \[\int_{0}^{T}\int_{d}^{D}\rho|u|^{3}\,\mathrm{d}r\mathrm{d}t\leq\liminf_{\varepsilon\to 0}\int_{0}^{T}\int_{d}^{D}\rho^{\varepsilon}|u^{\varepsilon}|^{3}\,\mathrm{d}r\mathrm{d}t\leq C(d,D,M,E_{0},T)<\infty\] for any \([d,D]\Subset(0,\infty)\). By similar calculations as in [9, §5], we obtain that, as \(\varepsilon\to 0\), \[\frac{m^{\varepsilon}}{\sqrt{\rho^{\varepsilon}}}\equiv\sqrt{\rho^{\varepsilon}}u^{\varepsilon}\longrightarrow\frac{m}{\sqrt{\rho}}\equiv\sqrt{\rho}u\qquad\text{ strongly in }L^{2}\left([0,T]\times[d,D],r^{2}\,\mathrm{d}r\mathrm{d}t\right) \tag{9.3}\] for any \(T>0\) and \([d,D]\Subset(0,\infty)\). From (9.2)-(9.3), we also obtain the convergence of the mechanical energy as \(\varepsilon\to 0\): \[\eta^{*}(\rho^{\varepsilon},m^{\varepsilon})\longrightarrow\eta^{*}(\rho,m)\qquad\text{ in }L^{1}_{\mathrm{loc}}(\mathbb{R}^{2}_{+}). \tag{9.4}\] Using (9.2), (9.4), and Fatou's lemma, and taking limit \(\varepsilon\to 0\) in (2.21)-(2.22), we have \[\int_{t_{1}}^{t_{2}}\int_{0}^{\infty}\big{(}\eta^{*}(\rho,m)(t,r)+\rho^{\gamma_{2}}(t,r)+\rho(t,r)\big{)}\,r^{2}\mathrm{d}r\mathrm{d}t\leq C(M,E_{0})(t_{2}-t_{1}), \tag{9.5}\] which implies \[\sup_{0\leq t\leq T}\int_{0}^{\infty}\big{(}\eta^{*}(\rho,m)(t,r)+\rho^{\gamma_{2}}(t,r)+\rho(t,r)\big{)}\,r^{2}\mathrm{d}r\leq C(M,E_{0}). \tag{9.6}\] This indicates that \(\rho(t,r)\in L^{\infty}([0,T];L^{\gamma_{2}}(\mathbb{R};r^{2}\mathrm{d}r))\), which implies that \(\rho(t,\mathbf{x})\) is a function in \(L^{\infty}([0,T];L^{\gamma_{2}}(\mathbb{R}^{3}))\) with \(\gamma_{2}>1\) (rather than a measure in \((t,\mathbf{x})\)). Therefore, no delta measure (_i.e._, concentration) is formed in the density in the time interval \([0,T]\), especially at the origin: \(r=0\).
2. For the convergence of the gravitational potential functions \(\Phi^{\varepsilon}(t,r)\), by similar calculations as in [9, §5], we obtain that, as \(\varepsilon\to 0\) (up to a subsequence), \[\Phi^{\varepsilon}_{r}(t,r)r^{2}=\int_{0}^{r}\rho^{\varepsilon}(t,y)\,y^{2}\mathrm{d}y\longrightarrow\int_{0}^{r}\rho(t,y)\,y^{2}\mathrm{d}y\qquad\text{ a.e. }(t,r)\in\mathbb{R}^{2}_{+}. \tag{9.7}\] Thus, using (6.3), (9.1), (9.7), Fatou's lemma, and similar arguments as in (9.5)-(9.6), we have \[\int_{0}^{\infty}\Big{(}\int_{0}^{r}\rho(t,y)\,y^{2}\mathrm{d}y\Big{)}\rho(t,r)\,r\mathrm{d}r\leq C(M,E_{0})\qquad\text{ for a.e. }t\geq 0.\] On the other hand, it follows from (6.4) that there exists a function \(\Phi(t,\mathbf{x})=\Phi(t,r)\) such that, as \(\varepsilon\to 0\) (up to a subsequence), \[\Phi^{\varepsilon}\rightharpoonup\Phi\qquad\text{ weak-star in }L^{\infty}(0,T;H^{1}_{\mathrm{loc}}(\mathbb{R}^{3}))\text{ and weakly in }L^{2}(0,T;H^{1}_{\mathrm{loc}}(\mathbb{R}^{3})),\] \[\|\Phi(t)\|_{L^{6}(\mathbb{R}^{3})}+\|\nabla\Phi(t)\|_{L^{2}(\mathbb{R}^{3})}\leq C(M,E_{0})\qquad\text{ a.e. }t\geq 0.\] Thus, by (9.7) and the uniqueness of the limit, we obtain that \(\Phi_{r}(t,r)r^{2}=\int_{0}^{r}\rho(t,z)z^{2}\,\mathrm{d}z\) _a.e._ \((t,r)\in\mathbb{R}^{2}_{+}\). By similar arguments as in [9, §5], we also have the strong convergence of the potential functions: \[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{0}^{\infty}\big{|}(\Phi^{\varepsilon}_{r}-\Phi_{r})(t,r)\big{|}^{2}r^{2}\,\mathrm{d}r\mathrm{d}t=0\qquad\text{ for }\gamma_{2}>\frac{6}{5}. \tag{9.8}\] 3.
Now we define \[(\rho,\mathcal{M},\Phi)(t,\mathbf{x}):=(\rho(t,r),m(t,r)\frac{\mathbf{x}}{r},\Phi(t,r)).\] Then it follows from (2.20), (9.8), and Fatou's lemma that \[\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}}{\sqrt{\rho}}\Big{|}^{2}+\rho e(\rho)-\frac{1}{2}|\nabla\Phi|^{2}\Big{)}(t,\mathbf{x})\,\mathrm{d}\mathbf{x}\mathrm{d}t\leq(t_{2}-t_{1})\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}_{0}}{\sqrt{\rho_{0}}}\Big{|}^{2}+\rho_{0}e(\rho_{0})-\frac{1}{2}|\nabla\Phi_{0}|^{2}\Big{)}(\mathbf{x})\,\mathrm{d}\mathbf{x},\] which implies that, for _a.e._ \(t\geq 0\), \[\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}}{\sqrt{\rho}}\Big{|}^{2}+\rho e(\rho)-\frac{1}{2}|\nabla\Phi|^{2}\Big{)}(t,\mathbf{x})\,\mathrm{d}\mathbf{x}\leq\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}_{0}}{\sqrt{\rho_{0}}}\Big{|}^{2}+\rho_{0}e(\rho_{0})-\frac{1}{2}|\nabla\Phi_{0}|^{2}\Big{)}(\mathbf{x})\,\mathrm{d}\mathbf{x}. \tag{9.9}\] On the other hand, using (2.22), (9.6), and (9.8), we obtain \[\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{2}\Big{|}\frac{\mathcal{M}}{\sqrt{\rho}}\Big{|}^{2}+\rho e(\rho)+\frac{1}{2}|\nabla\Phi|^{2}\Big{)}(t,\mathbf{x})\,\mathrm{d}\mathbf{x}\leq C(M,E_{0}). \tag{9.10}\] Combining (9.9) with (9.10), we complete the proof of (2.27).
4. Using (6.7), (6.9)-(6.10), and similar arguments as in [9, §5], we conclude the proof of (2.28)-(2.30) which, along with Steps 1-3, shows that \((\rho,\mathcal{M},\Phi)(t,\mathbf{x})\) is indeed a global weak solution of problem (1.1) and (1.12)-(1.13) in the sense of Definition 2.2. This completes the proof.
## Appendix A Some Inequalities
### A sharp Sobolev inequality
In this subsection, we recall a sharp Sobolev inequality, which is used in §5.1. The proof can be found in [43, §8.3].
**Lemma A.1** (Sobolev inequality).: _Let \(n\geq 3\) and \(\nabla f\in L^{2}(\mathbb{R}^{n})\) with \(\lim_{|\mathbf{x}|\to\infty}f(\mathbf{x})=0\). Then_ \[\|f\|_{L^{\frac{2n}{n-2}}}^{2}\leq A_{n}\|\nabla f\|_{L^{2}}^{2},\] _where \(A_{n}=\frac{4}{n(n-2)}\omega_{n+1}^{-\frac{2}{n}}\) is the best constant and \(\omega_{k}=\frac{2\pi^{\frac{k}{2}}}{\Gamma(\frac{k}{2})}\) is the surface area of the unit sphere in \(\mathbb{R}^{k}\)._
### Some variants of the Gronwall inequality
In this subsection, we introduce some variants of the Gronwall inequality, which play an essential role in identifying the singularities of the entropy kernel and entropy flux kernel; see also [62].
**Lemma A.2** (A variant of Gronwall inequality [57, Theorem 1.2.4]).: _Let \(x(t),y(t),z(t)\), and \(w(t)\) be non-negative continuous functions on \(J=[t_{0},t_{1}]\) with \(t_{0}\geq 0\). If_ \[x(t)\leq y(t)+z(t)\int_{t_{0}}^{t}w(s)x(s)\,\mathrm{d}s\qquad\text{for $t\in J$},\] _then_ \[x(t)\leq y(t)+z(t)\int_{t_{0}}^{t}w(s)y(s)\exp\Big{(}\int_{s}^{t}w(r)z(r)\,\mathrm{d}r\Big{)}\,\mathrm{d}s\qquad\text{for $t\in J$}.\]
**Lemma A.3**.: _Let \(\theta\geq 0\), and let \(d(s)\) be defined in (3.10). Assume that \(x(t)\geq 0\) is measurable and locally integrable, and satisfies_ \[x(t)\leq Ct^{\theta}+\frac{1}{t}\int_{0}^{t}d(s)x(s)\,\mathrm{d}s\qquad\text{for $t\geq\rho^{*}$}\] (A.1) _for some constant \(C>0\).
Then there exists a possibly larger constant \(\tilde{C}>0\) independent of \(t\) such that, for \(t\geq\rho^{*}\),_ \[x(t)\leq\begin{cases}\tilde{C}t^{\theta_{2}}&\text{if $0\leq\theta<\theta_{2}$},\\ \tilde{C}t^{\theta_{2}}\ln t&\text{if $\theta=\theta_{2}$},\\ \tilde{C}t^{\theta}&\text{if $\theta>\theta_{2}$}.\end{cases}\]
**Proof.** Since \(x(t)\) is positive and locally integrable, using Lemma 3.2, there exists a constant \(C>0\) that may depend on \(\rho^{*}\), but is independent of \(t\), such that \[\frac{1}{t}\int_{0}^{\rho^{*}}d(s)x(s)\,\mathrm{d}s\leq Ct^{-1}\qquad\text{for $t\geq\rho^{*}$},\] which, with (A.1), yields that \(x(t)\leq Ct^{\theta}+\frac{1}{t}\int_{\rho^{*}}^{t}d(s)x(s)\,\mathrm{d}s\) for \(t\geq\rho^{*}\). Applying Lemma A.2, we obtain \[x(t)\leq Ct^{\theta}+\frac{1}{t}\int_{\rho^{*}}^{t}Cs^{\theta}d(s)\exp\Big{(}\int_{s}^{t}\frac{d(r)}{r}\,\mathrm{d}r\Big{)}\,\mathrm{d}s\qquad\text{for $t\geq\rho^{*}$}.\] (A.2) It is clear that \[\Big{|}\int_{s}^{t}\frac{d(r)}{r}\,\mathrm{d}r\Big{|}\leq\int_{s}^{t}\frac{1+\theta_{2}}{r}\,\mathrm{d}r+\int_{s}^{t}\frac{|d(r)-(1+\theta_{2})|}{r}\,\mathrm{d}r.\] (A.3) It follows from Lemma 3.2 that \(\frac{|d(r)-(1+\theta_{2})|}{r}\leq Cr^{-\epsilon-1}\) for \(r\geq\rho^{*}\) which, with (A.3), yields \[\exp\Big{(}\int_{s}^{t}\frac{d(r)}{r}\,\mathrm{d}r\Big{)}\leq C\Big{(}\frac{t}{s}\Big{)}^{1+\theta_{2}}\qquad\text{for $t\geq s\geq\rho^{*}$}.\] Combining (A.2) with \(|d(s)|\leq 3\), we obtain that, for \(t\geq\rho^{*}\), \[x(t)\leq Ct^{\theta}+\frac{1}{t}\int_{\rho^{*}}^{t}Cs^{\theta}\Big{(}\frac{t}{s}\Big{)}^{1+\theta_{2}}\,\mathrm{d}s\leq Ct^{\theta}+Ct^{\theta_{2}}\int_{\rho^{*}}^{t}s^{-1-\theta_{2}+\theta}\,\mathrm{d}s.\] (A.4) _Case 1._ If \(0\leq\theta<\theta_{2}\), it follows from (A.4) that \[x(t)\leq Ct^{\theta}+Ct^{\theta_{2}}\Big{(}\int_{\rho^{*}}^{\infty}s^{-1-\theta_{2}+\theta}\,\mathrm{d}s\Big{)}\leq\tilde{C}t^{\theta_{2}}\qquad\text{for $t\geq\rho^{*}$}.\] _Case 2._ If \(\theta=\theta_{2}\), it follows from (A.4) that \[x(t)\leq Ct^{\theta_{2}}+Ct^{\theta_{2}}\Big{(}\int_{\rho^{*}}^{t}s^{-1}\,\mathrm{d}s\Big{)}\leq\tilde{C}t^{\theta_{2}}\ln t\qquad\text{for $t\geq\rho^{*}$}.\] _Case 3._ If \(\theta>\theta_{2}\), then it follows from (A.4) that \[x(t)\leq Ct^{\theta}+Ct^{\theta_{2}}\Big{(}\int_{\rho^{*}}^{t}s^{-1-\theta_{2}+\theta}\,\mathrm{d}s\Big{)}\leq\tilde{C}t^{\theta}\qquad\text{for $t\geq\rho^{*}$}.\] This completes the proof.
**Corollary A.4**.: _If \(x(t)\) satisfies_ \[x(t)\leq Ct^{\theta}\ln t+\frac{1}{t}\int_{0}^{t}d(s)x(s)\,\mathrm{d}s\qquad\text{for $t\geq\rho^{*}$},\] _with \(\theta>\theta_{2}\), then \(\,x(t)\leq Ct^{\theta}\ln t\,\) for \(t\geq\rho^{*}\)._
**Acknowledgments.** Gui-Qiang G. Chen's research is partially supported by the UK Engineering and Physical Sciences Research Council Awards EP/L015811/1, EP/V008854/1, and EP/V051121/1. Feimin Huang's research is partially supported by the National Natural Science Foundation of China No. 12288201 and the National Key R&D Program of China No. 2021YFA1000800. Tianhong Li's research is partially supported by the National Natural Science Foundation of China No. 10931007. Yong Wang's research is partially supported by the National Natural Science Foundation of China No. 12022114 and No. 12288201, and CAS Project for Young Scientists in Basic Research, Grant No. YSBR-031.
**Statements and Declarations.** The authors have no competing interests to declare that are relevant to the content of this article.
In addition, our manuscript has no associated data.
2304.05040
Unsupervised out-of-distribution detection for safer robotically guided retinal microsurgery
Purpose: A fundamental problem in designing safe machine learning systems is identifying when samples presented to a deployed model differ from those observed at training time. Detecting so-called out-of-distribution (OoD) samples is crucial in safety-critical applications such as robotically guided retinal microsurgery, where distances between the instrument and the retina are derived from sequences of 1D images that are acquired by an instrument-integrated optical coherence tomography (iiOCT) probe. Methods: This work investigates the feasibility of using an OoD detector to identify when images from the iiOCT probe are inappropriate for subsequent machine learning-based distance estimation. We show how a simple OoD detector based on the Mahalanobis distance can successfully reject corrupted samples coming from real-world ex vivo porcine eyes. Results: Our results demonstrate that the proposed approach can successfully detect OoD samples and help maintain the performance of the downstream task within reasonable levels. MahaAD outperformed a supervised approach trained on the same kind of corruptions and achieved the best performance in detecting OoD cases from a collection of iiOCT samples with real-world corruptions. Conclusion: The results indicate that detecting corrupted iiOCT data through OoD detection is feasible and does not need prior knowledge of possible corruptions. Consequently, MahaAD could aid in ensuring patient safety during robotically guided microsurgery by preventing deployed prediction models from estimating distances that put the patient at risk.
Alain Jungo, Lars Doorenbos, Tommaso Da Col, Maarten Beelen, Martin Zinkernagel, Pablo Márquez-Neila, Raphael Sznitman
2023-04-11T07:54:11Z
http://arxiv.org/abs/2304.05040v2
# Unsupervised out-of-distribution detection for safer robotically guided retinal microsurgery
###### Abstract
**Purpose:** A fundamental problem in designing safe machine learning systems is identifying when samples presented to a deployed model differ from those observed at training time. Detecting so-called out-of-distribution (OoD) samples is crucial in safety-critical applications such as robotically guided retinal microsurgery, where distances between the instrument and the retina are derived from sequences of 1D images that are acquired by an instrument-integrated optical coherence tomography (iiOCT) probe.
**Methods:** This work investigates the feasibility of using an OoD detector to identify when images from the iiOCT probe are inappropriate for subsequent machine learning-based distance estimation. We show how a simple OoD detector based on the Mahalanobis distance can successfully reject corrupted samples coming from real-world ex vivo porcine eyes.
**Results:** Our results demonstrate that the proposed approach can successfully detect OoD samples and help maintain the performance of the downstream task within reasonable levels. MahaAD outperformed a supervised approach trained on the same kind of corruptions and achieved the best performance in detecting OoD cases from a collection of iiOCT samples with real-world corruptions.
**Conclusion:** The results indicate that detecting corrupted iiOCT data through OoD detection is feasible and does not need prior knowledge of possible corruptions. Consequently, MahaAD could aid in ensuring patient safety during robotically guided microsurgery by preventing deployed prediction models from estimating distances that put the patient at risk.
Keywords: Out-of-distribution detection, Instrument-integrated OCT, Medical robotics, Retinal microsurgery
## 1 Introduction
Ensuring safe machine learning models is one of the key challenges for real-world medical systems. While the need for reliable models is highly important for image-based diagnostics with human-in-the-loop users, it is mission-critical when combined with medical robotic systems that tightly couple image-based sensing for augmented visualizations or automation. In this context, one of the fundamental problems in designing safe machine learning is identifying when samples presented to a deployed model differ from those observed at training time. This problem, commonly known as _out-of-distribution_ (OoD) detection [1], aims to alleviate the risks of evaluating OoD samples, as performance on these is known to be erratic, with models typically producing wrong answers with high confidence, thereby making them potentially dangerous. As machine learning has become increasingly prevalent in mission-critical systems, the problem of OoD detection has gathered significant attention both in general computer vision research [1] and in applied medical imaging systems [2; 3; 4; 5; 6; 7; 8]. OoD detection for robotically assisted surgery is particularly relevant as erratic machine learning predictions can have extremely serious consequences for the patient. For example, a misprediction in the distance estimation between an instrument and its targeted tissue could lead to serious inadvertent trauma.
Figure 1: Out-of-distribution detection of an inappropriate sequence of 1D images, or _M-scan_, acquired by an iiOCT probe. These should be rejected rather than processed by a subsequent machine learning-based distance estimation method.
Surprisingly, the topic of OoD detection for robotically assisted surgery has received little attention to date, despite its necessity and advantages. More broadly, the potential benefits of OoD detection in this context remain largely unexplored. This work aims to close this gap by analyzing the implications of integrating an OoD detector in a relevant robot-assisted surgery use case. Specifically, we consider the setting of retinal microsurgery, where a machine learning model is needed to infer the distance between a robotically manipulated instrument and the retina of the eye (see Fig. 1). As with most of the recently proposed robotic systems for retinal microsurgery [9; 10; 11; 12; 13], the goal is to assist an operating surgeon when manipulating micron-sized retinal structures using an _optical coherence tomography_ (OCT) imaging probe which yields 1D OCT measures over time, also known as _M-scans_. When using such a probe to help guide the robot to an intra-retinal injection site, automatic distance estimation between the instrument and the retinal surface is key. Yet, for any robot-tissue interacting system, a critical necessity is to ensure that the inferred distances derived from the imaging probe are safe for the robotic system to use. To this end, this work investigates the feasibility of using OoD detection to identify when images from an _intraoperative instrument-integrated OCT_ (iiOCT) probe are inappropriate for subsequent machine learning-based distance estimation (see Fig. 2). We show how data from this probe, in combination with the simple MahaAD OoD detector [14], can be rejected from further evaluation when the data is corrupted. We demonstrate the implications of our approach on the downstream task of distance estimation using simulated corruptions and report OoD detection performance on ex vivo porcine eyes with real-world corruptions.
Figure 2: Six M-scans acquired from a 1D OCT image probe from which distance estimates to the ILM of the retina (shown in green) need to be computed. Evaluating unexpected images (right column) can lead to incorrect estimates and endanger the intervention. Images were resized for improved visualization.
## 2 Methods
### Problem setting
Our retinal microsurgical setup is equipped with a robot that manipulates an injection needle with an iiOCT sensor. The sensor captures the retinal tissue in front of the instrument in an M-scan, which is a sequence of one-dimensional depth signals, denoted _A-scans_. Specifically, M-scans contain useful information about the layers of the retina and the sensor's distance to the different layers (see Fig. 2). However, extracting distance information from M-scans is challenging due to the large appearance variability and noise observed in these signals. To this end, machine learning and deep learning models are a natural approach to do so consistently and reliably. We thus train a deep learning model \(r:\mathbb{R}^{P}\rightarrow[0,1]^{P}\) to estimate the location of the internal limiting membrane (ILM) of the retina. Given an M-scan \(\mathbf{x}\), the retinal detection model \(r\) receives individual A-scans as one-dimensional vectors \(\mathbf{x}_{j}\) and produces one-dimensional heatmaps \(\hat{\mathbf{y}}_{j}=r(\mathbf{x}_{j})\) indicating the probability that the ILM is located at each position of the input A-scan. The location of maximum probability determines the ML-based distance as shown in Fig. 1.
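As a rough illustration of this read-out step, the conversion from a predicted heatmap to a metric distance can be sketched as follows (a minimal sketch only, assuming the distance is measured from the probe-side end of the A-scan at index 0; the function name and constant below are ours, not from our actual implementation):

```python
import numpy as np

# Axial resolution of the iiOCT probe (3.7 micrometers per pixel; see Sec. 3.1).
UM_PER_PIXEL = 3.7

def ilm_distance_um(heatmap: np.ndarray) -> float:
    """Convert a 1D ILM probability heatmap of length P into a distance in micrometers."""
    ilm_index = int(np.argmax(heatmap))  # location of maximum ILM probability
    return ilm_index * UM_PER_PIXEL      # assumed convention: probe sits at index 0
```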
Similar to [15], the model \(r\) is trained by minimizing the standard L2 loss over a training dataset \(\mathcal{T}=\{(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\}_{i=1}^{N}\) of M-scans and their corresponding ground-truth retinal maps. At inference time, the retinal detection model \(r\) is robust to the types of A-scan variability learned from the training set \(\mathcal{T}\), but not to others never seen in this dataset. This poses a risk to the safety of the surgical system in practice, as we cannot ensure that the range of potential perturbations that can occur during surgery is present in the training dataset. The range is simply too large to build a representative dataset that covers all cases.

### 2.2 Unsupervised OoD detection

We augment our system with an unsupervised out-of-distribution detection method to tackle the abovementioned limitation. Our approach is unsupervised in the sense that we do not have examples of OoD cases from which we can train a supervised model to perform OoD detection. Instead, we have only the dataset from which the distance estimation model \(r\) is trained. In this context, we leverage the MahaAD method proposed by Rippel _et al._ [14] to learn the appearance of M-scans in the training dataset and detect when novel M-scans are too far from the training distribution to be safely processed by \(r\). We select this model as it has been shown to be highly effective in a large number of cases while being interpretable and computationally lean [16].

At training time, MahaAD learns the training distribution by fitting multiple multivariate Gaussians to latent representations of the training data at different scales. More specifically, we build a training dataset \(\mathcal{T}^{\prime}=\{\mathbf{x}^{(i)}\}_{i=1}^{M}\) where each sample \(\mathbf{x}^{(i)}\in\mathbb{R}^{10\times P}\) is an M-scan of 10 consecutive A-scans. M-scans in \(\mathcal{T}^{\prime}\) may come from the training data \(\mathcal{T}\) used to train \(r\) or, given the unsupervised nature of MahaAD, from any other dataset of M-scans without annotations. Given a pre-trained network \(f\) with \(K\) layers for feature extraction, MahaAD first describes each training sample \(\mathbf{x}^{(i)}\) as a collection of \(K\) feature vectors \(\{\mathbf{f}_{i,k}=f_{k}(\mathbf{x}^{(i)})\}_{k=1}^{K}\), where each vector \(f_{k}(\mathbf{x}^{(i)})\) is the spatial average of the \(k\)-th feature map for the input \(\mathbf{x}^{(i)}\). The collection of the feature vectors for all training samples is then used to fit \(K\) multivariate Gaussians, one per layer \(k\), with parameters

\[\boldsymbol{\mu}_{k}=\frac{1}{M}\sum_{i=1}^{M}\mathbf{f}_{i,k}\quad\text{and}\quad\boldsymbol{\Sigma}_{k}=\frac{1}{M}\sum_{i=1}^{M}\left(\mathbf{f}_{i,k}-\boldsymbol{\mu}_{k}\right)(\mathbf{f}_{i,k}-\boldsymbol{\mu}_{k})^{T}\quad\forall k\in\{1,\ldots,K\}.\]

At test time, MahaAD computes \(K\) Mahalanobis distances between an M-scan \(\mathbf{x}\) and the means \(\boldsymbol{\mu}_{k}\) of the learned Gaussians as shown in Figure 3,

\[d_{k}(\mathbf{x})=d(\mathbf{x},\boldsymbol{\mu}_{k})=\sqrt{(\mathbf{f}_{k}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{\Sigma}_{k}^{-1}(\mathbf{f}_{k}-\boldsymbol{\mu}_{k})},\quad\forall k\in\{1,\ldots,K\}.\]

The final OoD score for a test-time sample \(\mathbf{x}\) is then the sum over all distances,

\[s(\mathbf{x})=\sum_{k=1}^{K}d_{k}(\mathbf{x})\,.\]

The M-scan is then considered OoD if its score \(s(\mathbf{x})\) is larger than a threshold \(\tau\), which is the only hyperparameter of the method.
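The fitting and scoring steps above admit a compact sketch. The snippet below is a minimal NumPy illustration of the equations in Sect. 2.2, not the implementation used in the experiments; feature extraction (e.g., spatially averaged EfficientNet-B0 blocks, see Sect. 3.2) is assumed to happen elsewhere, and the small ridge added before inversion is an assumption for numerical stability.

```python
import numpy as np

class MahaAD:
    """Minimal MahaAD scorer: one multivariate Gaussian per feature layer."""

    def fit(self, feats_per_layer):
        # feats_per_layer: list of K arrays of shape (M, d_k) holding the
        # spatially averaged feature vectors f_{i,k} of the M training M-scans.
        self.mu, self.prec = [], []
        for F in feats_per_layer:
            mu = F.mean(axis=0)
            Fc = F - mu
            cov = Fc.T @ Fc / F.shape[0]            # Sigma_k with the 1/M form
            ridge = 1e-6 * np.eye(cov.shape[0])     # stability term (assumption)
            self.mu.append(mu)
            self.prec.append(np.linalg.inv(cov + ridge))
        return self

    def score(self, feats):
        # feats: list of K per-layer feature vectors f_k of one test M-scan.
        # Returns s(x), the sum of the K per-layer Mahalanobis distances d_k(x).
        s = 0.0
        for f, mu, P in zip(feats, self.mu, self.prec):
            d = f - mu
            s += float(np.sqrt(d @ P @ d))
        return s
```

An M-scan would then be flagged as OoD whenever `score(...)` exceeds the threshold \(\tau\).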
When an M-scan is considered OoD, we treat all of its individual A-scan components \(\mathbf{x}_{j}\) as OoD and assume they are not suitable for safe estimation with the subsequent retina detection model \(r\). We experimentally found that applying MahaAD on M-scans produced more reliable results than on individual A-scans.

Figure 3: Example of MahaAD with a single bivariate Gaussian (_i.e._, a single 2D latent representation). A multivariate Gaussian is fit to the latent representations of the training samples and is used to determine the Mahalanobis distance of the test samples' latent representations. Based on the distance, samples are considered in- or out-of-distribution.

## 3 Experimental setting

### 3.1 Data

Our data consists of four recordings from ex vivo trials on four different pig eyes, with each recording containing approximately 900'000 A-scans. The iiOCT device produced temporal A-scans at a frequency of approximately 700 Hz with a resolution of \(3.7\,\mathrm{\SIUnitSymbolMicro m}\)/pixel and a scan depth of \(P=674\) pixels (\(2.49\,\mathrm{mm}\)). Of the four pig recordings, three were used for training (and validation), and the fourth recording was held out for evaluation. From the training recordings, a collection of 334 in-distribution M-scans consisting of 10 A-scans each was used to train the OoD detector. We manually selected samples with an identifiable retina to ensure that they are in-distribution samples.

### 3.2 Implementation details

To measure the impact of OoD samples on the retinal model \(r\), we compared six OoD detection strategies and one reference baseline.

**MahaAD:** The method proposed in Sect. 2.2. As proposed in [14], our feature extractor \(f\) is an EfficientNet-B0 [17] with \(K=9\) blocks pre-trained on ImageNet. The inputs to \(f\) are M-scans \(\mathbf{x}\) resized from \(10\times 674\) to \(64\times 224\) with bicubic interpolation to increase the importance of the temporal information and to make the input size closer to the training sizes of the EfficientNet-B0. We applied z-score normalization with the mean and standard deviation from ImageNet to all the input sequences.

**Supervised:** An OoD detector implemented as a binary classifier and trained in a supervised fashion with both in-distribution and OoD samples. Given that OoD samples are not available in large amounts, we synthetically generated them by perturbing 50% of the training data using four types of perturbations: noise, smoothing, shifts and intensity (see Fig. 4). The OoD detector uses an ImageNet-pre-trained EfficientNet-B0 as backbone with a classification head adapted to the binary OoD detection task. We fine-tuned all layers with the Adam optimizer and a learning rate of \(10^{-5}\).

**Glow:** A generative flow-based model [18] used as an OoD detector. We use the model's negative likelihood output as the OoD score (_i.e._, the lower the likelihood, the less probable that a sample is in-distribution). The employed architecture has three blocks of 32 layers and was trained with the public implementation of [19].

**Uncertainty:** OoD samples tend to produce estimations with lower maximum softmax probabilities (_i.e._, higher uncertainty [20]). We take the maximum probability of the estimated heatmap \(\hat{\mathbf{y}}_{j}=r(\mathbf{x}_{j})\) and use its entropy as the OoD score.

**Raw-MahaAD:** Similar to **MahaAD** but, instead of the feature vectors \(\mathbf{f}_{i,k}\), we use the raw signal to fit a single (\(K=1\)) multivariate Gaussian. This can be seen as an ablation of the deep features.
**SNR:** A simple measure of scan quality directly used as the OoD score. We measure the signal-to-noise ratio (SNR) as \(\mu_{\mathbf{x}}/\sigma_{\mathbf{x}}\).

**No-rejection:** Reference baseline that considers all samples as inliers (_i.e._, no OoD detection is applied).

In all cases, we used a retinal model \(r\) that was implemented as a one-dimensional U-Net-like architecture [21] with four down-pooling/upsampling steps and one convolutional layer per step. We used Adam with a learning rate of \(10^{-4}\) for optimization and performed early stopping according to the performance on the validation split. To train and validate \(r\), the location of the ILM was manually annotated for a random collection of 14'700 M-scans from the original pig recordings.

### 3.3 Experiments

#### 3.3.1 OoD detection for distance estimation

The first experiment measures the impact of our approach in a simulated scenario of retinal surgery where the retinal model \(r\) only receives the samples considered safe for estimation by the OoD detector. For this purpose, we employed a test set of 2'000 M-scans with annotated ILM locations. To account for the lack of real OoD samples, OoD samples were synthetically generated by perturbing a fraction \(p\) of elements from the test data with eight types of corruptions:

_Noise:_ Additive Gaussian noise with \(\mu\)=0 and \(\sigma\)=50.
_Smoothing:_ Gaussian filtering with \(\sigma\)=5.
_Contrast:_ Contrast increase/decrease by a factor uniformly sampled from \(\{0.1,0.2,0.3,2,3,4\}\).
_Intensity:_ Equally probable positive/negative shift of the intensity uniformly sampled from \(\mathcal{U}([-50,-25]\cup[25,50])\).
_Stripes:_ Randomly replacing one or two A-scans in a sequence with a constant intensity sampled from \(\mathcal{U}(100,200)\).
_Rectangle:_ Randomly placing a rectangle with size (_i.e._, M-scan stretch\(\times\)depth) sampled from \(\mathcal{U}([6,10]\times[15,30])\) pixels and a constant intensity sampled from \(\mathcal{U}(100,200)\).
_Shift:_ Roll in depth for a random split of A-scans in the sequence with positive/negative shift sampled from \(\mathcal{U}(25,100)\) pixels.
_Zoom:_ Zoom each A-scan in a sequence by a factor sampled from \(\mathcal{U}(1.5,1.75)\).

All perturbations were applied with equal probability to samples with intensities rescaled to the range \([0,255]\). Figure 4 shows examples of the produced synthetic corruptions; a code sketch of a subset of these perturbations is given after Sect. 3.3.2.

Figure 4: Examples of the eight types of perturbations applied to simulate OoD samples. Each sample is an M-scan with a depth of 674 pixels and 10 consecutive A-scans. Images were resized for improved visualization.

#### 3.3.2 Real OoD samples in ex vivo data

In a second experiment, we explore the behavior of the methods when presented with real OoD M-scans that were manually identified in our data. For this purpose, we built a test dataset with 258 real OoD M-scans and 258 in-distribution M-scans, where each M-scan consists of 10 A-scans. Figure 8 includes a few examples. As these samples are real OoD cases, it is impossible to label the location of the ILM, which prevents us from using the above experimental protocol. Instead, we compared the performance of the baselines on the task of detecting OoD samples in this small dataset, omitting the retinal network \(r\).
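As a rough illustration of the Sect. 3.3.1 corruptions, the sketch below implements four of the eight types with the stated parameter ranges. The exact sampling code is not specified in the paper, so details such as the random number generator and the `corrupt` helper name are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def corrupt(m_scan: np.ndarray, kind: str) -> np.ndarray:
    """Apply one synthetic perturbation to an M-scan (10 x P array with
    intensities rescaled to [0, 255]); only a subset of the eight types
    listed in Sec. 3.3.1 is sketched here."""
    x = m_scan.astype(np.float64).copy()
    if kind == "noise":          # additive Gaussian noise, mu=0, sigma=50
        x += rng.normal(0.0, 50.0, size=x.shape)
    elif kind == "smoothing":    # Gaussian filtering with sigma=5
        x = gaussian_filter(x, sigma=5)
    elif kind == "intensity":    # +/- shift sampled from [-50,-25] U [25,50]
        x += rng.uniform(25, 50) * rng.choice([-1.0, 1.0])
    elif kind == "stripes":      # replace 1 or 2 A-scans with a constant
        rows = rng.choice(x.shape[0], size=rng.integers(1, 3), replace=False)
        for j in rows:
            x[j, :] = rng.uniform(100, 200)
    return np.clip(x, 0, 255)
```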
## 4 Results

### 4.1 OoD detection for distance estimation

We measured the performance of \(r\) in terms of the mean absolute error (MAE) between the estimated and the real distances for a progressively increasing ratio of corruptions \(p\), which ranged from 0 (_i.e._, no corruptions) to 0.9. To quantify the impact of each OoD detection approach, M-scans detected as OoD were discarded from MAE computation. For proper comparison, an M-scan was considered OoD if it was among the top-\(p\) highest OoD-scoring M-scans (see the code sketch in Sect. 4.2). Hence, a perfect OoD detector will discard all the corrupted M-scans, keeping the MAE low.

Figure 5: Effect of different OoD methods on the retinal surgery pipeline. Mean absolute distance error (MAE) is shown for different perturbation ratios \(p\). For each baseline, a proportion of \(p\) M-scans were considered OoD and rejected from MAE computation.

**MahaAD** outperformed all baselines, with its MAE staying almost constant for all the perturbation ratios (Fig. 5). **Raw-MahaAD**, **Glow**, **Uncertainty**, and **SNR** underperformed compared to **No-rejection**, suggesting that they flag a large proportion of correct samples as OoD while allowing corrupted A-scans to be processed by the subsequent retinal network. The poor behavior of **Uncertainty** and **SNR** is noticeable for perturbation ratios as low as 0.2, which makes them unsuitable for OoD detection in the present setting. Finally, **Supervised** matched the performance of **MahaAD**, but given that it was trained with the same kind of synthetic perturbations used in the evaluation, this is most likely an overoptimistic result.

Additionally, we compared **MahaAD** and **Supervised** based on their isolated OoD detection performance for individual corruption types at a proportion \(p\) of 0.5. To investigate our presumption of **Supervised**'s overoptimistic performance due to known perturbations, we analyzed the corruptions that **Supervised** had not seen during training (_i.e._, _stripes_, _rectangle_, _zoom_, _contrast_). Figure 6 shows that **MahaAD** outperforms **Supervised** in terms of OoD detection on the unseen corruptions. Specifically, the difference is notable for _zoom_ and _rectangle_, which seem to be the most difficult perturbations to detect. This result indicates that **MahaAD** is a better OoD detector when the type of expected perturbations is unknown and cannot be trained for.

Figure 6: Comparison between **MahaAD** and **Supervised** on the OoD detection performance in terms of area under the receiver operating characteristic curve (AUROC) for corruptions not used for training **Supervised**.

### 4.2 Real OoD samples in ex vivo data

Figure 7 reports the results for the second experiment on the selection of real OoD samples. As previously found, **MahaAD** outperformed the other baselines, demonstrating its ability to generalize beyond simulated perturbations in a more realistic scenario. Furthermore, **Supervised** performed significantly worse than **MahaAD** on real data, suggesting that the results of Fig. 5 were indeed overoptimistic and that **Supervised** is not suitable as an OoD detector in a realistic scenario. In contrast, **SNR**'s performance improved on real data, likely due to a selection bias facilitating discrimination of OoD samples through low-order statistics. Surprisingly, **Glow** and **Raw-MahaAD** seem to produce OoD scores that better describe in-distribution samples than OoD samples.
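For reference, the top-\(p\) rejection protocol of Sect. 4.1 can be sketched as follows; variable names are illustrative and the helper is an assumption, not the exact evaluation code.

```python
import numpy as np

def mae_after_rejection(scores, abs_errors, p):
    """Discard the top-p highest OoD-scoring M-scans, then compute the MAE
    over the per-M-scan absolute distance errors of the remaining ones."""
    scores = np.asarray(scores, dtype=float)
    abs_errors = np.asarray(abs_errors, dtype=float)
    n_reject = int(round(p * len(scores)))
    keep = np.argsort(scores)[: len(scores) - n_reject]  # lowest scores kept
    return float(abs_errors[keep].mean()) if keep.size else float("nan")
```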
Figure 8 shows visual examples of correctly and incorrectly classified in- and out-of-distribution samples for the **MahaAD** approach. The examples confirm that **MahaAD** typically classifies obvious in-distribution or OoD samples correctly but can misclassify borderline samples. For instance, some false negatives may be considered in-distribution based on their retinal-like structure, while false positives often exhibit a hyperreflective retinal pigment epithelium layer, which might lead to their OoD classification.

Figure 8: Examples of correctly detected and missed OoD and in-distribution samples with **MahaAD**. Images have been resized for improved visualization.

Figure 7: Area under the receiver operating characteristic curve (AUROC) and average precision (AP) performance for the detection task on real OoD samples.

## 5 Discussion and conclusion

In this work, we showed how corrupted data from an iiOCT probe in the context of retinal microsurgery can be rejected from further evaluation by using unsupervised OoD detection. The simple MahaAD approach was able to maintain good performance of distance estimation by reliably detecting and rejecting simulated corruptions, and showed promising results on OoD cases from an ex vivo porcine trial. The experiments revealed that the benefits of MahaAD observed for a variety of scenarios on 2D images [16] translate well to temporal iiOCT scans with high levels of noise and limited lateral view. Another benefit is its computational efficiency, allowing it to cope with high-frequency A-scan acquisition with minimal latency. Additionally, the experiments point to the challenges of supervised OoD detection when not all unknowns (_i.e._, possible corruptions) are known, and why unsupervised OoD detection might be more suitable for improved generalization.

In conclusion, we showed that detecting corrupted iiOCT data through unsupervised OoD detection is feasible and that MahaAD could potentially be used to improve safety in retinal microsurgery. However, one limitation of this work is that the temporal component of the iiOCT is largely ignored, as individual samples were considered for the distance estimation without any knowledge of the past. In the future, we plan to take this temporal information into account by combining the MahaAD OoD detection with dedicated techniques such as Bayesian filters to further improve performance.

## Declarations

* Funding: This work was supported by EUREKA Eurostars (project #114442) and H2020 project GEYEDANCE.
* Competing interests: The authors have no conflict of interest.
* Ethics approval: Not applicable.
* Consent: Not applicable.
* Availability of data, materials, and code: Data is private; code and models are available at [https://github.com/alainjungo/ipcai23-iioct-ood](https://github.com/alainjungo/ipcai23-iioct-ood).
2308.14298
Direct initial orbit determination
Initial orbit determination (IOD) is an important early step in the processing chain that makes sense of and reconciles the multiple optical observations of a resident space object. IOD methods generally operate on line-of-sight (LOS) vectors extracted from images of the object, hence the LOS vectors can be seen as discrete point samples of the raw optical measurements. Typically, the number of LOS vectors used by an IOD method is much smaller than the available measurements (i.e., the set of pixel intensity values), hence current IOD methods arguably under-utilize the rich information present in the data. In this paper, we propose a \emph{direct} IOD method called D-IOD that fits the orbital parameters directly on the observed streak images, without requiring LOS extraction. Since it does not utilize LOS vectors, D-IOD avoids potential inaccuracies or errors due to an imperfect LOS extraction step. Two innovations underpin our novel orbit-fitting paradigm: first, we introduce a novel non-linear least-squares objective function that computes the loss between the candidate-orbit-generated streak images and the observed streak images. Second, the objective function is minimized with a gradient descent approach that is embedded in our proposed optimization strategies designed for streak images. We demonstrate the effectiveness of D-IOD on a variety of simulated scenarios and challenging real streak images.
Chee-Kheng Chng, Trent Jansen-Sturgeon, Timothy Payne, Tat-Jun Chin
2023-08-28T04:34:50Z
http://arxiv.org/abs/2308.14298v1
# Direct initial orbit determination

###### Abstract

Initial orbit determination (IOD) is an important early step in the processing chain that makes sense of and reconciles the multiple optical observations of a resident space object. IOD methods generally operate on line-of-sight (LOS) vectors extracted from images of the object, hence the LOS vectors can be seen as discrete point samples of the raw optical measurements. Typically, the number of LOS vectors used by an IOD method is much smaller than the available measurements (_i.e._, the set of pixel intensity values), hence current IOD methods arguably under-utilize the rich information present in the data. In this paper, we propose a _direct_ IOD method called D-IOD that fits the orbital parameters directly on the observed streak images, without requiring LOS extraction. Since it does not utilize LOS vectors, D-IOD avoids potential inaccuracies or errors due to an imperfect LOS extraction step. Two innovations underpin our novel orbit-fitting paradigm: first, we introduce a novel non-linear least-squares objective function that computes the loss between the candidate-orbit-generated streak images and the observed streak images. Second, the objective function is minimized with a gradient descent approach that is embedded in our proposed optimization strategies designed for streak images. We demonstrate the effectiveness of D-IOD on a variety of simulated scenarios and challenging real streak images.

Keywords: Space Domain Awareness; Initial orbit determination; Direct method

## 1 Introduction

Initial orbit determination (IOD) was proposed more than two centuries ago to determine the orbits of celestial bodies, given their ephemerides. Today, IOD is a key step in tracking Resident Space Objects (RSOs) (Stokes et al., 2000; Chambers et al., 2016; Drake et al., 2009), a capability that contributes to Space Domain Awareness (SDA). SDA is essential for the safe utilisation of space, due to the ever-growing population of RSOs. Many SDA systems employ optical sensors due to their lower cost and ability to capture rich information. When an RSO passes through the field-of-view of a telescope-equipped camera conducting long-exposure imaging, a _streak_ is formed in the resulting image. The images that contain streaks ("streak images"; see Fig. 1) are the input data to IOD. Most IOD methods, including the classical techniques of Gauss, Laplace, and Double-r (Vallado, 2001, Chapter 7) as well as more advanced techniques from Wishnek et al. (2021) and Ansalone & Curti (2013), operate on Line-of-Sight (LOS) vectors of the RSO. Thus the streak observations must be converted to LOS vectors before the application of the IOD methods. Often, this is accomplished by finding the endpoints of streaks, since these can be associated with the start and end times of exposure. At least three timestamped LOS vectors from multiple images are then fed into the IOD solver to compute an orbit solution. The top row of Fig. 2 illustrates LOS-based IOD. An obvious weakness of the above "two-stage" process is that errors in LOS extraction will propagate to the estimated orbit. It is well known that bright stars and a low signal-to-noise ratio can greatly challenge the precise detection of the imaged streak (Tagawa et al., 2016; Levesque & Buteau, 2007; Du et al., 2022; Virtanen et al., 2016).
Indeed, a single poorly localized LOS can lead to an arbitrarily wrong orbital solution; see Fig. 3 for examples.

_Contributions._ In this paper, we introduce a novel Direct Initial Orbit Determination (D-IOD) method. D-IOD fits the orbital parameters directly on the streak images _without_ requiring LOS extraction. This is achieved by minimizing the intensity differences between the observed streaks and the generated images of the RSO trajectory propagated from the candidate orbit state vector; see the bottom row of Fig. 2. By avoiding the usage of LOS vectors, D-IOD is not susceptible to LOS errors. More fundamentally, D-IOD maximizes the use of all available measurements (_i.e._, the pixel intensities), as opposed to LOS-based IOD that operates on discrete point samples (_i.e._, the LOS vectors). D-IOD is inspired by direct image registration techniques (Lucas et al., 1981; Tomasi & Kanade, 1991; Baker & Matthews, 2004) in computer vision that employ all pixels in performing image registration, which stand in contrast to feature-based methods (Nister, 2004; Stewenius et al., 2006; Hartley, 1992) that utilize higher-level features such as edges, corners and keypoints resulting from a feature extraction step.

_Utilization modes and assumptions._ A typical optical sensing pipeline for SDA produces high-resolution images of the night sky that contain clutter (_e.g._, background stars, clouds) and potentially multiple streaks per frame. In this work, we assume that the "raw" images have been preprocessed to extract individual streak images (as exemplified by Fig. 1) for D-IOD. Fig. 4 depicts the pre-processing steps, while Sec. 2.1 will discuss pre-processing techniques. Given the streak images with metadata (_i.e._, timestamps, extrinsic and intrinsic parameters), two operation modes can be conceived for D-IOD:

* End-to-end mode, whereby D-IOD takes only the streak images and outputs an orbit state vector estimate. Enabling end-to-end operation is a novel built-in initialization scheme that constructs a viable initial orbit state vector from the input streak images.
* Refine mode, whereby D-IOD takes the streak images and an initial orbit state vector (_e.g._, a low-accuracy result of an LOS-based IOD method), then conducts direct fitting to polish the initial solution.

Figure 1: Real streak images of RSOs captured under a 5-second exposure. The discontinuities in the streaks are due to background star removal (see Sec. 2.1 on pre-processing). For visual clarity, the images are scaled to unity. Note the remaining significant background noise. (Best viewed on electronic devices.)

Figure 3: Projections of a poor orbital solution (highlighted by red lines) on three streak images. The green lines and circles represent the observed streaks and the ground-truth endpoints, respectively. The poor orbital solution is produced by the IOD solver as a result of an ill-localized endpoint (red dashed circle), as depicted in the first figure (left).

Figure 2: Comparison of the LOS-based IOD method (2a) and D-IOD (2b). The former first extracts timestamped endpoints and projects them to obtain the LOS vectors. Then, the LOS vectors are fed to either a closed-form IOD solver or an iterative method to determine the optimum orbit (\(\mathbf{o}^{*}\)). The depicted method here resembles the Double-r method, which iteratively refines the initial estimate (\(\mathbf{o}^{\prime}\)). D-IOD, on the other hand, fits the orbital parameters directly onto the streak images without requiring an LOS extraction step.
For D-IOD to yield sensible results, it is assumed that

* The streak images contain the same RSO that has undergone a consistent orbital trajectory.
* Each streak image contains only one streak, as exemplified by Fig. 1.

Violations of the above assumptions (due to, _e.g._, erroneous data association, a maneuvering RSO, incorrectly localized streaks) will manifest as a high fitting error in the D-IOD result without causing program failure.

_Paper organization._ Sec. 2 surveys related works. Sec. 3 formulates direct orbit fitting, including streak image generation from propagated orbits and the objective function of D-IOD. Sec. 4 describes the optimization algorithm of D-IOD, including strategies to circumvent the difficulty of the lack of intensity gradients in streak images. In Sec. 5, we report the performance of D-IOD under different simulated scenarios, such as different qualities of initial estimates, various orbit types, ranging time intervals and signal-to-noise ratios. Additionally, we also showcase the practicality of D-IOD with challenging real streak images. The limitations of D-IOD are presented in Sec. 6, and Sec. 7 concludes this paper.

## 2 Literature review

### 2.1 Pre-processing

As mentioned above, optical sensing pipelines for SDA produce raw images of the night sky that contain significant clutter and potentially multiple RSO streaks per image. In general, the first step is to calibrate the raw images with flat-field correction and dark current subtraction. Then, pre-processing of the calibrated images is necessary before IOD can proceed. The major pre-processing steps are as follows.

* Clutter removal--given an overall image frame, remove (_e.g._, by zeroing the intensities) blob-like regions corresponding to background stars, galaxies, nebulae, _etc._
* Streak detection--finding individual streaks in the clutter-removed images, where the outputs are typically subimages that contain one streak each.
* Endpoint localization--given a streak (sub)image, find the endpoints of the streak. Backproject the endpoints to form LOS vectors.

Fig. 4 illustrates the pre-processing steps. While we distinguish the pre-processing steps above, existing pre-processing methods often conduct all or some of the steps jointly. Below, we detail several representative algorithms.

Bektesevic & Vinkovic (2017) proposed a pipeline that starts from removing background noise with a sequence of image-processing operations (_e.g._, erosion, dilation and histogram equalization). Then, the method runs a combination of Canny edge detection and contour detection to produce bounding boxes for potential streak candidates. Lastly, the Hough transform is executed to obtain the orientation of the streak inside the bounding box. Endpoint localization was not described in the paper. The ESA-funded streak detection algorithm (StreakDet) (Virtanen et al., 2016) operates on two versions of the image frame - black and white (BW) and grayscale (GS). The method first performs clutter removal and streak detection on the binarized (BW) image to increase computational efficiency. Subsequently, it refines the streak parameters (_i.e._, localizing the endpoints) in the GS image with a 2D Gaussian point-spread-function fitting method introduced by Veres et al. (2012). The following methods do not emphasize the clutter removal procedure. However, note that all of them mentioned the usage of some standard noise reduction and removal steps. Tagawa et al.
(2016) proposed a novel technique centered around image shearing and compression to first detect streaks from the full frame image. Then, the detected streak region is handed over to an intensity-thresholding process to localize the endpoints of the streak. Nir et al. (2018) leveraged the Fast Radon Transform to detect streaks efficiently. The authors reformulated the original Radon Transform to incorporate the endpoints as part of the parameters to be solved for. A similar approach is seen in the work of Cegarra Polo et al. (2022), where a variant of the Hough Transform, namely the Progressive Probabilistic Hough Transform (Matas et al., 2000), is used in performing streak detection and endpoint localization simultaneously. Filter matching is another line of methods that perform both streak detection and endpoint localization simultaneously (Schildknecht et al., 2015; Dawson et al., 2016; Du et al., 2022). In essence, these methods perform convolution over the clutter-removed image with a predefined streak model and seek regions with the maximum response. More recently, deep-learning-based object detection has been applied to streak detection (Varela et al., 2019; Duev et al., 2019; Jia et al., 2020). Such methods can accurately estimate the bounding boxes of streaks in the overall frame, even for faint streaks. Unlike the Hough/Radon-transform and image processing approaches, deep-learning-based object detection recovers the streak images independently of streak orientation estimation and endpoint localization.

Figure 4: Pre-processing of overall image frames. The red squares are the outputs of streak detection. The (cropped) streak images are associated to different RSOs, denoted as RSO 1 and RSO 2, and then passed on as input to D-IOD. Meanwhile, the endpoints of the streaks are localized for the LOS-based IOD. See text for details.

We highlight that endpoint localization is a necessity for the LOS-based IOD methods, as depicted in Fig. 2. In contrast, D-IOD only requires loosely cropped streak images as input. Since D-IOD fits segments of the candidate orbit to streaks, it also achieves endpoint localization as a byproduct. As alluded to above, conducting direct orbit fitting maximizes the use of all available data and prevents premature endpoint localization that is inconsistent with the orbital motion.

### 2.2 IOD methods

_Classical IOD methods._ Gauss's and Laplace's analytical solutions have been regarded as the first two practical algorithms for the IOD problem (Vallado, 2001). Both methods were developed for the type of data available back then--LOS vectors from slow-moving celestial bodies acquired with a sextant. Three linearly independent LOS vectors are required to determine the six degree-of-freedom (DOF) orbital parameters since each LOS vector has two DOF. Modern numerical methods such as Double-r and Gooding's method (Gooding, 1996) have fewer restrictions on the measurement geometry. To deal with inaccurate LOS vectors, Der (2012) presented a Double-r method that allows the LOS vectors to be jointly optimized with the range parameters. The superiority of the Double-r method over classical IOD solvers is clearly demonstrated in cases with LOS errors. Our proposed D-IOD is also not affected by LOS errors; however, D-IOD refines the orbital estimate based on differences in the image intensities directly instead of adjusting the LOS vectors.
_Data association and the too-short-arc problem._ Too-short-arc (TSA) measurements capture segments of an orbit that are too short geometrically to yield a reliable orbit estimate, especially if the measurements are noisy (Gronchi, 2004). It is essential to collect more measurements over time--by progressively associating new measurements to previous measurements--to improve the numerical accuracy of the resulting orbit. On the other hand, data association is informed by knowledge of the orbit. This chicken-and-egg problem--called the TSA problem--motivates solving data association and IOD jointly. Milani et al. (2004) proposed one of the earliest methods to solve the TSA problem. The authors presented a method to determine an Admissible Region (AR) based on a set of physical constraints, e.g., orbits with negative energies. Each point in the region is an orbital solution candidate, which enables propagation, in turn allowing data association. The AR-based method is well-received by the community and has been furthered by numerous works. Maruskin et al. (2008) discussed a conceptual algorithm for intersecting two AR regions to eliminate infeasible solution candidates before the subsequent (expensive) least-squares orbit correction process. Fujimoto & Scheeres (2012) proposed a technique to correlate tracks with probability distributions in the Poincare space to improve computational efficiency. Fujimoto et al. (2014) made another attempt to reduce the computational cost by incorporating an extra data domain - angle-rate. The added constraint reduces the number of minimal track associations needed from three to two. Meanwhile, DeMars & Jah (2013) incorporated Gaussian mixture models to approximate the admissible region, which allows subsequent refinement when new data is available. Gronchi et al. (2010) proposed a closed-form solution to the data association problem via two-body integrals. The authors further improved the original 48-degree polynomial equation to 20 degrees and 9 degrees in later works (Gronchi et al., 2011, 2015). The reduction of polynomial degrees is accompanied by an improvement in computational efficiency. Note that assumption A1 for D-IOD described in Sec. 1 implies that data association has been solved prior to the method. However, noting that the best-fit orbit returned by D-IOD would yield a high loss given a set of ill-associated streak images, it is possible to extend D-IOD to solve the TSA problem. We leave this as future research.

## 3 Problem formulation

We formulate the orbit fitting problem in this section. We first discuss the given data in Sec. 3.1, followed by the interpolation of timestamps (required by the modeling) in Sec. 3.2. Then, we present the modeling of an intensity image as a function of the initial state vector in Sec. 3.3. This section ends with the objective function of D-IOD in Sec. 3.4.

### 3.1 Data

The data for our problem are:

1. A set of streak images (as seen in Fig. 1),
2. The starting and ending timestamps of each image throughout the long-exposure imaging process, and
3. The extrinsic and intrinsic parameters of the telescope-equipped camera in use.

Each streak image is denoted as \(\mathbf{D}^{(m)}\in\mathbb{R}^{X_{m}\times Y_{m}}\). The extrinsic parameters include the camera location in the Earth-Centered, Earth-Fixed (ECEF) coordinates and the pointing direction.
We denote the extrinsic parameters at the starting timestamp of each image (\(t_{0}^{(m)}\)) as \(\mathcal{E}_{t_{0}}^{(m)}\coloneqq[\mathbf{r}_{t_{0}}^{(m)}\in\mathbb{R}^{3},\ \mathbf{a}_{t_{0}}^{(m)}\in\mathbb{R}^{3},\ \|\mathbf{a}_{t_{0}}^{(m)}\|\equiv 1]\), where \(\mathbf{r}\) is the camera location and \(\mathbf{a}\) is the pointing direction of the camera. The intrinsic parameters include the focal length of the telescope, distortion parameters, pixel scale, etc. These are all provided in a FITS file as a standard practice in Astrometry (Calabretta & Greisen, 2002). We denote these constant intrinsic parameters as \(\mathcal{I}\). These parameters are used to perform pixel-to-world and world-to-pixel projections (more details in Sec. 3.3.2).

### 3.2 Interpolation of timestamps

We need to interpolate the _in-between_ timestamps since the imaged streak is a function of time. Specifically, the RSO state, the camera position, and the accumulation of photons at each pixel bin change continuously within the time exposure window:

\[\tau^{(m)}\coloneqq\Big{\{}t_{n}^{(m)}\ \Big{|}\ t_{n}^{(m)}=\frac{n\,\Delta t^{(m)}}{d^{(m)}}+t_{0}^{(m)},\ n=0,1,\ldots,N\Big{\}}. \tag{1}\]

Given \(\tau\), we obtain a set of camera extrinsic parameters \(\{\mathcal{E}_{t_{n}^{(m)}}\}_{n=0}^{N}\) as a function of Earth's motion, i.e., position, rotation, nutation and precession (Vallado, 2001). We employ an existing off-the-shelf tool, Astropy (Price-Whelan et al., 2018), for this task.

### 3.3 Modeling

We model the streak image as a function of the initial state vector \(\mathbf{o}_{\text{initial}}\coloneqq\{\mathbf{p}_{t_{\text{initial}}}\in\mathbb{R}^{3},\mathbf{v}_{t_{\text{initial}}}\in\mathbb{R}^{3}\}\) as follows,

\[\mathbf{S}^{(m)}=F(\mathbf{o}_{\text{initial}};\mathbf{C}^{(m)}) \tag{2}\]

where \(\mathbf{S}^{(m)}\in\mathbb{R}^{X_{m}\times Y_{m}}\) is an intensity matrix, and \(\mathbf{C}^{(m)}\) contains all the constant variables, e.g., timestamps \((t_{n}^{(m)})\), extrinsic and intrinsic camera parameters (\(\mathcal{E}_{t_{n}^{(m)}}\) and \(\mathcal{I}^{(m)}\)), and the standard deviation of the Gaussian point spread function (\(\sigma^{(m)}\)), etc., which are detailed in the following sections. The composition of functions in \(F\) is as follows.

1. The propagator function, i.e., \(\mathbf{o}_{t_{n}^{(m)}}=P(\mathbf{o}_{\text{initial}},t_{n}^{(m)})\).
2. The world-to-image projection function, i.e., \(\mathbf{u}_{t_{n}^{(m)}}=W(\mathbf{o}_{t_{n}^{(m)}};\mathcal{E}_{t_{n}^{(m)}},\mathcal{I}^{(m)})\).
3. The point spread function, i.e., \(g_{xy,t_{n}^{(m)}}=G(\mathbf{u}_{t_{n}^{(m)}};\mathbf{u}_{xy},\sigma^{(m)})\).
4. The long-exposure imaging process, i.e., \(s_{xy}^{(m)}=E(\{g_{xy,t_{n}^{(m)}}\}_{n=0}^{N})\).

Fig. 5 depicts our model. The \(m\) notations are dropped for compactness in the rest of the modelling subsections.

#### 3.3.1 Keplerian propagator

The innermost function in \(F\) propagates the initial state vector to timestamp \(t_{n}\). The function is illustrated in Fig. 5, where \(\mathbf{p}_{t_{\text{initial}}}\) and \(\mathbf{v}_{t_{\text{initial}}}\) are propagated to a set of \(\mathbf{p}_{t_{n}}\) and \(\mathbf{v}_{t_{n}}\). We adopt the standard Keplerian propagation model (Vallado, 2001, Chapter 2) that incorporates gravitational effects only.

#### 3.3.2 Gnomonic projection

The next function is the projection of RSO positions to the pixel coordinates of the camera given its parameters (\(\mathcal{E}_{t_{n}}\) and \(\mathcal{I}\)). As illustrated in Fig. 5, the projected pixel coordinates are labelled with their timestamps, i.e., \(\mathbf{u}_{t_{n}}\in\mathbb{R}^{2}\). We adopt the standard gnomonic projection model (Calabretta & Greisen, 2002) (implemented by Astropy (Price-Whelan et al., 2018)) for this task.

#### 3.3.3 Gaussian point spread function

We model the spread of the photons on the image plane with the Gaussian point spread function. The intensity at location \(x,y\) at timestamp \(t_{n}\) can be computed with the following expression,

\[g_{xy,t_{n}}(\mathbf{u}_{t_{n}};\mathbf{u}_{xy},\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\exp\Big{(}-\frac{\|\mathbf{u}_{xy}-\mathbf{u}_{t_{n}}\|_{2}^{2}}{2\sigma^{2}}\Big{)}\,, \tag{3}\]

where \(\sigma\) is the width of the spread that can be determined from the imaged streak. As illustrated in Fig. 5, the intensity level decreases as the distance of the pixel from the projected pixel \(\mathbf{u}_{t_{n}}\) increases. We highlight that our model assumes constant brightness along the streak. It does not consider the brightness variation in the streak caused by the rotation of RSOs.

#### 3.3.4 Long-exposure imaging

The intensity of each pixel is accumulated over \(N\) discretized timestamps to model the long-exposure imaging process, yielding

\[s_{xy}=\sum_{n=0}^{N}g_{xy,t_{n}}\,. \tag{4}\]

Figure 5: Our proposed model that maps the initial state vector (\(\mathbf{p}_{t_{\text{initial}}}\) and \(\mathbf{v}_{t_{\text{initial}}}\)) to one of the intensity images (\(\mathbf{S}\in\mathbb{R}^{X\times Y}\)). Each propagated position vector (\(\mathbf{p}_{t_{n}}\)) is projected to the pixel coordinate \(\mathbf{u}_{t_{n}}\) given the camera location \(\mathbf{r}_{t_{n}}\) and other camera parameters (see text). The highlighted \(3\times 3\) matrices (next to the projected pixel coordinates \(\mathbf{u}_{t_{n}}\)) depict the point spread function (PSF). The long-exposure imaging process is modelled by the summation of all intensity images corresponding to the discretized timestamps.

The summation process of the intensity matrices is illustrated in Fig. 5 as well.

### 3.4 Orbit fitting formulation

We are now ready to present the formulation of D-IOD's orbit fitting problem. The optimization problem aims to find the initial state vector (\(\mathbf{o}_{\text{initial}}\)) at \(t_{\text{initial}}\) that minimizes the deviations between the observed streak images \(\{\mathbf{D}^{(m)}\}_{m=1}^{M}\) and the generated streak images \(\{\mathbf{S}^{(m)}\}_{m=1}^{M}\). Formally, it has the following form,

\[\underset{\mathbf{o}_{\text{initial}}\in\mathbb{R}^{6}}{\text{minimize}}\ \sum_{m=1}^{M}L^{(m)}(\mathbf{o}_{\text{initial}}), \tag{5}\]

where each of the loss terms (\(L^{(m)}\)) is the mean Frobenius norm (\(\|\cdot\|_{F}\)) of the difference between \(\mathbf{S}^{(m)}\) and \(\mathbf{D}^{(m)}\), as expressed below,

\[L^{(m)}(\mathbf{o}_{\text{initial}})\coloneqq\frac{1}{|\mathbf{D}^{(m)}|}\ \|\mathbf{S}^{(m)}(\mathbf{o}_{\text{initial}};\mathbf{C}^{(m)})-\mathbf{D}^{(m)}\|_{F}, \tag{6}\]

where the cardinality operation (\(|\mathbf{D}^{(m)}|\)) returns the number of pixels in \(\mathbf{D}^{(m)}\). In general, \(t_{\text{initial}}\) can be set to any arbitrary timestamp. However, setting it to the middle timestamp between the furthest pair of images, i.e., \(t_{\text{initial}}=\frac{t_{0}^{(1)}+t_{N}^{(M)}}{2}\), has a practical advantage that is detailed in Sec. 5.2. The geometrical relationship of D-IOD's orbit fitting problem is visualized in the bottom row of Fig. 2.
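To make the forward model concrete, the sketch below discretizes the exposure window (in the spirit of Eq. (1), with the divisor simplified to \(N\)), renders a streak by accumulating the Gaussian PSF of Eqs. (3)-(4) at given projected pixel coordinates, and evaluates the loss of Eq. (6). It is an illustration only: the propagation \(P\) and gnomonic projection \(W\) are stubbed out, and `pix_coords` is assumed to be their output.

```python
import numpy as np

def interpolate_timestamps(t0, dt, n_steps):
    """Discretize the exposure window [t0, t0 + dt] into N+1 timestamps
    (a uniform simplification of Eq. 1)."""
    return t0 + dt * np.arange(n_steps + 1) / n_steps

def render_streak(pix_coords, shape, sigma):
    """Accumulate the Gaussian PSF of Eq. (3) over all timestamps (Eq. 4).

    pix_coords: (N+1, 2) array of projected pixel coordinates u_{t_n},
    each row given as (column, row); assumed to come from P and W.
    """
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    S = np.zeros(shape)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for ux, uy in pix_coords:
        d2 = (xs - ux) ** 2 + (ys - uy) ** 2
        S += norm * np.exp(-d2 / (2.0 * sigma ** 2))
    return S

def loss(S, D):
    """Mean Frobenius-norm deviation of Eq. (6)."""
    return np.linalg.norm(S - D, ord="fro") / D.size

# Example: a 5-step diagonal trajectory rendered into a 32 x 32 image;
# a shifted candidate streak yields a strictly larger loss than a perfect fit.
ts = interpolate_timestamps(0.0, 5.0, 5)
coords = np.stack([np.linspace(4, 27, len(ts)), np.linspace(4, 27, len(ts))], axis=1)
D = render_streak(coords, (32, 32), sigma=1.5)
print(loss(render_streak(coords + 2.0, (32, 32), 1.5), D) > loss(D, D))  # True
```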
## 4 D-IOD

We present the algorithmic details of D-IOD in this section. We solve the proposed non-linear least-squares problem (5) with a gradient descent approach. In Sec. 4.1, we highlight several optimization strategies that we employed. Then, we detail our data pre-processing steps in Sec. 4.2 and the gradient descent method in Sec. 4.3. All steps of D-IOD are summarized in Alg. 1.

### 4.1 Optimization strategies

#### 4.1.1 Image blurring

Intuitively, gradient descent algorithms determine the best direction (also known as the negative gradient vector) in the domain space to travel based on the current estimate's neighbouring loss landscape. However, the **sparsity** of streak images leads to uninformative gradients. We illustrate this problem in the top row of Fig. 6. The image on the left is an overlap of four streak images. We extract the red row and plot the intensity against pixel coordinates on the right plot. Following our notation from Sec. 3, \(\mathbf{D}^{(m)}\) denotes one of the observed streak images, and \(\mathbf{S}^{(m)}(\mathbf{o}^{+})\), \(\mathbf{S}^{(m)}(\mathbf{o}^{\prime})\), and \(\mathbf{S}^{(m)}(\mathbf{o}^{\prime\prime})\) represent three other generated streak images from different solution candidates. Let \(\mathbf{o}^{\prime}\) be the current estimate in this scenario; the goal of gradient descent algorithms is to move towards an optimal orbit that yields the least deviation from \(\mathbf{D}^{(m)}\). As alluded to above, the travelling direction (gradient) is determined based on the neighbouring loss changes, or more formally, the first-order derivative of the loss function. As observed in the top row of Fig. 6, both neighbours yield the same deviation (loss) from \(\mathbf{D}^{(m)}\), illustrating our point that the local gradient of \(\mathbf{o}^{\prime}\) provides (almost) no information to improve the fit between the two signals.

An effective remedy is to enlarge the streak region with a blurring kernel. It increases the likelihood of overlapping the streaks (or enlarging the overlapping region), which in turn boosts the information provided by the gradient. Formally, given a 2D kernel with \(k\times k\) dimensions, the blurring function, \(\mathbf{X}_{k}=B(\mathbf{X},k)\), can be expressed as

\[x_{k,xy}=\sum_{w=-\lfloor\frac{k}{2}\rfloor}^{\lfloor\frac{k}{2}\rfloor}\sum_{h=-\lfloor\frac{k}{2}\rfloor}^{\lfloor\frac{k}{2}\rfloor}\frac{x_{x+w,\,y+h}}{k^{2}}\,, \tag{7}\]

where \(x_{k,xy}\) is the pixel-wise blurred intensity given a kernel size of \(k\). The bottom row of Fig. 6 visualizes the results of applying a blurring kernel to both the observed and the generated streak images. As seen in the 1D example (right plot), \(\mathbf{S}^{(m)}_{k}(\mathbf{o}^{+})\) has a smaller deviation from the blurred data \(\mathbf{D}^{(m)}_{k}\). Naturally, the _descending_ gradient points to \(\mathbf{o}^{+}\) - the direction to reduce the deviation. Blurring kernels of different sizes serve different purposes at different fitting stages. We embed the blurring operation in a coarse-to-fine fitting regime, as detailed in the next section.

Figure 6: The blurring operation enlarges the streak region in the image. **Top row**: The left image shows the stacking of the observed streak image (\(\mathbf{D}^{(m)}\)) and the streak images generated with three different estimates, \(\mathbf{S}^{(m)}(\mathbf{o}^{+})\), \(\mathbf{S}^{(m)}(\mathbf{o}^{\prime})\), and \(\mathbf{S}^{(m)}(\mathbf{o}^{\prime\prime})\). The 1D (intensity) signals on the right plot are extracted from the same (red) row in each image. **Bottom row**: The blurred correspondences of the top row.

#### 4.1.2 Coarse-to-fine fitting

The granularity refers to the resolution of the streak image, _i.e._, a larger blurring kernel produces a coarser image. In the early stage, where the overlapping region of the streaks is small, if not non-existent, a larger kernel size (\(k\)) is required. It is effective in fitting the general location and orientation of the streak. We illustrate this with an example in Fig. 7(a), which shows the iterative improvement in fitting three streak images associated to an RSO. The first two columns of Fig. 7(a) show a 'before-and-after' example of fitting with a large kernel size (\(k=201\)), where the improvement of the fit is visible. Quantitatively, the endpoints' error, denoted as \(\Delta\mathbf{u}\), is a metric that precisely measures the fitness of the generated streak images based on the current orbital estimation. It measures the average distance between the endpoints of the streaks projected from the ground-truth1 and estimated orbits. As seen in Fig. 7(b), at iteration 0, the endpoints' errors are 90 pixels, 96 pixels, and 25 pixels for the first to third streak images. When the loss converges2 at iteration 46, the errors improve to 6 pixels, 22 pixels, and 13 pixels. The loss converged because the deviations between the (blurred) generated and observed images became insignificant. However, the ill-fitness of the streaks is still apparent in the non-blurred images (finest resolution), as seen in the first, third, and fifth rows of Fig. 7(a). Recall that the optimization is performed on the blurred images; we show the non-blurred images to convey the actual fitness of the streaks.

Footnote 2: See Sec. 4.3 for the convergence criteria.

Figure 7: The progressive improvements throughout D-IOD's coarse-to-fine fitting scheme. **(a)**: The odd rows in the figure show the summation of the observed (\(\mathbf{D}^{(m)}\)) and generated streak images (\(\mathbf{S}^{(m)}\)). The even rows are the blurred versions of the odd rows, denoted as \(\mathbf{D}^{*(m)}_{k}+\mathbf{S}^{(m)}_{k}\). The \({}^{*}\) superscript denotes other processing steps detailed in Sec. 4.2. These superimposed images show the deviations between both sets of images. **(b)**: Endpoints' errors of the first, second, and third image are denoted as \(\Delta\mathbf{u}^{(1)}\), \(\Delta\mathbf{u}^{(2)}\), and \(\Delta\mathbf{u}^{(3)}\), respectively.

As such, we gradually decrease the kernel size (\(k\)) upon each convergence to progressively refine the details of the fit until the pre-defined smallest kernel size. We found that the blurring operation also helps smooth out the high-frequency Gaussian noise, hence we stop at \(k_{\text{min}}>1\) to retain its noise suppression ability (more details in Sec. 4.2). The effectiveness of our coarse-to-fine fitting strategy can be seen in the consistent decrement of the endpoints and state vector errors in Fig. 7(b), 8(a) and 8(b). The implementation is summarized in Alg. 1. For each kernel size \(k\), we first pre-process the data, which includes applying a \(k\times k\) blurring kernel. Then, the orbital parameters (initial state vector) are updated iteratively with the gradient descent method (see Sec. 4.3) until convergence (line 13 to 28 of Alg. 1). Upon convergence, we restart the whole process to refine the orbital parameters with a smaller (halved) kernel size until \(k_{\text{min}}\).
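A minimal sketch of the blurring operation of Eq. (7) and the kernel-halving schedule follows. It assumes `scipy.ndimage.uniform_filter` as the \(k\times k\) mean filter and elides the inner fitting loop of Alg. 1; the helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_blur(img, k):
    """k x k mean filter of Eq. (7): enlarges the streak support so that
    candidate and observed streaks overlap and gradients become informative."""
    return uniform_filter(np.asarray(img, dtype=float), size=k, mode="constant")

def kernel_schedule(k_max=101, k_min=3):
    """Coarse-to-fine kernel sizes: halve upon each convergence until k_min."""
    k = k_max
    while True:
        yield k
        if k <= k_min:
            return
        k = max(k // 2, k_min)

# For each k, blur the data with box_blur(D, k) and run the gradient descent
# of Sec. 4.3 until convergence, then continue with the next (smaller) k.
print(list(kernel_schedule()))  # [101, 50, 25, 12, 6, 3]
```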
#### 4.1.3 Weighting coefficients

Streak images with different streak-to-image ratios (SIRs) impose a loss imbalance problem on D-IOD's objective function. Formally, the SIR of a pre-processed streak image, denoted as \(\mathbf{D}_{k}^{\prime}\), can be expressed as \(\text{SIR}\coloneqq\frac{|\overline{\mathbf{d}}_{k}^{\prime}|}{|\mathbf{D}_{k}^{\prime}|}\), where \(\overline{\mathbf{d}}_{k}^{\prime}\coloneqq\{d_{k,xy}^{\prime}\,|\,d_{k,xy}^{\prime}\geq\alpha_{\mathbf{D}_{k}^{\prime}},\ \forall x,y\}\). The thresholding value, \(\alpha_{\mathbf{D}_{k}^{\prime}}\), is an intensity value that differentiates whether a pixel belongs to parts of the streak in \(\mathbf{D}_{k}^{\prime}\), which is detailed in Sec. 4.2 alongside other pre-processing steps. In a scenario where the observed streak images vary significantly in terms of SIR, the final loss term is dominated by the image with the highest SIR. This stems from the mean operation in (6) - a larger background pixel count (lower SIR) dilutes the loss contributed by the streak region. This leads to _over-fitting_, where the gradient descent method fits only to the high-SIR image with the significantly larger loss term, disregarding the actual fit between the streaks, i.e., the endpoints' error.

We illustrate this problem in Fig. 9. The effect of the imbalanced losses can be seen in the middle row of Fig. 9. Firstly, notice that the gap between both loss terms (denoted as \(L^{(1)}\) and \(L^{(2)}\)) is huge from the beginning despite having similar endpoints' errors (denoted as \(\Delta\mathbf{u}^{(1)}\) and \(\Delta\mathbf{u}^{(2)}\)). Secondly, as alluded to above, the loss terms do not reflect the actual fitness of the streaks - notice that from iteration 100 onward, \(\Delta\mathbf{u}^{(1)}\) is larger than \(\Delta\mathbf{u}^{(2)}\) but the ranking of the loss terms is inverse, i.e., \(\mathcal{L}^{\prime(1)}<\mathcal{L}^{\prime(2)}\). The much weaker \(\mathcal{L}^{\prime(1)}\) has a negligible influence on the gradient descent algorithm to compute the successive updates, which results in a deteriorating \(\Delta\mathbf{u}^{(1)}\) after iteration 150.

Figure 8: Continuation of Fig. 7. **(a)** and **(b)**: Position and velocity vector errors in the radial (left), in-track (middle), and cross-track (right) components.

Figure 9: Comparison of solving (5) with and without weighting coefficients. **Top row**: Images with huge SIR differences. Image 1 on the left has an SIR of \(1.32\times 10^{-6}\), while image 2 on the right has an SIR of \(1.22\times 10^{-5}\). **Middle row**: Optimizing **without** the weighting coefficients. **Bottom row**: Optimizing **with** the weighting coefficients. The (1) and (2) superscripts distinguish the loss term \(L\) and endpoints' error \(\Delta\mathbf{u}\) of images 1 and 2. Meanwhile, the loss terms in both cases are differentiated with the \({}^{\prime}\) superscript. We highlight that the spikes in the loss terms are caused by the restart of the optimization routine with a smaller kernel size. Also, the loss is generally higher for a smaller kernel size because the background noise is more significant.

A simple and effective remedy is to add weighting coefficients \(\{\lambda^{(m)}\}_{m=1}^{M}\) to balance the loss terms. We compute \(\lambda^{(m)}\) as a function of the SIR, which can be expressed as follows,

\[\lambda^{(m)}=\frac{\text{SIR}_{\text{max}}}{\text{SIR}^{(m)}}, \tag{8}\]

where \(\text{SIR}_{\text{max}}\geq\text{SIR}^{(m)}\ \forall m\). Each weighting term is then multiplied with its respective loss term in (5). The effects of having the weighting coefficients are reflected in the bottom row of Fig. 9. Notice that the gap is much closer in the beginning, and the loss terms accurately reflect the fitness as the optimization progresses. As a result, the endpoints' errors in both images continuously improved and converged to better accuracy.

#### 4.1.4 Zero masking

The discontinuities, i.e., regions with '0' intensity, in the observed streak images are an undesired outcome of the star removal process. These artifacts, 'zero holes' henceforth, penalize gradient descent for reaching orbital solutions that go through the holes when projected as streaks. To overcome this, we introduce the same 'zero holes' structure in the generated streak images. We obtain a binary mask, which can be formally expressed as \(\mathbf{Z}^{(m)}\in\{0,1\}^{X_{m}\times Y_{m}}\coloneqq\{z_{xy}\,|\,z_{xy}=0\text{ if }d_{xy}=0,\text{ and }z_{xy}=1\text{ otherwise}\}\). Then, the operation is a simple element-wise multiplication of \(\mathbf{Z}^{(m)}\) and \(\mathbf{S}^{(m)}\) (see line 16 in Alg. 1). Masking out these regions prevents these pixels from contributing to the final loss term, hence avoiding the unwanted penalty.

### 4.2 Data pre-processing

Here we summarize the main components in our data pre-processing procedure: blurring, background noise subtraction and streak scale determination. The blurring operation was described in Sec. 4.1.1, and the blurred streak is shown in Fig. 10b. Upon blurring, the separation between the signal and background noise becomes distinctive. In order to further increase the signal-to-noise ratio, we subtract the background noise, \(\beta_{\mathbf{D}_{k}}\), from the blurred image; the noise level is obtained with a median operation over the blurred image (\(\mathbf{D}_{k}\)). Formally, the noise-reduced pixel value \(d_{k,xy}^{\prime}\in\mathbf{D}_{k}^{\prime}\) can be expressed as

\[d_{k,xy}^{\prime}=\max(d_{k,xy}-\beta_{\mathbf{D}_{k}},0)\,. \tag{9}\]

We then scale the noise-reduced image to have a unity scale. As a result, the SNR of the pre-processed streak, as shown in Fig. 10c, has increased significantly, e.g., from approximately 2 to 4. Note that the SNR increment is not a constant; it depends on the noise in the image and the blurring kernel size. In order to deal with the intensity variability in the streak region (as seen in Fig. 10c), we compute its median (\(\alpha_{\mathbf{D}_{k}^{\prime}}\)) to scale the amplitude of our generated image \(\mathbf{S}\) (Alg. 1, line 19). The scaling factor (\(\alpha_{\mathbf{D}_{k}^{\prime}}\)) can be easily determined with a median operation over the top-\(\eta\) percentage of an intensity-sorted (vectorized) image. The sorted vector is illustrated in Fig. 10d, where the \(\eta|\mathbf{D}_{k}^{\prime}|\) black dashed line separates the top-\(\eta\) percentage and the rest of the pixels (note the log-scale x-axis). Recall that \(\alpha_{\mathbf{D}_{k}^{\prime}}\) is also used in computing the SIR in Sec. 4.1.3.

### 4.3 Gradient descent

The main workhorse of D-IOD, a gradient descent optimizer, is detailed in this section. The main steps are as follows.

1. Streak image generation with the current estimate (Alg. 1, line 16 to line 19).
2. Loss computation (Alg. 1, line 20).
3. Gradient computation (Alg. 1, line 25).
4. Parameter update (Alg. 1, line 26).
5. Convergence detection (Alg. 1, line 12).

Step 1 has been covered in Sec. 3.3 (generation) and Sec. 4.2 (blurring and scaling operation).
We elaborate only step 3 and 5 below since step 2 and 4 are straightforward operations. #### 4.3.1 Gradient computation _Numerical differentiation._ We opt for the numerical differentiation instead of the analytical differentiation since the transcendental Keplerian propagator3 (Sec. 3.3.1) has no analytical gradient. We approximate the gradient via the central finite difference method (Wright et al., 1999, Chapter 8). Specifically, each partial derivative of the loss function \(L\), \(\frac{\partial L}{\partial\omega_{q}}\in\frac{\partial L}{\partial\omega}\in \mathbb{R}^{Q}\) is expressed as Footnote 3: It is solved with numerical methods. \[\frac{\partial L}{\partial\omega_{q}}\approx\lim_{h\to 0}\frac{L(\mathbf{o}+h \mathbf{e}_{q})-L(\mathbf{o}-h\mathbf{e}_{q})}{2h}\,, \tag{10}\] where \(\mathbf{e}_{q}\) is the \(q\)-\(th\) column of the identity matrix \(\mathbf{E}\in\mathbb{R}^{Q\times Q}\), and \(h\) is the step size to perturb the current estimate. Note that the \(t_{\text{initial}}\) subscript of the initial state vector (\(\mathbf{o}\)) is dropped here for compactness. _ADAM optimizer._ To speed up convergence, we use the ADAM optimizer (Kingma & Ba, 2014) as part of our gradient descent algorithm. Detailing the ADAM optimizer is out of the scope of this paper, hence we summarize only the essential components below. In contrast to normal gradient descent, it leverages past gradient information compute a better update. We denote the ADAM optimizer as \(\mathcal{A}\) in line 25 of Alg. 1, where the hyperparameters are 1) the step size of the gradient, \(\alpha\), 2) exponential decay rates for the moment estimates, \(\beta_{1}\) and \(\beta_{2}\). The decay rates are set to the recommended \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) for all the experiments in this paper. For D-IOD, the crucial hyperparameters in the gradient descent method are \(h\) (for gradient computation) and \(\alpha\) (for parameter updates). The standard practice of gradient descent methods is to reduce the step size as the solution converges. We implement that by down-scaling \(\alpha\) with a _cododown_ hyperparameter denoted as \(c\) (see line 28 in Alg. 1). The selections of \(\alpha\), \(c\), and \(h\) are detailed in Sec. 5.2. #### 4.3.2 Convergence detection We compute the moving average (MA) of the absolute loss differences \(L_{\text{diff}}\) (see line 20 in Alg. 1) to determine if the optimization has reached a plateau. The moving average of \(L_{\text{diff}}\) at iteration \(i\) can be expressed as \[L_{\text{diff}_{\text{MA}}}=\frac{1}{v}\sum_{j=i-v+1}^{i}L_{\text{diff}_{j}}\,. \tag{11}\] D-IOD breaks out of the _while_ loop when the current \(L_{\text{diff}_{\text{MA}}}\) is smaller than \(\gamma\overline{L}_{\text{diff}_{\text{MA}}}\) (Alg. 1, line 12), where \(\overline{L}_{\text{diff}_{\text{MA}}}\) is the current maximum absolute loss difference (Alg. 1, line 22). We set \(\gamma\) to 0.3 and the number of \(L_{\text{diff}}\) to be averaged (\(v\)) to 10 in all of our experiments. ## 5 Experiments We detail our experiments in this section. First, we provide the simulated settings and the chosen hyperparameters in Sec. 5.1 and Sec. 5.2, respectively. Additionally, the evaluation metrics used in our experiments are outlined in Sec. 5.3. We evaluated both modes of D-IOD - _refine_ and _end-to-end_, under a variety of simulated scenarios, including different orbit types, variations in the quality of the initialization (Sec. 5.4.1), different time intervals between images (Sec. 
5.4.2), and varying signal-to-noise ratios (Sec. 5.4.3). These simulated experiments allow us to measure the accuracy of D-IOD in predicting the orbital state, thanks to the availability of ground truths. As a proof-of-concept, we generate only three streak images for an RSO from a consistent orbit (recall assumption A1 in Sec. 1) in all the simulated experiments in this section. Figure 10: D-IOD's data pre-processing steps; see text for details. The y-axes represent the normalized intensity values. Plots 10a, 10b, and 10c show the intensity values extracted from the streak region of the image (including some nearby background pixels) at different stages of the pre-processing pipeline. The blue dashed line in 10b indicates the background noise level obtained with a median operation over the blurred observed image \(\mathbf{D}_{k}\). The log-scale plot (x-axis) in 10d visualizes the intensity-sorted (processed) streak image, \(\mathbf{D}_{k}^{\prime}\). The median intensity of the streak is highlighted with magenta dashed lines. The hyperparameter \(\eta\) is the percentage of the observed image that is assumed to be part of the streak. In practice, three streak images are used for the LOS-based IOD methods, with each streak image contributing one LOS vector. Additionally, we also demonstrate the robustness of D-IOD against real streak images (with no orbital information) with manual endpoint annotations in Sec. 5.4.4. D-IOD was implemented using Python version 3.7. All experiments were conducted on an Intel i5-8400 2.8 GHz CPU machine with 32GB of RAM and running Ubuntu 18.04 as the operating system. ### Simulated settings The simulated camera produces images of dimension \(4930\times 7382\) (equivalent to a camera with 36 Megapixels). It has an effective field-of-view of \(12.5^{\circ}\) by \(20^{\circ}\), at 10 arcseconds per pixel. Each image is captured from a randomly generated location on Earth's surface, and its pointing direction is randomly tilted (with positive elevation). Additionally, the exposure of each image is set to 5 seconds, and the images are cropped with extra border regions to simulate the real streak images as shown in Fig. 1. In our experiments, we used the proposed model (described in Sec. 3.3) to generate streak images from four different orbit types in order to test the generality of D-IOD. The periapsis and eccentricity of these simulated orbits were sampled uniformly and are listed in Table 1. Orbit type A represents nearly circular orbits in Low Earth Orbit (LEO), while B, C, and D simulate orbits with a range of eccentricities in Medium Earth Orbit (MEO). The rest of the Keplerian orbital elements are uniformly sampled from their full angular ranges, i.e., inclination \(i\sim U(0^{\circ},180^{\circ})\), longitude of the ascending node \(\Omega\sim U(0^{\circ},360^{\circ})\), argument of periapsis \(\omega\sim U(0^{\circ},360^{\circ})\), and true anomaly \(f\sim U(0^{\circ},360^{\circ})\). The differences in average image size (diagonal length, \(d\)) are tabulated in Table 1 as well. Images of test case A are larger due to the closer range of the simulated orbits. The upper limit of the periapsis is restricted to ensure that each orbit projection forms a proper streak instead of a light blob. In order to simulate the holes that we observe in real streak images, we added four uniformly distributed holes on the streak, as seen in Fig. 11.
The diameter of these holes is uniformly sampled between 5 and 20 pixels. ### Hyperparameters In this section, we present the hyperparameters of D-IOD. A summary of the hyperparameters can be found in Table 2. Firstly, both step size parameters, i.e., \(\alpha\) and \(h\), play crucial roles in the convergence of D-IOD. The initial step size (\(\alpha\)) determines the magnitude of each iterative update, and it is adjusted during the optimization process (see Alg. 1, line 29). Meanwhile, \(h\) is used to approximate the gradient vector in the finite difference method (see (10)), and it is fixed throughout the optimization process. Setting \(\alpha\) and \(h\) too large leads to convergence failure, while setting them too small causes slow convergence. In D-IOD, we found that the magnitude of \(\Delta t_{\text{max}}\) impacts the appropriate values for these hyperparameters. The notation \(\Delta t_{\text{max}}\) represents the maximum time interval between the timestamp of the initial state vector to be optimized (\(t_{\text{initial}}\)) and the timestamps (\(\{\tau^{m}\}_{m=1}^{M}\)) of the observed streak images. Here we provide three sets of \(\alpha\) and \(h\) that cover all the experiments performed in this section. In general, a larger \(\Delta t_{\text{max}}\) requires smaller values of \(\alpha\) and \(h\). The reason is that the propagated state vector is sensitive to both the initial state vector and the time interval. Increments in both factors lead to larger deviations in the propagated state vectors. As such, when the time interval increases, the perturbation (affected by \(\alpha\) and \(h\)) should be decreased to compensate for the sensitivity. Propagated state vectors with too large a deviation might fall out of the field-of-view of the observed streak images, where the gradient is not informative due to the non-overlapping streaks (recall the uninformative gradient problem in Sec. 4.1.1), which in turn leads to convergence failure. As such, it is of interest to decrease \(\Delta t_{\text{max}}\), which allows the usage of larger \(\alpha\) and \(h\). As described in Sec. 3, we achieve this by setting \(t_{\text{initial}}\) to the middle timestamp between the furthest timestamps in \(\{\tau^{m}\}_{m=1}^{M}\). \begin{table} \begin{tabular}{c c c c} \hline \hline Orbit types & \(r_{p}\) (km) & \(e\) & \(d\) (pixels) \\ \hline A & \(U(6880,8380)\) & \(U(0,0.01)\) & 538 \\ B & \(U(8380,9380)\) & \(U(0.01,0.2)\) & 333 \\ C & \(U(8380,9380)\) & \(U(0.2,0.4)\) & 343 \\ D & \(U(8380,9380)\) & \(U(0.4,0.6)\) & 353 \\ \hline \hline \end{tabular} \end{table} Table 1: Properties of the simulated orbits. The periapsis (\(r_{p}\)), eccentricity (\(e\)) of the simulated orbits, and diagonal length (\(d\)) of the streak images are presented here. \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{\(\Delta t_{\text{max}}\) (s)} \\ \cline{2-4} Hyperparameters & 30s & 60s & 120s \\ \hline \(h\) & \(2\times 10^{-3}\) & \(4\times 10^{-4}\) & \(2\times 10^{-4}\) \\ \(\alpha\) & 0.1 & 0.02 & 0.01 \\ \(c\) & & 0.5 & \\ \(k_{\text{max}}\) & & 101 & \\ \(k_{\text{min}}\) & & 3 & \\ \(\eta\) & & 0.1 & \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameters of D-IOD for different \(\Delta t_{\text{max}}\) (see text). Figure 11: **Top row**: Simulated streak images with different SNRs. **Bottom row**: Intensity values extracted from the streak region of the image. A maximum kernel size \(k_{\max}\) of 101 proves to be an appropriate starting size in general.
This kernel spans approximately 20% of the average image diagonal length for orbit type A, and approximately 33% for the smaller images in orbit types B, C, and D. We found that this coverage has a high chance of overlapping the streaks from the observed and generated images (from the initial estimates). For images larger than average (Table 1), the maximum kernel size is increased automatically before the optimization begins. The minimum kernel size \(k_{\min}\) is set to 3 instead of 1 to retain the blurring effect that is part of the noise reduction, as detailed in Sec. 4.2. The \(\eta\) hyperparameter is needed for the streak's scale determination (Sec. 4.2). We found that a median operation over the top-0.1% of the brightest pixels is optimal for the SIR computation (Sec. 4.1.3) and the scaling of our generated streak (Sec. 4.2). ### Metrics In our experiments, we report two main metrics: the endpoints' error and the orbital errors. The endpoints' error, denoted as \(\Delta\mathbf{u}\), is the average Euclidean distance between the predicted and simulated streak's endpoints. The orbital error, on the other hand, is the absolute deviation between predicted and simulated Keplerian orbital elements, denoted as \(\Delta r_{p}\), \(\Delta e\), \(\Delta i\), \(\Delta\Omega\), \(\Delta\omega\), and \(\Delta f\). The conversions between the initial state vector (the domain of D-IOD) and Keplerian orbital elements can be found in the textbook by Vallado (2001). ### Results #### 5.4.1 Initialization experiment D-IOD\({}_{\mathrm{refine}}\) assumes that an initial orbit estimate is given as input. As described in the introduction, one practical usage of D-IOD\({}_{\mathrm{refine}}\) is to improve the potentially sub-optimal orbital estimate from the two-stage IOD method. As such, it is important to evaluate the robustness of D-IOD against initializations of different qualities. _Setup._ As mentioned earlier, the orbital solution of the IOD solver in the two-stage method highly depends on the accuracy of the given set of LOS vectors, which are projected from the estimated endpoints. The higher the endpoints' error, the worse the orbital solution is. As such, we simulated the difficulty levels based on the accuracy of the estimated endpoints. We present the median of the endpoints' errors (\(\Delta\mathbf{u}\)) and the periapsis errors (\(\Delta r_{p}\)) of each level in Table 3. Specifically, the orbital estimate for level III is obtained by feeding the Gauss IOD solver with a set of LOS vectors that are back-projected from the streaks' endpoints with an error of approximately 70 pixels. The rest of the orbital-element errors follow the same pattern, which can be seen in Appendix 9.1 (Tables C1 to C5). The controlled variables in this experiment are the time interval and SNR (see below for their respective experiments). The time interval between the two furthest images (_i.e._, first and third) is fixed at 60s, and the SNR is set to 4. _Results._ The large differences in the median endpoints' error and the median periapsis error between the initial and converged solutions can be observed in Table 3. The improvement in \(\Delta\mathbf{u}\) is significant (orders of magnitude) and consistent across all levels and orbit types, particularly the challenging levels II, III, IV, and V. For level I, the converged solutions are slightly worse than the initial solutions. We attribute this to sub-optimal hyperparameter settings.
The step size \(\alpha\) that we used is ideal in the scenario where the initial solution is far from the optimal solution. It encourages the gradient descent algorithm to take larger steps towards the optimal solution. However, in the scenario where the initial solution is close to the optimal solution, the step size is too large, and the gradient descent algorithm "escapes" the region around the optimal solution. This is a classical _overshooting_ problem in optimization (Dixon, 1972). We used the same set of hyperparameters for all the levels here to show that D-IOD requires no intensive tuning. Overall, the endpoints' errors of D-IOD are consistently low, with a median range of 0.67 to 1.33 pixels in all experiments, demonstrating its robustness against initialization. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{M} & \multirow{2}{*}{S} & \multicolumn{5}{c}{Quality of initial estimates} \\ \cline{3-7} & & I & II & III & IV & V \\ \hline \hline \multicolumn{7}{c}{Orbit type A} \\ \hline \(\Delta\mathbf{u}\) & Init. & **0.72** & 34.48 & 69.12 & 172.69 & 172.69 \\ & Conv. & 0.73 & **0.81** & **0.78** & **0.77** & **0.77** \\ \cline{2-7} \(\Delta r_{p}\) & Init. & **5.07** & 129.74 & 248.77 & 248.77 & 350.25 \\ & Conv. & 7.98 & **9** & **9.44** & **9.44** & **9.15** \\ \hline \hline \multicolumn{7}{c}{Orbit type B} \\ \hline \(\Delta\mathbf{u}\) & Init. & 0.7 & 24.45 & 50.12 & 124.99 & 124.99 \\ & Conv. & **0.66** & **1.06** & **1.25** & **1.18** & **1.18** \\ \cline{2-7} \(\Delta r_{p}\) & Init. & **9.72** & 177.81 & 302.82 & 302.82 & 577.02 \\ & Conv. & 16.17 & **32.21** & **27.22** & **27.22** & **31.77** \\ \hline \multicolumn{7}{c}{Orbit type C} \\ \hline \(\Delta\mathbf{u}\) & Init. & 0.82 & 25.55 & 51.81 & 133.18 & 133.18 \\ & Conv. & **0.67** & **1.1** & **1.33** & **1.14** & **1.14** \\ \cline{2-7} \(\Delta r_{p}\) & Init. & **6.6** & 78.98 & 204.12 & 204.12 & 273.26 \\ & Conv. & 8.68 & **17.62** & **18.3** & **18.3** & **15.96** \\ \hline \hline \multicolumn{7}{c}{Orbit type D} \\ \hline \(\Delta\mathbf{u}\) & Init. & 0.9 & 26.97 & 53.68 & 134.08 & 134.08 \\ & Conv. & **0.65** & **0.96** & **1.14** & **0.92** & **0.92** \\ \cline{2-7} \(\Delta r_{p}\) & Init. & **4.25** & 75.94 & 142.55 & 142.55 & 218.5 \\ & Conv. & 6.16 & **14.63** & **10.01** & **10.01** & **9.18** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of D-IOD against different initialization and orbit types. Reported numbers are the median (Q2) of the endpoints' errors (\(\Delta\mathbf{u}\)) and the periapsis errors (\(\Delta r_{p}\)). Best numbers in **bold**. See Table C1 to Table C5 for the first (Q1) and third (Q3) quartiles' numbers. Legends: 'M' for metrics, 'S' for the stage of optimization, 'Init.' for initial estimates, and 'Conv.' for converged solutions. The unit for \(\Delta\mathbf{u}\) is pixels, and km for \(\Delta r_{p}\). The improvement in terms of periapsis error is also consistent, albeit with different error ranges across the orbit types. Such differences stem from the variations in the arc spanned by the orbit types: images that capture a larger arc tend to better constrain the solution space. The order of the orbit types based on average arc-span in this experiment is A(3.3\({}^{\circ}\)) \(>\) D(3.07\({}^{\circ}\)) \(>\) C(2.88\({}^{\circ}\)) \(>\) B(2.67\({}^{\circ}\)). We also provide a visual comparison of the initial and final orbital solutions in Fig. 15.
The examples were sampled from different periapsis error ranges: the left and middle columns were sampled from the low-to-median error ranges, while the right column shows failure examples. The qualitative improvements can be seen in these examples, where the converged (blue) orbital solutions fit the ground-truth (in black) much better than the initial estimates (in red). _Runtime._ The number of iterations and runtime of D-IOD are plotted in Fig. 12. The speed of convergence of D-IOD is affected by two factors: the image size and the quality of the initialization. The first factor can be observed in the figure, where the runtime for orbit type A (in blue) is consistently slower due to its larger image size (see Table 1). The second factor is also apparent in the increasing pattern across the levels. #### 5.4.2 Time interval experiments In real application settings, the time interval between the streak images of an RSO varies due to factors such as observing geometry and the speed of the RSO. We generated streak images that were captured over different time intervals and evaluated the performance of D-IOD (both _refine_ and _end-to-end_ modes) against them in this experiment. _Setup._ The experiment includes three time intervals: 60s, 120s, and 240s between the first image and the third image. Meanwhile, the corresponding time intervals between the first image and the second image are randomly sampled with means (and standard deviations) of 30s (10s), 60s (15s), and 120s (20s), respectively. The SNR of the images is fixed to 4 in this experiment. _Initialization._ For the _refine_ mode, we provided D-IOD with level III initialization (detailed in Sec. 5.4.1). D-IOD\({}_{\text{end-to-end}}\) initializes by running the two-stage method without the LOS extraction module: D-IOD backprojects a corner of each observed streak image to obtain the LOS vectors. Specifically, it uses the corner that is close to the beginning of the streak, which we assume is given as part of the metadata. We highlight that the two-stage method shares a similar assumption. _Results._ As shown in Fig. 13, the endpoints' errors of the converged solutions are consistently low. The medians of the endpoints' errors for the _refine_ mode and the _end-to-end_ mode range from 0.8 to 1.7 pixels and 0.8 to 1.4 pixels, respectively. The generally lower errors of the _end-to-end_ mode, as observed in the bottom row of Fig. 13, stem from its better initial estimate quality. Specifically, the endpoints' error of the initial estimate for the _refine_ mode is approximately 60 pixels, which is roughly two times larger than for the _end-to-end_ mode. This is because the corner of the cropped image is usually closer to the streak than the simulated level III, which is selected to display the robustness of the _refine_ mode in the remaining experiments. Despite having similar ranges of \(\Delta\mathbf{u}\) for all three testing time intervals, the periapsis error (and other orbital-element errors) is lower for test cases with a larger time interval. This aligns with the results in the initialization experiment, i.e., images that capture a larger orbital arc (due to a larger time interval in this case) better constrain the orbital solution space. Likewise, the full result table and figures can be found in Appendix 9.2 (Tables C6 to C8). #### 5.4.3 Signal-to-noise ratio experiment The SNR of a streak image varies depending on factors such as the imaging conditions and the RSO size.
As such, we simulated streak images at three different noise levels to evaluate the robustness of D-IOD against them. _Setup._ We added zero-mean Gaussian intensity noise to the testing images to simulate different SNRs. The chosen sigmas are 0.25, 0.33, and 0.5, corresponding to SNRs of 4, 3, and 2. Fig. 11 provides imagery examples. The fixed variable in this experiment is the time interval between the furthest images, which we set to 120s (detailed in Sec. 5.4.2). Figure 12: Comparison of the convergence speed of D-IOD in terms of the number of iterations and runtime for four distinct orbit types across five initialization levels. The X-axes represent the orbit types. The Y-axes of the top and bottom plots represent the iteration count and seconds, respectively. The box plots are color coded to represent the levels: blue for I, orange for II, green for III, red for IV, and purple for V. The Q2 (median) of the runtimes are labeled on the boxplots, and the Q1 (first quartile) and Q3 (third quartile) of the runtimes are represented by the bottom and top of each box plot. _Initialization._ The initialization scheme is the same as in the time interval experiment above. _Results._ As expected, the endpoints' errors, as seen in Fig. 14, are lower for higher SNR. As the SNR increases from 2 to 4, the average median \(\Delta\mathbf{u}\) for the _refine_ mode decreases from approximately 2.15 pixels to 1.35 pixels. Consistent with the observation in the time interval experiment above, the average median \(\Delta\mathbf{u}\) for the _end-to-end_ mode is generally lower, decreasing from approximately 1.4 pixels to 1 pixel. A similar declining periapsis error can be observed in Fig. 14 as well. See Appendix 9.3 (Table C9 to Table C11) for the full results. #### 5.4.4 Real data experiments We evaluated the robustness of D-IOD against real streak images in this experiment. Note that the real streak images we used are not associated with one another, so we performed only single-image fitting with D-IOD. Moreover, no ground-truth orbital information was provided with them. As such, we evaluated only the ability of D-IOD to fit the streak, via its endpoints' error. However, recall that the endpoints' error is strongly correlated with the orbital accuracy, as observed in our simulated experiments above. _Setup._ We manually annotated the endpoints of fifty real streak images for this experiment. _Initialization._ We ran D-IOD in the _end-to-end_ mode since initial estimates are not available. In contrast to the setting where we have multiple streak images, D-IOD backprojects three pixels, i.e., two from the image corners that are close to the endpoints of the streak and one from the middle, to obtain the LOS vectors for its initialization scheme. _Results._ The median endpoints' error of D-IOD is 1.59 pixels, which is similar to the range of our simulated experiments. We show several examples of the real streak image fitting process in Fig. 16. These examples highlight the robustness of D-IOD against artifacts caused by the star removal process and poor imaging conditions. We show some failure examples in Fig. 17. These examples showcase some challenging conditions. In the first example, D-IOD fails to fit the lower right corner endpoint, which is drowned by the brighter background region. Meanwhile, the second example shows a streak with uneven intensity, where the right end is visibly much brighter than the left end. Similarly, D-IOD fails to fit the fainter (left) end in this example.
We highlight that the challenge here is the uneven intensity and not low intensity, against which D-IOD has been shown to be robust in the last example of Fig. 16. Lastly, the background noise in the third example is as bright as the segmented streak, which also causes problems for D-IOD. ## 6 Limitations We discuss several limitations of D-IOD here. Firstly, our proposed model is simplistic by design, since this is a proof-of-concept paper to put forward a new IOD paradigm. In applications where the streak images are days or months apart, a more robust propagator such as SGP4 (Vallado, 2001, Chapter 9) is needed. Besides, the Gaussian PSF could also be replaced with a more sophisticated model such as the Airy disk (Airy, 1835). Secondly, the current implementation of D-IOD is slow: it takes approximately 250 seconds to converge. However, more than 80% of the runtime is occupied by the image formation process, which can be sped up with parallel computing. Specifically, given \(N\) timestamps, a simple parallelization strategy is to generate \(N\) copies of images before the long-exposure operation that sums all of them to form \(\mathbf{S}\). Figure 14: Comparison of endpoints' error \(\Delta\mathbf{u}\) and periapsis error \(\Delta r_{p}\) for four orbit types (A, B, C, and D) across three SNRs. The X-axes represent the orbit types. The Y-axes represent pixel distance and km for the \(\Delta\mathbf{u}\) and \(\Delta r_{p}\) plots, respectively. The box plots are color coded to represent the SNRs: blue for 2, orange for 3, and green for 4. **Top row**: D-IOD\({}_{\text{refine}}\) mode, **bottom row**: D-IOD\({}_{\text{end-to-end}}\) mode. Figure 13: Comparison of endpoints' error \(\Delta\mathbf{u}\) and periapsis error \(\Delta r_{p}\) for four orbit types (A, B, C, and D) across three time intervals. The X-axes represent the orbit types. The Y-axes represent pixel distance and km for the \(\Delta\mathbf{u}\) and \(\Delta r_{p}\) plots, respectively. The box plots are color coded to represent the time intervals: blue for 60s, orange for 120s, and green for 240s. **Top row**: D-IOD\({}_{\text{refine}}\) mode, **bottom row**: D-IOD\({}_{\text{end-to-end}}\) mode. Figure 15: Visualizations of the initial (red dots) and converged orbits (blue dots) of D-IOD. The simulated orbits (ground-truth) are represented by black 'x', and the magenta 'x' denotes the Earth's center. The three black arrows point to the three captured orbital positions. Each plot is labelled with the corresponding periapsis errors (km) of the initial and converged estimates. From **top** to **bottom** rows: orbits from A, B, C, and D. The orbits are transformed to perifocal coordinate (PQW) systems, and viewed from the +W axis for better visualization. Lastly, we tested D-IOD only on images with visible streaks. Under the 5-second time exposure setting, RSOs in most of the MEO and GEO regions (9400 km and above) would produce very short streaks or point sources. We suspect that convergence could be an issue, since these point sources share a similar pattern with background noise. ## 7 Conclusion In this paper, we presented D-IOD, a direct approach to solving the IOD problem. The proposed method is driven by the principle of _making full use of the available data_.
D-IOD iteratively refines the orbital estimate by minimizing our proposed objective function, i.e., the deviations between the generated and observed streak images. Apart from the optimization formulation, we introduced a series of optimization strategies that were inspired by the computer vision literature. D-IOD showcases its robustness against various testing scenarios in both simulated and real data experiments. The significant improvement of D-IOD given poor initial orbital estimates demonstrates its practicality in enhancing the existing two-stage IOD pipeline. Last but not least, we also discussed several future developments for the direct orbit fitting regime. ## 8 Acknowledgements Chee-Kheng Chng was funded by Lockheed Martin Australia. Tat-Jun Chin is SmartSat CRC Professorial Chair of Sentient Satellites. The imaging data in this paper were provided by a network of widefield optical staring sensors called FireOPAL. FireOPAL is a research project funded by a partnership between Lockheed Martin Australia and the Space Science Technology Centre at Curtin University. ## 9 Appendices ### Appendix A - Full results for the initialization experiments Tables C1 to C5 contain all three quartiles (i.e., Q1, Q2, and Q3) for all metrics (i.e., \(\Delta\mathbf{u}\), \(\Delta r_{p}\), \(\Delta e\), \(\Delta i\), \(\Delta\Omega\), \(\Delta\omega\), and \(\Delta f\)) in the initialization experiment (Sec. 5.4.1). For circular orbits (A), \(\Delta\omega\) and \(\Delta f\) are not reported because they are undefined. Figure 16: The iterative improvement of D-IOD on real streak images. The real streak images in each example are displayed in the left column. The annotated endpoints are highlighted by the red circles. The bottom row of each example shows the generated streak images, and the top row depicts the corresponding blurred version. The current iteration count \(i\) and the kernel size \(k\) are written on the blurred streak images. Figure 17: Failure examples of D-IOD on real streak images. ### Appendix B - Full results for the time interval experiments The same information (as above) for the time interval experiments (Sec. 5.4.2) is tabulated in Tables C6, C7, and C8. These tables supplement Fig. 13. ### Appendix C - Full results for the SNR experiments The same information (as above) for the SNR experiments (Sec. 5.4.3) is tabulated in Tables C9, C10, and C11. These tables supplement Fig. 14.
2306.14734
Parking functions, Fubini rankings, and Boolean intervals in the weak order of $\mathfrak{S}_n$
Let $\mathfrak{S}_n$ denote the symmetric group and let $W(\mathfrak{S}_n)$ denote the weak order of $\mathfrak{S}_n$. Through a surprising connection to a subset of parking functions, which we call unit Fubini rankings, we provide a complete characterization and enumeration for the total number of Boolean intervals in $W(\mathfrak{S}_n)$ and the total number of Boolean intervals of rank $k$ in $W(\mathfrak{S}_n)$. Furthermore, for any $\pi\in\mathfrak{S}_n$, we establish that the number of Boolean intervals in $W(\mathfrak{S}_n)$ with minimal element $\pi$ is a product of Fibonacci numbers. We conclude with some directions for further study.
Jennifer Elder, Pamela E. Harris, Jan Kretschmann, J. Carlos Martínez Mori
2023-06-26T14:37:10Z
http://arxiv.org/abs/2306.14734v2
# Boolean intervals in the weak order of \(\mathfrak{S}_{n}\) ###### Abstract. Let \(\mathfrak{S}_{n}\) denote the symmetric group and let \(W(\mathfrak{S}_{n})\) denote the weak order of \(\mathfrak{S}_{n}\). Through a surprising connection to a subset of parking functions, which we call _unit Fubini rankings_, we provide a complete characterization and enumeration for the total number of Boolean intervals in \(W(\mathfrak{S}_{n})\) and the total number of Boolean intervals of rank \(k\) in \(W(\mathfrak{S}_{n})\). Furthermore, for any \(\pi\in\mathfrak{S}_{n}\), we establish that the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) with minimal element \(\pi\) is a product of Fibonacci numbers. We conclude with some directions for further study. Key words and phrases: permutation, weak order lattice, Fubini ranking, parking function, Fibonacci number 2020 Mathematics Subject Classification: Primary: 05A05; Secondary 06A07, 05A05, 05A15, 05A19 ## 1. Introduction A poset is called _Boolean_ if it is isomorphic to the poset of subsets of a set \(I\) ordered by inclusion. The term _Boolean poset_ is inherited from _Boolean algebras_, given that one of the most familiar examples of a Boolean algebra is the power set \(2^{I}\). If \(|I|=k<\infty\), then a Boolean poset is a distributive lattice, making it a ranked poset. Hence, we let \(B_{k}\) denote a Boolean poset of rank \(k\). Boolean posets appear frequently in combinatorics, especially as intervals (subposets) within larger structures. In these cases, they are referred to as _Boolean intervals_. One notable example is that of Boolean intervals in the _weak right (Bruhat) order lattice_ on the symmetric group \(\mathfrak{S}_{n}\) [2, 10, 12, 11], where \(n\in\mathbb{N}\coloneqq\{1,2,3,\ldots\}\). The weak order lattice on \(\mathfrak{S}_{n}\), denoted \(W(\mathfrak{S}_{n})\), is constructed by the simple transpositions \(s_{i}=(i,i+1)\) for \(i\in[n-1]\), where \([n]\coloneqq\{1,2,\ldots,n\}\). That is, cover relations arise from the (right hand side) application of a single simple transposition. Therefore, simple transpositions are also referred to as generators. Figure 1 highlights a \(B_{3}\) interval in \(W(\mathfrak{S}_{6})\). Tenner established that Boolean posets appear as intervals \([v,w]\) in the weak order if and only if \(v^{-1}w\) is a permutation composed of only commuting generators [12, Corollary 4.4]. We recall that generators \(s_{i}\) and \(s_{j}\) commute whenever \(|i-j|>1\). We provide more background on the weak order lattice and Boolean intervals in Section 2. Tenner also established that Boolean intervals with a generator as minimal element are enumerated by products of at most two Fibonacci numbers [12, Proposition 5.9]. Our first result generalizes Tenner's result as follows. **Theorem 1.1**.: _Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{S}_{n}\) be in one-line notation and partition its ascent set \(\operatorname{\mathrm{Asc}}(\pi)=\{i\in[n-1]:\pi_{i}<\pi_{i+1}\}\) into maximal blocks \(b_{1},b_{2},\ldots,b_{k}\) of consecutive entries. Then, the number of Boolean intervals \([\pi,w]\) in \(W(\mathfrak{S}_{n})\) with fixed minimal element \(\pi\) and arbitrary maximal element \(w\) (including the case \(\pi=w\)) is given by_ \[\prod_{i=1}^{k}F_{|b_{i}|+2},\] _where \(F_{\ell}\) is the \(\ell\)th Fibonacci number, and \(F_{1}=F_{2}=1\)._ Surprisingly, our proof of Theorem 1.1 and subsequent results rely on a class of combinatorial objects we refer to as _unit Fubini rankings_.
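Theorem 1.1 is easy to evaluate mechanically. The sketch below (Python; the helper names are our own illustrative choices, not from the paper) partitions \(\operatorname{Asc}(\pi)\) into maximal blocks of consecutive entries and multiplies the corresponding Fibonacci numbers; both asserted values were checked by hand against the definition of Boolean intervals.

```python
def fib(n):
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def boolean_intervals_above(pi):
    """Number of Boolean intervals of W(S_n) with minimal element pi
    (the product formula of Theorem 1.1); pi is in one-line notation."""
    asc = [i for i in range(1, len(pi)) if pi[i - 1] < pi[i]]
    runs, cur = [], []
    for a in asc:                     # split Asc(pi) into maximal
        if cur and a == cur[-1] + 1:  # blocks of consecutive entries
            cur.append(a)
        else:
            if cur:
                runs.append(cur)
            cur = [a]
    if cur:
        runs.append(cur)
    prod = 1
    for r in runs:
        prod *= fib(len(r) + 2)       # F_{|b_i| + 2} per block
    return prod

assert boolean_intervals_above([1, 2, 3, 4]) == 5  # one block of size 3: F_5
assert boolean_intervals_above([4, 3, 2, 1]) == 1  # no ascents: only [pi, pi]
```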
A tuple \(\alpha\in[n]^{n}\) is a _Fubini ranking_ of length \(n\) if it records a valid ranking over \(n\) competitors with ties allowed. For example, \((4,1,1,1)\) is a Fubini ranking since competitors \(2\), \(3\), and \(4\) are tied and rank first, and competitor \(1\) ranks fourth. However, \((1,1,2,3)\) is not a Fubini ranking since two competitors are tied and rank first, implying no competitor can rank second (the next available rank is third). Unit Fubini rankings are the subset of Fubini rankings in which ranks are shared by at most two competitors. For example, \((4,2,2,1)\) is a unit Fubini ranking, whereas \((4,1,1,1)\) is not a unit Fubini ranking. Our main result is as follows. **Theorem 1.2**.: _The set of unit Fubini rankings with \(n-k\) distinct ranks is in bijection with the set of Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\)._ We establish a count for the number of unit Fubini rankings with \(n-k\) distinct ranks. In turn, by Theorem 1.2, this provides a count for the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\). **Theorem 1.3**.: _Let \(f(n,k)\) denote the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\). Then,_ \[f(n,k)=\frac{n!}{2^{k}}\binom{n-k}{k}. \tag{1}\] Note that Equation (1) recovers the following known results:

* \(f(n,0)=n!\) is the number of permutations (\(B_{0}\)) in \(\mathfrak{S}_{n}\) (OEIS A000142),
* \(f(n,1)=\frac{n!(n-1)}{2}\) is the number of edges (\(B_{1}\)) in \(W(\mathfrak{S}_{n})\) (OEIS A001286), and
* \(f(n,2)=\frac{n!(n-2)(n-3)}{8}\) is the number of 4-cycles (\(B_{2}\)) in \(W(\mathfrak{S}_{n})\) (OEIS A317487).

To the best of our knowledge, we are the first to establish a general formula for \(f(n,k)\). Setting \(q=1\) in the exponential generating function [9, Exercise 3.185(h)] \[F(x,q)=\sum_{n\geq 0}\sum_{k\geq 0}f(n,k)q^{k}\frac{x^{n}}{n!}=\frac{1}{1-x-\frac{q}{2}x^{2}}, \tag{2}\] Stanley [8] points out that the _total_ number of Boolean intervals in \(W(\mathfrak{S}_{n})\) (OEIS A080599) satisfies the recurrence relation \[f(n+1)=(n+1)f(n)+\binom{n+1}{2}f(n-1), \tag{3}\] where \(f(0)=1\) and \(f(1)=1\). We give a proof of this result from the perspective of unit Fubini rankings (Theorem 5.1). The remainder of this paper is organized as follows. In Section 2 we provide necessary background on Boolean intervals, unit interval parking functions, and Fubini rankings. In Section 3 we present preliminary results on unit Fubini rankings, including an inequality characterization and operations that preserve unit Fubini rankings. In Section 4 we prove Theorem 1.2, establishing a bijection between unit Fubini rankings with \(n-k\) distinct ranks and Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\). In Section 5, we prove Theorem 1.3, giving a closed formula for the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\). We conclude with Section 6, providing directions for future study. ## 2. Background We begin this section with necessary background on Boolean intervals in \(W(\mathfrak{S}_{n})\). We then provide some history and results related to Fubini rankings and their interpretation as parking functions, which we use in the proofs of our main results. ### Boolean intervals in the weak order lattice Boolean posets are constructed from the subsets of a set \(I\) ordered by inclusion. Figure 2 illustrates some small examples. The following definition plays a key role in our proof of Theorem 1.1.
**Definition 2.1**.: For a permutation \(\sigma=\sigma_{1}\sigma_{2}\cdots\sigma_{n}\in\mathfrak{S}_{n}\), the _ascent set of \(\sigma\)_ is given by \[\operatorname{Asc}(\sigma)=\{j\in[n-1]\;:\;\sigma_{j}<\sigma_{j+1}\}.\] Let \(\operatorname{asc}(\sigma)=|\operatorname{Asc}(\sigma)|\) denote the number of ascents of \(\sigma\). Similarly, the _descent set of \(\sigma\)_ is given by \[\operatorname{Des}(\sigma)=\{j\in[n-1]\;:\;\sigma_{j}>\sigma_{j+1}\}.\] Let \(\operatorname{des}(\sigma)=|\operatorname{Des}(\sigma)|\) denote the number of descents of \(\sigma\). The _weak right (Bruhat) order_, denoted \(W(\mathfrak{S}_{n})\), is a partial order on \(\mathfrak{S}_{n}\). Its cover relations are defined by the application of a single simple (adjacent) transposition on the right hand side. That is, \(\tau\lessdot\sigma\) if and only if \(\tau s_{i}=\sigma\) for some \(i\in\operatorname{Des}(\sigma)\). In general, if \(\tau\leq\sigma\), then there exists a collection \(s_{i_{1}},\ldots,s_{i_{k}}\) of simple transpositions such that \(\tau s_{i_{1}}\ldots s_{i_{k}}=\sigma\). Note that \(W(\mathfrak{S}_{n})\) is a bounded lattice for all \(n\geq 2\)[9]. In one-line notation, its minimal element is \(12\cdots n\) while its maximal element is \(n(n-1)\cdots 21\). Figure 3 illustrates \(W(\mathfrak{S}_{4})\) with its elements written in one-line notation. **Remark 2.1**.: _In a similar way, we can define the weak left (Bruhat) order, where \(\tau\leq\sigma\) if and only if there exists a collection \(s_{k_{1}}\ldots,s_{k_{m}}\) of simple transpositions such that \(\sigma=s_{k_{1}}\ldots s_{k_{m}}\tau\). The two weak orders are distinct, but isomorphic under the map \(\sigma\mapsto\sigma^{-1}\)._ A subset \([\sigma,\tau]\subseteq W(\mathfrak{S}_{n})\) is an interval if \(\sigma\leq\tau\) and \(\pi\in[\sigma,\tau]\) whenever \(\sigma\leq\pi\leq\tau\). As noted in the introduction, Tenner established that Boolean intervals in \(W(\mathfrak{S}_{n})\) have the structure \([v,w]\) if and only if \(v^{-1}w\) is a permutation composed of only commuting generators [12, Corollary 4.4]. **Example 2.1**.: In Figure 3, if \(\pi\in\mathfrak{S}_{4}\), then the interval \([\pi,\pi]\) is a Boolean interval of rank zero. In addition, all intervals \([\pi,\pi s_{i}]\) where \(i\in\operatorname{Asc}(\pi)\) are Boolean intervals of rank one. Finally, if \(\operatorname{Asc}(\pi)=\{1,3\}\), then the interval \([\pi,\pi s_{1}s_{3}]\) is a Boolean interval of rank two. For example, the interval \([2314,3241]\), which is highlighted in Figure 3, is one of the six Boolean intervals of rank two in \(W(\mathfrak{S}_{4})\). ### Parking Functions, Unit Interval Parking Functions, and Fubini Rankings A tuple \(\alpha=(a_{1},a_{2},\ldots,a_{n})\in[n]^{n}\) is a _parking function_ of length \(n\) if its weakly increasing rearrangement \(\alpha^{\prime}=(a^{\prime}_{1},a^{\prime}_{2},\ldots,a^{\prime}_{n})\) satisfies \(a^{\prime}_{i}\leq i\) for all \(i\in[n]\). For example \(\alpha=(1,6,4,4,3,3,2)\) is a parking function of length seven as its weakly increasing rearrangement \(\alpha^{\prime}=(1,2,3,3,4,4,6)\) satisfies the inequality conditions. However, \(\alpha=(1,5,4,6,6,3,7)\) is not a parking function, as its weakly increasing rearrangement \(\alpha^{\prime}=(1,3,4,5,6,6,7)\) does not satisfy the inequality condition for \(i=2\) because \(3\nleq 2\). Let \(\operatorname{PF}_{n}\) denote the set of parking functions of length \(n\). 
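The defining condition is easy to test mechanically; the following snippet (our own illustrative Python, not from the paper) checks the weakly increasing rearrangement condition and reproduces the two examples above.

```python
def is_parking_function(alpha):
    """alpha is a parking function iff its weakly increasing
    rearrangement satisfies a'_i <= i for all i."""
    return all(a <= i for i, a in enumerate(sorted(alpha), start=1))

assert is_parking_function((1, 6, 4, 4, 3, 3, 2))      # first example
assert not is_parking_function((1, 5, 4, 6, 6, 3, 7))  # fails at i = 2
```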
Parking functions were introduced by Konheim and Weiss [7], who established that \(|\operatorname{PF}_{n}|=(n+1)^{n-1}\) for all \(n\geq 1\). One can interpret parking functions by treating \(\alpha=(a_{1},a_{2},\ldots,a_{n})\in[n]^{n}\) as a tuple encoding the parking preferences of \(n\) cars that attempt to park, one at a time, on a one-way street with \(n\) parking spots. When car \(i\in[n]\) arrives, it attempts to park in its preferred spot \(a_{i}\). If spot \(a_{i}\) is unoccupied, car \(i\) parks there. Otherwise, car \(i\) continues driving down the one-way street until it finds the first unoccupied spot in which to park, if there is one. If no such spot exists, then car \(i\) is unable to park. If \(\alpha\) leads to all cars being able to park, then it is a parking function. Figure 4 illustrates the order in which cars park on the street when \(\alpha=(1,6,4,4,3,3,2)\). We refer to the resulting parking order as the _outcome_ of \(\alpha\). Figure 4. The parking outcome of the preference tuple \(\alpha=(1,6,4,4,3,3,2)\). Figure 3. Illustration of \(W(\mathfrak{S}_{4})\) with a highlighted Boolean interval \(B_{2}\). Hadaway and Harris introduced unit interval parking functions, which are a subset of parking functions in which cars park exactly at their preferred spot or one spot away [6]. For example, \((1,2,3,4,5)\), \((1,1,3,4,5)\), \((1,1,2,4,5)\) are unit interval parking functions (of length \(5\)), whereas \((1,1,1,1,1)\) is a parking function but not a unit interval parking function. Let \(\mathrm{UPF}_{n}\) denote the set of unit interval parking functions of length \(n\). Hadaway and Harris established that the number of unit interval parking functions of length \(n\) is given by the Fubini numbers, also known as the ordered Bell numbers (OEIS A000670). That is, \[|\mathrm{UPF}_{n}|=\mathrm{Fub}_{n}=\sum_{k=1}^{n}k!\,S(n,k), \tag{4}\] where \(S(n,k)\) are Stirling numbers of the second kind (OEIS A008277), which count the number of set partitions of \([n]\) with \(k\) non-empty parts. To establish their result, Hadaway and Harris proved that the set of unit interval parking functions is in bijection with the set of _Fubini rankings_. A Fubini ranking of length \(n\) is a tuple \(r=(r_{1},r_{2},\ldots,r_{n})\in[n]^{n}\) that records a valid ranking over \(n\) competitors with ties allowed (i.e., multiple competitors can be tied and have the same rank). However, if \(k\) competitors are tied and rank \(i\)th, the \(k-1\) subsequent ranks \(i+1,i+2,\ldots,i+k-1\) are disallowed. For example, if two competitors are tied and rank first, the second rank is disallowed and the next available rank is the third1. Similarly, \((1,1,3,3,5)\), \((1,2,3,4,5)\), \((1,1,1,1,1)\), \((3,1,5,1,3)\) are all Fubini rankings (of length \(5\)) while \((3,1,5,1,2)\) is not, as competitors \(2\) and \(4\) are tied and rank first, implying no competitor can rank second. Let \(\mathrm{FR}_{n}\) denote the set of Fubini rankings of length \(n\). Cayley [4] showed that \(|\mathrm{FR}_{n}|=\mathrm{Fub}_{n}\), as in (4). Footnote 1: One noteworthy instance of this took place at the men's high jump event at the Summer 2020 Olympics [5]. In this competition, Mutaz Essa Barshim of Qatar and Gianmarco Tamberi of Italy led the final round. Both athletes cleared 2.37 meters but neither of them cleared 2.39 meters. Upon being presented the option of a "jump-off" to determine the sole winner, they agreed to instead share the gold medal.
The next best rank was held by Maksim Nedasekau of Belarus, who obtained the bronze medal. Note that by the definition of Fubini ranking, any rearrangement of a Fubini ranking is itself a Fubini ranking: as long as the distribution of ranks does not change, which competitor holds which rank is immaterial. In other words, Fubini rankings are invariant under permutations. As we reference this fact in a later section, we state it formally below. **Lemma 2.1**.: _Fubini rankings are invariant under permutations._ In the remainder of this paper, we consider the intersection of Fubini rankings and unit interval parking functions, which we describe in the next section. ## 3. Unit Fubini Rankings Despite the fact that the sets \(\mathrm{FR}_{n}\) and \(\mathrm{UPF}_{n}\) are in bijection, their intersection \(\mathrm{FR}_{n}\cap\mathrm{UPF}_{n}\) is non-trivial for all \(n>1\). For example, \((1,1,2)\) is a unit interval parking function but not a Fubini ranking, \((1,1,1)\) is a Fubini ranking but not a unit interval parking function, while \((1,1,3)\) is both a Fubini ranking and a unit interval parking function. Henceforth, we refer to the elements in \(\mathrm{FR}_{n}\cap\mathrm{UPF}_{n}\) as _unit Fubini rankings_, and we denote this set by \(\mathrm{UFR}_{n}\). Note that elements in \(\mathrm{UFR}_{n}\) are Fubini rankings with the additional constraint that ranks are shared by at most two competitors. Table 1 gives the cardinality of \(\mathrm{UFR}_{n}\) for small values of \(n\), agreeing with OEIS A080599, which Stanley identifies as the number of Boolean intervals in \(W(\mathfrak{S}_{n})\). His remark motivates this work. The following definition and result are due to Bradt, Elder, Harris, Rojas Kirby, Reutercrona, Wang, and Whidden [3], who gave a complete characterization of unit interval parking functions. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \(n\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \(|\mathrm{UFR}_{n}|\) & 1 & 3 & 12 & 66 & 450 & 3690 & 35280 \\ \end{tabular} \end{table} Table 1. The number of unit Fubini rankings with \(1\leq n\leq 7\) competitors. **Definition 3.1** ([3]).: Let \(\alpha=(a_{1},a_{2},\ldots,a_{n})\in\text{UPF}_{n}\) and \(\alpha^{\prime}=(\alpha^{\prime}_{1},\alpha^{\prime}_{2},\ldots,\alpha^{\prime}_{n})\) be its weakly increasing rearrangement. Let \(i_{1},i_{2},\ldots,i_{m}\in[n]\) be the increasing sequence of indices satisfying \(\alpha^{\prime}_{i_{j}}=i_{j}\). The partition of \(\alpha^{\prime}\) as the concatenation \(b_{1}|b_{2}|\ldots|b_{m}\) where \(b_{j}=(\alpha^{\prime}_{i_{j}},\alpha^{\prime}_{i_{j}+1},\ldots,\alpha^{\prime}_{i_{j+1}-1})\) is called the _block structure_ of \(\alpha\). Each part \(b_{j}\) for \(j\in[m]\) is called a _block_ of \(\alpha\). Next we state the characterization of unit interval parking functions by Bradt et al. [3, Theorem 2.9]. **Theorem 3.1**.: _Given \(\alpha=(a_{1},\ldots,a_{n})\in\text{UPF}_{n}\), let \(\alpha^{\prime}\) be its weakly increasing rearrangement and \(\alpha^{\prime}=\pi_{1}\,|\,\pi_{2}\,|\,\ldots\,|\,\pi_{m}\) be the block structure of \(\alpha\) (as in Definition 3.1)._ 1. _There are_ \[\begin{pmatrix}n\\ |\pi_{1}|,\ldots,|\pi_{m}|\end{pmatrix}\] (5) _possible rearrangements_ \(\sigma\) _of_ \(\alpha\) _such that_ \(\sigma\) _is still a unit interval parking function._ 2.
_A rearrangement_ \(\sigma\) _of_ \(\alpha\) _is in_ \(\text{UPF}_{n}\) _if and only if the entries in_ \(\sigma\) _respect the relative order of the entries in each of the blocks_ \(\pi_{1},\pi_{2},\ldots,\pi_{m}\)_._ For our purposes, we only need the following result, which follows from Theorem 3.1. **Corollary 3.1**.: _Let \(\alpha\in\text{UPF}_{n}\) and \(b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{m}\) be its block structure. For each \(j\in[m]\), let \(i_{j}\) be the minimal element of \(b_{j}\). Consider any \(j\in[m-1]\). If \(|b_{j}|=1\), then \(b_{j}=(i_{j})\) and \(i_{j+1}=i_{j}+1\). Otherwise, if \(|b_{j}|=2\), then \(b_{j}=(i_{j},i_{j})\) and \(i_{j+1}=i_{j}+2\). Otherwise, \(|b_{j}|\geq 3\), \(b_{j}=(i_{j},i_{j},\underbrace{i_{j}+1,i_{j}+2,\ldots,i_{j}+|b_{j}|-2}_{|b_{j}|-2})\) and \(i_{j+1}=i_{j}+|b_{j}|\)._ We now give a characterization of unit Fubini rankings based on their block structure. We employ this technical result in our proof of Theorem 1.2. **Theorem 3.2**.: _Let \(\alpha\in\text{UPF}_{n}\) and \(b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{m}\) be its block structure. Then, \(\alpha\in\text{UFR}_{n}\) if and only if \(|b_{j}|\leq 2\) for each \(j\in[m]\)._ Proof.: First, suppose \(|b_{j}|\leq 2\) for each \(j\in[m]\). We need to show that \(\alpha\in\text{UFR}_{n}\). To do this, it suffices to show that for each pair \(b_{j},b_{j+1}\) of consecutive blocks with \(j\in[m-1]\), there being competitors whose ranks correspond to the block \(b_{j}\) does not disallow there being a competitor whose rank is the minimal element of block \(b_{j+1}\). Consider any such pair \(b_{j},b_{j+1}\) of consecutive blocks and let \(i_{j}\) and \(i_{j+1}\) be the minimal elements of blocks \(b_{j}\) and \(b_{j+1}\), respectively. If \(|b_{j}|=1\), then by Corollary 3.1 we know that \(b_{j}=(i_{j})\) and \(i_{j+1}=i_{j}+1\), so there being a competitor whose rank is \(i_{j}\) does not disallow there being a competitor whose rank is \(i_{j+1}=i_{j}+1\). If \(|b_{j}|=2\), then by Corollary 3.1 we know that \(b_{j}=(i_{j},i_{j})\) and \(i_{j+1}=i_{j}+2\), so there being two competitors whose ranks are both \(i_{j}\) does not disallow there being a competitor whose rank is \(i_{j+1}=i_{j}+2\). Now, suppose \(|b_{j}|=k>2\) for some \(j\in[m]\). We need to show that \(\alpha\notin\text{UFR}_{n}\). Let \(i_{j}\) be the minimal element of block \(b_{j}\) so that, by Corollary 3.1, \(b_{j}=(i_{j},i_{j},i_{j}+1,\ldots,i_{j}+k-2)\). Note that, in \(b_{j}\), \(i_{j}\) appears twice while \(i_{j}+1\) appears once. Therefore, similarly in \(\alpha\), \(i_{j}\) appears twice while \(i_{j}+1\) appears once. This implies that \(\alpha\notin\text{UFR}_{n}\), since there being two competitors whose ranks are both \(i_{j}\)th disallows the subsequent rank \(i_{j}+1\), which some competitor supposedly holds. As a corollary, we give an inequality description of unit Fubini rankings. **Corollary 3.2**.: _Let \(\alpha=(a_{1},a_{2},\ldots,a_{n})\in[n]^{n}\) and \(\alpha^{\prime}=(a^{\prime}_{1},a^{\prime}_{2},\ldots,a^{\prime}_{n})\) be its weakly increasing rearrangement. Then, \(\alpha\in\text{UFR}_{n}\) if and only if \(c_{i}\leq a^{\prime}_{i}\leq i\) for each \(i\in[n]\), where_ \[c_{i}=\begin{cases}1,&\text{if $i=1$}\\ i,&\text{if $a^{\prime}_{i-1}=i-2$ and $2\leq i\leq n$}\\ i-1,&\text{otherwise}.\end{cases}\] Proof.: First, let \(\alpha\in\mathrm{UFR}_{n}\). Then, by Theorem 3.2, the block structure \(b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{m}\) of \(\alpha\) satisfies \(|b_{j}|\leq 2\) for each \(j\in[m]\).
This implies that \(c_{i}\leq a_{i}^{\prime}\leq i\) for each \(i\in[n]\). Now, let \(\alpha\in[n]^{n}\) such that \(c_{i}\leq a_{i}^{\prime}\leq i\) for all \(i\in[n]\). This implies that each number \(i\in[n]\) occurs at most twice in \(\alpha\). Moreover, if \(i\in[n]\) occurs twice, then the next larger value present in \(\alpha\), if any, is \(i+2\). This implies that the block structure \(b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{m}\) of \(\alpha\) satisfies \(|b_{j}|\leq 2\) for each \(j\in[m]\). By Theorem 3.2, this implies \(\alpha\in\mathrm{UFR}_{n}\). We now take a quick aside to provide a connection between unit Fubini rankings and the Fibonacci numbers (OEIS A000045), defined by \(F_{n+1}=F_{n}+F_{n-1}\) for \(n\geq 2\) and \(F_{1}=F_{2}=1\). **Theorem 3.3**.: _Let \(\mathrm{UFR}_{n}^{\uparrow}\) be the set of weakly increasing unit Fubini rankings of length \(n\). Then, for \(n\geq 1\) we have_ \[|\mathrm{UFR}_{n}^{\uparrow}|=F_{n+1},\] _where \(F_{n+1}\) is the \((n+1)\)th Fibonacci number._ Proof.: We show that \(|\mathrm{UFR}_{n}^{\uparrow}|\) satisfies the same recurrence relation as the Fibonacci numbers. That is, we show \(|\mathrm{UFR}_{n}^{\uparrow}|=|\mathrm{UFR}_{n-1}^{\uparrow}|+|\mathrm{UFR}_{n-2}^{\uparrow}|\), \(|\mathrm{UFR}_{2}^{\uparrow}|=2\), and \(|\mathrm{UFR}_{1}^{\uparrow}|=1\). By Theorem 3.2, the block structure of any unit Fubini ranking has blocks of size at most two. Moreover, for any \(n\in\mathbb{N}\), each \(\alpha\in\mathrm{UFR}_{n}\) satisfies \(|\{i\in[n]:a_{i}=n\}|\leq 1\). That is, no two competitors can tie and rank \(n\)th over \(n\) competitors. Therefore, to compute \(|\mathrm{UFR}_{n}^{\uparrow}|\), we need only consider appending a block of size two, in which two participants tie at rank \(n-1\), to any \(\beta\in\mathrm{UFR}_{n-2}^{\uparrow}\), or appending a block of size one with rank \(n\) to any \(\gamma\in\mathrm{UFR}_{n-1}^{\uparrow}\). These cases are disjoint and exhaustive, and therefore give the required recursion relation. To conclude, we note that \(|\mathrm{UFR}_{1}^{\uparrow}|=|\{(1)\}|=1\) and \(|\mathrm{UFR}_{2}^{\uparrow}|=|\{(1,1),(1,2)\}|=2\). Lastly, we describe a set of functions on unit Fubini rankings used in later sections to establish Theorem 1.2. **Definition 3.2**.: For each \(i\in[n-1]\) define \(\delta_{i}:\mathrm{UFR}_{n}\to\mathrm{UFR}_{n}\) by \[\delta_{i}(\alpha)=\begin{cases}\alpha,&\text{if }|\{j:a_{j}=i-1\}|=2\text{ or }|\{j:a_{j}=i\}|=2\text{ or }|\{j:a_{j}=i+1\}|=2\\ \widehat{\alpha}(i),&\text{otherwise};\end{cases} \tag{6}\] where \(\widehat{\alpha}(i)\) is obtained from \(\alpha\) by decreasing the singular occurrence of \(i+1\) to \(i\). For example, if \(\alpha=(1,3,5,3,6,1,7)\), then \(\delta_{i}(\alpha)=\alpha\) for \(1\leq i\leq 4\), while

* \(\delta_{5}(\alpha)=\widehat{\alpha}(5)=(1,3,5,3,5,1,7)\), because \(4\), \(5\), and \(6\) each occur at most once in \(\alpha\), and
* \(\delta_{6}(\alpha)=\widehat{\alpha}(6)=(1,3,5,3,6,1,6)\), because \(5\), \(6\), and \(7\) each occur only once in \(\alpha\).

One can readily confirm that all of the tuples above are in \(\mathrm{UFR}_{7}\). This motivates the next result. **Lemma 3.1**.: _The functions \(\delta_{i}\) for \(i\in[n-1]\) are well-defined._ Proof.: Let \(\alpha\in\mathrm{UFR}_{n}\) and let \(b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{m}\) be its block structure. Consider any fixed but arbitrary \(i\in[n-1]\). We need to show that \(\delta_{i}(\alpha)\in\mathrm{UFR}_{n}\). There are two possibilities. **Case 1**: Suppose \(\delta_{i}(\alpha)=\alpha\).
The claim holds since \(\alpha\in\mathrm{UFR}_{n}\), by assumption. **Case 2**: Suppose \(\delta_{i}(\alpha)=\widehat{\alpha}(i)\). By definition of \(\delta_{i}\), this means that each of \(i-1\), \(i\), and \(i+1\), whenever it appears in \(\alpha\), in fact appears exactly once. In addition, by Corollary 3.2, if \(i+2\leq n\), then \(i+2\) appears at least once in \(\alpha\). Note the only change that \(\delta_{i}\) makes to obtain \(\widehat{\alpha}(i)\) from \(\alpha\) occurs at the value \(i+1\), which is decreased to \(i\); all other entries of \(\alpha\) remain unchanged. Therefore, the only change that \(\delta_{i}\) makes to the block structure \(b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{m}\) is that the singleton block containing \((i)\) and the (adjacent) singleton block containing \((i+1)\) are turned into a single block of size \(2\) containing \((i,i)\). Then, Corollary 3.1 guarantees that \(\widehat{\alpha}(i)\in\text{UPF}_{n}\) while, in turn, Theorem 3.2 guarantees that \(\widehat{\alpha}(i)\in\text{UFR}_{n}\), as claimed. Next we show that the functions of Definition 3.2 commute whenever their domain is restricted to the set of permutations and they are applied on nonconsecutive indices. **Theorem 3.4**.: _Let \(i,j\in[n-1]\) be nonconsecutive. If \(\pi\in\mathfrak{S}_{n}\), then \(\delta_{i}(\delta_{j}(\pi))=\delta_{j}(\delta_{i}(\pi))\)._ Proof.: Fix any pair of nonconsecutive integers \(i,j\in[n-1]\). Without loss of generality, let \(i<j\). By Lemma 2.1, it suffices to consider only the identity permutation \(\pi=12\cdots n\). Note that the block structure of \(\pi\) is \(b_{1}\,|\,b_{2}\,|\,\ldots\,|\,b_{n}\) with singleton blocks \(b_{i}=(i)\) for each \(i\in[n]\). Note that \(\delta_{i}(\pi)\) has the block structure \(1\,|\,2\,|\,\cdots\,|\,i-1\,|\,i\,i\,|\,i+2\,|\,\cdots\,|\,n-1\,|\,n\). Then, since \(i<j\), \(\delta_{j}(\delta_{i}(\pi))\) has the block structure \[1\,|\,2\,|\,\cdots\,|\,i-1\,|\,i\,i\,|\,i+2\,|\,\cdots\,|\,j-1\,|\,j\,j\,|\,j+2\,|\,\cdots\,|\,n-1\,|\,n.\] Note if \(i+2=j\), then the block structure would be \[1\,|\,2\,|\,\cdots\,|\,i-1\,|\,i\,i\,|\,j\,j\,|\,j+2\,|\,\cdots\,|\,n-1\,|\,n.\] On the other hand, \(\delta_{j}(\pi)\) has the block structure \[1\,|\,2\,|\,\cdots\,|\,j-1\,|\,j\,j\,|\,j+2\,|\,\cdots\,|\,n-1\,|\,n.\] Then, since \(i<j\), \(\delta_{i}(\delta_{j}(\pi))\) has the block structure \[1\,|\,2\,|\,\cdots\,|\,i-1\,|\,i\,i\,|\,i+2\,|\,\cdots\,|\,j-1\,|\,j\,j\,|\,j+2\,|\,\cdots\,|\,n-1\,|\,n.\] Again, if \(i+2=j\), then the block structure would be \[1\,|\,2\,|\,\cdots\,|\,i-1\,|\,i\,i\,|\,j\,j\,|\,j+2\,|\,\cdots\,|\,n-1\,|\,n.\] Therefore, for \(\pi=12\cdots n\), we have \(\delta_{i}(\delta_{j}(\pi))=\delta_{j}(\delta_{i}(\pi))\). Finally, note that for any \(\pi\neq 12\cdots n\), the blocks \((i,i)\) and \((j,j)\) will be in the positions where the consecutive blocks \(\cdots\,|\,i\,|\,i+1\,|\,\cdots\,\) and \(\cdots\,|\,j\,|\,j+1\,|\,\cdots\,\) originally appeared, respectively. **Remark 3.1**.: _In Theorem 3.4, it is important that \(i\) and \(j\) are nonconsecutive. To see this, let \(\pi\in\mathfrak{S}_{n}\) and \(j=i+1\). Then, the block structure of \(\pi\) changes in the following way upon application of \(\delta_{i+1}\) followed by \(\delta_{i}\) (note that \(\delta_{i}\) acts trivially here, since \(i+1\) already occurs twice):_ \[\delta_{i}(\delta_{i+1}(\pi))=\delta_{i}(\cdots\,|\,i\,|\,j\,j\,|\,i+3\,|\,\cdots)=\cdots\,|\,i\,|\,j\,j\,|\,i+3\,|\,\cdots.
\tag{7}\] _On the other hand, the block structure of \(\pi\) changes in the following way upon application of \(\delta_{i}\) followed by \(\delta_{i+1}\):_ \[\delta_{i+1}(\delta_{i}(\pi))=\delta_{i+1}(\cdots\,|\,i-1\,|\,i\,i\,|\,i+2\,|\,\cdots)=\cdots\,|\,i-1\,|\,i\,i\,|\,i+2\,|\,\cdots. \tag{8}\] _Equations (7) and (8) show that \(\delta_{i+1}(\delta_{i}(\pi))\neq\delta_{i}(\delta_{i+1}(\pi))\)._ We now generalize the composition of the functions of Definition 3.2 to subsets consisting of nonconsecutive integers. **Definition 3.3**.: Let \(I=\{i_{1},i_{2},\ldots,i_{k}\}\subset[n-1]\) be a set of pairwise nonconsecutive integers satisfying \(i_{1}<i_{2}<\cdots<i_{k}\). If \(\pi\in\mathfrak{S}_{n}\), then we define the composition \[\delta_{I}(\pi)\coloneqq\delta_{i_{1}}\circ\delta_{i_{2}}\circ\cdots\circ\delta_{i_{k}}(\pi). \tag{9}\] If \(I=\emptyset\), then \(\delta_{I}=\text{Id}\) is the identity map on \(\mathfrak{S}_{n}\). Next we show that the composition defined in Equation (9) can be done in any order. **Corollary 3.3**.: _Let \(I=\{i_{1},i_{2},\ldots,i_{k}\}\subseteq[n-1]\) be a set of nonconsecutive integers. If \(\pi\in\mathfrak{S}_{n}\), then the composition \(\delta_{I}(\pi)\in\text{UFR}_{n}\)._ Proof.: Since every permutation lies in \(\mathrm{UFR}_{n}\) and, by Lemma 3.1, each \(\delta_{i}\) maps \(\mathrm{UFR}_{n}\) to itself, we have \(\delta_{I}(\pi)\in\mathrm{UFR}_{n}\). Moreover, upon repeated application, Theorem 3.4 implies that if \(I=\{i_{1},i_{2},\ldots,i_{k}\}\subset[n-1]\) consists of pairwise nonconsecutive integers and \(\pi\in\mathfrak{S}_{n}\), then the composition \[\delta_{i_{1}}\circ\delta_{i_{2}}\circ\cdots\circ\delta_{i_{k}}(\pi) \tag{10}\] is commutative. ## 4. Bijection By Theorem 3.2, \(\mathrm{UFR}_{n}\subseteq\mathrm{UPF}_{n}\), hence we can treat unit Fubini rankings as parking functions. We define the outcome map \(\mathcal{O}:\mathrm{UFR}_{n}\to\mathfrak{S}_{n}\) by \(\mathcal{O}(\alpha)=\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\) where \(\pi\in\mathfrak{S}_{n}\) is written in one-line notation and denotes the order in which the cars park on the street. That is, if \(j\in[n]\), then \(\pi_{j}=i\) denotes that car \(i\) is the \(j\)th car parked on the street. Given \(\pi\in\mathfrak{S}_{n}\), we define the fiber of the outcome map: \[\mathcal{O}^{-1}(\pi)=\{\alpha\in\mathrm{UFR}_{n}:\mathcal{O}(\alpha)=\pi\}.\] **Remark 4.1**.: _Since no car can park in more than one spot, \(\mathcal{O}\) is a well-defined map._ In what follows, we write both Fubini rankings and permutations in one-line notation. We now provide some initial technical results. **Lemma 4.1**.: _Let \(\pi\in\mathfrak{S}_{n}\). Then \(\alpha=\pi^{-1}\) is the unique permutation with outcome \(\pi\)._ Proof.: Let \(\pi=\pi_{1}\cdots\pi_{n}\in\mathfrak{S}_{n}\). Suppose that \(\pi_{i}=j\); that is, car \(j\) parked in spot \(i\). Since we require \(\alpha\) to have parking outcome \(\pi\), and \(\alpha\) is a permutation (so every car parks exactly in its preferred spot), car \(j\) must have preference \(i\); that is, the \(j\)th entry of \(\alpha\) must equal \(i\). Note that \((\pi^{-1})_{j}=i\). Since this was an arbitrary entry in \(\pi\), we have that \(\alpha=\pi^{-1}\), as desired. We note that since permutation inverses are unique, there is only one permutation \(\alpha\in\mathcal{O}^{-1}(\pi)\). For a fixed \(\pi\in\mathfrak{S}_{n}\), we are interested in determining the elements of \(\mathcal{O}^{-1}(\pi)\).
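All of the objects introduced so far can be computed by brute force for small \(n\), which is a convenient check on the statements that follow. The sketch below is our own illustrative Python (all names hypothetical): it simulates the parking process to obtain \(\mathcal{O}(\alpha)\), tests the Fubini and unit-interval conditions, recovers the first entries of Table 1, and computes a fiber used in Example 4.1 below.

```python
from collections import Counter
from itertools import product

def outcome(alpha):
    """O(alpha): the j-th entry is the car parked in spot j,
    or None if some car fails to park."""
    n = len(alpha)
    street = [0] * n                     # 0 marks an empty spot
    for car, pref in enumerate(alpha, start=1):
        s = pref - 1
        while s < n and street[s] != 0:  # drive to the first free spot
            s += 1
        if s == n:
            return None
        street[s] = car
    return street

def is_fubini(r):
    """A k-way tie at rank i disallows ranks i+1, ..., i+k-1."""
    counts, need = Counter(r), 1
    for rank in sorted(counts):
        if rank != need:
            return False
        need = rank + counts[rank]
    return True

def is_unit_pf(alpha):
    """Every car parks in its preferred spot or one spot past it."""
    street = outcome(alpha)
    return street is not None and all(
        (spot + 1) - alpha[car - 1] <= 1
        for spot, car in enumerate(street))

def fiber(pi):
    """Brute-force O^{-1}(pi) over unit Fubini rankings."""
    n = len(pi)
    return [a for a in product(range(1, n + 1), repeat=n)
            if is_fubini(a) and is_unit_pf(a) and outcome(a) == list(pi)]

# |UFR_n| for n = 1, 2, 3, matching Table 1.
counts = [sum(1 for a in product(range(1, n + 1), repeat=n)
              if is_fubini(a) and is_unit_pf(a)) for n in (1, 2, 3)]
assert counts == [1, 3, 12]
assert len(fiber([4, 1, 2, 3, 5, 6])) == 8  # cf. Example 4.1 below
```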
To this end, we recall from Definition 2.1 that if \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{S}_{n}\), then

\[\mathrm{Asc}(\pi)=\{j\in[n-1]:\pi_{j}<\pi_{j+1}\}.\]

Next, we provide the connection between the elements in \(\mathcal{O}^{-1}(\pi)\) and the set \(\mathrm{Asc}(\pi)\).

**Lemma 4.2**.: _Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{S}_{n}\). If \(j\in\mathrm{Asc}(\pi)\), \(\pi_{j+1}=i\), and \(\alpha=(a_{1},a_{2},\ldots,a_{n})\in\mathcal{O}^{-1}(\pi)\), then \(a_{i}\in\{j,j+1\}\)._

Proof.: Assume \(j\in\mathrm{Asc}(\pi)\), which implies that \(\pi_{j}<\pi_{j+1}\). This means that car \(\pi_{j+1}=i\) arrived after car \(\pi_{j}\) and is parked immediately to the right of \(\pi_{j}\). Under the unit-interval parking rule, there are only two ways in which car \(i\) can park in spot \(j+1\): either spot \(j+1\) was its preference and that spot was available, or its preference was spot \(j\), which it found occupied by car \(\pi_{j}\). These are the only preferences that ensure \(\alpha\) is a unit Fubini ranking and that result in car \(i\) parking in spot \(j+1\), which is required so that \(\alpha\) has outcome \(\pi\). Thus \(a_{i}\in\{j,j+1\}\), as desired. 

**Proposition 4.1**.: _Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{S}_{n}\) and \(\alpha=\pi^{-1}\in\mathcal{O}^{-1}(\pi)\). Then_

\[\mathcal{O}^{-1}(\pi)=\{\delta_{I}(\alpha):I\subseteq\mathrm{Asc}(\pi)\text{ with nonconsecutive entries}\}.\]

Before we prove Proposition 4.1, we illustrate the effect of \(\delta_{I}\) on a permutation \(\pi\), when \(I\) is a subset of nonconsecutive elements from \(\mathrm{Asc}(\pi)\).

**Example 4.1**.: Fix \(\pi=412356\) and note \(\mathrm{Asc}(\pi)=\{2,3,4,5\}\). Then \(\alpha=\pi^{-1}=234156\) is the unique permutation in \(\mathcal{O}^{-1}(\pi)\). Observe that the only possible subsets of \(\mathrm{Asc}(\pi)=\{2,3,4,5\}\) consisting of nonconsecutive integers are: \(\emptyset\), \(\{2\}\), \(\{3\}\), \(\{4\}\), \(\{5\}\), \(\{2,4\}\), \(\{2,5\}\), and \(\{3,5\}\). Note

\[\begin{array}{llll}\delta_{\emptyset}(\alpha)=234156,&\delta_{\{2\}}(\alpha)=224156,&\delta_{\{3\}}(\alpha)=233156,&\delta_{\{4\}}(\alpha)=234146,\\ \delta_{\{5\}}(\alpha)=234155,&\delta_{\{2,4\}}(\alpha)=224146,&\delta_{\{2,5\}}(\alpha)=224155,&\delta_{\{3,5\}}(\alpha)=233155.\end{array}\]

Straightforward computations establish that the results are unit Fubini rankings with outcome \(\pi\). Moreover, Remark 3.1 establishes that taking a subset of \(\operatorname{Asc}(\pi)\) that contains consecutive integers does not yield any unit Fubini ranking beyond those in the above list. Together this confirms that for any subset \(I\) of \(\operatorname{Asc}(\pi)\) consisting of nonconsecutive integers, \(\delta_{I}(\alpha)\in\mathcal{O}^{-1}(\pi)\). Now note that \(\delta_{1}(\alpha)=134156\) and \(\mathcal{O}(134156)=142356\neq\pi\). Hence \(\delta_{1}(\alpha)\notin\mathcal{O}^{-1}(\pi)\). This illustrates that \(\delta_{j}(\alpha)\notin\mathcal{O}^{-1}(\pi)\) when \(j\in\operatorname{Des}(\pi)\).

Proof of Proposition 4.1.: It suffices to show

1. \(\mathcal{O}^{-1}(\pi)\subseteq\{\delta_{I}(\pi^{-1}):I\subseteq\operatorname{Asc}(\pi)\) with nonconsecutive entries\(\}\) and
2. \(\{\delta_{I}(\pi^{-1}):I\subseteq\operatorname{Asc}(\pi)\) with nonconsecutive entries\(\}\subseteq\mathcal{O}^{-1}(\pi)\).

For (1): Let \(\beta\in\mathcal{O}^{-1}(\pi)\) be such that the block structure of \(\beta\) contains exactly one block of size two, and let the entries of that block be \(i\,i\).
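For small \(n\), Proposition 4.1 can be confirmed by brute force. A sketch reusing the helpers from the previous listing (again, the names are ours) reproduces Example 4.1:

```python
from itertools import combinations

def nonconsecutive_subsets(S):
    """All subsets of S containing no two consecutive integers."""
    S = sorted(S)
    out = [set()]
    for k in range(1, len(S) + 1):
        for c in combinations(S, k):
            if all(b - a > 1 for a, b in zip(c, c[1:])):
                out.append(set(c))
    return out

pi = (4, 1, 2, 3, 5, 6)
inv = tuple(pi.index(v) + 1 for v in range(1, 7))           # pi^{-1} = 234156
asc = {j for j in range(1, 6) if pi[j - 1] < pi[j]}          # Asc(pi) = {2,3,4,5}

fiber = {delta_I(I, inv) for I in nonconsecutive_subsets(asc)}
assert len(fiber) == 8 and all(outcome(a) == pi for a in fiber)

# A descent index breaks the fiber, as in the example:
assert outcome(delta(1, inv)) != pi                          # delta_1(alpha) = 134156
```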
We note that if \(i\) appears twice in \(\beta\), then there must be an ascent at position \(i\) in \(\pi\). We must also have that \(\delta_{i}(\pi^{-1})=\beta\). Therefore, \(\beta=\delta_{\{i\}}(\pi^{-1})\in\{\delta_{I}(\pi^{-1}):I\subseteq\operatorname{Asc}(\pi)\text{ with nonconsecutive entries}\}\). Inductively, for any \(\beta\in\mathcal{O}^{-1}(\pi)\) with \(k\) blocks of size \(2\), we can reconstruct the set \(I\) by looking at the entries in those \(k\) blocks. The indices in \(I\) must all be more than one unit apart, are determined by the minimum element in each block of size two, and must have all come from the ascent set of \(\pi\). Thus \(\delta_{I}(\pi^{-1})=\beta\), which means that \(\beta\in\{\delta_{I}(\pi^{-1}):I\subseteq\operatorname{Asc}(\pi)\text{ with nonconsecutive entries}\}\).

For (2): Let \(I=\{i_{1},i_{2},\ldots,i_{k}\}\subseteq\operatorname{Asc}(\pi)\) consist of nonconsecutive integers. Without loss of generality assume \(i_{1}<i_{2}<\cdots<i_{k}\). By Corollary 3.3 we know \(\delta_{I}(\pi^{-1})\in\operatorname{UFR}_{n}\), and the block structure of \(\delta_{I}(\pi^{-1})\) is as follows:

* For each \(i\in I\), there is a block of size two containing both instances of \(i\) in \(\delta_{I}(\pi^{-1})\), and
* for each \(i\notin I\), there is a block of size one containing the only instance of \(i\) in \(\delta_{I}(\pi^{-1})\).

Since the entries in \(I\) are nonconsecutive, the block structure of \(\delta_{I}(\pi^{-1})\) ensures that if \(i\notin I\), car \(\pi_{i}\) with preference \(i\) parks in spot \(i\), as needed to have outcome \(\pi\). Moreover, if \(i\in I\), then under \(\delta_{I}(\pi^{-1})\), car \(\pi_{i}\) has preference \(i\) and parks in spot \(i\), and car \(\pi_{i+1}\) has preference \(i\) and, as \(\pi_{i}<\pi_{i+1}\), it finds spot \(i\) occupied and parks in spot \(i+1\), as needed to have outcome \(\pi\). This establishes that \(\mathcal{O}(\delta_{I}(\pi^{-1}))=\pi\), as desired. 

Tenner established that Boolean intervals in the weak order all have the form \([v,w]\) where \(w=v\prod_{i\in I}s_{i}\) for some \(I\subseteq\operatorname{Asc}(v)\) whose elements are nonconsecutive [12, Corollary 4.4]. We use this result in the proof of the following.

**Theorem 1.2**.: _The set of unit Fubini rankings with \(n-k\) distinct ranks is in bijection with the set of Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\)._

Proof.: Fix \(\pi\in\mathfrak{S}_{n}\). Let \(\mathcal{B}_{n}\) be the set of all Boolean intervals in \(W(\mathfrak{S}_{n})\), and let \(\mathcal{B}_{n}(\pi)\) denote the set of all Boolean intervals in \(W(\mathfrak{S}_{n})\) with minimal element \(\pi\). Define the map \(\varphi_{\pi}:\mathcal{O}^{-1}(\pi)\to\mathcal{B}_{n}(\pi)\) by

\[\varphi_{\pi}(\beta)=[\pi,\pi\prod_{i\in I}s_{i}]\]

where the set \(I\subseteq\operatorname{Asc}(\pi)\) of nonconsecutive integers is determined by \(\beta=\delta_{I}(\pi^{-1})\). Namely, \(I\) consists of the repeated values in \(\beta\), and it is unique by Proposition 4.1. We begin by establishing that \(\varphi_{\pi}\) is a bijection. The output \(\varphi_{\pi}(\beta)\) is computed using the unique set \(I\) associated with each \(\beta\), and hence is unique. Furthermore, the output \([\pi,\pi\prod_{i\in I}s_{i}]\in\mathcal{B}_{n}\) is a Boolean interval [12, Corollary 4.4]. Therefore \(\varphi_{\pi}\) is well-defined.
For injectivity: If \(\varphi_{\pi}(\beta)=\varphi_{\pi}(\gamma)=[\pi,\pi\prod_{i\in I}s_{i}]\) for some (nonconsecutive) \(I\subseteq\operatorname{Asc}(\pi)\), then \(\delta_{I}(\pi^{-1})=\beta\) and \(\delta_{I}(\pi^{-1})=\gamma\). Therefore, \(\beta=\gamma\). For surjectivity: Every Boolean interval in \(\mathcal{B}_{n}(\pi)\) has the form \([\pi,\pi\prod_{i\in I}s_{i}]\) where \(I\subseteq\operatorname{Asc}(\pi)\) consists of nonconsecutive integers [12, Corollary 4.4]. Then, by Proposition 4.1, we know that \(\delta_{I}(\pi^{-1})\in\mathcal{O}^{-1}(\pi)\). Then \(\varphi_{\pi}(\delta_{I}(\pi^{-1}))=[\pi,\pi\prod_{i\in I}s_{i}]\). Together, this establishes that the map \(\varphi_{\pi}\) is a bijection. Now define \(\phi:\operatorname{UFR}_{n}\to\mathcal{B}_{n}\) by \(\phi(\alpha)\coloneqq\varphi_{\pi}(\alpha)\) where \(\mathcal{O}(\alpha)=\pi\). Note that since \(\varphi_{\pi}\) is a bijection for all \(\pi\) and since \(\mathcal{O}\) is well-defined (Remark 4.1), \(\phi\) is a bijection.

To conclude, we establish that \(\varphi_{\pi}\) matches the statistic of \(n-k\) distinct ranks in \(\mathcal{O}^{-1}(\pi)\) with rank \(k\) of the Boolean interval. Let \(\beta\in\operatorname{UFR}_{n}\) be such that \(\mathcal{O}(\beta)=\pi\), where ties occur at the ranks denoted by \(r_{1},r_{2},\ldots,r_{k}\). Note that \(\beta\) then has \(n-k\) distinct ranks. Then, by Proposition 4.1, the set \(I=\{r_{1},r_{2},\ldots,r_{k}\}\) is a subset of \(\operatorname{Asc}(\pi)\) consisting of \(k\) nonconsecutive integers, and \(\delta_{I}(\pi^{-1})=\beta\). Then \(\varphi_{\pi}(\beta)\) corresponds uniquely to the rank \(k\) Boolean interval given by \([\pi,\pi\prod_{i\in I}s_{i}]\). 

## 5. Enumerations

In this section, we provide enumerative formulas for:

1. \(f(n)\), the total number of Boolean intervals in \(W(\mathfrak{S}_{n})\),
2. \(f(n,k)\), the total number of rank \(k\) Boolean intervals in \(W(\mathfrak{S}_{n})\), and
3. the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) with minimal element \(\pi\).

To establish (1), we begin with an immediate consequence of Theorem 1.2.

**Corollary 5.1**.: _The total number of Boolean intervals in \(W(\mathfrak{S}_{n})\) is equal to the number of unit Fubini rankings of length \(n\)._

By setting \(q=1\) in the exponential generating function [9, Exercise 3.185(h)]

\[F(x,q)=\sum_{n\geq 0}\sum_{k\geq 0}f(n,k)q^{k}\frac{x^{n}}{n!}=\frac{1}{1-x-\frac{q}{2}x^{2}}, \tag{11}\]

Stanley [8] points out that the _total_ number of Boolean intervals in \(W(\mathfrak{S}_{n})\) (OEIS A080599) satisfies the recurrence relation

\[f(n+1)=(n+1)f(n)+\binom{n+1}{2}f(n-1), \tag{12}\]

where \(f(0)=1\) and \(f(1)=1\). In light of Corollary 5.1, we give a combinatorial proof of this result from the perspective of unit Fubini rankings.

**Theorem 5.1**.: _Let \(g(n+1)\) denote the number of unit Fubini rankings of length \(n+1\). Then \(g(n+1)\) satisfies the recursion_

\[g(n+1)=(n+1)g(n)+\binom{n+1}{2}g(n-1),\]

_where \(g(1)=1\) and \(g(2)=3\)._

Proof.: Let \(\alpha\) be a unit Fubini ranking of length \(n+1\). The block structure of an element of \(\operatorname{UFR}_{n+1}\) leaves two options for the final block: it is either the size-two block \(n\,n\) or the singleton block \(n+1\). We have total freedom in the remaining positions. Thus there are two mutually exclusive cases to consider.

* The last block has the form \(n\,n\): Then we may select one of the \(g(n-1)\) unit Fubini rankings in \(\operatorname{UFR}_{n-1}\).
Place its entries in any \(n-1\) of the \(n+1\) possible positions of the unit Fubini ranking of length \(n+1\); the two remaining positions receive the repeated value \(n\). For each unit Fubini ranking in \(\operatorname{UFR}_{n-1}\) there are

\[\binom{n+1}{n-1}=\binom{n+1}{2}\]

ways to do this.

* The last block has the form \(n+1\): Then we may select one of the \(g(n)\) unit Fubini rankings in \(\operatorname{UFR}_{n}\). Place its entries in any \(n\) of the \(n+1\) possible positions of the unit Fubini ranking of length \(n+1\); the remaining position receives the value \(n+1\). For each unit Fubini ranking in \(\operatorname{UFR}_{n}\) there are

\[\binom{n+1}{n}=n+1\]

ways to do this.

The recursion follows from taking the sum of the counts in each case. The initial values arise from the fact that \(\text{UFR}_{1}=\{(1)\}\), hence \(g(1)=1\), and \(\text{UFR}_{2}=\{(1,1),(1,2),(2,1)\}\), hence \(g(2)=3\). 

For (2), we begin with the following combinatorial proof.

**Theorem 1.3**.: _Let \(f(n,k)\) denote the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) of rank \(k\). Then,_

\[f(n,k)=\frac{n!}{2^{k}}\binom{n-k}{k}. \tag{1}\]

Proof.: Let \(g(n,k)\) denote the number of unit Fubini rankings of length \(n\) which have \(n-k\) distinct ranks. Note that Theorem 1.2 implies that \(g(n,k)=f(n,k)\), hence it suffices to show that \(g(n,k)=\frac{n!}{2^{k}}\binom{n-k}{k}\). If \(\alpha\in\text{UFR}_{n}\) has \(n-k\) distinct ranks, then its block structure has the form

\[b_{1}\,|\,b_{2}\,|\,\cdots\,|\,b_{n-k},\]

where exactly \(k\) of the blocks have size two and all remaining blocks have size one. To enumerate all such \(\alpha\), first select the indices of the blocks with size two. We can do this in \(\binom{n-k}{k}\) ways. Next, we select the indices at which we place the repeated values within the blocks of size two. We do so iteratively by first selecting two indices among \(n\) where we place the smallest repeated value of \(\alpha\). This can be done in \(\binom{n}{2}\) ways. Then we repeat this process by selecting two indices among the remaining \(n-2\) indices in which we place the next smallest repeated value of \(\alpha\). This can be done in \(\binom{n-2}{2}\) ways. Through this process, the total number of ways in which we can place all repeated values in \(\alpha\) is given by the product

\[\binom{n}{2}\binom{n-2}{2}\cdots\binom{n-2(k-1)}{2}=\prod_{i=0}^{k-1}\binom{n-2i}{2}.\]

Finally, we note that the values in the blocks of size one can appear in any order within the remaining available indices. We can do this in \((n-2k)!\) ways. Thus

\[g(n,k)=\binom{n-k}{k}(n-2k)!\prod_{i=0}^{k-1}\binom{n-2i}{2},\]

which simplifies to our desired result. 

**Remark 5.1**.: _In the introduction we referenced OEIS A001286, a sequence known as the Lah numbers, which gives the values \(f(n,1)=\frac{(n-1)n!}{2}\) for the number of rank \(1\) Boolean intervals \(B_{1}\) in \(W(\mathfrak{S}_{n})\). Theorem 1.3 implies that the Lah numbers also enumerate unit Fubini rankings with \(n-1\) distinct ranks. Aguillon et al. [1] showed that the number of unit interval parking functions in which exactly \(n-1\) cars park in their preference is also enumerated by the Lah numbers. This result was established via a bijection between those parking functions and ideal states in the Tower of Hanoi game, which were enumerated by the Lah numbers._

We now prove that \(g(n,k)\) has the same generating function as (11).
**Theorem 5.2**.: _The exponential generating function for \(g(n,k)\) has the closed form_

\[G(x,q)=\sum_{n\geq 0}\sum_{k\geq 0}g(n,k)q^{k}\frac{x^{n}}{n!}=\frac{1}{1-x-\frac{q}{2}x^{2}}.\]

Proof.: From Theorem 1.3, we know that \(g(n,k)=\frac{n!}{2^{k}}\binom{n-k}{k}\). Then

\[G(x,q)=\sum_{n\geq 0}\sum_{k\geq 0}g(n,k)q^{k}\frac{x^{n}}{n!}=\sum_{n\geq 0}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{n-k}{k}q^{k}x^{n}. \tag{13}\]

Note that, for the purpose of counting objects, \(\binom{n}{k}=0\) whenever \(k>n\), \(k<0\), or \(n\) is negative. Setting \(n=0\) in Equation (13) yields

\[\sum_{k\geq 0}\frac{1}{2^{k}}\binom{-k}{k}q^{k}x^{0}=1+\sum_{k\geq 1}\frac{1}{2^{k}}\binom{-k}{k}q^{k}=1+0. \tag{14}\]

Substituting (14) into (13) gives

\[G(x,q)=1+\sum_{n\geq 1}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{n-k}{k}q^{k}x^{n}. \tag{15}\]

Using the binomial identity \(\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}\), (15) becomes

\[G(x,q)=1+\sum_{n\geq 1}\sum_{k\geq 0}\frac{1}{2^{k}}\left(\binom{n-k-1}{k}+\binom{n-k-1}{k-1}\right)q^{k}x^{n}, \tag{16}\]

which can be rewritten as

\[G(x,q)=1+\sum_{n\geq 1}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{n-k-1}{k}q^{k}x^{n}+\sum_{n\geq 1}\sum_{k\geq 1}\frac{1}{2^{k}}\binom{n-k-1}{k-1}q^{k}x^{n}, \tag{17}\]

where the \(k=0\) terms of the second sum vanish since \(\binom{n-1}{-1}=0\). We note that the first set of summands in (17) simplifies in the following way:

\[\sum_{n\geq 1}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{n-k-1}{k}q^{k}x^{n}=x\sum_{n\geq 1}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{(n-1)-k}{k}q^{k}x^{n-1}=x\sum_{n\geq 0}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{n-k}{k}q^{k}x^{n}, \tag{18}\]

where the last equality in (18) follows from re-indexing with respect to \(n\). We note that the second set of summands in (17) simplifies in the following way:

\[\sum_{n\geq 1}\sum_{k\geq 1}\frac{1}{2^{k}}\binom{n-k-1}{k-1}q^{k}x^{n}=\frac{q}{2}x^{2}\sum_{n\geq 1}\sum_{k\geq 1}\frac{1}{2^{k-1}}\binom{(n-2)-(k-1)}{k-1}q^{k-1}x^{n-2}=\frac{q}{2}x^{2}\sum_{n\geq 0}\sum_{k\geq 0}\frac{1}{2^{k}}\binom{n-k}{k}q^{k}x^{n}, \tag{19}\]

where the last equality in (19) follows from re-indexing with respect to \(n\) and \(k\), together with the fact that the \(n=-1\) terms vanish. Substituting (18) and (19) into (17) allows us to reassemble everything and arrive at

\[G(x,q)=1+xG(x,q)+\frac{q}{2}x^{2}G(x,q),\]

from which we arrive at

\[G(x,q)=\frac{1}{1-x-\frac{q}{2}x^{2}}.\qed\]

We now present our final enumerative result settling (3), which further connects this work to Fibonacci numbers.

**Theorem 1.1**.: _Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{S}_{n}\) be in one-line notation and partition its ascent set \(\operatorname{Asc}(\pi)=\{i\in[n-1]:\pi_{i}<\pi_{i+1}\}\) into maximal blocks \(b_{1},b_{2},\ldots,b_{k}\) of consecutive entries. Then, the number of Boolean intervals \([\pi,w]\) in \(W(\mathfrak{S}_{n})\) with fixed minimal element \(\pi\) and arbitrary maximal element \(w\) (including the case \(\pi=w\)) is given by_

\[\prod_{i=1}^{k}F_{|b_{i}|+2},\]

_where \(F_{\ell}\) is the \(\ell\)th Fibonacci number, and \(F_{1}=F_{2}=1\)._

Proof.: It is straightforward to prove that the number of ways to select a (possibly empty) set of pairwise nonconsecutive entries from the set \([n]\) is given by \(F_{n+2}\). Thus, for each \(i\in[k]\), the number of ways to select nonconsecutive elements from \(b_{i}\) is given by \(F_{|b_{i}|+2}\). As the blocks \(b_{1},b_{2},\ldots,b_{k}\) are pairwise disjoint and separated by at least one missing index, choices from distinct blocks remain pairwise nonconsecutive, and the total number of ways to select subsets from \(\cup_{i=1}^{k}b_{i}\) consisting of nonconsecutive integers is given by \(\prod_{i=1}^{k}F_{|b_{i}|+2}\), as desired. 
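These enumerations are easy to cross-check numerically for small \(n\). The following sketch (helper names are ours) verifies the closed form of Theorem 1.3 against the recurrence (12), and checks that summing the Fibonacci products of Theorem 1.1 over all minimal elements recovers \(f(n)\):

```python
from itertools import permutations
from math import comb, factorial

def f_closed(n, k):
    """Theorem 1.3: number of rank-k Boolean intervals in W(S_n)."""
    return factorial(n) * comb(n - k, k) // 2**k

def f_total(n):
    """f(n): total number of Boolean intervals (= |UFR_n| by Corollary 5.1)."""
    return sum(f_closed(n, k) for k in range(n // 2 + 1))

def fib(m):
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(m - 1):
        a, b = b, a + b
    return a

def intervals_above(pi):
    """Theorem 1.1: product of F_{|b|+2} over maximal ascent blocks b."""
    asc = [j for j in range(1, len(pi)) if pi[j - 1] < pi[j]]
    sizes, prev = [], None
    for j in asc:
        if prev is not None and j == prev + 1:
            sizes[-1] += 1
        else:
            sizes.append(1)
        prev = j
    prod = 1
    for s in sizes:
        prod *= fib(s + 2)
    return prod

# Recurrence (12) holds for the closed form:
for n in range(1, 8):
    assert f_total(n + 1) == (n + 1) * f_total(n) + comb(n + 1, 2) * f_total(n - 1)

# Summing Theorem 1.1 over all minimal elements recovers f(n):
n = 5
assert sum(intervals_above(p) for p in permutations(range(1, n + 1))) == f_total(n)
```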
Among the many results established by Tenner concerning Boolean intervals in both the Bruhat order and in the weak order [12], we highlight the following.

**Proposition 5.1**.: _[12, Proposition 5.9] Let \(i\in[n-1]\) be fixed. The number of Boolean intervals in \(W(\mathfrak{S}_{n})\) of the form \([s_{i},w]\) is \(F_{i+1}F_{n-i+1}\), where \(F_{i}\) is the \(i\)th Fibonacci number._

Note that for any \(i\in[n-1]\), we have that \(\operatorname{Asc}(s_{i})=[n-1]\setminus\{i\}\). Then \(b_{1}=[i-1]\) and \(b_{2}=\{i+1,i+2,\ldots,n-1\}\), and Theorem 1.1 implies that the number of Boolean intervals in \(W(\mathfrak{S}_{n})\) with minimal element \(s_{i}\) is given by \(F_{|b_{1}|+2}\cdot F_{|b_{2}|+2}=F_{i+1}F_{n-i+1}\), recovering [12, Proposition 5.9].

We remark that in the statement of Theorem 1.1, we allow \([\pi,\pi]\) to be a Boolean interval. If we impose the condition that the maximal element \(w\) cannot be equal to the minimal element \(\pi\), then we have the following.

**Corollary 5.2**.: _Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{S}_{n}\) be in one-line notation and partition its ascent set \(\operatorname{Asc}(\pi)=\{i\in[n-1]:\pi_{i}<\pi_{i+1}\}\) into maximal blocks \(b_{1},b_{2},\ldots,b_{k}\) of consecutive entries. Then, the number of Boolean intervals \([\pi,w]\) in \(W(\mathfrak{S}_{n})\) with \(w\neq\pi\) is given by_

\[\left(\prod_{i=1}^{k}F_{|b_{i}|+2}\right)-1,\]

_where \(F_{\ell}\) is the \(\ell\)th Fibonacci number and \(F_{1}=F_{2}=1\)._

Proof.: The result follows from Theorem 1.1, noting that in creating a subset of \(\operatorname{Asc}(\pi)\) consisting of nonconsecutive elements we can no longer utilize the empty set, which accounts for the single excluded interval \([\pi,\pi]\). 

## 6. Future work

To the best of our knowledge the formula of Theorem 1.3, which counts the number of Boolean intervals of rank \(k\) in \(W(\mathfrak{S}_{n})\), did not exist in the literature. We gave a combinatorial proof of this result via the enumeration of unit Fubini rankings with \(n-k\) distinct ranks. We wonder whether this new proof and these combinatorial objects might shed light on how a symmetric group proof may be constructed. As noted at the end of Section 5, Tenner has provided many results for intervals in the weak (Bruhat) order [12]. The paper also provides results on the Bruhat order, which leads us to wonder if there are other connections from Fubini rankings that can be used to count intervals in the Bruhat order. We also wonder if it may be possible to utilize unit Fubini rankings, or a slight generalization thereof, to enumerate Boolean intervals in Bruhat and weak orders of other Coxeter systems. To this end we state the following: How many Boolean intervals are there in the weak order of the hyperoctahedral group (type \(B\) Coxeter group)?

## Acknowledgements

The authors thank Bridget E. Tenner for helpful discussions and references. They also thank Richard P. Stanley for his suggestions on a previous draft of this manuscript. J. Elder was partially supported through an AWM Mentoring Travel Grant. P. E. Harris was supported through a Karen Uhlenbeck EDGE Fellowship. J. C. Martinez Mori was partially supported by NSF Grant No. 2144127, awarded to S. Samaranayake. J. C. Martinez Mori is supported by Schmidt Science Fellows, in partnership with the Rhodes Trust.
2310.18830
Translating away Translationese without Parallel Data
Translated texts exhibit systematic linguistic differences compared to original texts in the same language, and these differences are referred to as translationese. Translationese has effects on various cross-lingual natural language processing tasks, potentially leading to biased results. In this paper, we explore a novel approach to reduce translationese in translated texts: translation-based style transfer. As there are no parallel human-translated and original data in the same language, we use a self-supervised approach that can learn from comparable (rather than parallel) mono-lingual original and translated data. However, even this self-supervised approach requires some parallel data for validation. We show how we can eliminate the need for parallel validation data by combining the self-supervised loss with an unsupervised loss. This unsupervised loss leverages the original language model loss over the style-transferred output and a semantic similarity loss between the input and style-transferred output. We evaluate our approach in terms of original vs. translationese binary classification in addition to measuring content preservation and target-style fluency. The results show that our approach is able to reduce translationese classifier accuracy to a level of a random classifier after style transfer while adequately preserving the content and fluency in the target original style.
Rricha Jalota, Koel Dutta Chowdhury, Cristina España-Bonet, Josef van Genabith
2023-10-28T22:11:25Z
http://arxiv.org/abs/2310.18830v1
# Translating away Translationese without Parallel Data ###### Abstract Translated texts exhibit systematic linguistic differences compared to original texts in the same language, and these differences are referred to as translationese. Translationese has effects on various cross-lingual natural language processing tasks, potentially leading to biased results. In this paper, we explore a novel approach to reduce translationese in translated texts: translation-based style transfer. As there are no parallel human-translated and original data in the same language, we use a self-supervised approach that can learn from comparable (rather than parallel) mono-lingual original and translated data. However, even this self-supervised approach requires some parallel data for validation. We show how we can eliminate the need for parallel validation data by combining the self-supervised loss with an unsupervised loss. This unsupervised loss leverages the original language model loss over the style-transferred output and a semantic similarity loss between the input and style-transferred output. We evaluate our approach in terms of original vs. translationese binary classification in addition to measuring content preservation and target-style fluency. The results show that our approach is able to reduce translationese classifier accuracy to a level of a random classifier after style transfer while adequately preserving the content and fluency in the target original style. ## 1 Introduction Translated texts often exhibit distinct linguistic features compared to original texts in the same language, resulting in what is known as _translationese_. Translationese has a tangible impact on various cross-lingual and multilingual natural language processing (NLP) tasks, potentially leading to biased results. For instance, in machine translation (MT), when the translation direction of the parallel training data matches the direction of the translation task (i.e. when the source is original and the target is translated), MT systems perform better Kurokawa et al. (2009); Lembersky et al. (2012). Similarly, Toral et al. (2018), Zhang and Toral (2019) and Graham et al. (2020) show that translating already translated texts results in increased BLEU scores. More recently, Artetxe et al. (2020) observed that cross-lingual models, when evaluated on translated test sets, show false improvements simply due to induced translation artifacts. When investigating the effects of translationese in cross-lingual summarization, Wang et al. (2021) found that models trained on translated training data suffer in real-world scenarios. These examples show the importance of investigating and mitigating translationese. Despite this, removing translationese signals from already generated translations is an underexplored research topic. Dutta Chowdhury et al. (2022) remove translationese implicitly encoded in vector embeddings, and demonstrate the impact of eliminating translationese signals on natural language inference performance. Wein and Schneider (2023) leverage Abstract Meaning Representation (AMR) as an intermediate representation to abstract away from surface-level features of the text, thereby reducing translationese. Neither of these works explicitly analyzes the surface forms of the resulting "debiased text" and its resemblance to original texts.
In this work, we aim to reduce the presence of translationese in human-translated texts and make them closely resemble original texts, notably without using any parallel data, as parallel human original-translated data in the same language do not exist. To this end, we explore a self-supervised neural machine translation (NMT) system Ruiter et al. (2019) and its application to style transfer (ST) Ruiter et al. (2022). In both works, validation is performed on a parallel dataset, either bilingual (MT data) or monolingual (ST data). However, parallel human original-translationese data in the same language are unavailable. To overcome this challenge, we define an unsupervised validation criterion by combining a language model loss and a semantic similarity loss, inspired by Artetxe et al. (2020). Our baseline is the self-supervised approach (SSNMT) from Ruiter et al. (2019). However, we go a step further and propose a novel joint training objective that combines both self-supervised and unsupervised criteria, eliminating the need for parallel data during both training and validation. The contributions of this work are as follows:

* We are the first to formulate reduction of translationese in human translations as a monolingual translation-based style-transfer task, allowing for direct evaluation of the effects on the surface forms of the generated outputs.
* We propose a novel joint training objective that eliminates the need for a parallel original-translated dataset (in the same language) during training and validation.
* We show that our method is able to reduce the accuracy of a translationese classifier to that of a random classifier, indicating that our approach is able to successfully eliminate translationese signals in its output.
* We present an extensive evaluation that (i) measures the extent to which our methods mitigate translationese, as well as the adequacy and fluency of the outputs **(Quantitative Analysis)**, and (ii) estimates the degree of translationese in the output using metrics derived from linguistic properties of translationese **(Qualitative Analysis)**.

## 2 Related Work

### Text Style Transfer

Text style transfer is the task of altering the stylistic characteristics of a sentence while preserving the original meaning Toshevska and Gievska (2022). The amount of parallel data available for this task is usually limited. Therefore, readily available mono-stylistic data together with a smaller amount of style-labeled data are often exploited using approaches based on self- Ruiter et al. (2019), semi- Jin et al. (2019) or unsupervised Neural Machine Translation Lample et al. (2018); Artetxe et al. (2019). Common approaches involve disentangling the style and content aspects of the text. For content extraction, approaches based on variational auto-encoders (VAE) Shen et al. (2017); Fu et al. (2017), cycle consistency loss Lample et al. (2019), or reinforcement learning Xu et al. (2018) are commonly employed. To induce the target style, often a style discriminator is employed using a pretrained style classifier Prabhumoye et al. (2018); dos Santos et al. (2018); Gong et al. (2019), or the decoder head is specialized to generate target-style outputs Tokpo and Calders (2022), or the content representation is simply concatenated with the target-style representation Fu et al. (2017). Since unsupervised methods often perform poorly compared to their supervised counterparts Kim et al. (2020); Artetxe et al. (2020), recent approaches have explored semi-supervised Jin et al. (2019) and self-supervised learning Ruiter et al. (2022); Liu et al. (2022). Liu et al.
(2022) combine sentence embeddings with scene graphs to mine parallel sentences on-the-fly for facilitating reinforcement learning-based style-transfer, while Ruiter et al. (2022) follow a simpler approach that exploits only the latent representations for online parallel sentence pair extraction from comparable data and leverages these pairs for self-supervised learning. Although their approach requires a parallel validation set for model selection and hyperparameter tuning, due to its simplicity, we adopt it as our starting point and baseline. We then present a novel version of this approach using unsupervised techniques, eliminating the need for a parallel validation set.

### Unsupervised Model Selection

Several studies exploring unsupervised or semi/self-supervised settings either do not report their validation scheme Artetxe et al. (2020) or are only unsupervised (or semi/self-supervised) during training and rely on parallel in-domain validation sets for model tuning Marie and Fujita (2018); Marie et al. (2019); Dai et al. (2019); Ruiter et al. (2022). In contrast, some studies enforce strictly unsupervised settings in NMT by either using the validation set from a separate language pair Artetxe et al. (2018a,b) or not using a validation scheme at all Lample et al. (2018), risking sub-optimal models. To address this, Lample et al. (2018a,b); Artetxe et al. (2020) proposed using an unsupervised validation criterion over monolingual data that conforms with the evaluation criterion or is guided by the target distribution, similar to Artetxe et al. (2019) who combined cycle consistency loss with language model loss for unsupervised model selection.

### Translationese Mitigation

Researchers have explored the effects and origins of translationese in previous studies. To mitigate translationese effects in machine translation (MT) models, a prevalent approach is tagged training (Caswell et al., 2019; Marie et al., 2020; Riley et al., 2020). This technique involves explicitly marking the translated and original data using tags, enabling the models to recognize and account for these distinctions. Yu et al. (2022) introduced an approach to mitigate translation artifacts in the translate-train* cross-lingual setting. To reduce translation artifacts in the target language, they learned an original-to-translationese mapping function from the source language. They do this by projecting the original and translated texts in the source language to a common multilingual embedding space and then learning to minimize the distance between the mapped representations of the originals and translationese. Dutta Chowdhury et al. (2022) tackle the reduction of translationese from a different perspective, treating it as a bias in translated texts. They employ a debiasing approach to mitigate translationese by attenuating it in the latent representations, while Wein and Schneider (2023) reduce translationese using AMR as intermediate representations. However, none of the above studies specifically analyze the surface forms of the "debiased text".

Footnote *: In this setting, the training data is translated from the source language into the target language and the translated texts are used for training.

To date, to the best of our knowledge, monolingual translation-based style transfer on translation outputs to mitigate translationese has not been explored.
To some extent, this is expected, as, at least for monolingual translation-based style transfer, parallel original and translated texts in the same language do not exist. Below we present a novel approach that builds on translation-based style transfer but, unlike previous work, requires no parallel data for either training or validation.

## 3 Translationese Mitigation via Style Transfer

Our goal is to eliminate translationese signals from translated texts by transforming them into an original-like version. We define two style attributes, \(og\) and \(tr\), representing original style and translated style, respectively. Given a text sample \(x\) belonging to \(tr\), our aim is to convert this \(x_{tr}\) to \(x_{og}\), where \(x_{og}\) belongs to style \(og\) but retains the same semantic content as \(x_{tr}\). We denote the corpus with original sentences \(OG\) and the corpus with translated sentences \(TR\). We illustrate the process in Figure 1.

Figure 1: Model architecture. Here (\(tr\), \(og\)) is a (Translated (source), Original (target)) style sentence pair, and \(og\)-like is the translationese-mitigated output. The dashed arrows correspond to on-the-fly parallel pair extraction that facilitates Supervised Training, while the red arrows in bold represent the path of approximated decoder outputs used in Unsupervised Training.

### Self-Supervised Architecture

In our work we build on a Transformer-based ENC-DEC self-supervised system that jointly learns sentence pair extraction and translation in a virtuous loop. Given two comparable mono-stylistic corpora (\(OG\) and \(TR\)), a sentence-pair extraction (SPE) module (Ruiter et al., 2019) utilizes the internal representations of the sentence pairs to extract sentences with similar meanings. This similarity matching module employs two types of latent representations: the sum of word embeddings \(w(*)\) and the sum of encoder outputs \(e(*)\). The embedded pairs (\(\{w(tr),w(og)\}\) and \(\{e(tr),e(og)\}\)) are individually scored using a margin-based measure Artetxe and Schwenk (2019), and the top candidate pairs are selected for further filtering. Following Ruiter et al. (2019), we apply two filtering criteria:

* **Without Threshold [1]**: A sentence pair is accepted for training if it is highest-ranked in both candidate-pair representations. This is used in Ruiter et al. (2022).
* **With Threshold [2]**: A sentence pair is accepted for training either if it is highest-ranked in both candidate-pair representations or if its encoder representation \(\{e(tr),e(og)\}\) surpasses a given threshold.*

Examples of extracted accepted pairs are shown in Table 1. Extracted parallel sentence pairs are used in an online fashion to train the ENC-DEC model in a supervised manner by minimizing the cross-entropy loss (\(L_{sup}\)):

\[L_{sup}=-\sum_{j=1}^{N}\sum_{i=1}^{V}\mathbf{Y_{i}^{j}}\log(\mathbf{H_{i}^{j}}) \tag{1}\]

where \(N\) is the length of the target sequence, \(V\) is the shared ENC-DEC vocabulary size, \(\mathbf{Y_{i}^{j}}\) represents the \(i\)-th element of the one-hot encoded true distribution at the \(j\)-th position and \(\mathbf{H_{i}^{j}}\) the \(i\)-th element of the predicted distribution (hypothesis) \(\mathbf{H^{j}}\). The joint SPE-translation learning loop continues until convergence. We use BART-style denoising autoencoding (DAE) Lewis et al. (2020) for model initialization (see details in Section 4.2).
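To illustrate the extraction step, the sketch below shows one way the margin-based scoring and the dual-representation acceptance criteria could be realized in PyTorch. This is our simplification: the actual system retrieves candidates from a FAISS index (Section 4.2), and the function names and exact acceptance logic here are assumptions.

```python
import torch
import torch.nn.functional as F

def margin_scores(src, tgt, k=4):
    """Ratio-margin scoring in the spirit of Artetxe and Schwenk (2019):
    cosine similarity of every candidate pair, normalized by the mean
    similarity of the k nearest neighbours on both sides.
    src: [n_src, d], tgt: [n_tgt, d] sentence representations."""
    src, tgt = F.normalize(src, dim=-1), F.normalize(tgt, dim=-1)
    cos = src @ tgt.T                                   # [n_src, n_tgt]
    knn_src = cos.topk(min(k, cos.size(1)), dim=1).values.mean(1, keepdim=True)
    knn_tgt = cos.topk(min(k, cos.size(0)), dim=0).values.mean(0, keepdim=True)
    return cos / ((knn_src + knn_tgt) / 2)

def accept_pairs(w_src, w_tgt, e_src, e_tgt, threshold=None):
    """Criterion [1]: accept (i, j) if j is i's top candidate under BOTH the
    word-embedding and the encoder-output representations. Criterion [2]:
    additionally accept i's top encoder candidate if its margin score
    exceeds `threshold`."""
    best_w = margin_scores(w_src, w_tgt).argmax(dim=1)
    e_scores = margin_scores(e_src, e_tgt)
    best_e = e_scores.argmax(dim=1)
    pairs = []
    for i in range(best_w.size(0)):
        if best_w[i] == best_e[i]:
            pairs.append((i, best_e[i].item()))
        elif threshold is not None and e_scores[i, best_e[i]] > threshold:
            pairs.append((i, best_e[i].item()))
    return pairs
```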
### Joint Training Architecture

In the baseline system, all sentence pairs rejected by SPE are simply discarded, which is a major loss of useful mono-stylistic information. One way to utilize the discarded pairs is by combining the supervised training criterion with unsupervised training. To this end, we introduce an unsupervised loss component to the final objective, which combines language model (LM) loss with semantic similarity loss. Both losses, as shown in Figure 1, are computed over the decoder outputs when mono-stylistic \(tr\) is given as input. As the input to compute these losses is derived from a categorical distribution (i.e. after applying argmax on the decoder output), this breaks the overall differentiability of the model during training. Therefore, following Yang et al. (2018) and Unanue et al. (2021), during training, we use continuous approximations of the decoder output to compute the two losses. Let \(\hat{y}_{og}\) be the style-transferred output when \(x_{tr}\) is given as input. Using greedy decoding, the prediction from the decoder at the \(j\)-th decoding step can be expressed as follows:

\[\hat{y}_{og_{j}}=\arg\max_{y}\;p(y\,|\,x_{tr},\hat{y}_{og_{<j}},\theta);\quad j=1,\ldots,k \tag{2}\]

To retain the end-to-end differentiability of the model, we replace the discrete argmax operation with a Gumbel-Softmax Maddison et al. (2017); Jang et al. (2017) distribution and obtain continuous approximations of the decoder outputs. Let \(p_{j}^{i}\) denote the probability of the \(i\)-th token in the \(V\)-sized vocabulary at the \(j\)-th decoding step in Equation 2 and \(\mathbf{p}_{j}\) represent the entire probability vector at step \(j\). Then the components of \(\mathbf{p}_{j}\) can be approximated using:

\[\pi_{j}^{i}=\frac{\exp((\log p_{j}^{i}+g^{i})/\tau)}{\sum_{v=1}^{V}\exp((\log p_{j}^{v}+g^{v})/\tau)} \tag{3}\]

where \(g^{i}\) is a sample drawn from the Gumbel(0,1) distribution and \(\tau\) is a temperature parameter* that controls the sparsity of the resulting vectors.

\begin{table} \begin{tabular}{c c} \hline \hline **Source [Translated]** & **Target [Original]** \\ \hline This is an area in which we need to press on. & This is another aspect we have to work on. \\ My group fully supports the substance of what you have said. & My group has discussed in detail the questions that you have posed. \\ That is not at all the case. & That is not the case at all. \\ I shall endeavour to be brief. & I will try to be brief. \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of accepted monolingual original-translationese pairs.

The continuous probability vectors, denoted as \(\mathbf{\pi}_{j}\) at each decoding step \(j\), represent probabilities over tokens in the shared ENC-DEC vocabulary. During the training phase, we use these vectors to compute the language model loss and the semantic similarity loss. We define our language model loss and semantic similarity loss below.

**Language Model Loss**: To ensure target-style fluency, we follow the continuous-approximation approach from Yang et al. (2018). In this approach, we use the language model as a discriminator. The language model is initially pretrained on the target style (i.e. the originals) to capture the target-style distribution and is denoted as \(LM_{og}\).
We feed the approximated decoder output \(\mathbf{\pi}_{j}\) (from the \(j\)-th decoding step) to the pretrained language model \(LM_{og}\) by computing the expected embeddings \(E_{lm}\mathbf{\pi}_{j}\), where \(E_{lm}\) represents the embedding matrix of \(LM_{og}\). The output received from \(LM_{og}\) is a probability distribution over the vocabulary for the next word, \(\mathbf{q}_{j+1}^{og}\). Then the loss at the \(j\)-th step is defined as the cross-entropy loss as shown below:

\[L_{lm}=-\mathbf{\pi}_{j+1}\cdot\log\mathbf{q}_{j+1}^{og} \tag{4}\]

When the output distribution from the decoder \(\mathbf{\pi}_{j+1}\) matches the language model output distribution \(\mathbf{q}_{j+1}^{og}\), the loss achieves its minimum.

**Semantic Similarity Loss**: To enforce content preservation, the encoder representations of the input translation \(e(tr)\) and the expected token embeddings from the decoder \(e(E_{enc}\mathbf{\pi})\) are used to compute cosine similarity. Here, \(E_{enc}\) represents the embedding matrix of the style transfer transformer encoder and \(e(*)\) refers to the contextualized encoder representation. We define the loss as the mean-squared error of the cosine similarity loss:*

Footnote *: We also experimented with directly minimizing the cosine embedding loss at a lower learning rate and observed no differences.

\[L_{ss}=\frac{1}{M}\sum_{M}\left(1-\cos(e(x_{tr}),e(E_{enc}\mathbf{\pi}))\right)^{2} \tag{5}\]

where \(M\) refers to the number of input sentences in a batch.

**Training and Validation**: To achieve a continuous approximation of the decoder output at each time-step and ensure end-to-end differentiability, we employ a two-pass decoding approach (Mihaylova and Martins, 2019; Zhang et al., 2019; Duckworth et al., 2019). This approach works particularly well as we only feed mono-stylistic input to the Transformer (Vaswani et al., 2017) for unsupervised training. During training, the Transformer decoder is run once without accumulating gradients, and the (shifted) predicted sequence together with the encoder output are then fed into the Transformer decoder again to compute the unsupervised losses described above. During the validation phase, the output from the first-pass decoding is used to compute the semantic similarity in terms of BERTScore between the input translation and the style-transferred output. Additionally, mean per-word entropy is computed over the style-transferred output.
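A minimal PyTorch sketch of these unsupervised components follows. The tensor shapes, the mean pooling, and the `lm_logits_fn` interface standing in for the frozen \(LM_{og}\) are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_probs(logits, tau=0.5):
    """Continuous relaxation of argmax decoding (Eq. 3): perturb the
    log-probabilities with Gumbel(0,1) noise and renormalize with a
    temperature-controlled softmax. logits: [B, T, V]."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((F.log_softmax(logits, dim=-1) + gumbel) / tau, dim=-1)

def lm_loss(pi, lm_logits_fn, lm_embedding):
    """Eq. (4): cross-entropy between the decoder's relaxed distribution at
    step j+1 and the frozen target-style LM's prediction from steps <= j.
    `lm_embedding` is the LM embedding matrix E_lm [V, d]; `lm_logits_fn`
    maps expected embeddings [B, T, d] to next-token logits [B, T, V]
    (an assumed interface around LM_og)."""
    expected = pi @ lm_embedding                   # expected embeddings E_lm * pi_j
    q = F.softmax(lm_logits_fn(expected), dim=-1)  # q_{j+1}^{og}
    return -(pi[:, 1:, :] * torch.log(q[:, :-1, :] + 1e-9)).sum(-1).mean()

def semantic_similarity_loss(enc_src, enc_out):
    """Eq. (5): mean squared (1 - cosine) between pooled contextualized
    encoder states of the input translation and of the re-encoded expected
    output embeddings (mean pooling is our choice here)."""
    cos = F.cosine_similarity(enc_src.mean(dim=1), enc_out.mean(dim=1), dim=-1)
    return ((1.0 - cos) ** 2).mean()
```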
Note that, to measure semantic similarity between input \(x_{tr}\) and output \(\hat{y}_{og}\) during validation, we use BERTScore rather than the embedding-based similarity used during training.

## 4 Experimental Settings

### Data

**Training Data**: We use a subset of the EuroParl corpus annotated with translationese information from (Amponsah-Kaakyire et al., 2021) (referred to as MPDE). Our focus is on two language scenarios: (i) **EN-ALL**: English originals (EN), and German and Spanish (ES+DE=ALL) translated into English, and (ii) **DE-ALL**: German originals (DE), and English and Spanish (ES+EN=ALL) translated into German.

**Validation and Test data**: The Self-Supervised baseline (SSNMT) relies on a parallel validation set for hyperparameter tuning. However, as this kind of data does not exist naturally, we generate machine-translationese (mTR) validation and test data by round-trip translating the original sentences (og) in the target language. Similarly, we denote the human-translationese as hTR. For our baseline SSNMT-based monolingual style transfer model, instead of using unaligned (hTR, og) pairs for validation, we utilise aligned (mTR, og) pairs. For EN-ALL, we translate the original sentences from the EN-DE validation and test splits using Facebook's pre-trained WMT'19 EN\(\rightarrow\)DE and DE\(\rightarrow\)EN models (Ng et al., 2019), and for DE-ALL, we use M2M100 (Fan et al., 2021) with EN as the pivot language for round-trip translation. Refer to Appendix A.1 for the dataset statistics of the Style Transfer model.

For DAE and Language Model pretraining (for Joint Training), English and German monolingual data are collected from the EuroParl Corpus (Koehn, 2005) from the OPUS website* (Tiedemann, 2012). The English corpus contains over \(1.5M\) train sentences and \(5k\) dev and test sentences, while the German one has \(2.1M\), \(5k\), \(5k\) train, test and dev sentences, respectively. Noisy input data for BART-style pretraining is generated with the values of parameters reported in Ruiter et al. (2022). For LM finetuning, we use the Original training split of the Comparable Dataset used for Style Transfer (see Appendix A.1).
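As an illustration, the round-trip generation of mTR data can be sketched with the HuggingFace ports of the WMT'19 models; the exact pipeline and decoding settings used in the paper may differ:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def load_mt(name):
    return AutoTokenizer.from_pretrained(name), AutoModelForSeq2SeqLM.from_pretrained(name)

def translate(sentences, tok, model, max_length=256):
    batch = tok(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch, max_length=max_length)
    return tok.batch_decode(generated, skip_special_tokens=True)

# EN originals -> DE -> EN yields (mTR, og) pairs aligned by construction.
tok_fw, mt_fw = load_mt("facebook/wmt19-en-de")
tok_bw, mt_bw = load_mt("facebook/wmt19-de-en")
og = ["I will try to be brief.", "That is not the case at all."]
mtr = translate(translate(og, tok_fw, mt_fw), tok_bw, mt_bw)
```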
Footnote *: [http://opus.nlpl.eu/](http://opus.nlpl.eu/)

### Model Specifications

We implement our baseline system* and the proposed Joint-Training system* in Fairseq (Ott et al., 2019), using a _transformer-base_ with a maximum sequence length of 512 sub-word units. For online parallel-pair extraction, SSNMT requires indexing the mono-stylistic corpora for fast access. We use FAISS (Johnson et al., 2019) for this purpose. Sentence vectors are clustered into buckets using IVF100 (wherein \(100\) equals \(k\) in k-means) and stored without compression (i.e. with Flat indexing). At search time, the top 20 buckets matching a query are examined. All other parameters are set to the values reported in (Ruiter et al., 2022).

Footnote *: [https://github.com/cristiane/fairseq/pull/4](https://github.com/cristiane/fairseq/pull/4)

Footnote *: [https://github.com/cristiane/fairseq/pull/5](https://github.com/cristiane/fairseq/pull/5)

In Joint Training, we use the Fairseq implementation of a Transformer Decoder (Vaswani et al., 2017) Language Model to compute the Language Model Loss during training and validation. The same model is used to measure perplexities during testing. Further information regarding the hyperparameters used for training the Style-Transfer models under different scenarios (i.e. with or without threshold across different data settings) can be found in Appendix A.5.

#### 4.2.1 Classifier

The style transfer models are evaluated using a BERT-based (Devlin et al., 2018) binary classifier trained to distinguish human-translated data from originals. For English, we finetune bert-base-cased for the translationese classification task, and for German, bert-base-german-cased. The binary classifier is trained on the MPDE data with equal amounts of human-translated and original data (hTR, og). For EN-ALL, the training, validation and test splits for binary classification consist of \(96536\), \(20654\) and \(20608\) sentences, respectively, while for DE-ALL, they consist of \(87121\), \(18941\) and \(18976\) sentences, respectively.

## 5 Evaluation

We perform evaluation on the outputs (\(\hat{og}\)) of the style transfer models, given the human-translationese (hTR) half of the test data (without an original reference) as input.

### Quantitative Analysis

We compute three metrics: Acc. (Translationese Classification Accuracy), BERTScore (BERT-F1) and Perplexity (PPL).

**Acc.**: This metric measures the extent to which the models mitigate translationese, in terms of the accuracy of classifying the styles (translated vs. original). We report classification results on the entire test set (hTR, og) and on the test sets generated from the outputs of our style transfer models (\(\hat{og}\), og). Note that a near-random accuracy indicates better style transfer.

**BERT-F1** is computed between the input translations and the style-transferred outputs to measure the degree of content preservation. For EN-ALL, we use pretrained roberta-base and for DE-ALL xlm-roberta-base.

**PPL** is calculated for the style-transferred outputs using a language model (LM\({}_{og}\)) that has been fine-tuned on original text to evaluate the fluency and coherence of the target-style generated text.

For EN-ALL, Table 2a shows that the binary classifier achieves an accuracy of \(79.62\) on the (hTR, og) test set. This is our reference point. First, we study both self-supervised approaches -- without [1] and with a threshold [2].
Results show that both baseline approaches [1, 2] were able to reduce translationese in human-translated texts while preserving the semantics (as evidenced by the F1-BERTScore of \(0.94\)). More precisely, Self-Supervised [2] reduces the classification accuracy by \(16\%\) when compared to the reference and by \(11\%\) when compared to Self-Supervised [1]. Furthermore, the style-transferred outputs from Self-Supervised [2] show a reduction in LM\({}_{og}\) perplexity when compared to the hTR texts and a \(10.5\%\) reduction when compared to Self-Supervised [1] outputs. This indicates that, for the baseline self-supervised approach, additionally relying on a threshold to extract more parallel pairs helps improve style transfer for EN-ALL MPDE*.

Footnote *: See Appendix A.2.1 for the statistics on the accepted pairs.

Next, we examine if the same holds true for the Joint Training architecture and if joint training further improves the style transfer. In Table 2a, the consistent close-to-random accuracy across all four variants of the Joint Training setup confirms the efficacy of this approach. The results show that in the Joint Training setup, regardless of the validation distribution (i.e., hTR or mTR), the style-transferred outputs from the models with no thresholds (i.e. Joint Training [1]-hTR and Joint Training [1]-mTR) achieve \(2\%-8\%\) reductions in accuracy w.r.t. their counterparts with thresholds. This observation suggests that when the model is trained with a larger amount of data through unsupervised training, it may be adequate to utilize only high-quality SPE pairs, without the need for including additional sub-optimal pairs based on a threshold. When comparing these two models against each other (i.e. Joint Training [1]-hTR vs. Joint Training [1]-mTR), the generations from the model validated on hTR exhibit similar semantic similarity but approximately \(8\%\) lower perplexity, which suggests a potential advantage in employing the same distribution (hTR) for validation as during training and testing.

\begin{table} \end{table} Table 2: Notation: No Threshold: [1], With Threshold: [2]; Validation set: human-translation (hTR)/machine-translation (mTR); Acc.: classification accuracy on the entire test set (hTR, og) and on the style-transferred outputs (\(\hat{og}\), og); #Identical: number of outputs identical to the input. _Bold numbers highlight the best overall result under each metric while the second best results are underlined._

We replicate the same analysis for the **DE-ALL setup**, as demonstrated in Table 2b. Note that this dataset is even smaller than the EN-ALL MPDE dataset (see Table 4). Here, the reference accuracy is \(79.30\) on the (hTR, og) test set. When using the baseline Self-Supervised methods [1, 2], the classification accuracy reduces only marginally by \(2\%\) to \(4\%\), with no significant distinction between the two variants. Self-Supervised [1], however, unlike in EN-ALL, benefits from a \(42\%\) lower LM\({}_{og}\) perplexity than Self-Supervised [2]. Nonetheless, similar to the EN-ALL setup, with Joint Training, the degradation is more pronounced. In contrast to EN-ALL, however, the style-transferred outputs from Joint Training without threshold (i.e. Joint Training [1]-hTR and Joint Training [1]-mTR) exhibit only a marginal decrease in accuracy compared to their counterparts with threshold.
Assessing content preservation and fluency in the target style within the Joint Training setup, all the models, except Joint Training [2]-mTR, yield similar results. Notably, the outputs from Joint Training [1]-hTR demonstrate the lowest perplexity. In the case of Joint Training [2]-mTR, the oddly high perplexity when evaluating with LM\({}_{og}\) could be attributed to the sub-optimal parallel pairs accepted by additionally applying a threshold, to performing validation on mTR, or to both.

Finally, we compare the Joint Training setup with the baseline Self-Supervised setup across both MPDE datasets. Overall, Joint Training reduces the classification accuracy to a near-random accuracy, with a more pronounced drop in accuracy for EN-ALL MPDE*. However, we also observe a degradation in the F1-BERTScore as we move to the Joint Training setup. This discrepancy can be attributed to the fact that Self-Supervised training yields a higher number of outputs that are identical to the given input translations. Therefore, a higher F1-BERTScore does not necessarily mean that the non-identical outputs from the Self-Supervised baselines preserve the content better. Earlier studies have shown that BERTScore is sensitive to both meaning and surface changes (Hanna and Bojar, 2021; Zhou et al., 2022), and therefore, one needs to manually examine the outputs.

Footnote *: To closely inspect the impact of style transfer, in Appendix A.2, we report the classification accuracies on only the translationese half of the test set. This provides an insight into the number of sentences in hTR or \(\hat{og}\) that are considered by the classifier as original-like (see #OG-like in Table 6).

### Qualitative Analysis

Prior research (Baker et al., 1993) has shown that translated texts are often simpler than original texts. In order to measure the level of translationese in a translation in terms of lexical simplicity, Toral (2019) introduced two metrics: Type-Token Ratio (TTR) and Lexical Diversity (LD). Following Riley et al. (2020), we conduct a qualitative analysis using these metrics.

**TTR**: Measures lexical variety by dividing the number of unique word types in a text by the total number of words (tokens) in the same text. The lower the ratio, the more reduced the vocabulary of the style-transferred output, indicating weak resemblance to the target original style.

**LD**: Calculated by dividing the number of content words (words that carry significant meaning - adjectives, adverbs, nouns and verbs) by the total number of words in the text. A higher content-word density implies a text conveys more information, aligning it more closely with the original style.

Table 2a shows that for EN-ALL MPDE, the TTR and LD scores for the outputs of the baseline Self-Supervised approach [2] are higher than those of Self-Supervised [1]. This is in line with the quantitative results, which clearly indicate that additionally applying a threshold to retrieve a higher number of parallel pairs helps improve style transfer. However, this does not hold for Joint Training. The TTR and LD scores for the Joint Training [2]-(hTR/mTR) models are higher than for the models without a threshold, although they achieve higher LM\({}_{og}\) perplexities. Interestingly, the LD scores from Joint Training [2]-hTR/mTR even surpass the reference LD score on hTR.
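Both metrics are straightforward to compute from system outputs. The sketch below is one possible operationalization (ours): tokenization details, and whether proper nouns count as content words, are choices that may differ from Toral (2019).

```python
import re

CONTENT_POS = {"ADJ", "ADV", "NOUN", "VERB"}  # content-word tags; adding
                                              # "PROPN" is a design choice

def type_token_ratio(text):
    """TTR: number of unique word types divided by total number of tokens."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def ld_score(doc):
    """LD as defined above: content words / total words, using spaCy coarse
    POS tags over a parsed Doc (e.g. nlp(text) with
    nlp = spacy.load("en_core_web_sm"))."""
    words = [t for t in doc if t.is_alpha]
    return sum(t.pos_ in CONTENT_POS for t in words) / len(words) if words else 0.0
```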
In the case of DE-ALL MPDE (Table 2b), which has an even smaller amount of training data, the lack of correlation between the quantitative and qualitative results is even more evident within the baseline self-supervised and the joint training setups. This suggests the need for an extrinsic evaluation on a downstream task, similar to the one performed by Dutta Chowdhury et al. (2022) on the Natural Language Inference (NLI) task.

In Table 3, we present some examples from the different variants of our Style Transfer system and broadly analyse the generated outputs (\(\hat{og}\)) with respect to some well-known linguistic indicators of translationese Volansky et al. (2015)*. In the first and second examples, the intended meaning and surface form are retained in the outputs from all systems, while the outputs obtained from Joint Training[1]-hTR introduce a higher level of formality, altering both the connective and the phrase. In the second example, Self-Supervised[2] retains key elements from the source text, while Joint Training brings about more significant changes in terms of connectives, lexical choices, and sentence structure. In the third example, while Self-Supervised[2] and Joint Training[1]-mTR change the formulation of the question while removing politeness ("please"), Joint Training[1]-hTR keeps it intact. Similarly, in the fourth example, all three versions use different connectives, with varying politeness and formality. In the last example, Self-Supervised retains a substantial portion of the source text, while Joint Training[1]-mTR eliminates the specific mention of the "Copenhagen criteria" and replaces it with "these", a more general reference, and Joint Training[1]-hTR changes the phrase by using "accession to the European Union" instead of "admittance to the EU".

Footnote *: See Appendix A.3 for the analysis of German outputs.

Overall, we observe that our proposed approach is able to mitigate translationese in its style-transferred outputs while preserving the semantics reasonably well and achieving greater fluency in the target original style.

## 6 Conclusion

In this work, we reduce the presence of translationese in human-translated texts and make them more closely resemble originally authored texts. We approach the challenge as a monolingual translation-based style transfer task. Due to the absence of parallel translated and original data in the same language, we employ a self-supervised style transfer approach that leverages comparable monolingual data from both original and translated sources. However, even this self-supervised approach necessitates the use of parallel data for the validation set to facilitate the learning of style transfer. Therefore, we develop a novel joint training objective that combines both self-supervised and unsupervised criteria, eliminating the need for parallel data during both training and validation. Our unsupervised criterion is defined by combining two components: the original language model loss over the style-transferred output and the semantic similarity loss between the input and style-transferred output. With this, we show that the generated outputs not only resemble the style of the original texts but also maintain semantic content. We evaluate our approach on the binary classification task between originals and translations.
Our findings show that training with the joint loss significantly reduces the original-versus-translation classification accuracy to a level comparable to that of a random classifier, indicating the efficacy of our approach in mitigating translation-induced artifacts in translated data. As future work, we intend to explore more sophisticated approaches to improve content preservation and evaluation. It would also be interesting to apply a version of our style transfer approach to the output of machine translation to alleviate translationese in cross-lingual scenarios. \begin{table} \begin{tabular}{c c c c} \hline \hline **Source (hTR)** & **Self-Supervised[2]** & **Joint Training[1]-mTR** & **Joint Training[1]-hTR** \\ \hline I happily leave it to you to examine this matter. & I leave it to you to examine this matter. & I will leave it to you to reflect on this matter. & I will therefore leave it to you to consider this matter. \\ \hline Unfortunately, this hope has not become reality. & Unfortunately, this has not become reality. & Despite that, it has not become reality. & The reality is, however, rather different. \\ \hline Please could you just explain that? & Could you just explain that? & Could you just explain that? & Could you please explain that? \\ \hline Please could you take suitable action here. & Could you take suitable action? & Can you please do something about it? & Could you please take some action? \\ \hline I am in favour of Turkey’s admittance to the EU, but the Copenhagen criteria must be met. & I am in favour of Turkey’s admittance to the EU, but the Copenhagen criteria must be met. & I am very much in favour of Turkey’s admittance to the EU, but these must be taken into account. & I am very much in favour of Turkey’s accession to the European Union, but the Copenhagen criteria must be met. \\ \hline \hline \end{tabular} \end{table} Table 3: Qualitative analysis of the outputs from different systems. ## Limitations The availability of original and professionally translated data in the same language, domain and genre is limited, both in terms of quantity and language coverage. While our system does not rely on parallel data, for our approach to work it is important to ensure that the data in both modalities (original and translationese) are comparable. Manually evaluating the quality of style transfer and the reduction of translationese is inherently subjective. While we conduct a preliminary analysis to evaluate the outputs, more nuanced linguistic expertise is required for in-depth analysis. Though we propose evaluation metrics, there is no universally accepted gold standard for measuring the effectiveness of translationese reduction. These factors may introduce biases and challenges in comparing the performance of different approaches. At the individual text level, even human experts struggle to distinguish between translationese and originals. Detecting translationese reliably involves analyzing large quantities of data or training classifiers on original and translated text. While Amponsah-Kaakyire et al. (2022) show evidence that high-performance translationese classifiers may in some cases rely on spurious data correlations such as topic information, recent work by Borah et al. (2023) indicates that the effect arising from spurious topic information accounts for only a small part of the strong classification results. A decrease in classifier accuracy therefore strongly suggests reduced translationese signals.
That said, further research on addressing spurious correlations in translationese classification is an important direction that is not explored in our paper. Finally, it is worth noting that the proposed systems may attempt to improve translations that are already of high quality and should not really be touched. Our results may therefore include potential quality degradation due to system overcorrections. Future work aims to address this phenomenon using an oracle-based style transfer approach. ## 7 Acknowledgments We would like to thank Etienne Hahn for helping with the manual analysis of German outputs. We also thank the anonymous reviewers for their invaluable feedback. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - SFB 1102 Information Density and Linguistic Encoding.
2305.02406
Garland's method for token graphs
The $k$-th token graph of a graph $G=(V,E)$ is the graph $F_k(G)$ whose vertices are the $k$-subsets of $V$ and whose edges are all pairs of $k$-subsets $A,B$ such that the symmetric difference of $A$ and $B$ forms an edge in $G$. Let $L(G)$ be the Laplacian matrix of $G$, and $L_k(G)$ be the Laplacian matrix of $F_k(G)$. It was shown by Dalf\'o et al. that for any graph $G$ on $n$ vertices and any $0\leq \ell \leq k \leq \left\lfloor n/2\right\rfloor$, the spectrum of $L_{\ell}(G)$ is contained in that of $L_k(G)$. Here, we continue to study the relation between the spectrum of $L_k(G)$ and that of $L_{k-1}(G)$. In particular, we show that, for $1\leq k\leq \left\lfloor n/2\right\rfloor$, any eigenvalue $\lambda$ of $L_k(G)$ that is not contained in the spectrum of $L_{k-1}(G)$ satisfies \[ k(\lambda_2(L(G))-k+1)\leq \lambda \leq k\lambda_n(L(G)), \] where $\lambda_2(L(G))$ is the second smallest eigenvalue of $L(G)$ (a.k.a. the algebraic connectivity of $G$), and $\lambda_n(L(G))$ is its largest eigenvalue. Our proof relies on an adaptation of Garland's method, originally developed for the study of high-dimensional Laplacians of simplicial complexes.
Alan Lew
2023-05-03T19:59:03Z
http://arxiv.org/abs/2305.02406v1
# Garland's method for token graphs ###### Abstract The \(k\)-th token graph of a graph \(G=(V,E)\) is the graph \(F_{k}(G)\) whose vertices are the \(k\)-subsets of \(V\) and whose edges are all pairs of \(k\)-subsets \(A,B\) such that the symmetric difference of \(A\) and \(B\) forms an edge in \(G\). Let \(L(G)\) be the Laplacian matrix of \(G\), and \(L_{k}(G)\) be the Laplacian matrix of \(F_{k}(G)\). It was shown by Dalfo et al. that for any graph \(G\) on \(n\) vertices and any \(0\leq\ell\leq k\leq\lfloor n/2\rfloor\), the spectrum of \(L_{\ell}(G)\) is contained in that of \(L_{k}(G)\). Here, we continue to study the relation between the spectrum of \(L_{k}(G)\) and that of \(L_{k-1}(G)\). In particular, we show that, for \(1\leq k\leq\lfloor n/2\rfloor\), any eigenvalue \(\lambda\) of \(L_{k}(G)\) that is not contained in the spectrum of \(L_{k-1}(G)\) satisfies \[k(\lambda_{2}(L(G))-k+1)\leq\lambda\leq k\lambda_{n}(L(G)),\] where \(\lambda_{2}(L(G))\) is the second smallest eigenvalue of \(L(G)\) (a.k.a. the algebraic connectivity of \(G\)), and \(\lambda_{n}(L(G))\) is its largest eigenvalue. Our proof relies on an adaptation of Garland's method, originally developed for the study of high-dimensional Laplacians of simplicial complexes. ## 1 Introduction Let \(G=(V,E)\) be a graph. The \(k\)_-th token graph_ of \(G\), denoted by \(F_{k}(G)\), is the graph on vertex set \(\binom{V}{k}\) whose edges are the pairs \(\{A,B\}\) with \(|A\cap B|=k-1\) and \(A\triangle B=(A\setminus B)\cup(B\setminus A)\in E\). Token graphs were originally defined by Johns in [15] under the name of \(k\)_-tuple vertex graphs_ (see also e.g. [2, 21, 3]). In [4], they were reintroduced under the name of \(k\)_-th symmetric powers_. Finally, in [9], they were introduced once again under their current name. Token graphs also appear implicitly in the study of the "symmetric exclusion process" on graphs, introduced by Spitzer in [19] (see also e.g. [6]). Note that for \(k=0\) the graph \(F_{0}(G)\) is just the graph with one vertex (corresponding to the empty set) and no edges, for \(k=1\) we have \(F_{1}(G)\cong G\), and, if \(|V|=n\), then \(F_{k}(G)\cong F_{n-k}(G)\) for all \(0\leq k\leq n\) (see e.g. [9]). For a symmetric matrix \(M\in\mathbb{R}^{m\times m}\), we denote by \(\lambda_{i}(M)\) its \(i\)-th smallest eigenvalue. Let \(L(G)\) be the Laplacian matrix of \(G\), and let \(L_{k}(G)=L(F_{k}(G))\) be the Laplacian of its \(k\)-th token graph. The Laplacian spectrum of token graphs was studied by Dalfo, Duque, Fabila-Monroy, Fiol, Huemer, Trujillo-Negrete and Martinez in [7]. In particular, it was shown in [7] that for any \(0\leq\ell\leq k\leq\lfloor n/2\rfloor\) the spectrum of \(L_{\ell}(G)\) is contained in the spectrum of \(L_{k}(G)\). Let \(1\leq k\leq\lfloor n/2\rfloor\), and let \(\lambda\) be an eigenvalue of \(L_{k}(G)\). We say that \(\lambda\) is _non-trivial_ if the multiplicity of \(\lambda\) as an eigenvalue of \(L_{k}(G)\) is larger than its multiplicity as an eigenvalue of \(L_{k-1}(G)\). In particular, any eigenvalue of \(L_{k}(G)\) that is not contained in the spectrum of \(L_{k-1}(G)\) is non-trivial. We denote the maximal eigenvalue of \(L_{k}(G)\) by \(\lambda_{\max}^{(k)}(G)\), and its minimal non-trivial eigenvalue by \(\lambda_{\min}^{(k)}(G)\). For example, for \(k=1\), we have \(\lambda_{\max}^{(1)}(G)=\lambda_{n}(L(G))\) and \(\lambda_{\min}^{(1)}(G)=\lambda_{2}(L(G))\).
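For readers who wish to experiment with these definitions, the following is a small self-contained sketch (our own illustration, with networkx and numpy as assumed dependencies) that builds \(F_k(G)\) and numerically checks, on a small example, that the spectrum of \(L_1(G)\) is contained in that of \(L_2(G)\), as shown by Dalfo et al. [7].

```python
from itertools import combinations

import networkx as nx
import numpy as np

def token_graph(G: nx.Graph, k: int) -> nx.Graph:
    """k-th token graph F_k(G): vertices are the k-subsets of V(G); two subsets
    A, B are adjacent iff |A intersect B| = k - 1 and their symmetric difference
    is an edge of G."""
    Fk = nx.Graph()
    Fk.add_nodes_from(frozenset(s) for s in combinations(G.nodes, k))
    for A, B in combinations(list(Fk.nodes), 2):
        diff = A ^ B  # symmetric difference of the two k-subsets
        if len(diff) == 2 and G.has_edge(*diff):
            Fk.add_edge(A, B)
    return Fk

# Check on a path graph that spec(L_1(G)) is contained in spec(L_2(G)):
G = nx.path_graph(5)
ev1 = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())
ev2 = np.linalg.eigvalsh(nx.laplacian_matrix(token_graph(G, 2)).toarray())
print(all(np.isclose(ev2, lam).any() for lam in ev1))  # expected: True
```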
Our main result consists of the following bounds on \(\lambda_{\max}^{(k)}(G)\) and \(\lambda_{\min}^{(k)}(G)\). **Theorem 1.1**.: _Let \(G=(V,E)\) be a graph with \(|V|=n\), and let \(2\leq k\leq\lfloor n/2\rfloor\). Then_ \[\lambda_{max}^{(k)}(G)\leq\frac{k}{k-1}\lambda_{max}^{(k-1)}(G)\] _and_ \[\lambda_{min}^{(k)}(G)\geq\frac{k}{k-1}\lambda_{min}^{(k-1)}(G)-k.\] As a consequence, we obtain the following bounds on the non-trivial spectrum of \(L_{k}(G)\). **Theorem 1.2**.: _Let \(G=(V,E)\) be a graph with \(|V|=n\), and let \(1\leq k\leq\lfloor n/2\rfloor\). Let \(\lambda\) be a non-trivial eigenvalue of \(L_{k}(G)\). Then,_ \[k(\lambda_{2}(L(G))-k+1)\leq\lambda\leq k\lambda_{n}(L(G)).\] Both inequalities in Theorem 1.2 are tight: the lower bound is attained when \(G\) is a complete balanced multi-partite graph with at least \(k\) parts, and the upper bound is attained when \(G\) is the union of at least \(k\) disjoint cliques, all of the same size (see Section 4). It was conjectured in [7] (see also [8, 18]) that for any graph \(G\) on \(n\) vertices and any \(1\leq k\leq n-1\), \(\lambda_{2}(L(G))=\lambda_{2}(L_{k}(G))\). In fact, as mentioned in [17], this follows as a special case of Aldous' spectral gap conjecture, proved by Caputo, Liggett and Richthammer (see [6, Section 4.1.1]). Note that, in the special case when \(\lambda_{2}(L(G))\geq k\), the equality \(\lambda_{2}(L(G))=\lambda_{2}(L_{k}(G))\) follows immediately from the lower bound in Theorem 1.2. Our proof of Theorem 1.1 relies on an adaptation of Garland's "local to global" method ([11], see also [5, 22]). In its original form, Garland's method relates the spectrum of a high-dimensional Laplacian matrix on a simplicial complex to the Laplacian spectra of certain subgraphs of the complex. In [1], Aharoni, Berger and Meshulam developed a "global version" of Garland's argument, relating the spectrum of a high-dimensional Laplacian matrix on the clique complex of a graph \(G\) to the Laplacian spectrum of \(G\). This relation was later extended in [16] to more general classes of simplicial complexes. Our argument here can be seen as an analogue of the argument in [1], and is motivated by the similarity between the Laplacian of the \(k\)-th token graph of a graph \(G\) and the \((k-1)\)-dimensional Laplacian of the clique complex of \(G\). The paper is organized as follows. In Section 2 we present some background material on Laplacian matrices and on the Laplacian spectrum of token graphs. In Section 3 we prove our main results, Theorems 1.1 and 1.2. In Section 4 we present extremal examples showing the sharpness of Theorem 1.2. ## 2 Preliminaries Let \(G=(V,E)\) be a graph with \(|V|=n\). For convenience, we will assume \(V=[n]\). For a vertex \(v\in V\), let \(d_{G}(v)=|\{e\in E:\,v\in e\}|\) be the _degree_ of \(v\) in \(G\). The _Laplacian matrix_ \(L(G)\in\mathbb{R}^{n\times n}\) is defined as \[L(G)_{u,v}=\begin{cases}d_{G}(v)&\text{if }u=v,\\ -1&\text{if }\{u,v\}\in E,\\ 0&\text{otherwise}.\end{cases}\] For \(0\leq\ell\leq k\leq n\), let \(B_{n,k,\ell}\in\mathbb{R}^{\binom{n}{k}\times\binom{n}{\ell}}\) be a matrix with rows indexed by the \(k\)-subsets of \([n]\) and columns indexed by its \(\ell\)-subsets, with elements \[(B_{n,k,\ell})_{\sigma,\eta}=\begin{cases}1&\text{if }\eta\subset\sigma,\\ 0&\text{otherwise},\end{cases}\] for \(\sigma\in\binom{[n]}{k}\) and \(\eta\in\binom{[n]}{\ell}\). It is well known (see e.g.
[13, 14, 20]) that for \(k\leq\lfloor n/2\rfloor\), \(B_{n,k,\ell}\) has rank \(\binom{n}{\ell}\). In [7, Theorem 4.3], it was shown that for any \(1\leq k\leq n\), \[B_{n,k,k-1}L_{k-1}(G)=L_{k}(G)B_{n,k,k-1}. \tag{2.1}\] Equation (2.1) implies that both \(\operatorname{Im}(B_{n,k,k-1})\) and \(\operatorname{Im}(B_{n,k,k-1})^{\perp}=\operatorname{Ker}(B_{n,k,k-1}^{T})\) are invariant subspaces of \(L_{k}(G)\). Moreover, for \(1\leq k\leq\lfloor n/2\rfloor\), using (2.1) and the fact that \(B_{n,k,k-1}\) has full column rank, we obtain that if \(\phi_{1},\ldots,\phi_{\binom{n}{k-1}}\) form a basis of \(\mathbb{R}^{\binom{n}{k-1}}\) consisting of eigenvectors of \(L_{k-1}(G)\), with eigenvalues \(\lambda_{1},\ldots,\lambda_{\binom{n}{k-1}}\) respectively, then \(B_{n,k,k-1}\phi_{1},\ldots,B_{n,k,k-1}\phi_{\binom{n}{k-1}}\) form a basis of \(\operatorname{Im}(B_{n,k,k-1})\) consisting of eigenvectors of \(L_{k}(G)\), with the same eigenvalues \(\lambda_{1},\ldots,\lambda_{\binom{n}{k-1}}\) (see [7, Corollary 4.5]). In particular, the spectrum of \(L_{k-1}(G)\) is contained (including multiplicities) in the spectrum of \(L_{k}(G)\). As immediate consequences, we obtain the following useful results: **Lemma 2.1**.: _Let \(1\leq k\leq\lfloor n/2\rfloor\), and let \(\lambda\) be an eigenvalue of \(L_{k}(G)\). Then, \(\lambda\) is non-trivial if and only if there exists an eigenvector \(\phi\) of \(L_{k}(G)\) with eigenvalue \(\lambda\) satisfying \(B_{n,k,k-1}^{T}\phi=0\)._ **Lemma 2.2**.: _Let \(1\leq k\leq\lfloor n/2\rfloor\). Then_ \[\lambda_{min}^{(k)}(G)=\min\left\{\frac{\phi^{T}L_{k}(G)\phi}{\|\phi\|^{2}}\, :\,0\neq\phi\in\mathbb{R}^{\binom{n}{k}},\,B_{n,k,k-1}^{T}\phi=0\right\}.\] ## 3 A Garland-type argument In this section we prove our main results, Theorems 1.1 and 1.2. Let \(G=([n],E)\) be a graph, and let \(2\leq k\leq\lfloor n/2\rfloor\). Let \(L=L(G)\) and \(L_{k}=L_{k}(G)=L(F_{k}(G))\). We will denote the edge set of \(F_{k}(G)\) by \(E_{k}\). Moreover, for \(\sigma\in\binom{[n]}{k}\), let \(d_{k}(\sigma)=d_{F_{k}(G)}(\sigma)\) be the degree of \(\sigma\) in \(F_{k}(G)\). Let \(\phi\in\mathbb{R}^{\binom{n}{k}}\). For any \(u\in[n]\), we define \(\phi_{u}\in\mathbb{R}^{\binom{n}{k-1}}\) by \[\phi_{u}(\tau)=\begin{cases}\phi(\tau\cup\{u\})&\text{ if }u\notin\tau,\\ 0&\text{ if }u\in\tau,\end{cases}\] for any \(\tau\in\binom{[n]}{k-1}\). For a set \(\sigma\in\binom{[n]}{k}\), denote \(E_{\sigma}=\{e\in E:\,e\subset\sigma\}.\) We define a diagonal matrix \(D_{k}\in\mathbb{R}^{\binom{n}{k}\times\binom{n}{k}}\) by \[(D_{k})_{\sigma,\tau}=\begin{cases}|E_{\sigma}|&\text{ if }\sigma=\tau,\\ 0&\text{ otherwise},\end{cases} \tag{3.1}\] for all \(\sigma,\tau\in\binom{[n]}{k}\). Theorem 1.1 will follow from the following identity: **Proposition 3.1**.: _Let \(\phi\in\mathbb{R}^{\binom{n}{k}}\). Then_ \[(k-1)\phi^{T}L_{k}\phi=\left(\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}\right)- 2\phi^{T}D_{k}\phi.\] For the proof of Proposition 3.1, we will need the following result about sums of degrees in \(F_{k}(G)\). **Lemma 3.2**.: _Let \(\sigma\in\binom{[n]}{k}\). Then,_ \[\sum_{u\in\sigma}d_{k-1}(\sigma\setminus\{u\})=(k-1)d_{k}(\sigma)+2|E_{\sigma }|.\] Proof.: Note that, for any \(0\leq j\leq n\) and any \(\eta\in\binom{[n]}{j}\), \(d_{j}(\eta)=|\{e\in E:\,|e\cap\eta|=1\}|\). 
Therefore, \[\sum_{u\in\sigma}d_{k-1}(\sigma\setminus\{u\})=\sum_{u\in\sigma}\sum_{v\in\sigma\setminus\{u\}}\sum_{\begin{subarray}{c}w\in([n]\setminus\sigma)\cup\{u\},\\ \{v,w\}\in E\end{subarray}}1=\sum_{u\in\sigma}\sum_{v\in\sigma\setminus\{u\}}\sum_{\begin{subarray}{c}w\in[n]\setminus\sigma,\\ \{v,w\}\in E\end{subarray}}1+\sum_{u\in\sigma}\sum_{\begin{subarray}{c}v\in\sigma\setminus\{u\},\\ \{u,v\}\in E\end{subarray}}1\] \[=(k-1)\sum_{v\in\sigma}\sum_{\begin{subarray}{c}w\in[n]\setminus\sigma,\\ \{v,w\}\in E\end{subarray}}1+\sum_{u\in\sigma}\sum_{\begin{subarray}{c}v\in\sigma\setminus\{u\},\\ \{u,v\}\in E\end{subarray}}1=(k-1)d_{k}(\sigma)+2|E_{\sigma}|.\] Proof of Proposition 3.1.: We have \[\phi^{T}L_{k}\phi=\sum_{\{\sigma,\tau\}\in E_{k}}(\phi(\sigma)-\phi(\tau))^{2}=\sum_{\sigma\in\binom{[n]}{k}}d_{k}(\sigma)\phi(\sigma)^{2}-2\sum_{\{\sigma,\tau\}\in E_{k}}\phi(\sigma)\phi(\tau)\] \[=\sum_{\sigma\in\binom{[n]}{k}}d_{k}(\sigma)\phi(\sigma)^{2}-2\sum_{\eta\in\binom{[n]}{k-1}}\sum_{\begin{subarray}{c}\{v,w\}\in E,\\ v,w\notin\eta\end{subarray}}\phi(\eta\cup\{v\})\phi(\eta\cup\{w\}). \tag{3.2}\] Similarly, \[\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}=\sum_{u=1}^{n}\sum_{\tau\in\binom{[n]}{k-1}}d_{k-1}(\tau)\phi_{u}(\tau)^{2}-2\sum_{u=1}^{n}\sum_{\eta\in\binom{[n]}{k-2}}\sum_{\begin{subarray}{c}\{v,w\}\in E,\\ v,w\notin\eta\end{subarray}}\phi_{u}(\eta\cup\{v\})\phi_{u}(\eta\cup\{w\})\] \[=\sum_{\tau\in\binom{[n]}{k-1}}\sum_{u\in[n]\setminus\tau}d_{k-1}(\tau)\phi(\tau\cup\{u\})^{2}-2\sum_{\eta\in\binom{[n]}{k-2}}\sum_{u\in[n]\setminus\eta}\sum_{\begin{subarray}{c}\{v,w\}\in E,\\ v,w\notin\eta\cup\{u\}\end{subarray}}\phi(\eta\cup\{u,v\})\phi(\eta\cup\{u,w\})\] \[=\sum_{\sigma\in\binom{[n]}{k}}\left(\sum_{u\in\sigma}d_{k-1}(\sigma\setminus\{u\})\right)\phi(\sigma)^{2}-2(k-1)\sum_{\tau\in\binom{[n]}{k-1}}\sum_{\begin{subarray}{c}\{v,w\}\in E,\\ v,w\notin\tau\end{subarray}}\phi(\tau\cup\{v\})\phi(\tau\cup\{w\}). \tag{3.3}\] By Lemma 3.2 we have, for all \(\sigma\in\binom{[n]}{k}\), \[\sum_{u\in\sigma}d_{k-1}(\sigma\setminus\{u\})=(k-1)d_{k}(\sigma)+2|E_{\sigma}|.\] Therefore, by (3.2) and (3.3), we obtain \[(k-1)\phi^{T}L_{k}\phi=\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}-2\sum_{\sigma\in\binom{[n]}{k}}|E_{\sigma}|\phi(\sigma)^{2}=\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}-2\phi^{T}D_{k}\phi.\] We will also need the following lemma. **Lemma 3.3**.: _Let \(\phi\in\mathbb{R}^{\binom{n}{k}}\). Then_ \[\sum_{u=1}^{n}\|\phi_{u}\|^{2}=k\|\phi\|^{2}.\] Proof.: \[\sum_{u=1}^{n}\|\phi_{u}\|^{2}=\sum_{u=1}^{n}\sum_{\tau\in\binom{[n]}{k-1}}\phi_{u}(\tau)^{2}=\sum_{\tau\in\binom{[n]}{k-1}}\sum_{u\in[n]\setminus\tau}\phi(\tau\cup\{u\})^{2}=\sum_{\sigma\in\binom{[n]}{k}}\sum_{u\in\sigma}\phi(\sigma)^{2}=k\sum_{\sigma\in\binom{[n]}{k}}\phi(\sigma)^{2}=k\|\phi\|^{2}.\] We can now prove Theorem 1.1: Proof of Theorem 1.1.: Let \(\lambda=\lambda^{(k)}_{\max}(G)\), and let \(\phi\in\mathbb{R}^{\binom{n}{k}}\) be an eigenvector of \(L_{k}\) with eigenvalue \(\lambda\). By Proposition 3.1, we have \[(k-1)\lambda\|\phi\|^{2}=(k-1)\phi^{T}L_{k}\phi=\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}-2\phi^{T}D_{k}\phi.\] Since \(|E_{\sigma}|\geq 0\) for all \(\sigma\in\binom{[n]}{k}\), we have \(\phi^{T}D_{k}\phi\geq 0\), and therefore \[(k-1)\lambda\|\phi\|^{2}\leq\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}\leq\sum_{u=1}^{n}\lambda_{\max}^{(k-1)}(G)\|\phi_{u}\|^{2}=k\lambda_{\max}^{(k-1)}(G)\|\phi\|^{2},\] where the last equality follows from Lemma 3.3. Hence, we obtain \(\lambda\leq k\lambda_{\max}^{(k-1)}(G)/(k-1)\), as wanted. Now, let \(\lambda=\lambda_{\min}^{(k)}(G)\).
By Lemma 2.1, since \(\lambda\) is non-trivial, there is an eigenvector \(\phi\) of \(L_{k}\) with eigenvalue \(\lambda\) such that \(B_{n,k,k-1}^{T}\phi=0\). We will show that, for any \(u\in[n]\), \(B_{n,k-1,k-2}^{T}\phi_{u}=0\). Let \(u\in[n]\) and \(\eta\in\binom{[n]}{k-2}\). If \(u\in\eta\), we have \[B_{n,k-1,k-2}^{T}\phi_{u}(\eta)=\sum_{\begin{subarray}{c}\tau\in\binom{[n]}{k-1},\\ \eta\subset\tau\end{subarray}}\phi_{u}(\tau)=0,\] by the definition of \(\phi_{u}\). If \(u\notin\eta\), then \[B_{n,k-1,k-2}^{T}\phi_{u}(\eta)=\sum_{\begin{subarray}{c}\tau\in\binom{[n]}{k-1},\\ \eta\subset\tau\end{subarray}}\phi_{u}(\tau)=\sum_{\begin{subarray}{c}\tau\in\binom{[n]}{k-1},\\ \eta\subset\tau,u\notin\tau\end{subarray}}\phi(\tau\cup\{u\})=\sum_{\begin{subarray}{c}\sigma\in\binom{[n]}{k},\\ \eta\cup\{u\}\subset\sigma\end{subarray}}\phi(\sigma)=B_{n,k,k-1}^{T}\phi(\eta\cup\{u\})=0.\] Therefore, by Lemma 2.2, \(\phi_{u}^{T}L_{k-1}\phi_{u}\geq\lambda_{\min}^{(k-1)}(G)\|\phi_{u}\|^{2}\) for all \(u\in[n]\). Moreover, since \(|E_{\sigma}|\leq\binom{k}{2}\) for all \(\sigma\in\binom{[n]}{k}\), we have \(\phi^{T}D_{k}\phi\leq\binom{k}{2}\|\phi\|^{2}\), and thus, by Proposition 3.1, we obtain \[(k-1)\lambda\|\phi\|^{2}=(k-1)\phi^{T}L_{k}\phi=\sum_{u=1}^{n}\phi_{u}^{T}L_{k-1}\phi_{u}-2\phi^{T}D_{k}\phi\] \[\geq\sum_{u=1}^{n}\lambda_{\min}^{(k-1)}(G)\|\phi_{u}\|^{2}-k(k-1)\|\phi\|^{2}=\left(k\lambda_{\min}^{(k-1)}(G)-k(k-1)\right)\|\phi\|^{2},\] where the last equality follows from Lemma 3.3. Hence, \(\lambda\geq k\lambda_{\min}^{(k-1)}(G)/(k-1)-k\). Finally, we obtain Theorem 1.2 by an inductive application of Theorem 1.1. Proof of Theorem 1.2.: We will show that \[\lambda_{\max}^{(k)}(G)\leq k\lambda_{n}(L)\] and \[\lambda_{\min}^{(k)}(G)\geq k(\lambda_{2}(L)-k+1).\] We argue by induction on \(k\). For \(k=1\) the claim is trivial. Let \(k\geq 2\), and assume that \[\lambda_{\max}^{(k-1)}(G)\leq(k-1)\lambda_{n}(L)\] and \[\lambda_{\min}^{(k-1)}(G)\geq(k-1)(\lambda_{2}(L)-(k-1)+1).\] Then, by Theorem 1.1, we obtain \[\lambda_{\max}^{(k)}(G)\leq\frac{k}{k-1}\lambda_{\max}^{(k-1)}(G)\leq k\lambda_{n}(L),\] and \[\lambda_{\min}^{(k)}(G)\geq\frac{k}{k-1}\lambda_{\min}^{(k-1)}(G)-k\geq k(\lambda_{2}(L)-k+1).\] ## 4 Extremal examples The next result shows that the upper and lower bounds in Theorem 1.2 are sharp. **Proposition 4.1**.: _Let \(m\geq k\), and let \(n\) be divisible by \(m\). Let \(G\) be the union of \(m\) disjoint cliques, each of size \(n/m\). Then, the maximal eigenvalue of \(L_{k}(G)\) is exactly \(kn/m=k\lambda_{n}(L(G))\)._ _Let \(\bar{G}\) be the complement graph of \(G\), namely the complete balanced \(m\)-partite graph with sides of size \(n/m\). Then, the minimal non-trivial eigenvalue of \(L_{k}(\bar{G})\) is exactly \(k((m-1)n/m-k+1)=k(\lambda_{2}(L(\bar{G}))-k+1)\)._ Proposition 4.1 follows from an argument similar to the one in [7, Theorem 7.2(iv)]. For completeness, we include a proof. We will need the following result due to Fiedler. Recall that, given graphs \(G=(V,E)\) and \(G^{\prime}=(V^{\prime},E^{\prime})\), the Cartesian product \(G\Box G^{\prime}\) is the graph on vertex set \(V\times V^{\prime}\) with edges of the form \(\{(u,u^{\prime}),(v,v^{\prime})\}\), where either \(u=v\) and \(u^{\prime}\) is adjacent to \(v^{\prime}\) in \(G^{\prime}\), or \(u^{\prime}=v^{\prime}\) and \(u\) is adjacent to \(v\) in \(G\).
**Lemma 4.2** (Fiedler [10, 3.4]).: _For \(1\leq i\leq m\), let \(H_{i}\) be a graph on \(n_{i}\) vertices, and let \(\lambda_{1}^{i}\leq\cdots\leq\lambda_{n_{i}}^{i}\) be its Laplacian eigenvalues. Then, the Laplacian eigenvalues of the Cartesian product \(H_{1}\Box\cdots\Box H_{m}\) are_ \[\left\{\sum_{i=1}^{m}\lambda_{t_{i}}^{i}:\,1\leq t_{i}\leq n_{i}\;\forall i\in[m]\right\}.\] We will also need the following result, describing the Laplacian spectra of token graphs of a complete graph (also known as Johnson graphs). **Lemma 4.3** (See [12, Thm. 6.3.2, Thm. 6.3.3], [7, Eq. 19]).: _Let \(K_{n}\) be the complete graph on \(n\) vertices. Then, the eigenvalues of \(L_{k}(K_{n})\) are_ \[\{j(n-j+1):\,0\leq j\leq k\}.\] _Moreover, for every \(0\leq j\leq k\), the eigenspace corresponding to the eigenvalue \(j(n-j+1)\) is the orthogonal complement of \(\text{Im}(B_{n,k,j-1})\) in \(\text{Im}(B_{n,k,j})\), and therefore the eigenvalue \(j(n-j+1)\) has multiplicity \(\binom{n}{j}-\binom{n}{j-1}\). In particular, the eigenvectors corresponding to the eigenvalue \(k(n-k+1)\) are exactly the vectors \(\phi\in\mathbb{R}^{\binom{n}{k}}\) satisfying \(B_{n,k,k-1}^{T}\phi=0\)._ Finally, we will need the following result relating the spectrum of \(L_{k}(G)\) and that of \(L_{k}(\bar{G})\), which follows from the proof of [7, Theorem 6.2]: **Lemma 4.4**.: _Let \(G\) be a graph on \(n\) vertices, and let \(\lambda\) be a non-trivial eigenvalue of \(L_{k}(G)\). Then \(\lambda=k(n-k+1)-\mu\), where \(\mu\) is some non-trivial eigenvalue of \(L_{k}(\bar{G})\)._ Proof.: Let \(\phi\) be an eigenvector of \(L_{k}(G)\) with eigenvalue \(\lambda\). By Lemma 2.1, we can assume \(B_{n,k,k-1}^{T}\phi=0\). It was shown in [7, Theorem 6.2] that \(\phi\) is also an eigenvector of \(L_{k}(\bar{G})\) with eigenvalue \(\mu\), and an eigenvector of \(L_{k}(K_{n})\) with eigenvalue \(\xi\), for some \(\mu\) and \(\xi\) such that \(\lambda+\mu=\xi\). Since \(B_{n,k,k-1}^{T}\phi=0\), by Lemma 4.3 we have \(\xi=k(n-k+1)\). We obtain \(\lambda=k(n-k+1)-\mu\). Furthermore, since \(\phi\) is an eigenvector of \(L_{k}(\bar{G})\) with eigenvalue \(\mu\) satisfying \(B_{n,k,k-1}^{T}\phi=0\), by Lemma 2.1, \(\mu\) is a non-trivial eigenvalue of \(L_{k}(\bar{G})\). Proof of Proposition 4.1.: Let \(G_{1},\ldots,G_{m}\) be the connected components of \(G\), each isomorphic to the complete graph on \(n/m\) vertices. Let \[\mathcal{I}=\left\{(k_{1},\ldots,k_{m}):\,0\leq k_{i}\leq n/m,\,\sum_{i=1}^{m}k_{i}=k\right\}.\] Then, for each \((k_{1},\ldots,k_{m})\in\mathcal{I}\), \(F_{k}(G)\) has a connected component isomorphic to \(F_{k_{1}}(G_{1})\Box\cdots\Box F_{k_{m}}(G_{m})\) (see proof of Corollary 6.4 in [7]). By Lemma 4.2 and Lemma 4.3, every eigenvalue of \(L_{k}(G)\) is of the form \[\sum_{i=1}^{m}j_{i}(n/m-j_{i}+1)=\left(\sum_{i=1}^{m}j_{i}\right)n/m-\sum_{i=1}^{m}j_{i}(j_{i}-1),\] for \((k_{1},\ldots,k_{m})\in\mathcal{I}\) and \(0\leq j_{i}\leq k_{i}\) for all \(1\leq i\leq m\). Since \(m\geq k\), we can choose each \(k_{i}\) to be either \(0\) or \(1\), and \(j_{i}=k_{i}\) for all \(i\), to obtain an eigenvalue \[\lambda=\left(\sum_{i=1}^{m}k_{i}\right)n/m-\sum_{i=1}^{m}k_{i}(k_{i}-1)=kn/m=k\lambda_{n}(L(G)). \tag{4.1}\] By Theorem 1.2, this is the maximal eigenvalue of \(L_{k}(G)\). Furthermore, note that, by Theorem 1.2, this is a non-trivial eigenvalue of \(L_{k}(G)\) (otherwise, it is an eigenvalue of \(L_{k-1}(G)\) larger than \((k-1)\lambda_{n}(L(G))\), a contradiction). Now, let \(\bar{G}\) be the complement of \(G\).
Then \(\lambda_{2}(L(\bar{G}))=n-\lambda_{n}(L(G))=(m-1)n/m\). By Lemma 4.4, \(\lambda=k(n-k+1)-\mu\), where \(\mu\) is some non-trivial eigenvalue of \(L_{k}(\bar{G})\). By (4.1), we have \[\mu=k(n-k+1)-\lambda=k(n-k+1)-kn/m=k((m-1)n/m-k+1)=k(\lambda_{2}(L(\bar{G}))-k+1).\] By Theorem 1.2, this is the minimal non-trivial eigenvalue of \(L_{k}(\bar{G})\).
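As a quick numerical sanity check of Proposition 4.1, one can reuse the `token_graph` sketch from the introduction (again our own illustration, not part of the original argument):

```python
# Assumes the imports and token_graph() from the earlier sketch.
# Two disjoint triangles: m = 2 cliques of size n/m = 3, so for k = 2 the
# maximal eigenvalue of L_2(G) should equal k*n/m = k*lambda_n(L(G)) = 6.
G = nx.disjoint_union(nx.complete_graph(3), nx.complete_graph(3))
L2 = nx.laplacian_matrix(token_graph(G, 2)).toarray()
print(np.linalg.eigvalsh(L2).max())  # expected: 6.0 (up to rounding)
```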
2301.06397
Room-scale CO2 injections in a physical reservoir model with faults
We perform a series of repeated CO2 injections in a room-scale physical model of a faulted geological cross-section. Relevant parameters for subsurface carbon sequestration, including multiphase flows, capillary CO2 trapping, dissolution, and convective mixing, are studied and quantified. As part of a forecasting benchmark study, we address and quantify six predefined metrics for storage capacity and security in typical CO2 storage operations. Using the same geometry, we investigate the degree of reproducibility of five repeated experimental runs. Our analysis focuses on physical variations of the spatial distribution of mobile and dissolved CO2, multiphase flow patterns, development in mass of the aqueous and gaseous phases, gravitational fingers, and leakage dynamics. We observe very good reproducibility in homogenous regions with up to 97 % overlap between repeated runs, and that fault-related heterogeneity tends to decrease reproducibility. Notably, we observe an oscillating anticline CO2 leakage behavior from an open anticline with a spill point in the immediate footwall of a normal fault, and discuss the underlying causes for the observed phenomenon within the constraints of the studied system.
Martin A. Ferno, Malin Haugen, Kristoffer Eikehaug, Olav Folkvord, Benyamine Benali, Jakub W. Both, Erlend Storvik, Casey W. Nixon, Robert L. Gawthrope, Jan Martin Nordbotten
2023-01-16T12:33:33Z
http://arxiv.org/abs/2301.06397v1
# Room-scale CO\({}_{2}\) injections in a physical reservoir model with faults ###### Abstract We perform a series of repeated CO\({}_{2}\) injections in a room-scale physical model of a faulted geological cross-section. Relevant parameters for subsurface carbon sequestration, including multiphase flows, capillary CO\({}_{2}\) trapping, dissolution, and convective mixing, are studied and quantified. As part of a forecasting benchmark study, we address and quantify six predefined metrics for storage capacity and security in typical CO\({}_{2}\) storage operations. Using the same geometry, we investigate the degree of reproducibility of five repeated experimental runs. Our analysis focuses on physical variations of the spatial distribution of mobile and dissolved CO\({}_{2}\), multiphase flow patterns, development in mass of the aqueous and gaseous phases, gravitational fingers, and leakage dynamics. We observe very good reproducibility in homogeneous regions with up to 97 % overlap between repeated runs, and that fault-related heterogeneity tends to decrease reproducibility. Notably, we observe an oscillating anticline CO\({}_{2}\) leakage behavior from an open anticline with a spill point in the immediate footwall of a normal fault, and discuss the underlying causes for the observed phenomenon within the constraints of the studied system. 1. Introduction In its simplest form, carbon sequestration involves the injection of captured carbon dioxide (CO\({}_{2}\)) into deep subsurface porous and permeable sedimentary rocks, overlain by an impermeable sealing layer. The migration of the buoyancy-driven CO\({}_{2}\) is determined by: i. the intrinsic rock and fluid properties (e.g. porosity, permeability, fluid density, and viscosity); and ii. the distribution and properties of geological structures such as faults and fracture networks, that are inherent to both reservoir and seal rocks. Faults are discontinuities that form at a range of scales; they can act as conduits or barriers for flow, and they generally have directionally-dependent flow properties (Bastesen and Rotevatn, 2012). Large sealing faults control storage site geometries and compartmentalization, whereas networks of small faults and fractures may affect reservoir flow and seal integrity (Ogata et al, 2014). ###### 1.1.1 Faults, fractures and flow The properties of the fracture networks (i.e. topology, connectivity, permeability) that form damage zones around faults as they evolve (Nixon et al, 2020) are particularly important to CO\({}_{2}\) flow. Subsurface faults are discerned from reflection seismic data, but descriptions suffer from limitations in seismic resolution and coverage. Geologically analogous outcrops and dedicated laboratory experiments provide a means to investigate smaller structures around faults and shed light on flow and sealing properties. Being able to identify and forecast the behavior of potential subsurface bypass structures during CO\({}_{2}\) injection is essential; understanding the interplay between multiphase flow and fault evolution is critically needed for carbon sequestration projects. Despite this, the flow properties of faults and their damage zones remain insufficiently understood, and little is known about how their flow behavior evolves in the different stages of a carbon sequestration project. Our current understanding of large-scale CO\({}_{2}\) plume migration is mainly from time-lapse seismic surveys with limited a priori knowledge (Furre et al, 2017).
With increases in reservoir pressures during CO\({}_{2}\) injection, there is a greater risk of reactivation and potential generation of new fracture networks that can enhance seal permeability, capillary flow and provide pathways for fluid escape to shallower reservoirs or the surface (e.g. Ogata et al., 2014; Karstens and Berndt, 2015; Karstens et al., 2017). #### Forecasting skills Accurate modeling and simulation of multiphase flow in porous media with faults is central to carbon sequestration forecasting, risk assessment and mitigation strategies. Geologically accurate models are needed for simulating flow responses of faults, where analogous outcrops and laboratory experiments may be used to discern spatial variations within faults [Rotevatn et al, 2009] to determine percolation potential. Susceptibility for reactivation upon pressurization can then be evaluated with geomechanical modelling. Forecasting of field-scale CO\({}_{2}\) migration and reservoir pressure response is commonly achieved with history-matching and extrapolation exercises, routinely applied in hydrocarbon production. This approach can be effective with a limited number of parameters [Sharma et al, 2019], but flow simulations that include geologically realistic descriptions of faults and fracture networks are computationally expensive in universally applied industrial simulators. Current field-scale simulation approaches require significant approximations, with respect either to spatial and temporal resolution, or to coupled processes, or both. Therefore, insights into scale-dependent flow behavior are needed to better couple flow dynamics in the presence of fault-related heterogeneity. #### The laboratory FluidFlower rig and its relevance to subsurface storage The FluidFlower concept links research and dissemination through a new experimental rig constructed at the University of Bergen (UIB) that enables meter-scale, multiphase, quasi two-dimensional flow on model geological geometries with unprecedented data acquisition and repeatability. The geological geometry of the physical room-scale model (cf. **Figure 1**) is motivated by typical North Sea reservoirs. Structurally, the benchmark geometry is characterized by broad open folds and normal faults: a major normal fault breaches the lower reservoir-seal system and terminates upward at the base of the upper reservoir. A broad open anticline, in the footwall of the fault, forms the main trap to the lower reservoir-seal system and has a spill point in the immediate footwall of the fault. The broad open anticline is also the main trap geometry for the upper reservoir-seal system, but this is affected by a graben bounded by two oppositely dipping normal faults. Hence, the designed geometry focuses on the need to couple fault and flow behavior during CO\({}_{2}\) injection, and was designed to achieve realistic CO\({}_{2}\) flow and trapping mechanisms to benchmark the numerical modelling capability of the porous media community with new physical measurements of key processes. While the present study is at the laboratory scale, the fundamental physical processes of multiphase, multi-component flows in heterogeneous porous media are the same as at reservoir conditions.
The most important subsurface CO\({}_{2}\) trapping mechanisms are present in the FluidFlower rig: _structural trapping_ occurs under the sealing sand layers and within different reservoir zones; _dissolution trapping_ occurs almost instantaneously when the injected CO\({}_{2}\) dissolves into the water phase initially saturating the porous media; _residual trapping_ is observed in regions with intermediate water saturation, but is temporary because of rapid dissolution; _convective mixing_ occurs when the CO\({}_{2}\)-saturated water migrates downwards and generates gravitational fingers. _Mineral trapping_ is by design not part of the benchmark study for increased control of active chemistry (very pure silica sand is used, and the pressure and temperature conditions are set outside mineralization thresholds within the experimental time series). Hence, the observed flow and trapping behavior in the FluidFlower rig to a large degree represents the physical processes in a subsurface system, even if the petrophysical properties like porosity and permeability, as well as the pressure and temperature conditions, are not directly comparable to subsurface conditions. Furthermore, we remark that the structural trapping in the FluidFlower relies more on capillary entry pressure, and less on permeability contrast, than expected at the field scale. Overall, we argue that the findings and observations in this study are indicative of field-scale simulation, although several observed phenomena scale differently in the FluidFlower compared with subsurface systems (for a detailed scaling analysis, see Kovacek et al 2023, this issue). Despite the physical similarities, actual field-scale simulation will deviate from this study in several important aspects, of which we highlight (see Flemish et al 2023, this issue, for a comprehensive discussion): * _Heterogeneity._ The facies in the benchmark geometry were built with a single sand type aiming for homogeneous petrophysical properties, hence emphasizing larger-scale structural heterogeneities. On the field scale, it is expected that there will be significant subscale heterogeneity also within each geological structure. * _Quality of geological characterization._ A high-resolution image of the geological geometry, with accompanying thicknesses before CO\({}_{2}\) injections, was issued to the benchmark participants (cf. Nordbotten et al 2022). At the field scale, the initial geological characterization will be associated with higher uncertainty and lower spatial resolution data from seismic surveys. * _Pressure and temperature conditions._ The laboratory conditions in the reported study yield a gaseous CO\({}_{2}\) phase when injected, compared with a liquid or supercritical phase at field conditions in typical reservoirs. The difference in phase condition has a minor impact on viscosity, but leads to a denser and less compressible CO\({}_{2}\) phase at the field scale. The importance of forecasting, risk assessment and mitigation strategies for carbon sequestration, with many of the critical coupled subsurface processes remaining poorly understood, merits a continued broad interdisciplinary engagement. The utility of numerical modeling and simulation as a key decision-making tool for industrial application of CO\({}_{2}\) storage is scrutinized in the FluidFlower international benchmark initiative.
## 2 Materials and Methods This section briefly describes the key operational considerations and methodology developed to perform the experimental part of the Benchmark study. It provides an overview of all procedural steps, and a description of the geological geometry and parameters. The description is not exhaustive, and the reader is referred to the supplementary materials (SM) and cited work for more detailed descriptions. ### 2.1 Fluids The main fluids and their composition and usage are listed in Table 1. Throughout the article we refer to the gaseous form of CO\({}_{2}\) as _'gas'_: the dry gas injected will partially partition into the aqueous phase saturating the porous media, and will have a positive, non-zero water content due to solubility of water in CO\({}_{2}\). The water content in CO\({}_{2}\) was not explicitly quantified in this work. We refer to the aqueous phase partially saturated with dissolved CO\({}_{2}\) as the _'CO\({}_{2}\)-saturated water'_, and the aqueous phase without CO\({}_{2}\) as _'formation water'_. The aqueous, pH-sensitive solution ('formation water') was in equilibrium with the atmosphere when injected and contained dissolved atmospheric gases (predominantly nitrogen and oxygen). The presence of other gases influences the CO\({}_{2}\)-to-water mass transfer due to differences in gas-to-water Henry's constant (Van De Ven and Mumford, 2020): the CO\({}_{2}\) mass transfer to the formation water releases nitrogen and oxygen into the gaseous phase. Hence, over time the gaseous phase in the system becomes depleted of CO\({}_{2}\), with reduced solubility in water. This effect was predominantly observed towards the later life of the gas accumulation under the anticlines, and is discussed more below. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Fluid** & **Phase** & **Composition** & **Usage** \\ \hline pH-sensitive solution, termed ‘formation water’ & Aqueous & deionized water with 0.14 mM bromothymol blue (BTB) & Saturate the pore space and enable detection of dissolved CO\({}_{2}\) in the aqueous phase \\ \hline \end{tabular} \end{table} Table 1: Fluid compositions and role in benchmark study ### 2.2 Sand handling and porous media flow properties Danish quartz sand was purchased (in total 3.5 tons) and systematically treated to achieve the required properties. Six different sand types were used in the benchmark study (see **Table 2**). Before use, each sand was manually sieved from the supplied sand stock and treated with a strong acid (HCl) to remove impurities (predominantly calcite). The acid was neutralized with sodium hydroxide, and the sand was rinsed with tap water while manually agitating to remove precipitates and dust until no particles were visible, then rinsed in tap water multiple times until the solution was clear and free of particles. The sand was then dried at 60 \({}^{\circ}\)C and stored in cleaned plastic containers with lids until use. The absolute permeability was measured for each sand, all with nominal porosity 0.44. Detailed sand descriptions, properties and procedural steps are outlined in [Haugen et al. 2023, this issue]. ### 2.3 The FluidFlower rig and building the geometry The _FluidFlower_ enables meter-scale multiphase quasi two-dimensional flow experiments on model geological geometries with unprecedented data acquisition and repeatability. Time-lapse images are acquired to monitor dynamic, multiphase flow patterns with high spatial resolution where single sand grains may be identified.
The CO\({}_{2}\)-saturated water is distinguished from formation water by a color shift of the aqueous pH-sensitive solution, whereas the gas phase is observed by a reduction in colored aqueous phase (formation or CO\({}_{2}\)-saturated water). The design allows for repeated injection tests with near-identical initial conditions, allowing physical uncertainty and variability to be addressed using the same geological geometry. The model geological geometry is constructed using unconsolidated sands (cf. **Table 2**) and held in place between an optically transparent front panel and an opaque back panel. The rig has 56 perforations that enable a range of well configurations (injector, producer, monitoring, or plugged) for porous media flow studies. The FluidFlower rig is curved to sustain internal forces and can accommodate porous media up to approximately 6 m\({}^{2}\) (3 m length x 2 m height). The benchmark study monitored four wells (two for CO\({}_{2}\) injection and two for pressure measurements), but several other wells were active during the experiments. Technical wells/ports at the bottom and top enabled re-setting fluids between CO\({}_{2}\) injections and maintaining a fixed water column during experiments. Technical considerations and mechanical properties of the FluidFlower rig are detailed elsewhere [Eikehaug et al 2023a, this issue]. The FluidFlower has no-flow boundaries at the bottom and both sides, whereas the top is open with a fixed free water column (constant hydrostatic head). Relevance for subsurface carbon sequestration processes is maintained as dominant multiphase flow parameters and trapping mechanisms are present in the room-scale laboratory flow rig, including capillarity, dissolution, and convective mixing. The dry, unconsolidated sands were meticulously poured from the top into the water-filled void between the front and back panels. Each layer (consisting of one sand type, except the heterogeneous fault) was constructed from the bottom and upwards, and faults and large dipping angles were created by manipulating the layer during pouring using guiding polycarbonate rectangles, funnels and plastic hoses. Mechanical manipulation (raking/scratching) was kept to a minimum and used only in some areas in the vicinity of the faults. \begin{table} \begin{tabular}{|l|r|r|r|r|} \hline & Grade & \(<\)grain size\(>\)a \(\pm\) \(\sigma\) [mm] & Nominal K [D] & P\({}_{\text{C,entry}}\)b [mbar] \\ \hline Sand ESF & Fine & \(0.20\pm 0.11\) & 50 & 15.0 \\ \hline Sand C & Coarse (lower) & \(0.66\pm 0.09\) & 500 & 3.3 \\ \hline Sand D & Coarse (upper) & \(1.05\pm 0.14\) & 1 000 & 0.9 \\ \hline Sand E & Very coarse (lower) & \(1.45\pm 0.19\) & 2 000 & _0.26_ \\ \hline Sand F & Very coarse (upper) & \(1.77\pm 0.31\) & 4 000 & _0.10_ \\ \hline Sand G & Granules & \(2.51\pm 0.63\) & 10 000 & _0.01_ \\ \hline \end{tabular} a averaged smallest grain width reported for each sand. Grains are not circular. b Capillary entry pressures measured from gas column height (in mm) sustained under each sand and converted to mbar (see the sketch below). Italic numbers extrapolated from trend; no observable gas column. \end{table} Table 2: Key parameters for each of the six sand types Faults were constructed through an iterative process, detailed in [Haugen et al 2023, this issue], and the sealed fault was created using a silicone rubber rectangle. The hydrostatic pressure during geometry assembly was 100 mm above operating conditions.
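The column-height-to-pressure conversion referred to in note b of Table 2 is hydrostatic; the following is a minimal sketch of it, assuming a room-temperature water density (our illustration, not the study's processing script):

```python
RHO_W = 998.0  # water density [kg/m^3], assumed room-temperature value
G_ACC = 9.81   # gravitational acceleration [m/s^2]

def gas_column_to_mbar(h_mm: float) -> float:
    """Capillary entry pressure [mbar] from a sustained gas column height [mm]."""
    return RHO_W * G_ACC * (h_mm / 1000.0) / 100.0  # Pa -> mbar

# Example: a ~153 mm sustained gas column corresponds to ~15 mbar (cf. sand ESF).
print(round(gas_column_to_mbar(153.0), 1))
```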
When the geometry was complete, the water level was lowered to the operating water level (kept constant during all injections). Multiple flushing sequences using injection rates 10% higher than the injection protocols (cf. **SM 4**) were performed to achieve an initial, pre-injection sand settling to improve conditions for reproducibility during CO\({}_{2}\) injections. The nominal porous media depth was 19 mm, but depth variations were observed and accounted for with a spatially resolved depth map (cf. **SM 2**). ### The rationale behind the Benchmark geometry The geological geometry of the physical room-scale model (cf. **Figure 1**) was motivated by typical North Sea reservoirs. It was developed in close interdisciplinary collaboration between UIB researchers from reservoir physics, earth science and applied mathematics based on the following four principles: 1. Incorporate relevant features frequently encountered in subsurface geological carbon sequestration 2. Enable realistic CO\({}_{2}\) flow patterns and trapping scenarios with increasing modeling complexity 3. Sufficiently idealized for the sand facies to be reproduced numerically with high accuracy 4. Be able to operate, monitor and reset the fluids within a reasonable time frame. The geometry was designed to achieve realistic CO\({}_{2}\) flow and trapping mechanisms to evaluate the modelling capability of the porous media community. The anticipated CO\({}_{2}\) flow, migration and phase behavior from each of the two CO\({}_{2}\) injection wells are described below, along with a geological interpretation of the benchmark geometry; the geological features described are found in **Figure 1** and highlighted in _italics_ below. #### Geological interpretation of benchmark geometry The benchmark geometry is a compromise between geological realism, building a physical model from unconsolidated sand, and accurate gridding for numerical simulations of the geometry. The benchmark geometry comprises two stacked reservoir-seal systems, each capped by regional seals (represented by sand ESF). The lower reservoir is a homogeneous, high permeability reservoir (sand F) overlain by a laterally continuous seal. In contrast, the upper reservoir is stratigraphically more heterogeneous, forming an overall upward fining succession, but with permeability variations within the coarse sand layers (alternation of sands E, F, D and C), and additional stratigraphic complexity around a _sealed fault_ associated with the local development of sands C and D. Structurally, the benchmark geometry is relatively simple, characterized by broad open folds and normal faults. The major left-dipping normal fault (_heterogeneous fault_) breaches the lower reservoir-seal system and terminates upward at the base of the upper reservoir (within sand F). A broad open anticline, in the footwall of the fault, forms the main trap to the lower reservoir-seal system and has a spill point in the immediate footwall of the fault. The broad open anticline is also the main trap geometry for the upper reservoir-seal system, but this is affected by a graben bounded by two oppositely dipping normal faults; one _sealed fault_ and one _open fault_. An additional, subtle, low-relief anticline forms a further trap in the footwall of the graben-bounding sealed fault.
The graben-bounding faults tip out downdip into the basal layer of the upper reservoir (sand E) and updip into the base of the top regional seal (the uppermost sand layer in the model); as such, they only affect the stratigraphy in the uppermost reservoir. The _sealed_ and _open faults_ have different properties and sealing potential: the _sealed fault_ is designed as a sealing fault with a low permeability fault core, whereas the _open fault_ has a high permeability fault core and would potentially act as a conduit for cross-formational fluid flow. #### Anticipated flow from well [9, 3] The buoyant gas phase flows upwards and reaches the anticline sealing layer (sand ESF) above the injection point [9, 3]. CO\({}_{2}\)-saturated water is observed in the near-well region directly after onset of CO\({}_{2}\) injection. The anticline dipping angle facilitates gas migration into _Box A_ and accumulation at the highest point of the CO\({}_{2}\) trap. The trap fills with gas and a layer with CO\({}_{2}\)-saturated water forms underneath the downwards expanding gas accumulation. The CO\({}_{2}\)-saturated water flows downwards into _Box C_ over time due to i) the positive pressure gradient from the expanding gas and ii) convection because of the increased density relative to formation water. The gas accumulation increases upon continued injection until the gas-water interface aligns with the _spill point_; the excess gas flows through the _heterogeneous fault_ and into _Box B_ containing the fining upwards sequence and upper fault zone. The layered sequence (sands F, E, D and C, bottom to top) temporarily traps buoyant gas and laterally spreads the gas phase at the capillary barriers between layers. The increased density of CO\({}_{2}\)-saturated water relative to the formation water leads to gravitational fingers. The CO\({}_{2}\) injection ends (after 305 min) when the gas reaches the upper sand layer (sand C) under the seal, and CO\({}_{2}\) in all forms is contained between the left no-flow boundary and the _sealed fault_. _Anticipated flow from well [17, 7]_. The gas phase (injected in sand F) flows upwards and spreads laterally at layer boundaries in the fining upwards sequence (except between sand F and E, cf. **Table 2**). The gas phase advances upwards sequentially when it exceeds the capillary entry pressure in each layer. The CO\({}_{2}\)-saturated water flows downwards due to its increased density and the pressure gradient of the gas accumulation; its flow pattern is influenced by the permeability variations in the layered sequence. The gas phase accumulates under the top seal above the injection well and migrates laterally until CO\({}_{2}\) injection is terminated (after 165 min). Depending on the amount of CO\({}_{2}\) injected, the gas phase will reach the _open fault_, and CO\({}_{2}\) in all forms will be contained between the _open fault_ and the right no-flow boundary. Figure 1: The benchmark geometry with color-enhanced layers for facies identification. Each sand type (ESF, C, D, E, F and G; cf. **Table 2**) has a separate color indicated to the left. Sand/color correlation: ESF/yellow; C/light blue; D/light brown; E/red; F/green; G/dark blue. The geometry includes three faults: _sealed_ (silicone strip), _open_ (sand G), and _heterogeneous_ (sands G, F, D and C). Total length of visible porous media is 2800 mm, and porous media height is nominally 1300 mm. Edge shadows are visible on the left and right, and the active porous media extends 30 mm behind the black metal frame on each side.
The three no-flow boundaries (left, right and bottom) are indicated in grey, whereas the open boundary (top) is indicated in blue. A 100 x 100 mm Cartesian grid, with the origin [0,0] in the lower left corner, the x-axis positively oriented towards the right, and the y-axis positively oriented towards the top, aids coordination in the following. Four monitored ports: two CO\({}_{2}\) injection wells (red circles, coordinates [9,3] and [17,7]) and two pressure ports (purple circles, coordinates [15,5] and [17,11]). Areas for reporting (Box A, B and C) are defined with the following coordinates (top right = TR; top left = TL; bottom right = BR; bottom left = BL): Box A: TL [11,6] \(>\) TR [28,6], BL [11,0] \(>\) BR [28,0]; Box B: TL [0,12] \(>\) TR [11,12], BL [0,6] \(>\) BR [11,6]; Box C: TL [11,4] \(>\) TR [26,4], BL [11,1] \(>\) BR [26,1]. ### Image acquisition and analysis The camera (Sony A7III, lens Samyang AF 45 mm F1.8) used the following settings, kept constant through all injections: shutter speed 1/30 sec; F number F2.8; ISO 100; color temperature 4100 K; and manual focus. The camera was positioned in the curve focal point with a 3.6 m distance from the center point in the rig, halfway up the window height. Images were captured at high spatial (7952 x 4472 pixels, for a total of 35.5 megapixels) and temporal (between 10 sec and 5 min intervals, depending on active experimental phase) resolution to capture displacement and mass-transfer dynamics. Each run consists of more than 1000 images; a subset that captures key events, displacement processes and mass-transfer dynamics is available for open-access download (Eikehaug et al 2023b). The subset contains 137 high-resolution images with the following intervals: 10 images before CO\({}_{2}\) injection at 20 second intervals; images every 5 min during the first 360 min (6 hours) of the experiment (73 images); images every hour until 48 hours (42 images); images every 6 hours until end of experiment (12 images). #### Image analysis toolbox To use the high-resolution images as measurement data, image analysis is required. As part of the benchmark study, the open-source image analysis software _DarSIA_ (short for Darcy Scale Image Analysis, Both et al, 2023a) has been developed, detailed in (Nordbotten et al 2023, this issue). DarSIA provides the capability to extract physically interpretable data from images for quantitative analysis of the image sequences of the time-lapsed CO\({}_{2}\) injection and storage experiments. In particular, DarSIA includes preprocessing tools to align images; project suitable regions of interest of images onto two-dimensional Cartesian coordinate systems; correct for geometrical discrepancies due to e.g. the curved nature of the physical asset; as well as correct white-balance fluctuations and perform color correction utilizing the color checker attached to the physical asset, overall resulting in unified image sequences. Furthermore, additional analysis tools are available to e.g. determine spatial deformation maps comparing different configurations, extract concentration profiles, or identify phases, to mention a few. The latter aims at a Darcy-scale interpretation of the high-resolution images taken of the physical asset, effectively removing sand grains and upscaling fluid quantities.
#### Phase identification The image analysis toolbox was used to separate the different CO\({}_{2}\) phases (gaseous and aqueous) present in the experiments, and a set of assumptions enabled the amount of each phase to be quantified during the CO\({}_{2}\) injection and associated mixing. Four main phases are anticipated: 1. Free gas (potentially flowing gas phase with non-zero gas permeability, referred to as mobile gas) 2. Trapped gas (residually trapped CO\({}_{2}\) with zero gas permeability, referred to as immobile gas) 3. CO\({}_{2}\)-saturated water (aqueous phase with a non-zero CO\({}_{2}\) content) 4. Formation water (aqueous solution with zero CO\({}_{2}\) content) A range of assumptions (cf. **SM 3**) was needed to quantitatively describe the observed multiphase flow phenomena during repeated CO\({}_{2}\) injections in the physical flow rig. Based on these assumptions, a geometric separation of the formation water from any CO\({}_{2}\) in the system and of the gaseous CO\({}_{2}\) from the CO\({}_{2}\)-saturated water is sufficient. This separation was possible due to the use of the pH-indicator mix (cf. **Table 1** and **Figure 2**). Through pixel-wise image comparison to the image corresponding to the injection start, a thresholding approach, both in terms of monochromatic color space and signal intensity, accomplishes the separation (in addition, further techniques are used to convert the signal to Darcy-scale quantities, cf. Nordbotten et al 2023, this issue). The heterogeneous nature of the geometry is considered in the analysis by choosing facies-based threshold parameters, thereby allowing for tailored and relatively accurate phase segmentation, cf. Figure 2. The parameters are chosen such that transition zones are included, as demonstrated. The same unified setup has been used for analyzing all experimental runs. It must be noted that, based on the choice of assumptions and the resulting image analysis, the identification of gaseous phases for which assumption **SM 3.1** is not satisfied may be erroneous; transition zones smear out and the saturation decays, which leads to a sudden disappearance of the post-processed gaseous phase due to the use of fixed threshold parameters. In all experimental runs, two gaseous regions are detected, cf. **Figure 2**, and the described effect takes place for the upper gaseous region, whereas the lower region is detected stably. While the upper region fully dissolves, the lower region results in remaining gas, cf. **SM 3.1ii**, which is detected as gaseous CO\({}_{2}\). Consequently, the subsequent quantitative analysis reports a small amount of non-vanishing gas accumulation towards the end of the experimental runs. _Procedure in the quantitative analysis_ The subsequent quantitative analysis results from post-processing the phase identification. We briefly elaborate on the procedure of key computations. 1. _Mass calculations and concentration maps._ The total CO\({}_{2}\) masses of dissolved and mobile CO\({}_{2}\) are determined through integration of the pixel-wise defined areal densities of mobile CO\({}_{2}\), \(m_{CO_{2}}^{g}=\phi\cdot d\cdot s_{g}\cdot X_{c}^{g}\), and dissolved CO\({}_{2}\), \(m_{CO_{2}}^{w}=\phi\cdot d\cdot s_{w}\cdot X_{c}^{w}\), with the single components determined as follows. Based on assumption **SM 3.V**, the porosity \(\phi\) and the depth \(d\) can be accurately determined.
Resulting from assumption **SM 3.1**, the phase identification provides saturation maps \(s_{g}\) for the gaseous phase and \(s_{w}\) for the aqueous phase, taking values either 0 or 1. It remains to quantify the mass concentrations of CO\({}_{2}\), \(X_{c}^{g}\) and \(X_{c}^{w}\), in the gaseous and aqueous phases, respectively. Based on assumption **SM 3.1**, \(X_{c}^{g}\) is provided as the density of gaseous CO\({}_{2}\) under operational conditions, cf. **SM 1**, obtained from the NIST database (Lemmon et al., 2022). With that, the pixel-wise areal density \(m_{CO_{2}}^{g}\) is known. Assumption **SM 3.1** now allows for obtaining the remaining mass concentration \(X_{c}^{w}\) through a mass balance, as follows. As illustrated in **Figure 2**, two CO\({}_{2}\) plumes originating from the two injection ports remain unconnected throughout almost the entire run time (until 84 hours). The total CO\({}_{2}\) mass in each plume is known at any point in time based on the injection protocol, cf. **SM 4**, while the respective total mass of mobile CO\({}_{2}\) is determined through integration of \(m_{CO_{2}}^{g}\) over the area of the plumes. Subtraction of both provides the total mass of dissolved CO\({}_{2}\) for each plume. Finally, by assumption **SM 3.1i**, \(X_{c}^{w}\) is set to be: 0 in the formation water; constant in each connected region of CO\({}_{2}\)-saturated water, equal to the ratio between the total mass of dissolved CO\({}_{2}\) and the total volume of that region; and, while not relevant for the mass calculations yet relevant for the discussion of convective mixing, equal to \(x_{c,max}^{w}\) = 1.8 kg/m\({}^{3}\) in the remaining gaseous regions.

Figure 2: Resulting phase identification of formation water, CO\({}_{2}\)-saturated water and free gas using DarSIA, at injection stop; two plumes are identified, containing free gas regions (yellow contour) and CO\({}_{2}\)-saturated water (green contour). Subfigure B: The pH-indicator mix (left and right, with and without contours, resp.) allows for visual separation of the different phases based on color spectra. Subfigure C: Detection of free gas in the open fault. Subfigure D: Due to the use of regularization in the upscaling, DarSIA smears out fingers and thus merely detects fingertips for fingers that are closer than a few grain diameters.

2. _Physical variability._ Given a set of phase segmentations associated with different configurations, the intersections and complements of the phase segmentations can be directly determined. Furthermore, we introduce metrics based on volume-weighted ratios of these to quantify the corresponding overlap and unique appearances of detected regions.

3. _Fronts and fingers._ When restricted to a region of interest, the internal interface between the detected formation water and the CO\({}_{2}\)-saturated water can be interpreted as a propagating front. Its length can be determined by making use of the Cartesian coordinate system attached to the images. Extremal points can be identified as fingertips, allowing them to be counted over time. Due to the use of regularization in DarSIA when converting grain-scale data to the Darcy scale, fingers are slightly smeared out. This affects the detection of the free space in between fingers, cf. **Figure 2**.
Thus, in these regions the resulting interface between the formation water and the CO\({}_{2}\)-saturated water can be understood as an approximate non-convex hull of the fingers, with its length being a lower estimate of the actual contour length of the fingers. The detection of single fingertips is, however, not affected, resulting in lower uncertainty.

## 3 Results and Discussion

This section is divided into two parts: Part 1 relates to the sparse dataset requested in the benchmark study [Flemisch et al 2023, this issue], and includes a discussion of the temporal behavior of the studied parameters across repeated runs; Part 2 expands our analysis and focuses on the physical variability between repeated injections and the drivers for the observed variability.

### The benchmark sparse data set

The sparse data set [Nordbotten _et al._ 2022] requested six data points to assess the ability of the participating modeling groups to forecast relevant properties of the physical system. The CO\({}_{2}\) phase was to be reported in the following three categories: _mobile free phase_ (gas at saturations with a positive gas relative permeability), _immobile free phase_ (gas at saturations with zero gas relative permeability), and _dissolved_ (mass of CO\({}_{2}\) in CO\({}_{2}\)-saturated water). The sum of the _mobile_, _immobile_ and _dissolved_ phases equals the total mass of CO\({}_{2}\). The sparse data set is included here for completeness, but the reader is referred to [Flemisch et al 2023, this issue] for comprehensive analysis and discussion. The following sparse data were requested (cf. **Figure 1** for the described regions and pressure ports):

1. _As a proxy for assessing risk of mechanical disturbance of the overburden_: Maximum pressure [N/m\({}^{2}\)] at pressure port a) [15,5] and b) [17,11].
2. _As a proxy for when leakage risk starts declining_: Time [s] of maximum mobile CO\({}_{2}\) [g] in Box A.
3. _As a proxy for our ability to accurately forecast near-well phase partitioning_: CO\({}_{2}\) mass [g] of a) mobile; b) immobile; c) dissolved; and d) total in seal in Box A at 72 hours after injection start.
4. _As a proxy for our ability to handle uncertain geological features_: CO\({}_{2}\) mass [g] of a) mobile; b) immobile; c) dissolved; and d) total in seal in Box B at 72 hours after injection start.
5. _As a proxy for our ability to capture onset of convective mixing_: Time [s] for which the quantity \[M(t)\equiv\int_{C}\left|\nabla\left(\frac{\chi_{c}^{w}(t)}{\chi_{c,\max}^{w}}\right)\right|\,dx\] first exceeds 110% of the width of Box C, where \(\chi_{c}^{w}\) is the mass fraction of CO\({}_{2}\) in CO\({}_{2}\)-saturated water.
6. _As a proxy for our ability to capture migration into low-permeable seals_: Total mass of CO\({}_{2}\) [g] in the top seal facies (sand ESF) at final time within Box A.

Here we report the laboratory sparse dataset (cf. **Table 3**) using the dataset [Eikehaug et al. 2023b] and dedicated DarSIA scripts (Both et al 2023b) with assumptions (cf. **SM 3**). The CO\({}_{2}\) distribution after 72 hours, with the locations of Box A, Box B and Box C, is included to aid interpretation (see **Figure 3**).

_Maximum pressure at ports [15,5] and [17,11] (parameters 1a and 1b)._ The maximum pressures at the pressure ports ([15,5] and [17,11]) located in the sealing structures (sand ESF, cf. **Figure 1**) were initially recorded with five pressure transducers (ESI, GSD4200-USB, -1 to 2 bara) because single-digit millibar pressure gauges were not available for the benchmark study.
The results were, however, discarded because 75% of the transducers recorded pressures less than the atmospheric pressure in the room. Hence, we use historical atmospheric pressure data reported from a nearby meteorological weather station (cf. **SM 1**), and adjust for differences in height and hydrostatic pressures (see **Table 3**). We apply an uncertainty of \(\pm\) 1 mbar, five times the stated instrument accuracy, to account for the possible overpressure during CO\({}_{2}\) injections.

_Time of maximum mobile CO\({}_{2}\) in Box A (parameter 2)._ The mass of mobile gas in Box A for all five runs (cf. **Figure 4**) increased linearly at the injection rate until the gas accumulation aligned with the spill point (defined in **Figure 1**). On average, the maximum mass of mobile gas was observed after \(4.11\pm 0.17\) hours. While there appears to be some noise in the identification of the mobile gas, the time of the maximum value is a clearly defined peak in the time series. Seen together with the temporal resolution of the image series (20 seconds per frame), we expect the identification of the time of maximum mobile CO\({}_{2}\) to have an uncertainty of no more than three frames, i.e., \(\pm\) 1 min. The nature of the fluctuating mass after the initial spill (cf. black rectangle, **Figure 4**) is discussed in more detail in **chapter 3.2**.

_Mobile, immobile and dissolved CO\({}_{2}\) in Box A and Box B (parameters 3, 4 and 6)._ The mass of mobile gas in Box A (parameter 3a in **Table 3**) was on average \(0.232\pm 0.047\) g, and is considered an upper bound for this parameter. The lower bound was found indirectly from the observation of a non-zero mass of mobile gas at the end of the experiments (cf. **Figure 4**), related to atmospheric gases in the formation water due to insufficient degassing (cf. **chapter 2.1** and Haugen et al 2023, this issue). Based on our physical understanding of the studied system, we anticipate that the mass of mobile CO\({}_{2}\) should be zero at the end of the experiment. Hence, we subtract the end point mass from the upper bound to find an estimate of the lower bound, cf. **Table 3**. An alternative, but also physically plausible, lower bound for parameter 3a is zero, where all the mobile gas (CO\({}_{2}\)) is dissolved in the CO\({}_{2}\)-saturated water.

Figure 3: Distribution of CO\({}_{2}\) after 72 hours for run C3. The positions of Box A (green, dashed line), Box B (white, dashed line) and Box C (blue, dashed line) are used to populate the sparse benchmark data set. The shaded regions in the benchmark geometry (top right and bottom left) are outside the defined boxes. CO\({}_{2}\) (in any form) in the shaded regions was not included in the analysis for the sparse data set.

The mass of mobile gas in Box B after 72 hours (parameter 4a) is reported as zero because mobile gas was not observed in the segmented images. The masses of immobile gas in Box A and Box B (parameters _3b_ and _4b_ in **Table 3**) were reported as zero because the formation water did not generate a unique and characteristic color for immobile gas. Hence, DarSIA and its color/signal-based segmentation (cf. chapter 2.5) is not able to distinguish immobile gas from the other phases. Careful visual inspection identified small amounts of immobile gas at early times, but visual inspection at 72 hours did not identify any immobile gas. This is consistent with our physical understanding of the system, where isolated gas bubbles are expected to dissolve quickly.
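To make the mass bookkeeping and the convective-mixing measure concrete, the following is a minimal numpy sketch of the post-processing described above; it is not the released DarSIA scripts (Both et al 2023b), and the argument names (`phi`, `depth`, `s_g`, etc.) are hypothetical. It assumes binary saturation maps per **SM 3** and a uniform pixel size.

```python
import numpy as np

def co2_masses_in_box(phi, depth, s_g, X_g, s_w, X_w, pixel_area, box):
    """Mobile and dissolved CO2 mass inside a rectangular box.

    All arguments except pixel_area and box are 2D pixel maps; the areal
    densities follow m^g = phi*d*s_g*X_c^g and m^w = phi*d*s_w*X_c^w.
    box = (row0, row1, col0, col1) in pixel indices."""
    r0, r1, c0, c1 = box
    window = (slice(r0, r1), slice(c0, c1))
    m_mobile = (phi * depth * s_g * X_g)[window].sum() * pixel_area
    m_dissolved = (phi * depth * s_w * X_w)[window].sum() * pixel_area
    return m_mobile, m_dissolved

def m_norm(chi_w, chi_max, dx, box_width):
    """Normalized total variation of the scaled concentration field:
    a discrete analogue of M(t) = integral over C of |grad(chi/chi_max)|,
    divided by the width of Box C (parameter 5)."""
    gy, gx = np.gradient(chi_w / chi_max, dx)
    total_variation = np.sqrt(gx**2 + gy**2).sum() * dx**2
    return total_variation / box_width
```

The reported onset time for convective mixing is then simply the first time step at which `m_norm` exceeds 1.1.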
The mass of dissolved gas in the CO\({}_{2}\)-saturated water in Box A and Box B after 72 hours (parameters _3c_ and _4c_ in **Table 3**) was 3.10 \(\pm\) 0.07 g (Box A) and 0.778 \(\pm\) 0.066 g (Box B), see **Figure 5**. The mass calculations use the known injected CO\({}_{2}\) mass in well [9,3] for Box A and well [17,7] for Box B, and apply DarSIA to segment the separate plumes originating from each well to calculate the masses of mobile and dissolved gas (cf. chapter 2.5). The two plumes remain unconnected throughout almost the entire run time (until 84 hours), and the total CO\({}_{2}\) mass in each plume is known at any point in time based on the injection protocol. After 84 hours the plumes merge, and the plots are extrapolated to 120 hours (end of experiment) based on the current trends.

The mass of CO\({}_{2}\) in the sealing structures in Box A and Box B after 72 hours (parameters _3d_ and _4d_ in **Table 3**) was 0.382 \(\pm\) 0.012 g (Box A, cf. **Figure 6**) and 0.00 g (Box B). Mobile and dissolved gas did not enter the top regional seal confined within Box B, but minute amounts of dissolved gas (on the order of 10\({}^{-3}\) g) entered the sealing structure in the lower right corner of Box B after 72 hours. Hence, the final mass of CO\({}_{2}\) in the sealing structure confined within Box A (parameter 6, cf. **Figure 6**) was on average 0.567 \(\pm\) 0.035 g. For the parameters discussed here (3c, 3d, 4c, 4d and 6) we attribute a nominal measurement uncertainty of \(\pm\) 20 % based on the limitations and influence of the underlying assumptions (cf. **SM 3**), the stated weakness in the analysis of the color scheme (cf. chapter 2.5), the extrapolation of trends, and operational difficulties with mineralization of methylene red.

Figure 4: Development in mass [g] of mobile gas in Box A for the whole experimental time (120 hours) for all five runs (C1 - C5) and the average (black, dashed line). The mass increased linearly with the injection rate until spill time (cf. **Table 3**), and then decreased because the mobile gas dissolved into the formation water. The development in mobile mass associated with the spill point (black rectangle) is discussed in detail below.

Figure 5: The development in mass of dissolved CO\({}_{2}\) [g] in CO\({}_{2}\)-saturated water in Box A (open circles) and Box B (crosses) for runs C1-C5 during the whole experimental time (120 hours). All mass curves increase from the onset because mobile gas dissolved into the formation water to form CO\({}_{2}\)-saturated water, and reach plateau values when most of the gas within each box is dissolved. The curves in Box B remain zero until the gas exceeds the spill point and flows into the fault (after approximately 4 hours). The somewhat different development for run C1 in Box A (blue circles) and run C5 in Box B (purple crosses) relates to the inconsistencies for these runs, discussed in chapter 3.2. Note that the average curves (black, dashed lines) are calculated until 84 hours.

Figure 6: Development of CO\({}_{2}\) (in any form) in the sealing layer (sand ESF) confined within Box A during the whole experimental time (120 hours) for all five runs (C1-C5). Only CO\({}_{2}\)-saturated water (no gas) was observed in the sealing layer in Box A, and advection from the underlying gas was the main driving force for the increased mass initially. After gas injection stopped (after approximately 5 hours), there was a slight decrease of CO\({}_{2}\) mass in the sealing layers, explained by gravity acting on the denser CO\({}_{2}\)-saturated water and diminishing advective forces due to a shrinking gas cap under the anticline. After approximately 20 hours, the mass increases again because CO\({}_{2}\)-saturated water from injector [17,7] flows downwards and enters the top boundary of Box A (cf. **Figure 4** after 72 hours).

_Development in M(t) relative to the width of Box C (parameter 5)._ The quantity \(M(t)\) (parameter 5 in **Table 3**) is a measure of the total variation of the concentration field. As such, it is related to the contour lengths of the density-driven fingers, and we normalize it relative to the length of Box C, so that a value of \(M_{norm}(t)=1\) corresponds to no fingers below a gas cap spanning the whole length of the top of Box C. As CO\({}_{2}\)-saturated water migrated downwards due to gravity, the contour lines and \(M_{norm}(t)\) increase (see **Figure 7**). On average for the five runs, \(M_{norm}(t)\) exceeds 110% of the width of Box C after \(4.14\pm 0.4\) hours, where the stated times for each run may be considered as an upper bound due to the assumption that the concentrations are constant, which decreases the measure of the gradient in the integral. A lower bound is the time when \(M_{norm}(t)\) reached 100% of the length of Box C, which is closely correlated to gas filling the upper boundary of Box C, a necessary prerequisite for \(M_{norm}(t)\) exceeding 110%.

Figure 7: Development in \(M_{norm}(t)\) for all five runs from injection start until the end of the experiment (120 hours). For the initial state of zero CO\({}_{2}\) concentration within Box C, \(M_{norm}(t)\) takes the value 0. Run C1 (blue) is ahead of the other runs, both at the start and at the end (fingers start to leave Box C). The rapid increase between 3 and 4 hours arises because the mobile gas fills the top of Box C. The reverse is true after approximately 10 hours (6 hours for run C1) when the gas accumulation (due to shrinking by dissolution) exits the upper boundary of Box C and the parameter \(M_{norm}(t)\) rapidly decreases. This is counterbalanced to some extent by the further development of the density-driven fingers, as seen around 20 hours, until dissolution and diffusion eventually lead to a more uniform distribution of dissolved CO\({}_{2}\), and \(M_{norm}(t)\) approaches 0 again.

### Physical repeatability of multiphase flow during laboratory carbon sequestration runs

The benchmark study consisted of five operationally identical CO\({}_{2}\) injection experiments using the same geological geometry and initial conditions. The experiments were designed to generate physical data for model comparison, with the motivation to achieve a physical 'ground truth'. Here we discuss the physical repeatability between the five runs (C1-C5) by comparing the degree of areal sweep overlap, incorporating all forms of CO\({}_{2}\) (mobile, immobile, dissolved), in three regions (Box A, Box B' and Box D, cf. **Figure 8**) with increasing geological complexity. We quantify the degree of overlap of runs C2, C3 and C4, and discuss the uniqueness of each run.

Figure 8: Degree of physical overlap and description of Box A, Box B' and Box D with increasing geological complexity. Box A is identical to Figure 1; Box B' is an extension of Box B (cf. **Figure 1**) and includes the lower part of the geometry left of the heterogeneous fault; Box D includes the fining-upwards sequence associated with injector [17,7] and the open fault (cf. **Figure 1**). The CO\({}_{2}\) distribution (all forms) for all five runs (C1-C5) in the three boxes (Box A, Box B' and Box D) after 155 min of CO\({}_{2}\) injection. Spatially distributed overlap for all runs, with the following color scheme: gray (overlap C2+C3+C4); blue (unique C1); orange (unique C2); green (unique C3); red (unique C4); purple (unique C5); brown (combinations of all runs with at least one of C2, C3 or C4); white (other combinations). The reader is referred to **SM 5** for additional time steps.

#### Physical reproducibility with increasing reservoir complexity

We investigate the reproducibility between five runs in the same geometry, with the hypothesis that increased reservoir complexity tends to reduce the degree of physical reproducibility. As mentioned above, our goal of establishing a physical "ground truth" was not fully achieved. This was because our "identical" experiments were in fact not truly identical, even if the gas injection protocol was (within measurement uncertainty, cf. **SM 4**). Next, we describe the two known variables that influence the displacement patterns:

1. _Inconsistent water chemistry_. The formation water (cf. **Table 1**) in run C1 unintentionally used tap water instead of deionized water. The inconsistent water chemistry for C1 resulted in a unique dissolution rate and convective mixing behavior (cf. **Figure SM.3**). Run C1 is thus omitted from the analysis of physical reproducibility.
2. _Atmospheric pressure variations_. The atmospheric pressure variations in Bergen (cf. **Figure SM.1**) resulted in a low-pressure outlier for run C5 (968 mbar) compared with the other runs (on average 999 mbar during the injection period, cf. **Table SM 1**). Hence, the larger volume of the injected CO\({}_{2}\) (equal mass injected for all runs) influenced key parameters in the experiment (most prominently parameter 2 in **Table 3**, but also the rate of dissolution). Run C5 is thus omitted from the analysis of physical reproducibility.

The described operational (water chemistry) and environmental (atmospheric pressure) inconsistencies provide the rationale for excluding C1 and C5 in our analysis of physical reproducibility for operationally identical experiments with comparable pressure and temperature conditions. An analysis of sand settling between runs showed only minor changes (cf. **SM 6**). Hence, we focus on runs with comparable system parameters, and report the development in overlap between runs C2, C3 and C4 (cf. **Figure 9**). To compute the overlap percentages, we first weight all pixels in the segmented images with their corresponding volume (see **SM 2**). Then, the ratio between the number of volume-weighted pixels where CO\({}_{2}\) (gas and dissolved) in C2, C3, and C4 overlap and the number of volume-weighted pixels where CO\({}_{2}\) (gas and dissolved) in any of the three runs appears is reported. Next, we describe the development in physical overlap within Box A, Box B' and Box D.

The development in physical overlap in Box A may be divided into four intervals: _i. pre-spilling_; _ii. gravitational fingers_; _iii. dissolution-driven flow_; and _iv. homogenization_. The _pre-spilling_ interval (from the injection start to approximately 4 hours) occurred before the gas column height exceeded the spill point.
The onset of gravitational fingers occurred in this interval, but the fingers are still only minor and do not yet develop into pronounced gravitational fingers. The overlap increased from injection start and reached a global maximum (97 % overlap) after approximately 4 hours, with an average C\({}_{2,3,4}\) overlap of 92 % for the whole interval. The uniqueness of runs C2, C3 and C4 was on average 0.14 % (cf. **Figure SM.4**) during the _pre-spilling_ period.

The _gravitational fingers_ interval (approximately 4 to 30 hours) was characterized by the development of pronounced gravitational fingers under the gas accumulation in the anticline trap in Box A. The physical overlap of C\({}_{2,3,4}\) decreased from 97 to 79 % (local minimum), dominated by the differences in the number of fingers and individual finger dynamics (discussed in more detail below). The _dissolution-driven flow_ interval (approximately 30 to 70 hours) describes the period when the gravitational fingers reached the no-flow boundary at the bottom of Box A, and fingers start to move laterally and merge as the gas accumulation dissolves and pulls aqueous phase from surrounding regions into Box A. The physical overlap increased to above 95 % in this period. The _homogenization_ interval (approximately 70 to 120 hours) was characterized by a constant physical overlap (above 95 %) with only minor movement of the aqueous phases confined in Box A.

Box B' generally follows the overall behavior of Box A in the four intervals defined above. Importantly, the reduction in physical overlap observed in the _gravitational fingers_ interval (after approximately 4 hours) was related to variable spilling times for runs C2, C3 and C4, not to finger development (cf. parameter 2, **Table 3**, which approximates the spilling time for each run). The variation in spill times initially resulted in reduced overlap, with slight variation in fault migration and displacement patterns for runs C2, C3 and C4. The sustained reduction of physical overlap stems from an apparent stochastic variation for run C3 (cf. **Figure SM.3**; 10 hours), corroborated by the development of the uniqueness of each run (cf. **Figure SM.4**; middle). The physical explanation for the observed variation in run C3 is not clear, but this only occurred for that single run, with subsequent runs (C4 and C5) reverting to the flow patterns seen for the earlier runs (C1 and C2). Hence, we do not expect the deviation in run C3 to stem from any physical alterations within the experiment (sand settling, or chemical alterations). Remaining explanations could be related to variations in atmospheric pressure, or factors outside our experimental control.

The development in Box D was delayed in time relative to Box A and Box B' due to the later injection start of well [17,7], but follows the overall trend: initially increasing overlap, a slight reduction due to finger development and convective mixing, then an increase through homogenization. Small amounts of dissolved gas were observed at a localized point in the top regional seal contained in Box D for most runs (cf. **Figure SM 3**). The seal breach occurred around a plugged port (CO\({}_{2}\) migrated along the sealing silicone), resembling a CO\({}_{2}\) leakage scenario along a poorly abandoned well.

#### Dynamics of gravitational fingers in Box C

Box C is the homogeneous zone under the lower anticline, beneath the main gas accumulation, where most of the gravitational fingers emerge during and after CO\({}_{2}\) injection.
From image analysis it was possible to extract the development of the fingers as a function of time for all runs (cf. **Figure 10**). The fingers appear after an onset time of approximately 3 hours, and their number is reasonably stable around 25-30, which corresponds to a characteristic spacing of about 5-6 cm. The stability of the number of fingers is an indication that the system is near the regime of the "maximally unstable" finger spacing predicted by theoretical considerations (see e.g. Riaz et al 2006; Elenius et al 2012). This observation is supported by the finger lengths, which indicate a linear growth regime after onset. Repeatability was observed in terms of onset location and finger dynamics, even at times significantly after onset (cf. **Figure SM.5** and **Table SM.3**).

Figure 9: Degree of physical reproducibility between operationally identical CO\({}_{2}\) injection runs with comparable pressure and temperature conditions (runs C2, C3 and C4). Box A (green line) represents the most homogeneous case; Box B' (red line) represents the case with the heterogeneous fault zone and fining-upwards sequence; Box D (purple line) represents the middle case with a fining-upwards sequence. Overlap considering the whole geometry (dashed line) is included for comparison.

#### Oscillating CO\({}_{2}\) leakage from anticline

The benchmark geometry and injection protocol were designed to achieve realistic displacement processes relevant for subsurface carbon storage, where most observed phenomena and mass-transfer dynamics were anticipated, as showcased in the description of expected behavior (cf. **chapter 2.4**) and the benchmark description [Nordbotten et al 2022]. An oscillating CO\({}_{2}\) spilling event from the lower anticline was observed in our study, something that was not anticipated. Non-monotonic leakage behavior has previously been suggested in the literature [Pruess 2005], and in natural analogues [Shipton et al, 2004], attributed to the interplay between multiphase flow, Joule-Thomson cooling, and heat transfer effects in the fault plane. To our knowledge, oscillating CO\({}_{2}\) leakage behavior from an anticline into a fault zone in the absence of thermal effects has not previously been observed experimentally, nor received attention in the literature. Below we discuss the displacement dynamics during multiphase flow in the fault plane generating the observed oscillating anticline CO\({}_{2}\) leakage behavior.

The mass of mobile gas in Box A oscillated after the initial spilling event for all runs (cf. **Figure 11**). The gas escapes the anticline trap in bursts and flows into the narrow restriction at the bottom of the fault (aligned in height with the spill point). When gas migrates upward in the fault zone (essentially a localized permeable pathway), it displaces resident aqueous fluids downwards. The inflow of aqueous phase effectively reduces and ultimately blocks the upwards migration of gas. This is in essence because the localized pathway in the inlet region of the fault cannot accommodate stable countercurrent flow (upwards gas flow and downwards water flow), possibly due to viscous coupling effects (see e.g. the review paper by Ayub and Bentsen, 1999). When the upwards migration of gas is temporarily blocked, the anticline gas column height increases again with continued CO\({}_{2}\) injection. The process then repeats itself when the aqueous phase flow dissipates.
A secondary effect is that the inflowing aqueous phase increases the local water saturation between the spill point and the inlet point of the fault and traps gas. The gas quickly dissolves into the aqueous phase, and the subsequent spilling events (up to four events per run) are essentially local drainage processes, characterized by an oscillating mass of mobile gas under the anticline (Box A). Interestingly, the process appears hysteretic in nature, with decreasing peak mass values for each event, most likely related either to increased gas relative permeability between the spill point and the fault, or to changes in the local CO\({}_{2}\) concentration in the aqueous phase. The fluctuations stopped when the CO\({}_{2}\) injection terminated (after approximately 300 min, cf. **SM 4**), and the gas column height (and, hence, the mass of mobile gas) decreased below the spill point.

Figure 10: Dynamics of convective mixing and gravitational fingers in Box C for all runs C1-C5. Left: Number of gravitational fingers; all runs follow the general trend: a rapid increase until a maximum is reached, followed by a declining number as some fingers merge. Right: The length [m] of the boundary of the phase segmentation, also identifying (an approximation of) the fingers. Note that the contour length only considers the boundary inside Box C. Both graphs end when the first finger reached the lower boundary of Box C (20 hours).

Generalizing the underlying causes of the observed phenomenon is difficult based on the reported experiments alone, and should be coupled with dedicated numerical simulations including more effects. The observations are to some degree influenced by the physical system (no-flow boundaries in the vicinity of the spill point and fault, and the fault geometry aligned with the spill point acting as a restriction on the upwards migration of gas) and by the presence and shape of the gas accumulation, effectively reducing the area available for water flow. A systematic evaluation of the cyclic behavior, including the coupled processes and parameters of the problem, remains a task for future work.

## 4 Concluding remarks

The open-access, high-quality laboratory dataset, accompanied by dedicated analysis tools, represents an asset and opportunity for the carbon storage community to expand the current analysis in future studies. The physical data, describing many of the relevant processes for subsurface carbon storage, may also be used for model validation, comparison, and data-driven forecasts for different stages of a carbon storage operation. Blueprints of the experimental infrastructure enhance the reproducibility of scientific research, and enable the porous media community at large to build physical assets and collectively join our efforts. Our outlook, based on the observations identified in this study, is to probe the origin of and premises for establishing non-thermally induced oscillating flows, and to broaden the understanding of at what length scales and to what accuracy multiphase flows in porous media are deterministic. In conclusion, the observed processes and phenomena qualitatively corroborate the physical understanding and knowledge within the carbon storage community. This supports the assertion that we have a sufficient understanding to claim that industrial carbon storage operations can be conducted in an efficient and safe manner.
## 5 Acknowledgements

The work of JWB is funded in part by the UiB Akademia project "FracFlow" and the Wintershall Dea-funded project "PoroTwin". MH is funded by the Research Council of Norway (RCN) project no. 280341. KE and MH are partly funded by the Centre for Sustainable Subsurface Resources, RCN project no. 33184. BB is funded by RCN project no. 324688.

Figure 11: Fluctuations in mass of mobile gas [g] in Box A after the initial spilling event. The mass curves all demonstrate oscillations due to recurring spilling events from the anticline to the adjacent fault. For all runs, the maximum mass was observed before the initial gas escape. The lower atmospheric pressure for run C5 (purple circles) results in an earlier initial spilling time.
2305.14264
Active Learning Principles for In-Context Learning with Large Language Models
The remarkable advancements in large language models (LLMs) have significantly enhanced the performance in few-shot learning settings. By using only a small number of labeled examples, referred to as demonstrations, LLMs can effectively grasp the task at hand through in-context learning. However, the process of selecting appropriate demonstrations has received limited attention in prior work. This paper addresses the issue of identifying the most informative demonstrations for few-shot learning by approaching it as a pool-based Active Learning (AL) problem over a single iteration. Our objective is to investigate how AL algorithms can serve as effective demonstration selection methods for in-context learning. We compare various standard AL algorithms based on uncertainty, diversity, and similarity, and consistently observe that the latter outperforms all other methods, including random sampling. Notably, uncertainty sampling, despite its success in conventional supervised learning scenarios, performs poorly in this context. Our extensive experimentation involving a diverse range of GPT and OPT models across $24$ classification and multi-choice tasks, coupled with thorough analysis, unambiguously demonstrates that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
Katerina Margatina, Timo Schick, Nikolaos Aletras, Jane Dwivedi-Yu
2023-05-23T17:16:04Z
http://arxiv.org/abs/2305.14264v2
# Active Learning Principles for In-Context Learning

###### Abstract

The remarkable advancements in large language models (LLMs) have significantly enhanced the performance in few-shot learning settings. By using only a small number of labeled examples, referred to as demonstrations, LLMs can effectively grasp the task at hand through in-context learning. However, the process of selecting appropriate demonstrations has received limited attention in prior work. This paper addresses the issue of identifying the most informative demonstrations for few-shot learning by approaching it as a pool-based Active Learning (AL) problem over a single iteration. Our objective is to investigate how AL algorithms can serve as effective demonstration selection methods for in-context learning. We compare various standard AL algorithms based on uncertainty, diversity, and similarity, and consistently observe that the latter outperforms all other methods, including random sampling. Notably, uncertainty sampling, despite its success in conventional supervised learning scenarios, performs poorly in this context. Our extensive experimentation involving a diverse range of GPT and OPT models across \(24\) classification and multi-choice tasks, coupled with thorough analysis, unambiguously demonstrates that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.

## 1 Introduction

The field of Natural Language Processing (NLP) has recently witnessed a remarkable paradigm shift with the emergence of in-context learning, also referred to as few-shot learning (Brown et al., 2020). Traditionally, NLP systems heavily relied on supervised learning approaches, where vast amounts of labeled training data were necessary to achieve desirable performance. However, in-context learning has revolutionized this landscape by enabling NLP models to learn from limited, context-specific examples and adapt to new tasks and domains with remarkable proficiency (Zhao et al., 2021; Chowdhery et al., 2022; Garcia et al., 2023; Wei et al., 2023; Touvron et al., 2023; Bubeck et al., 2023). Unlike traditional models, which require extensive retraining or fine-tuning for every new task, in-context learning empowers large language models to generalize from a few examples and rapidly acquire knowledge in a targeted context, without any weight updates.

The data efficiency of few-shot learning is indeed remarkable, as large language models (LLMs) can achieve impressive results with only a small number of exemplars.1 Still, demonstrations constitute _labeled_ data examples. This raises two key questions. Firstly, when faced with unlabeled data, how can we select the most appropriate examples to label and use as in-context demonstrations? Secondly, when we have labeled data, how can we efficiently identify the most informative combination of demonstrations for in-context learning? Answering these questions is essential to ensure effective and efficient few-shot learning using LLMs.

Figure 1: Performance of different in-context selection algorithms in classification and multi-choice tasks.
There is a growing line of research focused on investigating how in-context learning works (Reynolds and McDonell, 2021; Razeghi et al., 2022; Xie et al., 2022), which examples to use as demonstrations for in-context learning (Liu et al., 2022; Zhang et al., 2022; Wu et al., 2022; Kim et al., 2022), how to form the prompt (Zhao et al., 2021; Lu et al., 2022) and whether ground truth labels matter (Webson and Pavlick, 2022; Min et al., 2022; Yoo et al., 2022; Wang et al., 2022; Wei et al., 2023b). In parallel with these works, we aim to explore the problem of in-context example selection through the lens of active learning (AL). Based on the core principle that not all data points are equally useful, AL (Cohn et al., 1996; Settles, 2009) aims to identify the most informative instances from a pool or stream of unlabeled data for annotation. Through iterative rounds of model training, data acquisition, and human annotation, the goal is to achieve data efficiency. A data-efficient AL algorithm ensures that a model achieves satisfactory performance on a withheld test set by utilizing only a fraction of the acquired data during training.

Active learning is by definition a supervised learning paradigm, so its formulation and purpose in the no-weight-update setting of in-context learning are not trivial. To address this, we need to redefine the concept of data efficiency within the framework of in-context learning. Our formulation is the following: Given a pool of labeled or unlabeled data, the objective is to identify a set of \(k\) examples that will serve as demonstrations to an LLM, resulting in optimal performance on a separate test set. Given this setting, we explore the effectiveness of the most prevalent AL approaches, based on uncertainty (Lewis and Gale, 1994; Cohn et al., 1996; Gal et al., 2017), diversity (Brinker, 2003; Bodo et al., 2011; Sener and Savarese, 2018) and similarity (Margatina et al., 2021; Kirsch et al., 2021; Liu et al., 2022), as demonstration selection methods for in-context learning. AL for data efficiency is well studied, and uncertainty-based algorithms often seem to be the best performing data selection methods. We aim to explore how such principles generalize to the different paradigm of in-context learning, hoping to gain useful insights that will help us formulate efficient and effective data selection strategies tailored to in-context demonstrations.

Our key contributions are as follows:

* We formulate the selection of in-context examples as a single-iteration active learning problem and explore the effectiveness of four standard approaches: _uncertainty_, _diversity_, _similarity_ and _random_ sampling.
* We evaluate \(15\) models, between \(125\)M and \(30\)B parameters, from the GPT (Radford et al., 2019; Brown et al., 2020; Black et al., 2022) and OPT (Zhang et al., 2022a) families in \(15\) classification and \(9\) multi-choice tasks, using different AL sampling techniques to select in-context examples for few-shot learning.
* We demonstrate that while diversity and uncertainty sampling perform similarly to random sampling, choosing in-context examples that are similar to the test example consistently outperforms all other methods by a large margin for all model families and sizes in all tasks.
* We show that, interestingly, while uncertainty sampling is one of the strongest AL approaches in supervised learning, this does not generalize to in-context learning, where the method underperforms.
This underpins the importance of our work in examining how active learning principles change when we move from the supervised to the few-shot learning paradigm.

## 2 Related Work

### Understanding In-Context Learning

Few-shot, i.e., in-context, learning with LLMs has garnered significant attention in recent NLP research. Simply concatenating a few labeled examples to form the prompt that will be used for inference through the model has been shown to result in high performance, even outperforming fine-tuned models (Brown et al., 2020; Chung et al., 2022; Ouyang et al., 2022; Dong et al., 2022). This has naturally led researchers to explore its effectiveness with multiple few-shot learning benchmarks such as Crossfit (Ye et al., 2021) and BigBench (Srivastava et al., 2022). An active area of research is to try to understand how in-context learning actually works (Xie et al., 2022; Garg et al., 2022; Akyurek et al., 2022; Pan et al., 2023), and what its strengths and limitations are (Webson and Pavlick, 2022; Jang et al., 2022; Agrawal et al., 2022; Wei et al., 2023; Shi et al., 2022). Researchers explore the effectiveness of the successful chain-of-thought prompting technique (Wei et al., 2023; Wang et al., 2022; Madaan and Yazdanbakhsh, 2022), while others try to determine the importance of the ground truth labels of the in-context examples, with mixed conclusions (Min et al., 2022; Yoo et al., 2022). Wei et al. (2023) suggest that model size might hold the answer, showing that small LMs ignore flipped labels, while LLMs can override semantic priors when presented with in-context exemplars that contradict those priors. Interestingly, Razeghi et al. (2022) find that in-context learning performance is highly correlated with how many times the terms in each instance appear in the pretraining corpus.

### Finding Better Demonstrations

Typically, papers assessing models in the few-shot learning setting state that they randomly sample examples to compose the in-context prompt (Brown et al., 2020; Zhang et al., 2022; Chowdhery et al., 2022; Chung et al., 2022; Touvron et al., 2023). Nonetheless, it has been demonstrated that few-shot performance significantly depends on the selection of in-context examples. Consequently, there is ongoing research dedicated to developing algorithms that generate or select the most valuable prompts, aiming to maximize downstream few-shot performance (Kocielnik et al., 2022; Ye et al., 2023; Diao et al., 2023; Xu et al., 2023). Some approaches are based on a retrieval component that sources the most relevant examples from a pool. The prompt retriever can be trainable (Rubin et al., 2022) or based on pretrained embeddings (Liu et al., 2022; Agrawal et al., 2022). Similar to our work, Gonen et al. (2022) use uncertainty to evaluate the usefulness of in-context examples and find that the lower the perplexity of the prompt is, the better the prompt is able to perform the task. Zhang et al. (2022) formulate example selection for in-context learning as a sequential decision problem and show modest performance improvements by acquiring data with their proposed algorithm based on reinforcement learning. Other works, instead of focusing on how to acquire data for in-context examples, show that ordering (Lu et al., 2022) and calibration (Zhao et al., 2021) are additional properties that significantly influence few-shot learning performance.
### Active Learning for NLP

Active learning has attracted significant interest within the NLP community over the years, with researchers extensively investigating its applications in various NLP tasks, including machine translation (Miura et al., 2016; Zhao et al., 2020), natural language inference (Snijders et al., 2023), named entity recognition (Erdmann et al., 2019; Shen et al., 2017; Wei et al., 2019), and text classification (Ein-Dor et al., 2020; Schroder and Niekler, 2020; Margatina et al., 2022; Schroder et al., 2023), among others. Still, its importance and potential value are on the rise (Zhang et al., 2022), as the current language model pretraining paradigm continues to advance the state-of-the-art (Tamkin et al., 2022). Given the fundamental premise that "not all data is equal", it is reasonable to expect researchers to actively seek the "most informative" data for pretraining or adapting their large language models (LLMs), as well as to identify the most valuable in-context examples for few-shot learning scenarios. Relatedly, Koksal et al. (2022) explore active learning for prompt-based finetuning, showing that their method, based on inter-prompt uncertainty sampling with diversity coupled with the PET architecture (Schick and Schutze, 2021a,b), outperforms all AL baselines.

## 3 Active In-context Learning

### Active Learning Formulation

We consider a standard pool-based active learning setting where we have a large pool of unlabeled data from which we want to sample a batch of \(k\) data points. We assume that after selecting these \(k\) examples, we use humans to provide their corresponding labels (Figure 2). Instead of following the standard approach that involves multiple iterations of data selection and model training, we only perform a single iteration (Longpre et al., 2022), since we do not train any model in the loop. We use the acquired set of \(k\) examples as demonstrations for in-context learning with an LLM. We use existing datasets as the pool from which to select these \(k\) examples. The goal is to find the most useful examples from the pool, which are expected to yield improved performance on the test set when employed as a few-shot prompt, compared to demonstrations randomly sampled from the same pool. The resulting prompt consists of the concatenation of the \(k\) acquired examples, alongside the test example, repeated for all data instances in the test set (Figure 2).

### Active Learning Algorithms

We focus on the most prevalent families of active learning algorithms, namely _uncertainty_ sampling, _diversity_ sampling and _similarity_ sampling (also known as test-aware sampling) (Zhang et al., 2022). We acknowledge that there are more data selection algorithms for few-shot exemplars that are not considered in our experiments, such as MEAL (Koksal et al., 2022), Q-learning (Zhang et al., 2022), Self Adaptive (Wu et al., 2022), SG-ICL (Kim et al., 2022), MI (Sorensen et al., 2022), _inter alia_. However, these algorithms fall beyond the scope of our analysis, as our objective is to gain insights into active learning principles for in-context learning, rather than benchmarking all available in-context demonstration sampling algorithms.
Additionally, there are techniques complementary to the aforementioned few-shot exemplar selection methods, such as prompt re-ordering (Lu et al., 2022) and calibration (Zhao et al., 2021), which can further enhance the performance of few-shot learning.

**Random** The overarching objective of any data selection method, such as an active learning algorithm, is to identify data points that, when used, yield superior models compared to randomly sampled data from the same pool.

**Diversity** The first data selection method that we use as a representative of the diversity family of methods is a simple clustering technique. Specifically, we first encode all data points in the pool with Sentence-BERT (Reimers and Gurevych, 2019) embeddings and then perform k-means clustering.2 We choose the number of clusters to be \(k\) and select a representative example from each cluster, similar to Yu et al. (2022). The intuition behind this approach is that a diverse set of in-context examples might be more beneficial than a randomly sampled set, as it ensures that the selected demonstrations will most likely include complementary information.

Footnote 2: We use the implementation from [https://www.sbert.net/examples/applications/clustering/](https://www.sbert.net/examples/applications/clustering/).

**Uncertainty** The second approach is an uncertainty-based sampling algorithm based on the SPELL method proposed by Gonen et al. (2022). Since we use an off-the-shelf LLM that does not have a fine-tuned classification layer, we cannot compute the model probabilities associated with each class (for a classification or multi-choice task). This essentially means that we cannot use standard AL uncertainty baselines such as maximum entropy or least confidence. Instead, we can use the loss, i.e., perplexity, of the LLM to score each candidate example from the pool. Gonen et al. (2022) define the perplexity of the prompt as the perplexity of the full prompt sequence, including the input itself and without the label, averaged over \(1,000\) examples. Our approach is different, since we want to evaluate the perplexity of each in-context example individually. We also do not average over a thousand examples, as we wanted to make the method more general, without the need to assume access to that many examples. The underlying principle guiding this approach is the belief that a high-perplexity set of in-context examples can yield greater advantages than randomly sampling from the dataset (at least in a supervised learning setting, this is proven to enhance data efficiency).

Figure 2: Top: Active data collection (single iteration). Bottom: Prompt construction and model inference.

**Similarity** Finally, the third active learning algorithm we consider is based on KATE, a kNN-augmented in-context example selection method proposed by Liu et al. (2022). The algorithm retrieves examples from the pool that are semantically similar to a test query sample. We again use Sentence-BERT (Reimers and Gurevych, 2019) representations of both the pool and the test set for k-nearest-neighbour retrieval. The rationale behind this approach is that the demonstrations most similar to the test example will best help the model answer the query. We have to highlight, however, that by definition each test example will have a different prompt, as the \(k\) most similar demonstrations will be different. This is a crucial limitation of this approach compared to the others, as it assumes that we are able to acquire labels for any in-context example selected from the pool.
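For concreteness, the following is a minimal sketch of the three selectors described above. It is not the authors' released implementation: the Sentence-BERT checkpoint (`all-MiniLM-L6-v2`), the nearest-to-centroid choice of cluster representatives, and the `gpt2` scoring model are illustrative assumptions.

```python
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from transformers import AutoModelForCausalLM, AutoTokenizer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def diversity_select(pool_texts, k):
    """k-means over sentence embeddings; return the index of the pool
    example closest to each cluster centroid."""
    emb = encoder.encode(pool_texts, normalize_embeddings=True)
    km = KMeans(n_clusters=k, n_init=10).fit(emb)
    return [int(np.argmin(np.linalg.norm(emb - c, axis=1)))
            for c in km.cluster_centers_]

def uncertainty_select(pool_texts, k, model_name="gpt2"):
    """Score each candidate by its own perplexity under the LLM (input
    only, no label); keep the k highest-perplexity examples."""
    tok = AutoTokenizer.from_pretrained(model_name)
    lm = AutoModelForCausalLM.from_pretrained(model_name).eval()
    losses = []
    with torch.no_grad():
        for text in pool_texts:
            ids = tok(text, return_tensors="pt").input_ids
            # Mean token NLL; perplexity = exp(loss), so ranking by loss
            # is equivalent to ranking by perplexity.
            losses.append(lm(ids, labels=ids).loss.item())
    return list(np.argsort(losses)[-k:])

def similarity_select(pool_texts, test_text, k):
    """KATE-style retrieval: the k nearest pool examples to the test
    query in embedding space (note: a different prompt per test example)."""
    emb = encoder.encode(pool_texts, normalize_embeddings=True)
    q = encoder.encode([test_text], normalize_embeddings=True)[0]
    return list(np.argsort(emb @ q)[-k:][::-1])
```

The selected indices are then turned into a prompt by concatenating the corresponding input-label pairs, followed by the test example (Figure 2).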
## 4 Experimental Setup

**Models** We evaluate \(15\) LLMs in total: \(8\) models from the GPT family (Radford et al., 2019; Brown et al., 2020; Black et al., 2022) and \(7\) from the OPT family (Zhang et al., 2022). We chose our models to span from a few million to tens of billions of parameters, as we wanted to explore how model size affects the effectiveness of in-context example selection methods. All models used are publicly available.

**Tasks & Datasets** Following Min et al. (2022), we evaluate our models on \(15\) classification and \(9\) multi-choice tasks taken from the Crossfit (Ye et al., 2021) benchmark. We provide details for all tasks and datasets considered in Appendix A.1.

**Hyperparameters** Unless specified otherwise, we sample \(k\)=\(16\) demonstrations, i.e., labeled data points, from the pool with each active learning method. After collecting the \(k\) input-label pairs, we concatenate them together to form a prompt (Figure 2). Our implementation is based on those by Min et al. (2022) and Yoo et al. (2022).

## 5 Results

We provide the results on few-shot learning with \(k\)=\(16\) demonstrations per prompt in Figure 3. The results are categorized based on the model family (GPT/OPT) and the task type (classification/multi-choice question answering). Overall, we observe the anticipated trend of performance enhancement with increasing scale, particularly notable in the multi-choice tasks for both OPT and GPT models. However, the most remarkable finding is the significant performance improvement achieved by selecting _similar_ in-context examples for few-shot learning, particularly in the context of classification tasks. This observation aligns with the findings reported by Liu et al. (2022), who demonstrated similar patterns in sentiment analysis tasks specifically for GPT-3. The present result offers valuable insights, indicating that the selection of appropriate in-context examples holds greater significance than the number of model parameters, at least within the scope of the models analyzed in this study.

In the context of multi-choice tasks, _similarity_ remains the top-performing acquisition method, while the other three approaches exhibit closely competitive performance. Conversely, in classification tasks, a clearer pattern emerges where _diversity_ follows _similarity_ as the second-best performing active learning approach, with _random_ sampling ranking third. Remarkably, _uncertainty_ sampling, typically regarded as a promising approach for traditional supervised active learning (Shen et al., 2017; Margatina et al., 2022; Schroder et al., 2023), exhibits the poorest performance (Gonen et al., 2022). This finding contradicts the conventional active learning principle that a few highly uncertain labeled data points facilitate data efficiency. It highlights the limited applicability of uncertainty sampling in the in-context learning paradigm, particularly when considering models with a scale ranging from a few million to several billion parameters.

## 6 Analysis

### Model Size

In order to gain some intuition on the effect of scale, we group together GPT and OPT models that have a similar number of parameters. We provide the results in Figure 4. Even after aggregating the results from both model families, we do not see any specific pattern as the number of model parameters increases.
We wanted to explore whether the largest models of our collection would behave differently under the varying in-context learning settings, thus perhaps attributing such behaviour to potential emergent abilities of the bigger LLMs, but we observe the same patterns. We believe that this is an interesting avenue of research, especially as models grow, and will continue to grow, in terms of model parameters. Our findings show that the in-context learning ability of models from a few million to a few billion parameters follows similar patterns. However, this might not be the case when studying even larger models, as preliminary results hint (Rae et al., 2022; Wei et al., 2023; Chowdhery et al., 2022; Touvron et al., 2023).

Figure 4: Results per model size.

Figure 5: Effect of ground truth labels on in-context learning with the similarity AL selection method.

Figure 3: Results for various GPT (top) and OPT (bottom) models and AL methods averaged over \(15\) classification and \(9\) multi-choice tasks. _Similarity_ is consistently the best performing approach overall, followed by _diversity_ and _random_. Interestingly, we observe that _uncertainty_ sampling underperforms in this setting of in-context learning.

### Ground Truth Demonstrations

We want to delve into the debate of whether ground truth demonstrations, i.e., providing the correct label for the in-context examples, are crucial for high-performing in-context learning. Various findings have shown mixed results for randomly sampled data, which essentially means that the benefit of ground truth labels can depend on the label space or the distribution of inputs specified by the demonstrations (Min et al., 2022; Yoo et al., 2022). In our analysis, we differentiate from prior work by exploring the importance of ground truth demonstrations in the case of leveraging similar in-context examples. The rationale is that if the findings of Min et al. (2022) hold ubiquitously, then the performance should only marginally drop if we replace ground truth labels with random ones; if the high performance of this acquisition algorithm were retained, we would be able to construct an impressively efficient and effective in-context selection algorithm. However, we find that this is not the case. As expected, we show in Figure 5 that for almost all datasets considered in this part of the analysis, the performance with random labels drops significantly. There are cases where replacing the original with random labels, as in Min et al. (2022), retains the same performance (e.g., in the glue-rte dataset), but this is certainly a finding that does not generalize overall. In summary, we find that ground truth demonstrations are crucial for high-performing, robust in-context learning (Yoo et al., 2022).

### Most vs. Least Similar Demonstrations

To investigate the striking effectiveness of the _similarity_ active learning strategy in the selection of in-context examples, we conduct additional experiments where we invert the approach and choose the _least_ similar examples from the pool as the prompt. This investigation aims to ascertain whether the remarkable performance gains can be attributed solely to the semantic similarity between the prompt and the test input. The results depicted in Figure 6 substantiate our hypothesis, demonstrating a significant performance drop when employing the opposite examples from the pool as in-context exemplars.
While this pattern is particularly pronounced in the classification tasks, it consistently emerges across different model sizes and task types. Hence, we can assert that _maximizing semantic similarity between the prompt and the test input_ is an unequivocally vital attribute for achieving successful in-context learning outcomes with LLMs. Future endeavors in the field of building effective in-context learning frameworks should incorporate this principle to enable data-efficient algorithms that can fully harness the potential of LLMs.

### Most vs. Least Uncertain Demonstrations

Along these lines, we also opt to examine the duality between selecting the most or the least uncertain in-context examples from the pool. We show the results of these experiments for the GPT models in Figure 7. Interestingly, we observe that while the smaller language models (gpt2, gpt2-medium, gpt2-large) perform better with the least uncertain prompts, the larger models seem to start benefiting from the high-uncertainty prompts, with this being clearest in the largest model of our collection, GPT-NeoX (\(20\)B parameters). This is a very interesting finding that suggests even larger models will most likely start performing better with high-entropy in-context examples, similar to their supervised learning counterparts. Such findings open a plethora of research questions regarding understanding how in-context learning works (Reynolds and McDonell, 2021; Razeghi et al., 2022; Xie et al., 2022; Min et al., 2022), how active learning and data efficiency methods reshape with larger language models, or whether we can properly investigate potential emergent abilities of LLMs acquired through scale (Wei et al., 2022; Schaeffer et al., 2023).

### Evaluation with Different Metrics

Finally, we want to provide a clear overview of our experiments and a summary of our findings, while making some clarifications regarding how we evaluate and compare different approaches to in-context learning. We provide in Figure 8 results for in-context learning with random sampling, three data selection techniques inspired by active learning (§3.2), namely diversity, uncertainty and similarity, and a zero-shot baseline where no labeled examples are in the prompt (no_demo).

Figure 6: Most vs. least similar in-context examples.

Figure 7: Most vs. least uncertain in-context examples.

We show that in-context learning with \(k\)=\(16\) demonstrations consistently outperforms zero-shot learning for an average of \(15\) classification tasks for gpt2-large, gpt-j and gpt-neox. Next, we observe that the best performing in-context example selection method is, by a clear margin, similarity, followed by diversity. This finding corroborates the original hypothesis of active learning that, indeed, _not all data is equal_ and there exist _better_ subsets in the pool that can be used as in-context exemplars. We can see that the uncertainty baseline, which is usually top performing in supervised AL, generally underperforms in the few-shot setting. Still, there is some evidence that this could change with even larger and better models (§6.4). Finally, delving into the debate on whether ground truth labels matter or not (Min et al., 2022; Yoo et al., 2022), we show that replacing original with random in-context labels significantly hurts the performance of similarity, the best data selection method (§6.2). We further emphasize the significance of employing a meticulous evaluation framework, particularly in the selection of appropriate metrics.
In Figure 8, we illustrate the same classification experiments, but with the \(F_{1}\) score plotted on the left and accuracy on the right. The use of \(F_{1}\), the conventional metric for classification tasks, reveals a distinct ranking among the various active learning (AL) methods, with similarity exhibiting the best performance, followed by diversity. Conversely, when employing accuracy to compare the methods, diversity emerges as the top approach, followed by similarity and random selection. This disparity highlights the potential for misconceptions or obscured findings, underscoring the need for caution when evaluating and comparing different methods across various models within the in-context learning framework (Dehghani et al., 2021; Min et al., 2022; Yoo et al., 2022; Tedeschi et al., 2023).

Figure 8: Same experiments, different metrics, different patterns.

## 7 Main Takeaways & Conclusion

In this study, we have examined the selection of demonstrations, i.e., labeled data that provide examples of solving a task, for in-context learning with LLMs. We formulated the selection process as a _single iteration active learning problem_ and evaluated four standard approaches: _uncertainty_, _diversity_, _similarity_, and _random_ sampling. Our evaluation involved \(15\) models with varying parameters from the GPT and OPT families, encompassing \(15\) classification tasks and \(9\) multi-choice tasks. Through extensive experimentation, we have demonstrated that selecting in-context examples that are similar to the test examples consistently outperforms all other methods by a significant margin across all model families, sizes, and tasks. This corroborates the findings of several previous and concurrent studies that explore the properties of "good" in-context examples (Liu et al., 2022; Shi et al., 2022). Interestingly, our findings reveal that uncertainty sampling, although effective in supervised learning, underperforms in the in-context learning paradigm. This highlights the importance of our work in exploring the principles of active learning in the context of few-shot learning.

With the increasing size of language models, which aligns with enhanced reasoning capabilities, few-shot learning is poised to become one of the prevailing methodologies for leveraging these models. Consequently, it becomes imperative to investigate the selection of appropriate data points as demonstrations and identify their defining characteristics. Our study, which emphasizes the principles of _semantic similarity with the test domain_ and _low uncertainty_, contributes to the expanding body of research that seeks to address this issue and establish a set of properties for effective and efficient prompt creation in the future (Webson and Pavlick, 2022; Liu et al., 2022; Min et al., 2022; Yoo et al., 2022; Wang et al., 2022; Gonen et al., 2022; Wei et al., 2023b). By elucidating these principles, we take a significant step towards facilitating the utilization of large language models in real-world applications.
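As a concrete illustration of the metric dependence discussed in §6.5, the following self-contained toy example (with fabricated predictions, purely for illustration) shows how accuracy and macro-\(F_{1}\) can rank two methods in opposite order on an imbalanced task:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true   = [0] * 8 + [1] * 2             # imbalanced binary task
method_a = [0] * 10                      # always predicts the majority class
method_b = [0] * 5 + [1] * 3 + [1] * 2   # catches the minority class at some cost

for name, pred in [("A", method_a), ("B", method_b)]:
    acc = accuracy_score(y_true, pred)
    f1 = f1_score(y_true, pred, average="macro", zero_division=0)
    print(f"method {name}: accuracy={acc:.2f}, macro-F1={f1:.2f}")
# method A: accuracy=0.80, macro-F1=0.44  -> best by accuracy
# method B: accuracy=0.70, macro-F1=0.67  -> best by macro-F1
```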
2304.03431
Domain Generalization In Robust Invariant Representation
Unsupervised approaches for learning representations invariant to common transformations are used quite often for object recognition. Learning invariances makes models more robust and practical to use in real-world scenarios. Since data transformations that do not change the intrinsic properties of the object cause the majority of the complexity in recognition tasks, models that are invariant to these transformations help reduce the amount of training data required. This further increases the model's efficiency and simplifies training. In this paper, we investigate the generalization of invariant representations on out-of-distribution data and try to answer the question: Do model representations invariant to some transformations in a particular seen domain also remain invariant in previously unseen domains? Through extensive experiments, we demonstrate that the invariant model learns unstructured latent representations that are robust to distribution shifts, thus making invariance a desirable property for training in resource-constrained settings.
Gauri Gupta, Ritvik Kapila, Keshav Gupta, Ramesh Raskar
2023-04-07T00:58:30Z
http://arxiv.org/abs/2304.03431v2
# Domain Generalization in Robust Invariant Representation

###### Abstract

Unsupervised approaches for learning representations invariant to common transformations are used quite often for object recognition. Learning invariances makes models more robust and practical to use in real-world scenarios. Since data transformations that do not change the intrinsic properties of the object cause the majority of the complexity in recognition tasks, models that are invariant to these transformations help reduce the amount of training data required. This further increases the model's efficiency and simplifies training. In this paper, we investigate the generalization of invariant representations on out-of-distribution data and try to answer the question: Do model representations invariant to some transformations in a particular seen domain also remain invariant in previously unseen domains? Through extensive experiments1, we demonstrate that the invariant model learns unstructured latent representations that are robust to distribution shifts, thus making invariance a desirable property for training in resource-constrained settings.

Footnote 1: [https://github.com/GauriGupta19/Domain-Generalisation-in-Invariance](https://github.com/GauriGupta19/Domain-Generalisation-in-Invariance)

## 1 Introduction

In the real world, two images of the same object might only be related through some identity-preserving transformations. Many interesting data properties have these inherent symmetries but are represented in a way that does not attend to these symmetries. Prior work has revealed that incorporating these correspondences in the network can improve model performance significantly and make it more robust to variations in the data Cohen et al. (2019). Invariance in deep neural networks refers to a model's ability to produce the same output for a given input, regardless of certain changes in the input. For instance, when presented with an image of an object, a translation-invariant model will produce the same result regardless of the object's location in the image. The network achieves this property by detecting the presence of certain features in a local neighborhood. Prior theoretical work shows that the complexity in recognition tasks is predominantly due to simple transformations such as rotation, translation, viewpoint, and illumination nuisances that swamp the intrinsic characteristics of the object Lee & Soatto (2011); Liao et al. (2013). Making a model invariant to such transformations helps reduce the amount of training data required because the model does not have to learn to recognize objects in all possible positions and orientations Zhu et al. (2021). Utilizing prior knowledge on intra-class variance resulting from transformations is an efficient technique that can be utilized in critical use cases with limited training data Rath & Condurache (2022); Li & Li (2021). Since most downstream tasks, including object recognition and label prediction, are invariant to specific group actions like translations and rotations, invariant models are extremely useful Winter et al. (2022); Sohn & Lee (2012). We conjecture that the crux of object detection is to achieve invariance to identity-preserving transformations without losing discriminability. Object recognition has potential applications across various fields, including Agriculture, where it can be used to identify and count crops, monitor crop health, and detect the presence of pests and diseases Yang et al. (2022).
Also, in the Healthcare domain, it can help in identifying and tracking patients, monitoring vital signs, and assisting with diagnosis and treatment Elakkiya et al. (2022). However, these technologies are generally difficult to adopt in developing countries due to either data scarcity or limited training resources. Thus, building resource-efficient object recognition systems is of utmost importance as it can help bridge the digital divide, provide access to technology and services, and improve people's lives, especially in developing countries. These systems can also help to reduce costs, increase efficiency, and create new opportunities for economic development. In this paper, we show how learning invariant model representations is a resource-efficient solution to the same underlying problem of object recognition.

We are interested in algorithms that can generalize well on previously unseen data, as humans are capable of doing. Consider a scenario in which a model is trained on patient data from a specific population. Now, with each new patient, the model training process must be repeated. This can be time-consuming and thus problematic, especially in medical diagnostics where time is critical. Now, given a model trained on a particular domain, we explore the notion of generalization of model performance on a new unseen domain. We show that invariant representations learn domain-agnostic information from training data, which is then used to generalize the classifier to new, previously unseen data without retraining. Overall, the paper examines invariance to identity-preserving transformations as a property that is robust to domain shifts, i.e., invariant model generalization on new unseen data, imitating human-like recognition. We find that despite using a very simple classifier (thresholding the similarity between object representations), the model achieves strong performance in these highly unconstrained cases as well.

## 2 Related Work

Invariant networks use symmetries in the data, which improves their performance and potentially reduces the amount of data required for training Cohen & Welling (2016). Invariance can either be learned or can be explicitly embedded in the network. The former approach includes techniques like data augmentation, which involves generating new data samples from the original data by applying various transformations such as rotation, scaling, cropping, etc. van Dyk & Meng (2001). Although this improves generalization performance, it is inefficient in terms of training time and compute resources Thomas et al. (2018). In explicit invariant integration, the model is designed in a way that imposes constraints on the functions that are learned by the network, which therefore restricts the model's architecture design Schutt et al. (2018). For instance, graph neural networks have been used to establish powerful prediction models through message passing on graph-structured data that are invariant to permutation symmetries Gilmer et al. (2017). Invariance embedding in convolutional neural networks (CNNs) has brought a paradigm shift in the analysis of images by detecting equivariant and invariant image features Lecun & Bengio (1995). Weight sharing is another approach for incorporating invariance. This involves using the same set of weights for multiple network parts, like different filters in a convolutional layer. This helps the network learn more general features that are invariant to transformations.
Deep transformation-invariant approaches have also been used for clustering and aligning images Monnier et al. (2020). While all of these previous works provide various methods for learning invariances in the network, our work focuses on introducing transfer learning in object recognition by utilizing invariance generalization under domain shift, which is particularly useful in data-limited and resource-constrained settings.

## 3 Robustness of invariant representations on out-of-distribution data

We propose that deep models invariant to certain transformations should also generalize well to out-of-distribution data, i.e., they should generate invariant representations even on data that the model was not previously trained on. For instance, if a model is well trained on a particular dataset \(X_{1}\) making its representations invariant to rotation, the model should also be invariant to rotations in data \(X_{2}\) it has never seen before. We investigate this notion of invariance for identity-preserving transformations by performing experiments to verify whether the model has learned a data-agnostic invariance and disentanglement of information in the latent space. In particular, we examine the possibility of representations that are invariant to all the task-irrelevant variabilities present in the datasets. To the best of our knowledge, we are the first to investigate this claim for invariant deep learning models.

## 4 Method

Suppose a group \(G\) acts on a data space \(X\), i.e., \(g(x)\neq x\) for \(x\in X,g\in G\). The invariant encoder \(\eta\) maps the elements in the same orbit (here, the same class) in \(X\) to the same point (orbit) \(z\in Z=X/G\) for all \(g\in G\), where \(z\) is the invariant representation in the latent space of all the data points in the same orbit. That is, \(\eta(x)=z\ \forall\ x\in O_{x}\), where \(O_{x}=\{g(x)\ |\ g\in G\}\). However, the decoder \(\delta\) at best can map the invariant embedding \(\eta(x)\) to an element in the orbit of \(x\), i.e., \(\delta(\eta(x))\in O_{x}\) for some \(g\in G\). We also need to extract the information of the group action \(g\in G\) under which the element is transformed. Only then can we recover the original element in \(O_{x}\). The encoder thus maps a data point to its invariant representation \(z\) and equivariant group action \(g\), both of which are then used as input to the decoder to reconstruct the original object. During inference, when \(g\) is the identity, we get the object in the standard viewpoint. This approach is general enough to be extended to any kind of group transformation or even the composition of different transformations. For instance, Bepler et al. (2019); Winter et al. (2022) show how the above approach can be extended to rotations, translations, their composition, and other general coordinate transforms.

In this paper, we study the problem of identification or pair-matching, e.g., for face verification. Given two images of objects never encountered during training, that are transformed under some particular transformation, the task is to decide if they depict the same object or not. We used the following procedure to assess the model's adaptability to previously unseen out-of-distribution data. The basic pipeline is shown in Figure 1. First, we train the invariant model on dataset \(X_{1}\) transformed under some group \(G\).
To test the model's performance on unseen data, a classifier classifies two images of objects not seen before from dataset \(X_{2}\) as either "same" or "different" based on a threshold. The classifier simply calculates the cosine similarity (a normalized dot product) between the latent representations of the two object images and outputs "same" if it is more than a threshold and "different" otherwise. We use this naive classifier since the goal is to determine the effectiveness of these latent representations as a feature. We believe that accuracies for a majority of these tasks can be enhanced by using more advanced classifiers.

## 5 Experiments

The goal of these experiments is to explore the unconstrained notion of domain generalization of invariance under identity-preserving transformations. In these experiments, we study the performance of rotation invariance, which can be easily extended to other common identity-preserving transformations.

Figure 1: Evaluation framework for pairwise matching.

To test the model's generalization on unseen data, we first train a model on a particular domain \(X_{1}\), which is then used to generate embeddings of the unseen domain \(X_{2}\). A classifier then classifies two unseen images of objects from \(X_{2}\) as same or not. Borrowing the notation from Section 4, a positive pair consists of an image and its rotated version, i.e., \((x_{1},gx_{1})\), \(x_{1}\in X_{2},g\in G\). A negative pair consists of two randomly sampled images from \(X_{2}\) such that they belong to different classes, i.e., \((x_{1},x_{2}),x_{1},x_{2}\in X_{2}\), s.t. \(O_{x_{1}}\neq O_{x_{2}}\). The pipeline is shown in Figure 1. In our setup, the test domain uses the entire orbit and never contains any of the same labels as the training domain. We train both a vanilla VAE and a rotation-invariant VAE (RotInvVAE) inspired by Bepler et al. (2019) and present our analysis in this section. We perform experiments on the following datasets: MNIST, FashionMNIST, and Labeled Faces in the Wild (LFW) Huang et al. (2007), and show ROC curves for the above-described classification task. These results clearly show RotInvVAE's high performance on all these datasets, depicting that the model's latent representations remain invariant to rotation even in the unseen domain. For the MNIST and FashionMNIST datasets, we first train the model on the training domain \(X_{1}\), which includes the labels 0-4, and test on the remaining labels 5-9. The results can be found in Figure 2 (Top). Further, to verify the extent of generalization, we see that RotInvVAE's performance remains consistently high even if we train on fewer and fewer labels across both MNIST and FashionMNIST and test on the remaining labels, as shown in Figure 2 (Bottom). This means that we only need to train our model on very few labels and it will still perform well on unseen labels, making the model robust to domain shift and reducing the amount of data required for training. For investigating performance across completely unconstrained tasks, we train the RotInvVAE model on MNIST while testing on the FashionMNIST dataset and vice versa, and present the consolidated results in Figure 3 (Right). It is interesting to note that the model performed well on completely different data domains, even when we trained on the MNIST dataset and tested on the unseen and much more complicated domain of FashionMNIST. Additionally, the results of RotInvVAE on the LFW dataset face verification are even more encouraging.
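To make the pair-matching evaluation concrete, here is a minimal sketch of the thresholded cosine-similarity classifier described in Section 4, together with a threshold-free ROC AUC summary. The simulated latents stand in for a trained invariant encoder \(\eta\) and are an assumption for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def cosine_similarity(z1, z2):
    """Normalized dot product between two latent representations."""
    return float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))

def verify_pair(z1, z2, threshold=0.9):
    """Output 'same' if the similarity exceeds the threshold, else 'different'."""
    return "same" if cosine_similarity(z1, z2) > threshold else "different"

# Stand-in for eta(x): positives simulate an invariant encoder mapping an image
# and its rotated copy to nearly identical latents; negatives are latents of
# unrelated objects. dim=10 matches the latent size used in the experiments.
rng, dim, n = np.random.default_rng(0), 10, 500
scores, labels = [], []
for _ in range(n):
    z = rng.normal(size=dim)
    scores.append(cosine_similarity(z, z + 0.1 * rng.normal(size=dim)))  # positive pair
    labels.append(1)
    scores.append(cosine_similarity(rng.normal(size=dim), rng.normal(size=dim)))  # negative pair
    labels.append(0)

print("ROC AUC:", roc_auc_score(labels, scores))  # ~1.0 for a well-trained invariant encoder
```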
The implementation details and the train-test split are as described in Section 5.1. The model essentially achieves invariance to the identity-preserving transformation (i.e., rotation in this case) and performs exceptionally well on unseen faces as well, as shown in Figure 3 (Left). We also visualize the latent space of both the vanilla VAE and RotInvVAE (for dim z = 2) for the experiments in Figure 2 for the MNIST dataset. As shown in Figure 4, the latent space of the vanilla VAE is divided into well-structured clusters for the training class labels. In RotInvVAE, as compared to the vanilla VAE, the angles are randomly distributed in the latent space, indicating that the latent space is invariant to the angle of rotation for the training dataset. The latent space of RotInvVAE is invariant to rotations even on out-of-distribution data, which is in coherence with our hypothesis. Not only that, we also observe the emergence of clusters in the representation space for the new unseen labels for RotInvVAE. A similar analysis on FashionMNIST is presented in Appendix A.

Figure 2: Analysis for MNIST and FashionMNIST is shown on the left and right, respectively. \(X_{1}\): training domain, \(X_{2}\): testing domain. (Top) ROC curves for out-of-distribution domain generalization with \(X_{1}\) = rotated MNIST/FashionMNIST 0-4, \(X_{2}\) = rotated MNIST/FashionMNIST 5-9. (Bottom) Area under the ROC curve (AUC) for the verification task as we vary the number of classes in \(X_{1}\), where the remaining classes form \(X_{2}\).

### Implementation details

For the LFW image dataset, we make a few transformations to make the data suitable for our task and to make training faster. We first crop the images around the face and then down-sample (pixellate) them to size (50, 50). We then zero-pad the image to create a black background to perform rotation in images without inducing a bias of background effects. For the train-test split, we randomly sample face labels and include all faces with a given label in the test set until the size of the test set is one-tenth of the dataset, which gives us the set \(X_{2}\). The remaining labels form our training set \(X_{1}\). Images of a particular label either belong to the train or test set and are not shared across these sets. Since the goal is to test the generalization of invariant representations, it is also important to note that we compare an original image of an individual's face (which belongs to the unseen data domain) with the rotated version of the same image as a positive pair. For all these experiments, the dimension of the latent representations is consistently kept at 10 to make a fair comparison across the different models and datasets. The models are trained for 100 epochs.

Figure 4: Visualization of the latent space of (Top Left) vanilla VAE on \(X_{1}\) = rotated MNIST 0-4, (Top Right) RotInvVAE on \(X_{1}\) = rotated MNIST 0-4, (Bottom Left) vanilla VAE on the unseen \(X_{2}\) = rotated MNIST 5-9, (Bottom Right) RotInvVAE on the unseen \(X_{2}\) = rotated MNIST 5-9.

Figure 3: ROC curves for (Left) face verification on the unseen LFW dataset with \(X_{1},X_{2}\) described in Section 5.1; (Right) generalization on completely different OOD data: (1) blue, orange - \(X_{1}\) = MNIST, \(X_{2}\) = FashionMNIST; (2) green, red - \(X_{1}\) = FashionMNIST, \(X_{2}\) = MNIST.

## 6 Discussions and Conclusion

In this work, we show that the model learns to generate data-agnostic representations invariant to some group transformations that also generalize well on unseen data.
The model learns the group action that transforms the given data, instead of learning any particular intrinsic property of the data. Prior work only involves training and testing invariant models on the same data distribution, thus making our study unique and the first of its kind. This model property has applications in recognition classifier systems like face verification, where the input data is usually only transformed under some identity-preserving transformations. Through experiments, we show that the model generalizes well on out-of-distribution data and does not need to be retrained every time on new unseen objects. This makes the model resource- and time-efficient, which is particularly suitable for deployment in developing countries with limited data and training resources. Despite our recognition classifier's simplicity, our model achieves compelling accuracy on multiple datasets. Future work could extend this approach to other objects, datasets, and additional tasks.
2306.05639
Secular dynamics of stellar spin driven by planets inside Kozai-Lidov resonance
In many exoplanetary systems with 'hot Jupiters', it is observed that the spin axes of host stars are highly misaligned to planetary orbital axes. In this study, a possible channel is investigated for producing such a misalignment under a hierarchical three-body system where the evolution of stellar spin is subjected to the gravitational torque induced from the planet inside Kozai-Lidov (KL) resonance. In particular, two special configurations are explored in detail. The first one corresponds to the configuration with planets at KL fixed points, and the second one corresponds to the configurations with planets moving on KL librating cycles. When the planet is located at the KL fixed point, the corresponding Hamiltonian model is of one degree of freedom and there are three branches of libration centres for stellar spin. When the planet is moving on KL cycles, the technique of Poincaré section is taken to reveal global structures of stellar spin in phase space. To understand the complex structures, perturbative treatments are adopted to study rotational dynamics. It shows that analytical structures in phase portraits under the resonant model can agree well with numerical structures arising in Poincaré sections, showing that the complicated dynamics of stellar spin are governed by the primary resonance under the unperturbed Hamiltonian model in combination with the 2:1 (high-order and/or secondary) spin-orbit resonances.
Hanlun Lei, Yan-Xiang Gong
2023-06-09T02:54:00Z
http://arxiv.org/abs/2306.05639v1
# Secular dynamics of stellar spin driven by planets inside Kozai-Lidov resonance

###### Abstract

In many exoplanetary systems with 'hot Jupiters', it is observed that the spin axes of host stars are highly misaligned to planetary orbital axes. In this study, a possible channel is investigated for producing such a misalignment under a hierarchical three-body system where the evolution of stellar spin is subjected to the gravitational torque induced from the planet inside Kozai-Lidov (KL) resonance. In particular, two special configurations are explored in detail. The first one corresponds to the configuration with planets at KL fixed points, and the second one corresponds to the configurations with planets moving on KL librating cycles. When the planet is located at the KL fixed point, the corresponding Hamiltonian model is of one degree of freedom and there are three branches of libration centres for stellar spin. When the planet is moving on KL cycles, the technique of Poincare section is taken to reveal global structures of stellar spin in phase space. To understand the complex structures, perturbative treatments are adopted to study rotational dynamics. It shows that analytical structures in phase portraits under the resonant model can agree well with numerical structures arising in Poincare sections, showing that the complicated dynamics of stellar spin are governed by the primary resonance under the unperturbed Hamiltonian model in combination with the 2:1 (high-order and/or secondary) spin-orbit resonances.

keywords: celestial mechanics - planets and satellites: dynamical evolution and stability - planetary systems - stars: rotation

## 1 Introduction

In recent years, more and more exoplanetary systems containing 'hot Jupiters' (giant planets with masses \(\geq 0.25\) Jupiter's mass and periods \(\leq 10\) days) are observed to hold high misalignment between the stellar spin axes and planetary orbital axes (Albrecht et al., 2022; Dawson & Johnson, 2018). Because of large stellar tidal gravity and radiation fields close to host stars, it is generally believed that hot Jupiters form in regions beyond a few AU distance from the host stars and then migrate inward to their current orbits (Storch et al., 2014; Dawson & Johnson, 2018). Moreover, aligned configurations are expected for planet migration in protoplanetary disks (e.g. Bate et al., 2010). Regarding the puzzle of misaligned 'hot Jupiters', Batygin (2012) provided a possible explanation that the misalignment exists in the primordial planetary disk relative to the stellar equator. However, if 'hot Jupiters' are initially formed in aligned protoplanetary disks, other dynamical channels are required to induce spin-orbit misalignment. In this regard, an overview of three main classes of hot Jupiter origin theory is given by Dawson & Johnson (2018), including in situ formation, gas disk migration and high-eccentricity tidal migration. To produce retrograde orbits with respect to the total angular momentum, Naoz et al. (2011) proposed a dynamical channel of secular planet-planet interaction by combining octupole-order gravitational effects with tidal friction. High planetary eccentricities induced by secular interaction stimulate strong planet-star tidal interaction, which can rapidly reduce the orbit energy, leading to inward migration and circularization of the planetary orbit and finally forming a retrograde hot Jupiter. This formation channel requires a distant giant planet moving on an eccentric and inclined orbit as a perturber.
Due to secular interaction, the angular momentum along the \(z\) axis of the inner planet (\(H_{z}\)) may change sign, leading to orbit flips. Such a phenomenon is referred to as the 'eccentric KL effect' (Lithwick & Naoz, 2011). In recent years, varieties of dynamical outcomes and applications of the eccentric KL mechanism have been widely explored (Lithwick and Naoz, 2011; Antognini, 2015; Hamers, 2021; Huang and Lei, 2022; Lei and Gong, 2022; Lei and Huang, 2022; Lei, 2022; Li et al., 2014, 2018; Sidorenko, 2018; Katz et al., 2011; Petrovich, 2015). Now, we know that the eccentric KL effect is due to the apsidal resonance, which is an octupole-order secular resonance under hierarchical planetary systems (Sidorenko, 2018; Lei and Huang, 2022; Lei and Gong, 2022; Lei, 2022; Huang and Lei, 2022). Please refer to Naoz (2016) and Shevchenko (2016) for an overview of the eccentric KL mechanism and its applications to varieties of astrophysical problems.

In the formation channel proposed in Naoz et al. (2011), the stellar spin-orbit coupling is not included, meaning that the stellar equator is always fixed and aligned with the invariant plane of the system. In this sense, the planetary inclination stands for stellar obliquity and thus variation of inclination represents change of stellar obliquity. However, in reality, the central star is an oblate body and it rotates around its spin axis. When the planet moves around the central star on KL cycles (eccentricity and inclination are in coupled oscillations), the rotation-induced stellar quadrupole could produce a planet-star interaction torque, forcing the stellar spin axis and planetary orbital axis to precess around each other. Based on this fact, Storch et al. (2014) proposed a "Kozai + tide" scenario with consideration of stellar spin-orbit coupling. In this scenario, the gravitational interaction between the planet and its oblate host star induced from spin-orbit resonances can cause chaotic evolution of stellar obliquity, showing that stellar spin-orbit misalignment can be produced from aligned configurations. To understand the origin of chaotic behaviours of stellar rotation, Storch and Lai (2015) adopted Hamiltonian perturbation theory to deal with a dynamical model where the planets are assumed on periodic KL cycles. In the adiabatic regime (corresponding to regime III in accordance with the classification made in Storch et al., 2014), they identified a set of secular spin-orbit resonances and showed that the wide-spread chaos in the stellar spin evolution is caused by resonance overlaps. Extending to the non-adiabatic regime (corresponding to regime I in accordance with the classification made in Storch et al., 2014), Storch et al. (2017) included the effects of short-range forces and tidal dissipation and categorised different paths to spin-orbit misalignment. They pointed out that two spin-orbit evolution paths can lead to retrograde configurations. Considering the influence of the octupole-level effects, Anderson et al. (2017) derived the required condition for producing spin-orbit misalignment in the inner binary.

In the previous studies (Storch et al., 2014; Storch and Lai, 2015; Storch et al., 2017), the authors assumed that the planets are placed on fixed KL circulating cycles (close to the KL separatrix), which are outside the KL resonance. Thus, the influence of different-scale KL cycles upon the secular dynamics of stellar spin remains unclear. In addition, a more formal canonical perturbation theory is absent in their series of studies.
Inspired by these considerations, we revisit the stellar spin dynamics in this work under the configurations with planets inside KL resonance. In particular, two special configurations are considered: the first one with planets located at the KL fixed point and the second one with planets moving on KL librating cycles. In the configuration with planets at the KL fixed point, the resulting Hamiltonian determines a one-degree-of-freedom dynamical model and distributions of the so-called 'Cassini's states' (i.e., equilibrium points under the 1 DOF Hamiltonian model) are produced. In the configuration with planets moving on KL librating cycles, the numerical technique of Poincare section is adopted to obtain the global structures of stellar spin in phase space, and then the theory of perturbative treatment is taken to understand the complex structures arising in Poincare sections. It shows that analytical structures in phase portraits under the resonant model can agree well with numerical structures arising in Poincare sections, making clear the dynamical mechanism governing the spin structures.

The remaining part of this work is organised as follows. In Section 2, the Hamiltonian functions governing stellar spin evolution and the planet's KL oscillations are briefly introduced. In Section 3, secular dynamics of stellar spin are investigated under a special configuration with planets located at the KL centre. Section 4 studies stellar spin dynamics under the configurations with planets moving on KL cycles by taking advantage of a numerical approach (Poincare sections) and an analytical approach (perturbative treatments). Conclusions are summarised in Section 5.

## 2 Hamiltonian model

In this study, we consider a hierarchical three-body system, consisting of an oblate star with mass \(M_{*}\), an inner giant planet with mass \(m_{p}\) and a distant binary companion with mass \(m_{b}\). In practical simulations, we take \(M_{*}=m_{b}\gg m_{p}\), as adopted by Storch and Lai (2015) and Storch et al. (2017) in their studies. The planet moves around the central star in the gravitational field generated by the stellar binary. For simplicity, the following two assumptions are made about the dynamical system (Storch et al., 2014; Storch and Lai, 2015; Storch et al., 2017): (a) the evolution of stellar obliquity is dominated by the gravitational torque induced from the planet, and (b) the orbital motions of the planet and perturber around the central star are not influenced by stellar rotation (i.e., the back-reaction from the stellar rotation to orbital evolution is ignored).

Figure 1: Definition of the variables used in this work and relative configuration of three fundamental planes: the invariant plane of the binary system, the orbital plane of the planet around the central star and the stellar equator. Normal directions of these planes are denoted by unitary vectors \(\boldsymbol{L}_{b}\), \(\boldsymbol{L}_{p}\) and \(\boldsymbol{S}\). The ascending nodes are denoted by \(N\), \(N^{\prime}\) and \(N^{\prime\prime}\), and the longitudes of ascending nodes are \(h_{p}\), \(h\) and \(\phi\). The orbital inclination of the planet relative to the binary's orbit is \(i_{p}\), the absolute obliquity of the stellar equator relative to the binary's orbit is \(K\) and the relative obliquity of the stellar equator with respect to the planet's orbit is \(\epsilon\). Here, \(\epsilon\) can be used to measure the 'orbital inclination' of planets relative to the stellar equator. The point \(\gamma_{0}\) denotes the pericentre of the binary's orbit.
The second assumption indicates that the orbital evolution of the considered system is decoupled from stellar rotation. Thus, the orbital dynamics can be studied separately. Relaxing the second assumption to a coupled case will be performed in future work.

Figure 1 shows the relative geometry of the orbital plane of the binary, the orbital plane of the planet and the stellar equator. \(\mathbf{L}_{b}\) stands for the angular momentum vector of the binary's orbit, \(\mathbf{L}_{p}\) is the angular momentum vector of the planet's orbit and \(\mathbf{S}\) represents the stellar spin axis vector. Their magnitudes are \(L_{b}\), \(L_{p}\) and \(S\) and their unitary vectors are denoted by \(\hat{\mathbf{L}}_{b}\), \(\hat{\mathbf{L}}_{p}\) and \(\hat{\mathbf{S}}\). For the considered system, it holds \(L_{b}\gg L_{p}\gg S\). Thus, it is reasonable to take the orbital plane of the binary as the invariant plane of the system. Based on the invariant plane, an inertial coordinate system (named \(O\)-\(xyz\)) is introduced: the \(x\)-axis directs from the central star towards the binary's pericentre \(\gamma_{0}\), the \(z\)-axis is along the angular momentum vector of the binary and the \(y\)-axis completes a right-handed system (see Fig. 1). The ascending nodes of the three fundamental planes are denoted by \(N\), \(N^{\prime}\) and \(N^{\prime\prime}\), and the corresponding longitudes of ascending nodes are \(h\), \(h_{p}\) and \(\phi\) (see Fig. 1 for detailed definitions). The relative angle between \(\mathbf{L}_{p}\) and \(\mathbf{L}_{b}\) corresponds to the inclination of the planet's orbit (\(i_{p}\)), the angle between \(\mathbf{L}_{b}\) and \(\mathbf{S}\) is the absolute obliquity of the stellar equator relative to the invariant plane (\(K\)), and the angle between \(\mathbf{L}_{p}\) and \(\mathbf{S}\) is the relative obliquity of the stellar equator with respect to its orbit around the planet (\(\epsilon\)). According to the above definitions, we have the following relations:

\[\hat{\mathbf{L}}_{p}\cdot\hat{\mathbf{L}}_{b}=\cos i_{p},\quad\hat{\mathbf{S}}\cdot\hat{\mathbf{L}}_{b}=\cos K,\quad\hat{\mathbf{S}}\cdot\hat{\mathbf{L}}_{p}=\cos\epsilon.\]

Among these variables, \((h,\cos K)\), \((h_{p},\cos i_{p})\) and \((\phi,\cos\epsilon)\) are three pairs of conjugate variables. For simplicity, we denote \(p=\cos\epsilon\). The transformations between these sets of conjugate variables can be realised by

\[\sin\epsilon\sin\phi=\sin K\sin\left(h-h_{p}\right),\]
\[-\sin\epsilon\cos\phi=\cos K\sin i_{p}-\sin K\cos i_{p}\cos\left(h-h_{p}\right),\]
\[\cos\epsilon=\cos K\cos i_{p}+\sin K\sin i_{p}\cos\left(h-h_{p}\right),\]

and

\[\sin K\sin\left(h-h_{p}\right)=\sin\epsilon\sin\phi,\]
\[-\sin K\cos\left(h-h_{p}\right)=-\cos\epsilon\sin i_{p}-\sin\epsilon\cos i_{p}\cos\phi,\]
\[\cos K=\cos\epsilon\cos i_{p}-\sin\epsilon\sin i_{p}\cos\phi.\]

Under the coordinate system \(O\)-\(xyz\), the unitary vector \(\hat{\mathbf{L}}_{b}\) is along the \(z\)-axis, and the unitary vectors \(\hat{\mathbf{L}}_{p}\) and \(\hat{\mathbf{S}}\) are given by

\[\hat{\mathbf{L}}_{p}=\left[\begin{array}{c}\sin i_{p}\sin h_{p}\\ -\sin i_{p}\cos h_{p}\\ \cos i_{p}\end{array}\right],\quad\hat{\mathbf{S}}=\left[\begin{array}{c}\sin K\sin h\\ -\sin K\cos h\\ \cos K\end{array}\right].\]

### Kozai-Lidov dynamics

As assumed above, the orbital evolution of the planet is decoupled from stellar rotation, thus the orbital dynamics can be described separately.
Since \(m_{p}\) is much smaller than \(M_{*}\) and \(m_{b}\), the gravitational influence coming from the planet upon the binary can be ignored. Thus, the binary moves around their barycentre in Keplerian orbits, and the planet moves around the central star in a perturbed Keplerian orbit under perturbation from the distant perturber. In the coordinate system \(O\)-\(xyz\), the perturber's orbit is characterised by the semi-major axis \(a_{b}\) and eccentricity \(e_{b}\), and the planet's orbit is characterised by the semi-major axis \(a_{p}\), eccentricity \(e_{p}\), inclination \(i_{p}\), longitude of ascending node \(\Omega_{p}\), argument of pericentre \(\omega_{p}\) and mean anomaly \(M_{p}\). In hierarchical configurations, \(a_{p}\) is much smaller than \(a_{b}\), thus the orbital dynamics of the planet can be approximated by the quadrupole-order Hamiltonian. For convenience, we introduce the Delaunay variables as follows (Morbidelli, 2002):

\[l_{p}=M_{p},\quad L_{p}=\sqrt{\mu a_{p}},\]
\[g_{p}=\omega_{p},\quad G_{p}=L_{p}\sqrt{1-e_{p}^{2}},\]
\[h_{p}=\Omega_{p},\quad H_{p}=G_{p}\cos i_{p},\]

where the gravitational parameter is \(\mu=\mathcal{G}\left(M_{*}+m_{p}\right)\). Under the influence of the general relativity (GR) effect, the quadrupole-order Hamiltonian, averaged over the orbital periods of planet and perturber, can be written as (Kozai, 1962; Wu & Murray, 2003; Liu, Munoz & Lai, 2015; Naoz, 2016)

\[\mathcal{H}_{\mathrm{KL}}=-\left(5-3\frac{G_{p}^{2}}{L_{p}^{2}}\right)\left(3\frac{H_{p}^{2}}{G_{p}^{2}}-1\right)-15\left(1-\frac{G_{p}^{2}}{L_{p}^{2}}\right)\left(1-\frac{H_{p}^{2}}{G_{p}^{2}}\right)\cos 2g_{p}-\frac{3\mu^{4}\beta}{\mathcal{C}_{0}L_{p}^{3}G_{p}c^{2}}, \tag{1}\]

where \(c\) is the speed of light and the coefficient \(\mathcal{C}_{0}\) is given by

\[\mathcal{C}_{0}=\frac{1}{16}\frac{\mathcal{G}m_{b}}{a_{b}}\beta\left(\frac{a_{p}}{a_{b}}\right)^{2}\frac{1}{\left(1-e_{b}^{2}\right)^{3/2}}\]

with \(\beta\) being the reduced mass \(\beta=\frac{M_{*}m_{p}}{M_{*}+m_{p}}\). From the Lagrange planetary equations (Murray & Dermott, 1999), it is possible to derive the time derivatives of \(i_{p}\) and \(h_{p}\) as follows:

\[\frac{\mathrm{d}i_{p}}{\mathrm{d}t}=-\frac{15}{G_{p}}e_{p}^{2}\sin 2i_{p}\sin 2g_{p}, \tag{2}\]
\[\frac{\mathrm{d}h_{p}}{\mathrm{d}t}=\frac{6}{G_{p}}\left[-\left(2+3e_{p}^{2}\right)\cos i_{p}+5e_{p}^{2}\cos i_{p}\cos 2g_{p}\right].\]

Equation (2) determines the evolution of the unitary vector \(\hat{\mathbf{L}}_{p}\) (normal direction of the planet's orbit) and it will be used in the formulation of the stellar spin Hamiltonian model. The last term arising in equation (1) stands for the GR effect. It is known that the GR effect usually tends to reduce the maximum eccentricity \(e_{\mathrm{max}}\) reached by a KL cycle but does not change the dynamical structures (Storch et al., 2017). If the GR effect is ignored, it becomes the well-known Hamiltonian for studying Kozai-Lidov (KL) resonance (Kozai, 1962; Naoz, 2016). The dynamical model represented by equation (1) is of one degree of freedom, depending on the motion integral \(H_{p}=G_{p}\cos i_{p}=L_{p}\sqrt{1-e_{p}^{2}}\cos i_{p}\). In the long-term evolution, the semi-major axis as well as the \(z\)-component of angular momentum \(H_{p}\) remains unchanged (Naoz, 2016). Usually, the motion integral \(H_{p}\) can be specified by the maximum inclination \(i_{\mathrm{max}}\) by means of \(H_{p}=L_{p}\cos i_{\mathrm{max}}\) (Kozai, 1962).
It is known that, when \(i_{\mathrm{max}}\) is greater than a critical inclination, KL resonance can take place. For the dynamical model without GR effect, the critical inclination is equal to \(39.2^{\circ}\) or \(140.8^{\circ}\) (Kozai, 1962). However, when the GR effect is taken into account, the critical inclination may change, as shown later. With the GR effect, the location of the Kozai-Lidov centre is determined by the following equality:

\[\left(1-e_{p}^{2}\right)^{3/2}-\frac{5}{3}\left(1-e_{p}^{2}\right)^{1/2}{\cos}^{2}i_{p}=\frac{1}{12}\frac{\mu^{2}\beta}{\mathcal{C}_{0}a_{p}^{2}c^{2}}, \tag{3}\]

which determines the critical inclination \(i_{c}\) (at \(e_{p}=0\)) by

\[{\cos}^{2}i_{c}=\frac{3}{5}\left(1-\frac{1}{12}\frac{\mu^{2}\beta}{\mathcal{C}_{0}a_{p}^{2}c^{2}}\right).\]

Obviously, the value of \(i_{c}\) is dependent on the planet's semi-major axis \(a_{p}\) as well as its mass \(m_{p}\). Without the GR effect, the equality for determining the location of the Kozai-Lidov centre becomes

\[e_{p}^{2}+\frac{5}{3}{\cos}^{2}i_{p}=1,\]

which determines the critical inclination \(i_{c}\) (at \(e_{p}=0\)) by

\[{\cos}^{2}i_{c}=\frac{3}{5}\Rightarrow i_{c}=39.2^{\circ}\ \mathrm{or}\ 140.8^{\circ}.\]

It should be mentioned that the angular coordinate of the KL centre is at \(2g_{p}=\pi\). The dynamical structure of the quadrupole-level Hamiltonian with GR effect is shown in the left panel of Fig. 2. The parameters of the dynamical model are described in Section 2.3. When the maximum inclination is at \(i_{\mathrm{max}}=60^{\circ}\), the KL centre is located at \((2\omega_{p}=\pi,e_{p}=0.534)\). In the right panel of Fig. 2, KL centres under the dynamical models with and without GR effect are shown in the \((i_{p},e_{p})\) plane. For convenience, the level curves of the motion integral \(H_{p}\) are presented as background and the specific curve corresponding to \(i_{\mathrm{max}}=60^{\circ}\) is shown by a red dashed line. It is observed that KL centres with GR effect are located to the right of the ones without GR effect, showing that KL centres with GR effect hold higher inclinations at a given eccentricity.

The KL cycles under the quadrupole-level Hamiltonian are periodic and their analytical expressions can be provided in terms of elliptic functions (Kinoshita & Nakai, 2007). On the other hand, the dynamical model shown by equation (1) is of one degree of freedom, thus the KL cycles under this model correspond to the level curves of the Hamiltonian in the phase space (see the left panel of Fig. 2). The dynamical separatrix, shown by a red line, divides the entire phase space into regions of libration and circulation. Those trajectories inside the island of libration are referred to as KL librating cycles and the ones outside the island are called KL circulating cycles.

### Hamiltonian of stellar spin

Because we are interested in the evolution of stellar obliquity \(\epsilon\) driven by the planet, it is convenient to formulate the Hamiltonian governing the dynamics of stellar spin in the rotating frame in which the \(x\)-axis is directed from the star towards the ascending node \(N\), the \(z\)-axis is along the vector \(\hat{\mathbf{L}}_{p}\) and the \(y\)-axis is chosen to complete a right-handed coordinate system.
In this rotating frame, the normalised Hamiltonian can be written as (Storch & Lai, 2015)

\[\mathcal{H}=-\frac{1}{\mathcal{C}_{0}}\frac{3\mathcal{G}m_{p}\left(I_{3}-I_{1}\right)}{4a_{p}^{3}\left(1-e_{p}^{2}\right)^{3/2}}\frac{\cos^{2}\epsilon}{S^{*}}-\frac{\mathbf{R}\cdot\mathbf{S}^{*}}{S^{*}}, \tag{4}\]

where \(\mathbf{S}^{*}\) is the scaled stellar spin axis vector (\(\mathbf{S}^{*}=\mathbf{S}/\beta\)) and its magnitude is denoted by \(S^{*}\). Here, \(I_{3}\) and \(I_{1}\) stand for the principal moments of inertia of the central star, defined by

\[I_{3}-I_{1}=k_{q}M_{*}R_{*}^{2}\hat{\Omega}_{*}^{2},\]

where \(\hat{\Omega}_{*}=\Omega_{*}/\sqrt{\mathcal{G}M_{*}/R_{*}^{3}}\) is the dimensionless stellar spin frequency, \(R_{*}\) is the stellar radius and \(S=k_{*}M_{*}R_{*}^{2}\Omega_{*}\). For a solar-type star, it holds \(k_{q}=0.05\) and \(k_{*}=0.1\) (Storch & Lai, 2015). In equation (4), \(\mathbf{R}\) is the rotational vector of the planet's orbit relative to the invariant plane, given by (Kinoshita, 1993; Storch & Lai, 2015)

\[\mathbf{R}=\frac{\mathrm{d}h_{p}}{\mathrm{d}t}\hat{\mathbf{L}}_{b}+\frac{\mathrm{d}i_{p}}{\mathrm{d}t}\left(\frac{\hat{\mathbf{L}}_{b}\times\hat{\mathbf{L}}_{p}}{\sin i_{p}}\right)=\left[\begin{array}{c}\frac{\mathrm{d}i_{p}}{\mathrm{d}t}\\ \frac{\mathrm{d}h_{p}}{\mathrm{d}t}\sin i_{p}\\ \frac{\mathrm{d}h_{p}}{\mathrm{d}t}\cos i_{p}\end{array}\right], \tag{5}\]

and \(\mathbf{S}^{*}\) is the rotational angular momentum vector of the star measured in the rotating frame by

\[\mathbf{S}^{*}=S^{*}\left[\begin{array}{c}\sin\epsilon\sin\phi\\ -\sin\epsilon\cos\phi\\ \cos\epsilon\end{array}\right]. \tag{6}\]

Replacing equations (5) and (6) in equation (4), we can get the Hamiltonian as follows:

\[\mathcal{H}=-\frac{3\mathcal{C}_{1}}{2\mathcal{C}_{0}S^{*}G_{p}^{3}}\cos^{2}\epsilon-\sin\epsilon\sin\phi\frac{\mathrm{d}i_{p}}{\mathrm{d}t}-\left(\cos\epsilon\cos i_{p}-\sin\epsilon\sin i_{p}\cos\phi\right)\frac{\mathrm{d}h_{p}}{\mathrm{d}t}, \tag{7}\]

where the coefficient \(\mathcal{C}_{1}\) is given by

\[\mathcal{C}_{1}=\frac{\mathcal{G}m_{p}\mu^{3/2}\left(I_{3}-I_{1}\right)}{2a_{p}^{3/2}}.\]

Substituting \(\frac{\mathrm{d}i_{p}}{\mathrm{d}t}\) and \(\frac{\mathrm{d}h_{p}}{\mathrm{d}t}\) given by equation (2) into equation (7), one can obtain the explicit expression of the Hamiltonian for stellar spin in terms of the conjugate variables \((\phi,p=\cos\epsilon)\) as follows:

\[\begin{split}\mathcal{H}=&-\frac{3\mathcal{C}_{1}}{2\mathcal{C}_{0}S^{*}G_{p}^{3}}p^{2}+\frac{6H_{p}^{2}}{L_{p}^{2}G_{p}^{3}}\left[5L_{p}^{2}-3G_{p}^{2}-5\left(L_{p}^{2}-G_{p}^{2}\right)\cos 2g_{p}\right]p\\&+\frac{6H_{p}}{L_{p}^{2}G_{p}^{3}}\sqrt{\left(1-p^{2}\right)\left(G_{p}^{2}-H_{p}^{2}\right)}\left[5\left(L_{p}^{2}-G_{p}^{2}\right)\cos\left(2g_{p}-\phi\right)-\left(5L_{p}^{2}-3G_{p}^{2}\right)\cos\phi\right],\end{split} \tag{8}\]

where \((g_{p},G_{p})\) are known periodic functions of time (with the same period as the KL oscillation), determined by the KL Hamiltonian. As the Hamiltonian (8) is time dependent, it determines a non-autonomous dynamical model and it is no longer a constant during the long-term evolution. It should be mentioned that an equivalent expression of the spin Hamiltonian (without GR effect) can be found in Storch & Lai (2015) under a different notation system (see equation (30) in their work).
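For readers who wish to experiment numerically, a minimal sketch of equation (8) as a Python function is given below. It is our own illustrative transcription in the paper's dimensionless units, not code from the analysis pipeline of this work.

```python
import numpy as np

def spin_hamiltonian(phi, p, g_p, G_p, L_p, H_p, C0, C1, S_star):
    """Stellar-spin Hamiltonian of equation (8), with p = cos(epsilon).
    (g_p, G_p) are the instantaneous KL-cycle values; L_p and H_p are
    the motion integrals; C0, C1 and S_star are as defined in the text."""
    quad = -3.0 * C1 / (2.0 * C0 * S_star * G_p**3) * p**2
    prec = (6.0 * H_p**2 / (L_p**2 * G_p**3)
            * (5*L_p**2 - 3*G_p**2 - 5*(L_p**2 - G_p**2) * np.cos(2*g_p)) * p)
    tilt = (6.0 * H_p / (L_p**2 * G_p**3)
            * np.sqrt((1 - p**2) * (G_p**2 - H_p**2))
            * (5*(L_p**2 - G_p**2) * np.cos(2*g_p - phi)
               - (5*L_p**2 - 3*G_p**2) * np.cos(phi)))
    return quad + prec + tilt
```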
The equations of motion can be derived from the Hamiltonian canonical relations:

\[\begin{split}\dot{\phi}=&\frac{\partial\mathcal{H}}{\partial p},\quad\dot{p}=-\frac{\partial\mathcal{H}}{\partial\phi},\\ \dot{g}_{p}=&\frac{\partial\mathcal{H}_{\rm KL}}{\partial G_{p}},\quad\dot{G}_{p}=-\frac{\partial\mathcal{H}_{\rm KL}}{\partial g_{p}},\end{split} \tag{9}\]

where \(\mathcal{H}_{\rm KL}\) is given by equation (1) and \(\mathcal{H}\) is given by equation (8). In the long-term spin-orbit evolution, \(L_{p}\) and \(H_{p}\) remain unchanged (because \(l_{p}\) and \(h_{p}\) are cyclic variables). It should be once again mentioned that the evolution of \((g_{p},G_{p})\) is decoupled from the stellar rotation state \((\phi,p)\), while the evolution of \((\phi,p)\) is dependent on the orbital state \((g_{p},G_{p})\). The time histories \(g_{p}(t)\) and \(G_{p}(t)\) governed by \(\mathcal{H}_{\rm KL}\) are called KL cycles, which are periodic functions.

### Numerical integration

Unless otherwise stated, the following representative parameters of the system are adopted for the simulations performed in the entire work:

\[\begin{split}\hat{\Omega}_{\star}&=0.05,\quad k_{q}=0.05,\quad k_{\star}=0.1,\quad R_{\star}=1.0\,R_{\sun},\\ M_{\star}&=1.0\,M_{\sun},\quad m_{b}=1.0\,M_{\sun},\quad m_{p}=5.0\,M_{J},\\ a_{p}&=1.0\,{\rm AU},\quad a_{b}=200\,{\rm AU},\quad e_{b}=0,\end{split}\]

where \(M_{\sun}\) is the mass of the Sun, \(R_{\sun}\) is the radius of the Sun and \(M_{J}\) is the mass of Jupiter. Because of the small semi-major axis ratio \(\alpha=1/200\), the octupole-order and higher-order influences are significantly weak compared to the leading-order term. Moreover, in the configuration with the binary companion's orbit being circular (\(e_{b}=0\)), the octupole-order effect vanishes. Thus, the quadrupole-level Hamiltonian is adequate to describe the KL cycles. For convenience of computation, we take the mass of the Sun as the mass unit and the mean distance between the Sun and Earth (AU) as the unit of length. The time unit is taken to make the orbital period of the Earth around the Sun be \(2\pi\). Under this system of dimensionless units, the universal gravitation constant \(\mathcal{G}\) and the mean motion of the Earth are unitary, and the coefficients \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) are

\[\mathcal{C}_{0}=3.711935\times 10^{-11},\quad\mathcal{C}_{1}=6.499119\times 10^{-12}.\]

With these parameters, the critical inclination \(i_{c}\) with consideration of the GR effect is

\[i_{c}=42.9^{\circ}\ \rm{or}\ 137.1^{\circ}.\]

Thus, with consideration of the GR effect, Kozai-Lidov resonance happens in the interval \(i_{\rm max}\in[42.9^{\circ},137.1^{\circ}]\), which is narrower than the conventional interval \(i_{\rm max}\in[39.2^{\circ},140.8^{\circ}]\).

To see the variation of stellar obliquity, we numerically integrate the equations of motion represented by equation (9) over 500 KL periods and record the state of stellar spin every time the eccentricity reaches a maximum (this process is similar to producing a Poincare section defined by \(g_{p}=\pi/2\) and \(\dot{g}_{p}>0\)). The initial conditions are taken as \(\omega_{0}=\pi/2\), \(\epsilon_{0}=1.0^{\circ}\) (a nearly aligned configuration) and \(\phi_{0}=0^{\circ}\). Figure 3 shows the distribution of stellar obliquity as a function of the initial inclination \(i_{0}\) for the cases of \(e_{0}=0.1\) (in the left panel) and \(e_{0}=0.2\) (in the right panel). Storch et al.
(2014) and Storch & Lai (2015) referred to such a kind of distribution as a 'bifurcation' diagram. From Fig. 3, we can observe: (a) 'bifurcation' diagrams change with different initial eccentricities, (b) the misalignment angle can reach higher than \(90^{\circ}\) from an aligned configuration when the initial inclination is larger than \(\sim\)\(30^{\circ}\), (c) multiple periodic islands or spin-orbit resonances can be observed from the 'bifurcation' diagrams, (d) the evolution of stellar spin behaves regularly when the initial inclination \(i_{0}\) is smaller than \(\sim\)\(50^{\circ}\) and, within this interval, the maximum obliquity increases with initial inclination, and (e) both chaotic and regular behaviours of stellar spin can be found when the initial inclination is greater than \(\sim\)\(50^{\circ}\), meaning that in this interval the evolution of stellar obliquity is complex. It is mentioned that chaotic spin-orbit resonances are common in the solar system, for example for Saturn's satellite Hyperion (Wisdom et al., 1984), the terrestrial planets (Laskar & Robutel, 1993), etc.

Figure 2: Dynamical structure at the quadrupole-level approximation with GR effect when the maximum inclination \(i_{\rm max}\) is equal to \(60^{\circ}\) (_left panel_) and the distribution of KL centres in the \((i_{p},e_{p})\) panel for the model with and without GR effects (_right panel_). In the left panel, the dynamical separatrix, dividing the phase space into regions of circulation and libration, is shown by a red line, and the KL centre is marked by a black dot. In the right panel, the level curves of the motion integral \(H_{p}=G_{p}\cos i_{p}\) are presented as background and the level curve corresponding to \(i_{\rm max}=60^{\circ}\) is shown by a red dashed line.

To understand the complex dynamics of stellar spin driven by planets, in the following sections, we concentrate on two cases under the GR effect: (i) the planet is located at the KL fixed point and (ii) the planet is moving on KL librating cycles. The first case can approximate stellar spin dynamics within the configurations deeply inside KL resonance. The second case can be used to approximate stellar spin dynamics within more general configurations inside KL resonance.

## 3 Spin dynamics at KL fixed point

The location of the KL fixed point \((e_{k},i_{k},2g_{k}=\pi)\) can be determined by equation (3). Thus, the distribution of the KL fixed point is a function of \(i_{\rm max}\) (please see the right panel of Fig. 2 for details). When the planet is located at the KL fixed point, the orbital elements including the eccentricity, inclination and argument of pericentre remain stationary. In this special case, the Hamiltonian governing stellar spin given by equation (8) becomes

\[\begin{split}\mathcal{H}=&-\frac{3\mathcal{C}_{1}}{2\mathcal{C}_{0}S^{*}G_{k}^{3}}p^{2}+\frac{12H_{k}^{2}}{L_{p}^{2}G_{k}^{3}}\left(5L_{p}^{2}-4G_{k}^{2}\right)p\\&-\frac{12H_{k}}{L_{p}^{2}G_{k}^{3}}\sqrt{\left(1-p^{2}\right)\left(G_{k}^{2}-H_{k}^{2}\right)}\left(5L_{p}^{2}-4G_{k}^{2}\right)\cos\phi,\end{split} \tag{10}\]

where \(G_{k}=L_{p}\sqrt{1-e_{k}^{2}}\) and \(H_{k}=G_{k}\cos i_{k}\) are constant. In this special configuration, the Hamiltonian (10) determines a dynamical system with one degree of freedom. For such an ideal system, the global structures in the phase space can be explored by plotting level curves of the Hamiltonian, as shown in Fig. 4.
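A minimal numerical sketch of this construction is given below: it solves equation (3) along the level curve \(\sqrt{1-e_{p}^{2}}\cos i_{p}=\cos i_{\rm max}\) for the fixed-point eccentricity \(e_{k}\). The physical constants (Jupiter-to-solar mass ratio, speed of light in the dimensionless units) are our own reconstructions from Section 2.3 and should be treated as assumptions; with them one recovers \(i_{c}\approx 42.9^{\circ}\) and a fixed-point eccentricity close to the quoted \(0.534\) at \(i_{\rm max}=60^{\circ}\).

```python
import numpy as np
from scipy.optimize import brentq

# Paper's dimensionless units: G = 1, lengths in AU, masses in solar masses,
# time scaled so that Earth's orbital period is 2*pi. Values below are
# reconstructed from Section 2.3 and are assumptions up to rounding.
C0 = 3.711935e-11
m_star, m_p, a_p = 1.0, 5.0 * 9.543e-4, 1.0    # 5 Jupiter masses (assumed M_J/M_sun)
mu = m_star + m_p                              # gravitational parameter (G = 1)
beta = m_star * m_p / (m_star + m_p)           # reduced mass
c = 1.0065e4                                   # speed of light in these units (assumed)
rhs = mu**2 * beta / (12.0 * C0 * a_p**2 * c**2)   # right-hand side of equation (3)

# Critical inclination with GR: cos^2(i_c) = (3/5) * (1 - rhs)
i_c = np.degrees(np.arccos(np.sqrt(0.6 * (1.0 - rhs))))
print(f"critical inclination i_c = {i_c:.1f} deg")   # ~42.9 deg, as in Section 2.3

def kl_centre_eccentricity(i_max_deg):
    """Eccentricity of the KL fixed point (2*g_p = pi) on the level curve
    sqrt(1 - e^2) * cos(i_p) = cos(i_max), obtained from equation (3)."""
    cos2 = np.cos(np.radians(i_max_deg))**2
    f = lambda e: (1 - e**2)**1.5 - (5.0 / 3.0) * cos2 / np.sqrt(1 - e**2) - rhs
    return brentq(f, 1e-6, 0.999)

print(f"e_k at i_max = 60 deg: {kl_centre_eccentricity(60.0):.3f}")  # ~0.53-0.54
```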
From the phase portraits, it is observed that the structures depend upon the motion integral characterised by \(i_{\rm max}\). When the maximum inclinations are \(i_{\rm max}=50^{\circ}\), \(i_{\rm max}=60^{\circ}\) and \(i_{\rm max}=70^{\circ}\), the basic structures arising in the phase portraits are similar and they show that there are two libration centres: one is located at \(\phi=0^{\circ}\) with \(\epsilon\) greater than \(90^{\circ}\) and the other one is located at \(\phi=\pi\) with \(\epsilon\) smaller than \(90^{\circ}\). However, the case of \(i_{\rm max}=82^{\circ}\) is different. Besides the libration centres shown in the former cases, an additional libration centre appears at \(\phi=0^{\circ}\) with \(\epsilon\) smaller than \(90^{\circ}\). In all the panels shown in Fig. 4, the dynamical separatrices, shown by red lines, play the role of dividing the whole phase space into regions of libration and circulation. In addition, we can further observe from the phase portraits that configurations with \(\epsilon=0^{\circ}\) or \(\epsilon=180^{\circ}\) are marginally stable since they are located on or close to the separatrices. Under the dynamical model governed by the Hamiltonian (10), equilibrium points are determined by the following stationary conditions: \[\dot{\phi}=\frac{\partial\mathcal{H}}{\partial p}=0,\quad\dot{p}=-\frac{\partial\mathcal{H}}{\partial\phi}=0.\] Equilibrium points are commonly called "Cassini states" (Peale, 1969), but here they refer to the central star rather than to planets. The second condition shows that equilibrium points are located at \(\phi=0\) or \(\phi=\pi\). Stable equilibrium points correspond to libration centres. By analysing the phase structures shown in Fig. 4, we can further measure the resonant width by evaluating the distance between the separatrices at the libration centre. Figure 5 shows the distribution of libration centres by black lines and the resonant zones by shaded areas as functions of \(i_{\rm max}\). For libration centres at \(\phi_{\rm c}=\pi\), the resonant width first increases and then decreases with \(i_{\rm max}\). For libration centres at \(\phi_{\rm c}=0\), there are two branches: one is located in the retrograde region of \(\epsilon\) and the other one is in the prograde region of \(\epsilon\). The first branch exists in the entire domain of \(i_{\rm max}\), and its resonant width first increases and then decreases with \(i_{\rm max}\). The second branch begins to appear when the maximum inclination is greater than \(\sim\)80\({}^{\circ}\), and its resonant width first increases and then decreases with \(i_{\rm max}\). Figure 3: ‘Bifurcation’ diagrams of stellar spin-orbit misalignment angle as a function of the initial inclination \(i_{0}\) for the cases of \(e_{0}=0.1\) (_left panel_) and \(e_{0}=0.2\) (_right panel_). For producing the ‘bifurcation’ diagrams, the equations of motion represented by equation (9) are numerically integrated over 500 KL periods with initial conditions \(\omega_{0}=\pi/2\), \(\epsilon_{0}=1.0^{\circ}\) and \(\phi_{0}=0^{\circ}\), and the states of stellar spin are recorded every time the eccentricity reaches a maximum.
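The stationary conditions above reduce to a one-dimensional root search: fixing \(\phi=0\) or \(\phi=\pi\), the condition \(\partial\mathcal{H}/\partial p=0\) can be solved for \(p\). A minimal sketch, reusing the same placeholder constants as in the previous snippet:

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder constants, as in the previous snippet (illustrative only).
K, L_p = 1.0e-2, 1.0
e_k, i_k = 0.534, np.radians(48.0)
G_k = L_p * np.sqrt(1.0 - e_k**2)
H_k = G_k * np.cos(i_k)
fac = 5.0 * L_p**2 - 4.0 * G_k**2

def hamiltonian(phi, p):
    return (-K * p**2 / G_k**3
            + 12.0 * H_k**2 / (L_p**2 * G_k**3) * fac * p
            - 12.0 * H_k / (L_p**2 * G_k**3)
              * np.sqrt((1.0 - p**2) * (G_k**2 - H_k**2)) * fac * np.cos(phi))

def dH_dp(p, phi, h=1.0e-6):
    # Centred finite difference for the stationarity condition dH/dp = 0.
    return (hamiltonian(phi, p + h) - hamiltonian(phi, p - h)) / (2.0 * h)

for phi_c in (0.0, np.pi):
    grid = np.linspace(-0.99, 0.99, 2000)     # scan p = cos(epsilon)
    vals = dH_dp(grid, phi_c)
    for i in range(len(grid) - 1):
        if vals[i] * vals[i + 1] < 0.0:       # sign change brackets an equilibrium
            p_eq = brentq(dH_dp, grid[i], grid[i + 1], args=(phi_c,))
            print(f"phi = {phi_c:.2f}: epsilon = {np.degrees(np.arccos(p_eq)):.2f} deg")
```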
## 4 Spin dynamics at KL librating cycles In this section, spin dynamics of the central star are studied in configurations where planets are moving on KL librating cycles. It should be mentioned that the dynamics of stellar spin in configurations with planets located outside KL resonance, specified by extremely high \(i_{\rm max}\), have been investigated by Storch et al. (2014), Storch & Lai (2015) and Storch et al. (2017). The KL cycles they adopted are fixed and close to the KL separatrix. Thus, the influence of KL cycles of different sizes upon stellar spin dynamics is not clear. In addition, for the non-adiabatic case, Storch et al. (2017) considered the dynamical structure caused by the \(N=0\) Hamiltonian. However, the contribution of the higher-\(N\) Hamiltonian terms to the dynamical structures is not clear. In this work, we attempt to address these two points. Figure 4: Phase-space structures of stellar spin dynamics when the planet is located at the KL centre. Dynamical separatrices are shown in red lines. Here, the cases of \(i_{\rm max}=50^{\circ}\), \(i_{\rm max}=60^{\circ}\), \(i_{\rm max}=70^{\circ}\) and \(i_{\rm max}=82^{\circ}\) are taken into account. Figure 5: Resonant width as a function of \(i_{\rm max}\) for resonant centres at \(\phi_{c}=\pi\) (_left panel_) and \(\phi_{c}=0\) (_right panel_) when the planet is located at the KL centre. The existence of KL resonance requires that \(i_{\rm max}\) should be larger than \(42.9^{\circ}\). The black lines show the distribution of resonant centres, and shaded regions stand for the resonant zones. In practice, we assume that the orbit of the planet follows the KL cycles shown in Fig. 6. The motion integral is specified by \(i_{\rm max}=60^{\circ}\) and the KL centre is located at \((e_{p}=0.534,2\omega_{p}=\pi)\). These cycles are characterised by \(\Delta e=0.1\) (cycle 1), \(\Delta e=0.2\) (cycle 2), \(\Delta e=0.3\) (cycle 3) and \(\Delta e=0.4\) (cycle 4). Their eccentricity variations during one KL period are presented in the right panel of Fig. 6. It should be emphasised that these KL cycles are exactly periodic and they are not influenced by stellar rotation. In the following, we intend to study the spin dynamics of the central star under the configurations where planets move on these typical KL librating cycles. ### Approximate Hamiltonian To simplify the dynamical model, let us denote the coefficient arising in the Hamiltonian (8) by \(\alpha(t)\), \[\alpha\left(t\right)=\frac{3\mathcal{C}_{1}}{\mathcal{C}_{0}S^{*}G_{p}^{3}}.\] Similar to Storch & Lai (2015), a scaled time variable \(\tau\) is introduced by \[\tau(t)=\frac{n_{e}}{\bar{\alpha}}\int\limits_{0}^{t}\alpha(t^{\prime})\mathrm{d}t^{\prime}, \tag{11}\] where the averaged value of \(\alpha\) is calculated by \[\bar{\alpha}=\frac{n_{e}}{2\pi}\int\limits_{0}^{2\pi/n_{e}}\alpha(t)\mathrm{d}t.\] In equation (11), \(n_{e}\) is the angular frequency of the KL cycle. In \(\tau\) space, the KL cycles shown in Fig.
6 have periods of \(2\pi\) and the Hamiltonian (8) becomes \[\begin{split}\mathcal{H}=&\frac{\bar{\alpha}}{n_{e}}\left\{-\frac{1}{2}p^{2}+\frac{1}{\alpha}\frac{6H_{p}^{2}}{L_{p}^{2}G_{p}^{3}}\left[5L_{p}^{2}-3G_{p}^{2}-5\left(L_{p}^{2}-G_{p}^{2}\right)\right.\right.\\ &\times\left.\left.\cos 2g_{p}\right]p+\frac{1}{\alpha}\frac{6H_{p}}{L_{p}^{2}G_{p}^{3}}\sqrt{G_{p}^{2}-H_{p}^{2}}\sqrt{1-p^{2}}\\ &\times\left[\left(5\left(L_{p}^{2}-G_{p}^{2}\right)\cos 2g_{p}-\left(5L_{p}^{2}-3G_{p}^{2}\right)\right)\cos\phi\right.\\ &\left.\left.+5\left(L_{p}^{2}-G_{p}^{2}\right)\sin 2g_{p}\sin\phi\right]\right\}.\end{split} \tag{12}\] To simplify equation (12), we denote \[\begin{split} A(\tau)=&\frac{1}{\alpha(\tau)}\frac{6H_{p}^{2}}{L_{p}^{2}G_{p}^{3}}\left[5L_{p}^{2}-3G_{p}^{2}-5\left(L_{p}^{2}-G_{p}^{2}\right)\cos 2g_{p}\right],\\ B(\tau)=&\frac{1}{\alpha(\tau)}\frac{6H_{p}}{L_{p}^{2}G_{p}^{3}}\sqrt{G_{p}^{2}-H_{p}^{2}}\left[5\left(L_{p}^{2}-G_{p}^{2}\right)\cos 2g_{p}\right.\\ &\left.-\left(5L_{p}^{2}-3G_{p}^{2}\right)\right],\\ C(\tau)=&\frac{1}{\alpha(\tau)}\frac{30H_{p}}{L_{p}^{2}G_{p}^{3}}\sqrt{G_{p}^{2}-H_{p}^{2}}\left(L_{p}^{2}-G_{p}^{2}\right)\sin 2g_{p}.\end{split} \tag{13}\] Substituting equation (13) into the Hamiltonian (12), we obtain \[\begin{split}\mathcal{H}=&\frac{\bar{\alpha}}{n_{e}}\left[-\frac{1}{2}p^{2}+A(\tau)p\right.\\ &\left.+\sqrt{1-p^{2}}\left[B(\tau)\cos\phi+C(\tau)\sin\phi\right]\right].\end{split} \tag{14}\] The coefficients \(A(\tau)\), \(B(\tau)\) and \(C(\tau)\) are periodic and their periods are \(2\pi\) in \(\tau\) space, equal to the period of the KL cycle. Similar to Storch & Lai (2015), \(A(\tau)\), \(B(\tau)\) and \(C(\tau)\) can be decomposed as Fourier series (truncated at order \(N\)) as follows: \[\begin{split} A(\tau)=&\sum_{n=0}^{N}A_{n}\cos{(n\tau)},\\ B(\tau)=&\sum_{n=0}^{N}B_{n}\cos{(n\tau)},\\ C(\tau)=&\sum_{n=1}^{N}C_{n}\sin{(n\tau)},\end{split} \tag{15}\] where \(A_{n}\), \(B_{n}\) and \(C_{n}\) are coefficients depending on the 'shape' of the KL cycle. Figure 6: Representative KL cycles specified by \(\Delta e=0.1\) (red line, cycle 1), \(\Delta e=0.2\) (green line, cycle 2), \(\Delta e=0.3\) (blue line, cycle 3) and \(\Delta e=0.4\) (black line, cycle 4) with the motion integral specified by \(i_{\rm max}=60^{\circ}\). These cycles are periodic and they remain unchanged during the evolution of stellar spin. In the long-term spin-orbit evolution, the planet is assumed to move along these typical KL cycles. In the left panel, the starting points (i.e., the points at the initial moment \(\tau=0\)) are marked by pink stars, and the black star corresponds to the location of the KL centre at \((e_{p}=0.534,2\omega_{p}=\pi)\). In the right panel, the time histories of eccentricity during one KL period are shown for these representative KL cycles (note that the KL periods of these cycles are different). Figure 7 shows the curves of \(A(\tau)\), \(B(\tau)\) and \(C(\tau)\) produced from the original expression (13) and the Fourier decomposition (15) up to order 6, for KL cycle 1 in the left panel and KL cycle 4 in the right panel. One can see that the Fourier decomposition reproduces the curves of \(A\), \(B\) and \(C\) with adequate accuracy.
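The coefficients in equation (15) are obtained by standard projections once \(A(\tau)\), \(B(\tau)\) and \(C(\tau)\) have been sampled over one period in \(\tau\). A minimal sketch (the synthetic test signal is a placeholder, not a computed KL curve):

```python
import numpy as np

def cosine_coeffs(samples, N):
    """Coefficients A_n of the truncated series sum_{n=0}^{N} A_n cos(n tau),
    from samples on a uniform grid over one period [0, 2*pi)."""
    M = len(samples)
    tau = 2.0 * np.pi * np.arange(M) / M
    coeffs = [np.mean(samples)]                      # A_0 is the mean value
    for n in range(1, N + 1):
        coeffs.append(2.0 * np.mean(samples * np.cos(n * tau)))
    return np.array(coeffs)

def sine_coeffs(samples, N):
    """Coefficients C_n of the truncated series sum_{n=1}^{N} C_n sin(n tau)."""
    M = len(samples)
    tau = 2.0 * np.pi * np.arange(M) / M
    return np.array([2.0 * np.mean(samples * np.sin(n * tau))
                     for n in range(1, N + 1)])

# Check on a synthetic even signal standing in for A(tau):
tau = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
A_samples = 0.3 + 0.2 * np.cos(tau) + 0.05 * np.cos(2.0 * tau)
print(cosine_coeffs(A_samples, N=6))   # ~[0.3, 0.2, 0.05, 0, 0, 0, 0]
```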
By replacing the Fourier decomposition (15) in equation (14), the Hamiltonian becomes \[\begin{split}\mathcal{H}=&\frac{\bar{\alpha}}{n_{e}}\left(-\frac{1}{2}p^{2}+A_{0}p+B_{0}\sqrt{1-p^{2}}\cos{\phi}\right)+\frac{\bar{\alpha}}{n_{e}}\\ &\times\left\{p\sum_{n\geq 1}A_{n}\cos{(n\tau)}+\frac{1}{2}\sqrt{1-p^{2}}\sum_{n\geq 1}[(B_{n}+C_{n})\right.\\ &\left.\times\cos{(\phi-n\tau)}+(B_{n}-C_{n})\cos{(\phi+n\tau)}]\right\}.\end{split} \tag{16}\] The Hamiltonian (16) is dependent on \(\tau\), showing that it is non-autonomous. An equivalent Hamiltonian can be found in Storch & Lai (2015) under a different notation system (see equation (44) in their study). In order to study high-order as well as secondary spin-orbit resonances, the Hamiltonian system is augmented into a 2-DOF one by introducing an action variable \(T\) conjugate to \(\tau\) as follows (here we still use \(\mathcal{H}\) to denote the augmented Hamiltonian): \[\begin{split}\mathcal{H}=& T+\frac{\bar{\alpha}}{n_{e}}\left(-\frac{1}{2}p^{2}+A_{0}p+B_{0}\sqrt{1-p^{2}}\cos{\phi}\right)\\ &+\frac{\bar{\alpha}}{n_{e}}\left\{p\sum_{n\geq 1}A_{n}\cos{(n\tau)}+\frac{1}{2}\sqrt{1-p^{2}}\times\right.\\ &\left.\sum_{n\geq 1}\left[(B_{n}+C_{n})\cos{(\phi-n\tau)}+(B_{n}-C_{n})\cos{(\phi+n\tau)}\right]\right\}.\end{split} \tag{17}\] Hamiltonian canonical relations lead to the equations of motion, expressed by \[\begin{split}\phi^{\prime}=&\frac{\partial\mathcal{H}}{\partial p},\quad p^{\prime}=-\frac{\partial\mathcal{H}}{\partial\phi},\\ \tau^{\prime}=&\frac{\partial\mathcal{H}}{\partial T}=1.0,\quad T^{\prime}=-\frac{\partial\mathcal{H}}{\partial\tau},\end{split} \tag{18}\] where \(x^{\prime}\) denotes the derivative of \(x\) with respect to \(\tau\). As for the equations of motion, equation (9) is given in time space and equation (18) is given in \(\tau\) space; both versions are equivalent. For convenience, we refer to equation (9) as the full model and to equation (18) as the approximated model. In Fig. 8, the equations of motion under the full model and the approximated models up to different orders are numerically integrated over 50 KL periods, starting from the same initial condition. As expected, an approximated model of higher order approximates the full model better. Notably, the 6th-order model works very well. In the following simulations, we will discuss secular dynamics of stellar spin under the 6th-order approximate model, governed by the Hamiltonian (17). In particular, the numerical technique of Poincaré sections is employed in Section 4.2 to explore the global dynamical structures in the phase space. Then, perturbative treatments are adopted in Sections 4.3 and 4.4 to analyse the dynamical mechanism causing the complex structures of spin. Figure 7: Accurate and approximate curves of \(A(\tau)\), \(B(\tau)\) and \(C(\tau)\) during one KL period for cycle 1 (_left panel_) and cycle 4 (_right panel_). The approximate curves are produced from the Fourier fitting up to order \(N=6\). ### Poincaré sections The Hamiltonian (17) determines a two-degree-of-freedom dynamical model. For such a 2-DOF model, the Poincaré section is a powerful tool to investigate global structures in phase space. To this end, the following Poincaré section is defined: \[g_{p}=\pi/2,\quad\dot{g}_{p}>0. \tag{19}\] We can see that, at the points of section, the eccentricity reaches its maximum during the long-term evolution (see Fig. 6).
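In practice, the section can be sampled by integrating the truncated model and storing \((\phi,p)\) once per KL period (in \(\tau\) space the section is reached at integer multiples of \(2\pi\), as noted below). A minimal sketch based on the Hamiltonian (14) with the series (15); the coefficients and the prefactor \(\bar{\alpha}/n_{e}\) are placeholders to be replaced by values from the Fourier fit:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.5                         # placeholder for alpha_bar / n_e
A = np.array([0.30, 0.10, 0.02])  # placeholder Fourier coefficients A_0, A_1, A_2
B = np.array([-0.20, 0.05, 0.01])
C = np.array([0.00, 0.04, 0.01])  # C_0 is unused: the C-series starts at n = 1

def coeffs(tau):
    n = np.arange(len(A))
    return (np.sum(A * np.cos(n * tau)),
            np.sum(B * np.cos(n * tau)),
            np.sum(C * np.sin(n * tau)))

def rhs(tau, y):
    phi, p = y
    a, b, c = coeffs(tau)
    root = np.sqrt(max(1.0 - p**2, 1e-12))
    # phi' = dH/dp and p' = -dH/dphi for
    # H = eps * (-p^2/2 + A p + sqrt(1 - p^2) * (B cos(phi) + C sin(phi)))
    dphi = eps * (-p + a - p / root * (b * np.cos(phi) + c * np.sin(phi)))
    dp = eps * root * (b * np.sin(phi) - c * np.cos(phi))
    return [dphi, dp]

n_periods = 500
sol = solve_ivp(rhs, [0.0, 2.0 * np.pi * n_periods],
                [0.0, np.cos(np.radians(1.0))],      # phi_0 = 0, epsilon_0 = 1 deg
                t_eval=2.0 * np.pi * np.arange(n_periods),
                rtol=1e-9, atol=1e-9)
phi_sec = np.mod(sol.y[0], 2.0 * np.pi)              # section points (phi, p)
p_sec = sol.y[1]
```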
Figure 8: Evolution of stellar spin-orbit states propagated under the full spin model (black line) and under the approximated model up to order 2 (green line), 4 (blue line), 6 (red line). Figure 9: Poincaré sections defined by \(g_{p}=\pi/2\) and \(\dot{g}_{p}>0\) when the planet is moving on KL cycle 1 (_upper-left panel_), cycle 2 (_upper-right panel_), cycle 3 (_bottom-left panel_) and cycle 4 (_bottom-right panel_). The cyan dashed lines stand for the nominal locations of 2:1 spin-orbit (high-order and/or secondary) resonances (see the descriptions in Sect. 4.3). In \(\tau\) space, it is much easier to produce Poincaré sections by recording spin states at the moments when \[\text{mod}\;(\tau,2\pi)=0.\] Figure 9 shows Poincaré sections for the configurations in which planets are moving on the KL cycles shown in Fig. 6. In the Poincaré sections, continuous lines stand for regular motions and scattered points correspond to chaotic trajectories. In particular, continuous lines inside an island stand for quasi-periodic trajectories, and the centre of an island (the resonant centre) corresponds to a periodic orbit. Thanks to Poincaré sections, we can understand the global structures of stellar spin in configurations with planets moving on different KL cycles. In the case of \(\Delta e=0.1\) (cycle 1), the whole phase space is mainly filled by regular orbits. The chaotic layers existing between different islands of resonance are too narrow to observe. There are two islands centred at \(\phi=0\) and there is a primary island centred at \(\phi=\pi\). Inside the primary resonance, islands of secondary resonances appear, and there are three sub-islands inside the primary island. The Poincaré sections in the cases of \(\Delta e=0.2\) (cycle 2) and \(\Delta e=0.3\) (cycle 3) hold similar structures. However, the chaotic layer becomes wider with increasing \(\Delta e\). This is because the perturbation becomes stronger when the KL cycle has a larger amplitude. In the case of \(\Delta e=0.3\) (cycle 3), multiple periodic islands corresponding to high-order spin-orbit resonances can be found. The case of \(\Delta e=0.4\) (cycle 4) is different. In this case, the phase space is filled mainly by a chaotic sea (see the last panel of Fig. 9). However, the primary island of resonance centred at \(\phi=\pi\) still exists. In the chaotic sea, periodic islands corresponding to high-order spin-orbit resonances can be found. About the complex dynamics of stellar rotation, we may ask: what is the dynamical mechanism governing the basic structures arising in the Poincaré sections? In the following, we take the Hamiltonian perturbation theory developed by Henrard & Lemaitre (1986) and Henrard (1990) to deal with this problem and attempt to provide some preliminary understanding. ### Dynamics under the unperturbed Hamiltonian model According to equation (17), we can see that the Hamiltonian can be divided into two parts from the viewpoint of perturbative treatment: \[\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{1}, \tag{20}\] where the unperturbed Hamiltonian is \[\mathcal{H}_{0}=T+\frac{\bar{\alpha}}{n_{e}}\left(-\frac{1}{2}p^{2}+A_{0}p+B_{0}\sqrt{1-p^{2}}\cos\phi\right) \tag{21}\] and the part of perturbation is given by \[\mathcal{H}_{1}=\frac{\bar{\alpha}}{n_{e}}\left\{p\sum_{n\geq 1}A_{n}\cos\left(n\tau\right)+\frac{1}{2}\sqrt{1-p^{2}}\times\right.\] \[\left.\sum_{n\geq 1}\left[\left(B_{n}+C_{n}\right)\cos\left(\phi-n\tau\right)+\left(B_{n}-C_{n}\right)\cos\left(\phi+n\tau\right)\right]\right\}. \tag{22}\] The unperturbed Hamiltonian (21) is equivalent to the \(N=0\) Hamiltonian given in Storch et al. (2017) (see equation (29) in their study). In this section, we will discuss the spin dynamics described by the unperturbed Hamiltonian \(\mathcal{H}_{0}\). Under the unperturbed Hamiltonian model, \(\tau\) is absent, showing that its conjugate momentum \(T\) becomes an integral of motion. Similar to the discussions made in Sect. 3, the global structures in the phase space can be explored by plotting level curves of the unperturbed Hamiltonian (i.e., producing phase portraits). In Fig. 10, dynamical structures under the unperturbed Hamiltonian model are presented for the cases of KL cycles 1, 2, 3 and 4. It is observed that these four phase portraits have two islands of libration: one is centred at \(\phi=\pi\) and the other one is centred at \(\phi=0\). The differences arising in the structures of these phase portraits are very small, showing that the basic dynamics governed by the unperturbed Hamiltonian is not sensitive to the amplitude of KL cycles. The dynamical separatrices are shown by red lines. Also, we can see that the configuration with \(\epsilon=0\) or \(\pi\) is marginally stable because it is located close to the separatrix. To study spin-orbit resonances, the following set of action-angle variables is introduced (Morbidelli, 2002): \[\phi^{*}=\phi-\rho_{\phi}\left(\phi^{*},p^{*}\right)=\frac{2\pi}{T_{\phi}}\tau,\quad p^{*}=\frac{1}{2\pi}\oint p\mathrm{d}\phi \tag{23}\] for the case inside the island of libration, and \[\phi^{*}=\phi-\rho_{\phi}\left(\phi^{*},p^{*}\right)=\frac{2\pi}{T_{\phi}}\tau,\quad p^{*}=\frac{1}{2\pi}\int\limits_{0}^{2\pi}p\mathrm{d}\phi \tag{24}\] for the case outside the island of libration. \(T_{\phi}\) stands for the period of \(\phi\) under the unperturbed Hamiltonian model. \(\rho_{\phi}\) is a periodic function of \(\tau\) and its period is equal to \(T_{\phi}\). \(p^{*}\) is referred to as the Arnold action (Morbidelli, 2002), and \(\phi^{*}\) is a linear function of \(\tau\). The transformations (23) and (24) are canonical with the generating function \[S\left(\phi,p^{*}\right)=\int p\left[\mathcal{H}_{0}\left(p^{*}\right),\phi\right]\mathrm{d}\phi.\] After the transformation, the unperturbed Hamiltonian is independent of the angle variable \(\phi^{*}\), indicating that its conjugate momentum \(p^{*}\) becomes an integral of motion under the unperturbed Hamiltonian model. It means the unperturbed Hamiltonian is only dependent on the action variables, \[\mathcal{H}_{0}\left(T;\phi,p\right)\rightarrow\mathcal{H}_{0}\left(T;p^{*}\right). \tag{25}\] As a result, the fundamental frequencies can be obtained by \[\left(\phi^{*}\right)^{\prime}=\frac{\partial\mathcal{H}_{0}\left(T;p^{*}\right)}{\partial p^{*}},\quad\tau^{\prime}=\frac{\partial\mathcal{H}_{0}\left(T;p^{*}\right)}{\partial T}=1.0. \tag{26}\] Under the unperturbed Hamiltonian model, the normalised spin frequency \(\left(\phi^{*}\right)^{\prime}\) changes in the interval \([0.446,0.601]\) for KL cycle 1, in the interval \([0.452,0.627]\) for KL cycle 2, in the interval \([0.465,0.670]\) for KL cycle 3, and in the interval \([0.494,0.741]\) for KL cycle 4. We know that the normalised frequency of KL cycles is equal to 1. Thus, for the considered configurations, the spin frequency is always smaller (but not much smaller) than the frequency of KL oscillation. According to the classifications discussed in Storch et al. (2014), we find that the dynamical systems considered in this work lie in regime I ("non-adiabatic"), where the spin axis \(\hat{\mathbf{S}}\) is expected to precess around the companion's orbital axis \(\hat{\mathbf{L}}_{b}\) effectively. It is remarked that Storch & Lai (2015) and Storch et al. (2017) studied the dynamics of stellar spin under the configurations with planets moving on KL circulating cycles in the adiabatic and non-adiabatic regimes, respectively. In the adiabatic regime, it is shown that the dynamical structures in the time-independent Hamiltonian model are nearly symmetric with respect to \(p=0\) (see Figs. 4 and 6 in Storch & Lai, 2015). However, the results are totally different in the non-adiabatic regime (see Fig. 5 in Storch et al. 2017 and Fig. 11 in the current work). Spin-orbit resonance happens if the following condition is satisfied: \[k_{1}\left(\phi^{*}\right)^{\prime}-k_{2}\tau^{\prime}=k_{1}\left(\phi^{*}\right)^{\prime}-k_{2}=0,\quad k_{1}\in\mathbb{N},\;k_{2}\in\mathbb{Z}. \tag{27}\] The associated critical argument of the \(k_{1}\):\(k_{2}\) resonance is \(\sigma=k_{1}\phi^{*}-k_{2}\tau\). Considering the range of the spin frequency \((\phi^{*})^{\prime}\), the 2:1 spin-orbit resonance may happen in the phase space. If the 2:1 spin-orbit resonance takes place inside the primary resonance, we call it the 2:1 secondary resonance; otherwise we call it the 2:1 high-order resonance. The nominal locations of the 2:1 spin-orbit resonance (\(k_{1}=2,k_{2}=1\)) are shown in Fig. 9 and Fig. 10 by cyan dashed lines. It is observed that both the 2:1 high-order and secondary resonances can happen in the cases of \(\Delta e=0.1,0.2,0.3\), while only the 2:1 high-order resonance can take place in the case of \(\Delta e=0.4\). In the following subsection, we will further study the 2:1 high-order (or secondary) resonance in detail by taking advantage of the canonical perturbation theory developed by Henrard & Lemaitre (1986) and Henrard (1990). This theory has been adopted to study orbital flips induced by the eccentric KL mechanism in restricted hierarchical planetary systems (Lei & Gong, 2022) and in non-restricted hierarchical planetary systems (Lei & Huang, 2022). ### Perturbative treatments Through the canonical transformations shown by (23) or (24), the Hamiltonian (17) can be described as a function of the new set of variables \(\left(\tau,\phi^{*},T,p^{*}\right)\) by \[\mathcal{H}\left(\tau,\phi^{*},T,p^{*}\right)=\mathcal{H}_{0}\left(T,p^{*}\right)+\mathcal{H}_{1}\left(\tau,\phi^{*},T,p^{*}\right).
\tag{28}\] It is noted that it is very difficult to provide the explicit expression of \(\mathcal{H}\) in equation (28) because the transformation between \((\phi,p)\) and \((\phi^{*},p^{*})\) given by equation (23) or (24) is achieved by means of numerical quadrature. Usually, \(\mathcal{H}_{1}\) is much smaller than \(\mathcal{H}_{0}\). From the viewpoint of perturbative treatment, the term \(\mathcal{H}_{0}\) determines the unperturbed dynamical model (or kernel Hamiltonian model) and the term \(\mathcal{H}_{1}\) plays the role of a perturbation to the unperturbed Hamiltonian model. Figure 10: Dynamical structures of the unperturbed Hamiltonian model when the planet is moving on KL cycles 1, 2, 3 and 4. The red lines represent the dynamical separatrices, dividing the whole phase space into regions of circulation and of libration. The cyan dashed lines stand for the nominal locations of 2:1 spin-orbit (high-order and/or secondary) resonances. In the case of cycle 4, both 2:1 resonances occur outside the primary resonance (i.e., no 2:1 secondary resonance). Storch et al. (2017) studied the spin dynamics in the non-adiabatic and adiabatic regimes under the unperturbed Hamiltonian model specified by \(\mathcal{H}_{0}\). However, a more formal canonical perturbation theory considering the contribution of the higher-order Hamiltonian is absent in their work. To study the 2:1 spin-orbit resonance, let us introduce the following linear transformation: \[\begin{split}\sigma_{1}&=\phi^{*}-\frac{1}{2}\tau,\quad\Sigma_{1}=p^{*},\\ \sigma_{2}&=\tau,\qquad\Sigma_{2}=T+\frac{1}{2}p^{*}.\end{split} \tag{29}\] Here, \(\sigma_{1}\) is the resonant argument of the 2:1 spin-orbit resonance. The transformation (29) is canonical with the generating function \[S\left(\tau,\phi^{*},\Sigma_{1},\Sigma_{2}\right)=\phi^{*}\Sigma_{1}+\tau\left(\Sigma_{2}-\frac{1}{2}\Sigma_{1}\right).\] Using the transformation (29), the Hamiltonian (28) can be organised as \[\mathcal{H}\left(\sigma_{1},\sigma_{2},\Sigma_{1},\Sigma_{2}\right)=\mathcal{H}_{0}\left(\Sigma_{1},\Sigma_{2}\right)+\mathcal{H}_{1}\left(\sigma_{1},\sigma_{2},\Sigma_{1},\Sigma_{2}\right), \tag{30}\] and the equations of motion can be obtained by Hamiltonian canonical relations: \[\begin{split}\frac{\mathrm{d}\sigma_{1}}{\mathrm{d}\tau}&=\frac{\partial\mathcal{H}}{\partial\Sigma_{1}},\quad\frac{\mathrm{d}\Sigma_{1}}{\mathrm{d}\tau}=-\frac{\partial\mathcal{H}}{\partial\sigma_{1}},\\ \frac{\mathrm{d}\sigma_{2}}{\mathrm{d}\tau}&=\frac{\partial\mathcal{H}}{\partial\Sigma_{2}},\quad\frac{\mathrm{d}\Sigma_{2}}{\mathrm{d}\tau}=-\frac{\partial\mathcal{H}}{\partial\sigma_{2}}.\end{split} \tag{31}\] In particular, when the configuration is inside the 2:1 spin-orbit resonance, the critical argument \(\sigma_{1}\) becomes a long-period variable, and the angle \(\sigma_{2}\) is a fast angular variable. Thus, this is a typical separable Hamiltonian model. According to mean element theory (Kozai, 1959), the terms in the Hamiltonian (30) can be classified into secular, long-period and short-period terms. The long-term evolution of stellar spin is dominated by the secular and long-period terms in the Hamiltonian. As a result, the short-period effects can be removed.
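Numerically, removing the short-period terms amounts to averaging the Hamiltonian over the fast angle at fixed values of the slow variables, the operation formalised in equation (32) below. A minimal sketch, assuming a callable `H` for the Hamiltonian (30) in the resonant variables (the toy Hamiltonian is for illustration only):

```python
import numpy as np

def resonant_hamiltonian(H, sigma1, Sigma1, Sigma2, n_nodes=256):
    """Average H over the fast angle sigma2 in [0, 4*pi) at fixed slow variables.
    This filters out the short-period terms and leaves a 1-DOF resonant model."""
    sigma2 = np.linspace(0.0, 4.0 * np.pi, n_nodes, endpoint=False)
    return np.mean(H(sigma1, sigma2, Sigma1, Sigma2))

# Example with a toy Hamiltonian (for illustration only):
H = lambda s1, s2, S1, S2: (-0.5 * S1**2 + 0.1 * np.cos(s1)
                            + 0.01 * np.cos(s1 - 2.0 * s2))
print(resonant_hamiltonian(H, sigma1=0.3, Sigma1=0.2, Sigma2=1.0))
```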
In practice, we adopt the averaging technique (the lowest-order perturbation theory) to filter out the short-period effects in order to formulate the resonant Hamiltonian as follows: \[\begin{split}\mathcal{H}^{*}&=\frac{1}{4\pi}\int\limits_{0}^{4\pi}\mathcal{H}\left(\sigma_{1},\sigma_{2},\Sigma_{1},\Sigma_{2}\right)\mathrm{d}\sigma_{2}\\ &=\mathcal{H}_{0}\left(\Sigma_{1},\Sigma_{2}\right)+\frac{1}{4\pi}\int\limits_{0}^{4\pi}\mathcal{H}_{1}\left(\sigma_{1},\sigma_{2},\Sigma_{1},\Sigma_{2}\right)\mathrm{d}\sigma_{2}.\end{split} \tag{32}\] The angular variable \(\sigma_{2}\) is absent from the resonant Hamiltonian \(\mathcal{H}^{*}\); thus its conjugate momentum \(\Sigma_{2}\) becomes a motion integral under the resonant model, meaning that the resonant model is of one degree of freedom. Under the resonant model, the dynamical structures can be explored by taking advantage of phase portraits (i.e., level curves of the resonant Hamiltonian). When the motion integral \(\Sigma_{2}=T+\frac{1}{2}p^{*}\) is given, the dynamical structures in the \((\sigma_{1},\Sigma_{1})\) plane can be obtained. For convenience, we project the phase portraits shown in the \((\sigma_{1},\Sigma_{1})\) plane onto the \((\phi,\cos\epsilon)\) plane. Here \((\phi,\cos\epsilon)\) corresponds to the spin state of the star when the eccentricity of the planet reaches a maximum, corresponding to the points on the Poincaré section defined by equation (19). By doing so, we can compare the phase portraits and the Poincaré surfaces of section directly. The main results of this work are presented in Fig. 11 for stellar spin dynamics under the configurations with planets moving on KL cycles 1-4. In the left panels, phase portraits under the resonant model are provided and, in the right panels, the associated Poincaré sections are shown. It can be observed that the analytical structures in the phase portraits agree well with the numerical structures in the Poincaré sections, supporting the conclusion that the complex dynamics of stellar spin are governed by the primary resonance under the unperturbed Hamiltonian model in combination with the 2:1 (high-order and/or secondary) spin-orbit resonances. The red lines shown in the left panels of Fig. 11 stand for the dynamical separatrices, which divide the entire phase space into regions of libration and circulation. For convenience of comparison, the dynamical separatrices are also shown on the Poincaré sections. It is observed that the dynamical separatrices provide good boundaries for the islands of libration in the Poincaré sections. In addition, chaotic layers are distributed around the dynamical separatrices. It is remarked that, for the cases of cycles 3 and 4, libration islands caused by higher-order spin-orbit resonances can be found. In addition, wider chaotic layers can be observed around the dynamical separatrices due to the stronger perturbation (compared to the cases of cycles 1 and 2). However, the main structures arising in the Poincaré sections can be well understood with the aid of resonant dynamics. Figure 11: Analytical structures in the phase portraits under the resonant model (_left-column panels_) and the related numerical structures in the Poincaré sections (_right-column panels_) for secular dynamics of stellar spin under the configurations with planets moving on KL cycles 1–4. The red lines in all panels are dynamical separatrices, which correspond to the level curves of the resonant Hamiltonian passing through saddle points. ## 5 Conclusions In this study, secular dynamics of stellar spin are investigated by means of numerical and analytical approaches under the hierarchical configuration where the planet is undergoing KL libration driven by a distant and inclined binary companion. Regarding the dynamical model, it is assumed that the orbital evolution of the bodies involved is independent of the stellar rotation (i.e., the back-reaction effect is ignored). The planet is assumed to move on KL librating cycles with zero and nonzero amplitudes (note that a KL cycle with zero amplitude corresponds to the KL fixed point). The zero-amplitude case can be used to approximate those dynamical configurations which are deeply inside KL resonance. The nonzero-amplitude cases can approximate more general configurations with planets moving on KL librating cycles. When the planet is assumed at the KL fixed point, the Hamiltonian governing stellar spin determines a one-degree-of-freedom dynamical system. The phase-space structures are revealed by plotting level curves of the Hamiltonian. In particular, when the maximum inclination \(i_{\mathrm{max}}\) is smaller than \(\sim\)80\({}^{\circ}\), phase portraits of stellar spin exhibit similar structures: there are two islands of libration, one centred at \(\phi=\pi\) with \(\epsilon<90^{\circ}\) and the other centred at \(\phi=0\) with \(\epsilon>90^{\circ}\). However, when the maximum inclination \(i_{\rm max}\) is greater than \(\sim\)80\({}^{\circ}\), besides the islands of libration arising in the former case, an additional island of libration centred at \(\phi=0\) with \(\epsilon<90^{\circ}\) appears. It is found that the resonant width of stellar spin first increases and then decreases with \(i_{\rm max}\) for all three branches of libration. When the planet is assumed on nonzero-amplitude KL cycles, the approximate Hamiltonian determines a 1.5-degree-of-freedom dynamical model. In order to study spin-orbit resonances, the Hamiltonian is further augmented into a two-degree-of-freedom system by introducing an action conjugate to the time-like variable \(\tau\). For such a 2-DOF Hamiltonian model, the technique of Poincaré sections is applied and the global structures in the phase space are revealed. There are complex structures arising in the Poincaré sections: regular and chaotic behaviours of stellar spin can be observed. In order to understand the basic structures, the perturbative treatments developed by Henrard & Lemaitre (1986) are adopted. In particular, the spin Hamiltonian is divided into two parts: the unperturbed Hamiltonian and the part of perturbation. In terms of magnitude, the perturbation is much smaller than the unperturbed part. The unperturbed Hamiltonian specifies an integrable dynamical model, where the global dynamical structures can be explored by means of phase portraits. By analysing the distribution of the unperturbed fundamental frequencies, it is found that 2:1 high-order and/or secondary spin-orbit resonances happen in the phase space. To study the 2:1 spin-orbit resonances, we introduce a canonical transformation, which makes the Hamiltonian model separable in terms of fundamental frequencies. Thus, it is possible to formulate the resonant Hamiltonian by performing an average of the Hamiltonian over the period of the fast degree of freedom. The resulting resonant model is a one-degree-of-freedom Hamiltonian system, where the global dynamical structures can be explored by analysing phase portraits. Our main results are provided in Fig. 11. It is found that the analytical structures in the phase portraits agree well with the numerical structures arising in the Poincaré sections, indicating that the complex dynamics of stellar spin in the phase space are dominated by the primary resonance under the unperturbed Hamiltonian model in combination with the 2:1 (high-order and/or secondary) spin-orbit resonances. The dynamical separatrices determined under the resonant model provide good boundaries for the islands of libration. Chaotic layers are distributed around the dynamical separatrices, and they become wider when the amplitude of the KL cycle is larger. Finally, it is observed that the configurations with \(\epsilon=0\) and \(\epsilon=\pi\) are marginally stable because they are close to the separatrices. ## Acknowledgements Hanlun Lei wishes to thank an anonymous reviewer for helpful suggestions that improved the quality of this manuscript. This work is supported by the National Natural Science Foundation of China (Nos. 12073011, 12073019 and 12233003) and the National Key R&D Program of China (No. 2019YFA0706601). ## Data availability The analysis and codes are available upon request.
2305.04077
Bilinear Bochner-Riesz means for convex domains and Kakeya Maximal function
In this paper we introduce bilinear Bochner-Riesz means associated with convex domains in the plane $\mathbb R^2$ and study their $L^p-$boundedness properties for a wide range of exponents. One of the important aspects of our proof involves the use of bilinear Kakeya maximal function in the context of bilinear Bochner-Riesz problem. This amounts to establishing suitable $L^p-$estimates for the latter. We also point out some natural connections between bilinear Kakeya maximal function and Lacey's bilinear maximal function.
Ankit Bhojak, Surjeet Singh Choudhary, Saurabh Shrivastava
2023-05-06T15:49:45Z
http://arxiv.org/abs/2305.04077v1
# Bilinear Bochner-Riesz means for convex domains and Kakeya maximal function ###### Abstract. In this paper we introduce bilinear Bochner-Riesz means associated with convex domains in the plane \(\mathbb{R}^{2}\) and study their \(L^{p}-\)boundedness properties for a wide range of exponents. One of the important aspects of our proof involves the use of bilinear Kakeya maximal function in the context of bilinear Bochner-Riesz problem. This amounts to establishing suitable \(L^{p}-\)estimates for the latter. We also point out some natural connections between bilinear Kakeya maximal function and Lacey's bilinear maximal function. 2010 Mathematics Subject Classification: Primary 42B15, 42B25 ###### Contents * 1 Introduction * 2 Main results * 3 Basic framework for Theorem 2.1 * 4 Auxiliary results for Theorem 2.1 * 5 Proof of Theorem 2.1: Bilinear Bochner-Riesz means * 6 Proof of Theorem 2.4: Fixed scale bilinear Kakeya maximal function * 7 Proof of Theorem 2.5: Bilinear Kakeya maximal function * 8 Proof of Theorem 2.7: Vector-valued extension of bilinear Kakeya maximal function * 9 Examples for sharpness of constants * 10 Further discussions ## 1. Introduction The study of bilinear Bochner-Riesz means has become an active area of research in harmonic analysis in recent years. The bilinear Bochner-Riesz mean of index \(\lambda\geqslant 0\) is the bilinear multiplier operator defined by \[\mathcal{B}^{\lambda}(f,g)(x):=\int_{\mathbb{R}^{2n}}(1-|\xi|^{2}-|\eta|^{2})^{\lambda}_{+}\hat{f}(\xi)\hat{g}(\eta)e^{2\pi ix\cdot(\xi+\eta)}\ d\xi d\eta,\] where \(f,g\in\mathcal{S}(\mathbb{R}^{n})-\) the Schwartz class on \(\mathbb{R}^{n}\), and \((1-|\xi|^{2}-|\eta|^{2})_{+}^{\lambda}=(1-|\xi|^{2}-|\eta|^{2})^{\lambda}\chi_{{}_{D}}(\xi,\eta)\). The notation \(D\) stands for the unit ball in \(\mathbb{R}^{n}\times\mathbb{R}^{n}\). We are interested in the study of \(L^{p_{1}}(\mathbb{R}^{n})\times L^{p_{2}}(\mathbb{R}^{n})\to L^{p_{3}}(\mathbb{R}^{n})\) boundedness of the operator \(\mathcal{B}^{\lambda}\), i.e., estimates of the form \[\left\|\mathcal{B}^{\lambda}(f,g)\right\|_{p_{3}}\lesssim\left\|f\right\|_{p_{1}}\left\|g\right\|_{p_{2}}, \tag{1.1}\] for all \(f\) and \(g\) in a suitable class of functions in \(L^{p_{1}}(\mathbb{R}^{n})\times L^{p_{2}}(\mathbb{R}^{n})\), with the implicit constant independent of \(f\) and \(g\). Moreover, we shall always assume that the exponents \(p_{1},p_{2}\) and \(p_{3}\) in (1.1) satisfy the Hölder relation \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\) with \(1\leq p_{1},p_{2}\leq\infty\). First results for the operator \(\mathcal{B}^{\lambda}\) were obtained by Bernicot and Germain [2] for \(n=1\). These were later improved upon and extended to higher dimensions by Bernicot, Grafakos, Song and Yan [1]. In the case of dimension \(n=1\), they gave a complete picture of \(L^{p}-\)boundedness of \(\mathcal{B}^{\lambda},\lambda>0\), for exponents in the Banach triangle \(\{(p_{1},p_{2},p_{3}):1\leq p_{1},p_{2},p_{3}\leq\infty\}\). However, in higher dimensions the results were not sharp. Jeong and Lee [14] established sharp results for a certain range of exponents \(p_{1},p_{2},p_{3}\) and index \(\lambda>0\). In particular, they showed that \(\mathcal{B}^{\lambda}\) maps \(L^{2}(\mathbb{R}^{n})\times L^{2}(\mathbb{R}^{n})\) into \(L^{1}(\mathbb{R}^{n})\) for all \(\lambda>0\), which is sharp in \(\lambda\). Recently, Kaur and Shrivastava [15] obtained new results for the operator \(\mathcal{B}^{\lambda}\) and its maximal variant. The results in [15] are the best known so far and are also sharp in some cases. Since the results are technical and require new notation, which is not needed otherwise in the rest of the paper, we skip the details here. We would like to refer to Liu and Wang [19] for results in the non-Banach triangle, i.e. when \(p_{3}<1\), and to Choudhary, Kaur, Shrivastava and Shuin [3] and Choudhary and Shrivastava [4] for results about the bilinear Bochner-Riesz square function and its applications to bilinear multipliers. The case of \(\lambda=0\) is more subtle. This was first addressed by Grafakos and Li [11] in dimension \(n=1\). They proved that the operator \(\mathcal{B}^{0}\), commonly referred to as the bilinear disc multiplier operator, maps \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) into \(L^{p_{3}}(\mathbb{R})\) for exponents in the local \(L^{2}-\)range: \(\{(p_{1},p_{2},p_{3}):2\leq p_{1},p_{2},p_{3}^{\prime}<\infty\}\). In general, the question whether the characteristic function of a given geometric shape in \(\mathbb{R}^{2}\) gives rise to a bilinear multiplier has been addressed for many interesting shapes.
In the seminal papers Lacey and Thiele [17, 18] proved that the bilinear Hilbert transform associated with the characteristic function \(\chi_{{}_{H_{\alpha}}}(\xi,\eta)\) of half plane \(H_{\alpha}=\{(\xi,\eta)\in\mathbb{R}^{2}:\xi-\alpha\eta>0\},\alpha\in\mathbb{R}\), satisfies (1.1) for a wide range of exponents \(p_{1},p_{2}\) and \(p_{3}\). Demeter and Gautam [9] proved estimate (1.1) for the bilinear operator associated with infinite lacunary polygon inscribed in the disc. Through infinite lacunary polygon they tried to approximate the boundary of disc at a point and showed that positive results can be obtained for such bilinear multipliers even outside the local \(L^{2}-\)range. Next, we refer to Muscalu [21], where estimate (1.1) is proved for bilinear multipliers determined by graph of convex functions with bounded slopes. Recently, Saari and Thiele [24] studied bilinear paraproducts associated with certain convex sets. In particular, they showed that the bilinear operator associated with multiplier symbol \(\chi_{{}_{C}}(\xi,\eta)\), where \(C\) is the convex set \(\{(\xi,\eta)\in\mathbb{R}^{2}:\xi\leq 0\text{ and }2^{\xi}\leq\eta<1\}\), satisfies estimate (1.1) for exponents \(p_{1},p_{2}\) and \(p_{3}\) in the local \(L^{2}-\)range. Motivated by these recent developments in the direction of bilinear multipliers and results for Bochner-Riesz means associated with convex domains by Seeger and Ziesler [25] and Cladek [5, 6], we plan to investigate analogous questions for bilinear Bochner-Riesz means associated with convex domains in \(\mathbb{R}^{2}\). We will see that our investigation naturally gives rise to bilinear analogue of Kakeya maximal function. In this paper our aim is to * Generalize the notion of bilinear Bochner-Riesz means in the context of open and bounded convex domains in the plane \(\mathbb{R}^{2}\). The existing techniques employed to deal with bilinear Bochner-Riesz means in [1, 2, 14, 13, 15] do not extend to the case of general convex domains as they rely on the explicit form of the multiplier \((1-|\xi|^{2}-|\eta|^{2})_{+}^{\lambda}\). The framework developed in [25] allows us to begin with the study of this case. This approach naturally leads to bilinear analogues of Kakeya maximal functions. In [2], this connection was pointed out briefly for the bilinear multiplier \((1-|\xi|^{2}-|\eta|^{2})_{+}^{\lambda}.\) We develop this approach systematically in this paper. * Extend the classical approach of using geometric maximal functions to prove \(L^{p}-\)boundedness results for Fourier multipliers to the bilinear setting. In this direction we introduce the bilinear Kakeya maximal function in the plane and study its \(L^{p}-\)boundedness properties. We establish a connection between bilinear Bochner-Riesz means and Kakeya maximal function and consequently deduce the \(L^{p}-\) estimates for the multiplier operator. See Section 2.2 for definitions and results. ## 2. Main results ### Bilinear Bochner-Riesz means associated with convex domains Let \(\Omega\) be an open and bounded convex set in the plane \(\mathbb{R}^{2}\) containing the origin. Let \(\partial\Omega\) denote the boundary of \(\Omega\). 
Consider the Minkowski functional associated with \(\Omega\) given by \[\rho(\xi,\eta)=\inf\{t>0:\;(t^{-1}\xi,t^{-1}\eta)\in\partial\Omega\}.\] The bilinear Bochner-Riesz mean of index \(\lambda>0\) associated with the convex domain \(\Omega\) is defined by \[\mathcal{B}^{\lambda}_{\Omega}(f,g)(x)=\int_{\mathbb{R}}\int_{\mathbb{R}}(1-\rho(\xi,\eta))_{+}^{\lambda}\hat{f}(\xi)\hat{g}(\eta)e^{2\pi ix(\xi+\eta)}\;d\xi d\eta.\] Observe that if \(\Omega\) is the unit disc in \(\mathbb{R}^{2}\), the operator \(\mathcal{B}^{\lambda}_{\Omega}\) is the same as \(\mathcal{B}^{\lambda}\) defined earlier. The following theorem is the main result of this paper for the bilinear Bochner-Riesz means \(\mathcal{B}^{\lambda}_{\Omega}\). **Theorem 2.1**.: _Let \(\Omega\) be an open and bounded convex set and \(\mathcal{B}^{\lambda}_{\Omega}\) be the bilinear Bochner-Riesz mean described as above. Then for \(\lambda>0\) and exponents \(p_{1},p_{2},p_{3}\) satisfying \(p_{1},p_{2}\geqslant 2\) and \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\), the operator \(\mathcal{B}^{\lambda}_{\Omega}\) maps \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) into \(L^{p_{3}}(\mathbb{R})\), i.e., there exists a constant \(C=C(\Omega,\lambda,p_{1},p_{2})>0\) such that for all \(f,g\in\mathcal{S}(\mathbb{R})\) we have_ \[\left\|\mathcal{B}^{\lambda}_{\Omega}(f,g)\right\|_{p_{3}}\leqslant C\|f\|_{p_{1}}\|g\|_{p_{2}}.\] As an application of Theorem 2.1, we obtain the following result for quasiradial bilinear multipliers. **Corollary 2.2**.: _Let \(\rho\) be the Minkowski functional associated with a convex domain \(\Omega\) described as above and \(m:[0,\infty)\to\mathbb{R}\) be a function such that for \(\lambda>0\), the following holds_ \[\int_{0}^{\infty}s^{\lambda}|m^{\lambda+1}(s)|ds<\infty.\] _Then the bilinear multiplier \(T_{m}\) defined as_ \[T_{m}(f,g)(x)=\int_{\mathbb{R}}\int_{\mathbb{R}}m(\rho(\xi,\eta))\hat{f}(\xi)\hat{g}(\eta)e^{2\pi ix(\xi+\eta)}\;d\xi d\eta\] _maps \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) into \(L^{p_{3}}(\mathbb{R})\) for all \(p_{1},p_{2}\geq 2\) and \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\)._ Corollary 2.2 is a direct consequence of Theorem 2.1 along with the well-known subordination formula given below; see [29] for more details. \[(m\circ\rho)(\xi,\eta)=\frac{(-1)^{[\lambda]+1}}{\Gamma(\lambda+1)}\int_{0}^{\infty}s^{\lambda}m^{\lambda+1}(s)\left(1-\frac{\rho(\xi,\eta)}{s}\right)_{+}^{\lambda}ds.\]
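For a concrete instance of the Minkowski functional, if \(\Omega\) is a convex polygon containing the origin, written as \(\{y:\langle y,n_{i}\rangle\leq c_{i}\}\) with all \(c_{i}>0\), then \(\rho(x)=\max_{i}\langle x,n_{i}\rangle/c_{i}\). A minimal Python sketch evaluating \(\rho\) and the associated symbol \((1-\rho)_{+}^{\lambda}\); the polygon data and \(\lambda\) below are placeholders:

```python
import numpy as np

# A convex polygon containing the origin, described by inequalities <y, n_i> <= c_i.
# Normals and offsets below are placeholder data for illustration.
normals = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
offsets = np.array([1.0, 1.0, 1.0, 1.0, 1.5])   # all positive since 0 is interior

def minkowski_functional(x):
    """rho(x) = inf{t > 0 : x/t lies on the boundary} = max_i <x, n_i>/c_i."""
    return np.max(normals @ np.asarray(x) / offsets)

def bochner_riesz_symbol(xi, eta, lam):
    """The multiplier (1 - rho(xi, eta))_+^lambda associated with the polygon."""
    rho = minkowski_functional((xi, eta))
    return max(1.0 - rho, 0.0) ** lam

print(minkowski_functional((0.5, 0.5)))   # < 1: the point lies inside Omega
print(bochner_riesz_symbol(0.5, 0.5, 1.0))
```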
### Bilinear Kakeya maximal function In this section we describe the main results of the paper for the bilinear Kakeya maximal function. Let \(\mathfrak{F}\) be a collection of finite measure sets in \(\mathbb{R}^{n}\). Consider the maximal averaging operator associated with the collection \(\mathfrak{F}\) defined by \[M_{\mathfrak{F}}f(x)=\sup_{F\in\mathfrak{F}:\;x\in F}\frac{1}{|F|}\int_{F}|f(y)|\;dy. \tag{2.1}\] Maximal averaging operators play key roles in differentiation theory. Under certain geometric conditions on the sets in \(\mathfrak{F}\), the operator \(M_{\mathfrak{F}}\) enjoys \(L^{p}-\)boundedness properties. For example, if \(\mathfrak{F}\) is the collection of cubes (or balls) in \(\mathbb{R}^{n}\), the operator \(M_{\mathfrak{F}}\), commonly known as the Hardy-Littlewood maximal operator, maps \(L^{p}(\mathbb{R}^{n})\) into itself for all \(1<p\leq\infty\) with a weak-type bound at \(p=1\). However, if \(\mathfrak{F}\) is the collection of all rectangles in \(\mathbb{R}^{n}\), then by a well-known Besicovitch set construction, see [26], it is known that the corresponding operator \(M_{\mathfrak{F}}\) fails to be \(L^{p}-\)bounded for all \(1\leq p<\infty\). The Kakeya maximal function involves averages over rectangles with an extra condition on the sides of the rectangles. In this paper we will restrict ourselves to the Kakeya maximal function in dimension \(n=2\). For an integer \(N>1\) and \(\delta>0\), let \(\mathcal{R}_{\delta,N}\) be the class of all rectangles in \(\mathbb{R}^{2}\) with dimensions \(\delta\times\delta N\) and \(\mathcal{R}_{N}=\cup_{\delta>0}\mathcal{R}_{\delta,N}\). A standard dilation argument implies that the \(L^{p}-\)boundedness of the fixed scale maximal operator \(M_{\mathcal{R}_{\delta,N}}\) is equivalent to that of the operator \(M_{\mathcal{R}_{1,N}}\). Cordoba [7] proved that \[\|M_{\mathcal{R}_{1,N}}\|_{L^{2}\to L^{2}}\lesssim(\log N)^{\frac{1}{2}}, \tag{2.2}\] and the logarithmic dependence on the "eccentricity" \(N\) is sharp. Later, Stromberg [27] proved the following sharp bounds for the maximal operator \(M_{\mathcal{R}_{N}}\). \[\|M_{\mathcal{R}_{N}}\|_{L^{2}\to L^{2}}\lesssim\log N. \tag{2.3}\] We consider the bilinear analogue of the Kakeya maximal functions defined above. The fixed scale bilinear Kakeya maximal function is defined by \[\mathcal{M}_{\mathcal{R}_{\delta,N}}(f,g)(x)=\sup_{\begin{subarray}{c}R\in\mathcal{R}_{\delta,N}\\ (x,x)\in R\end{subarray}}\frac{1}{|R|}\int_{R}|f(y_{1})||g(y_{2})|dy_{1}dy_{2}.\] The bilinear Kakeya maximal function associated with the collection \(\mathcal{R}_{N}\) is defined by \[\mathcal{M}_{\mathcal{R}_{N}}(f,g)(x)=\sup_{k\leqslant N}\sup_{\begin{subarray}{c}R\in\mathcal{R}_{k}\\ (x,x)\in R\end{subarray}}\frac{1}{|R|}\int_{R}|f(y_{1})||g(y_{2})|dy_{1}dy_{2}.\] The bilinear Kakeya maximal functions arise naturally in the study of the bilinear Bochner-Riesz means. Therefore, sharp \(L^{p}-\)estimates for the maximal functions yield the corresponding \(L^{p}-\)estimates for the Bochner-Riesz means. **Remark 2.3**.: _Formally, the bilinear Kakeya maximal function \(\mathcal{M}_{\mathcal{R}_{1,N}}(f,g)\) may also be obtained by restricting the (linear) two-dimensional Kakeya maximal function \(M_{\mathcal{R}_{1,N}}(f\otimes g)\) to the diagonal \(\{(x,x):x\in\mathbb{R}\}\), where \((f\otimes g)(x,y)=f(x)g(y).\)_ We have the following result for the operator \(\mathcal{M}_{\mathcal{R}_{\delta,N}}\). As earlier, it is enough to consider the case of \(\delta=1\). **Theorem 2.4**.: _Let \(1\leqslant p_{1},p_{2}\leqslant\infty\) be such that \(\frac{1}{p_{3}}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\), then we have the following estimates._ 1. **Banach case:** 1. _For_ \(1\leqslant p_{1},p_{2},p_{3}\leqslant\infty\)_,_ \(\mathcal{M}_{\mathcal{R}_{1,N}}\) _maps_ \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) _to_ \(L^{p_{3},\infty}(\mathbb{R})\) _with operator norm bounded by a constant independent of_ \(N\)_. Note that standard bilinear interpolation arguments yield strong type bounds for all_ \(p_{3}>1\) _with operator norm independent of_ \(N\)_._ 2. \(\mathcal{M}_{\mathcal{R}_{1,N}}\) _maps_ \(L^{p}(\mathbb{R})\times L^{p^{\prime}}(\mathbb{R})\to L^{1}(\mathbb{R})\) _with operator norm bounded by a constant multiple of_ \((\log N)^{\frac{1}{\min(p_{1},p_{2})}}\)_. Here_ \(p^{\prime}\) _denotes the conjugate index_ \(\frac{1}{p}+\frac{1}{p^{\prime}}=1.\)__ 2. **Non-Banach case:** 1.
_For_ \(1\leqslant p_{1},p_{2}\leqslant\infty\) _and_ \(\frac{1}{2}<p_{3}\leqslant 1\)_,_ \(\mathcal{M}_{\mathcal{R}_{1,N}}\) _is bounded from_ \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) _to_ \(L^{p_{3}}(\mathbb{R})\) _with constant_ \(N^{\frac{1}{p_{3}}-1}\)_._ 2. **End-point \((1,1,1/2)\):**__\(\mathcal{M}_{\mathcal{R}_{1,N}}\) _maps_ \(L^{1}(\mathbb{R})\times L^{1}(\mathbb{R})\) _to_ \(L^{\frac{1}{2},\infty}(\mathbb{R})\) _with constant_ \(N\)_._ Next, we have the following \(L^{p}-\)boundedness result for the operator \(\mathcal{M}_{\mathcal{R}_{N}}\). **Theorem 2.5**.: _Let \(1\leqslant p_{1},p_{2}\leqslant\infty\) and \(\frac{1}{p_{3}}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\). The following bounds hold true._ 1. **Banach case:** 1. _For all_ \(1\leqslant p_{1},p_{2},p_{3}\leqslant\infty\) _we have that_ \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}},\infty}\lesssim 1\)_. Observe that as a consequence of this we get strong type bounds_ \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\lesssim 1\) _for all_ \(p_{3}>1\)_._ 2. _For all_ \(1<p_{1},p_{2}\leqslant\infty\) _we have that_ \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{1}}\lesssim\log N.\) _Moreover, the bound_ \(\log N\) _is sharp here._ 2. **Non-Banach case:** 1. _For_ \(1<p_{1},p_{2}\leqslant\infty\) _and_ \(\frac{1}{2}<p_{3}<1\)_, we have_ \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\lesssim N^{\frac{1}{p_{3}}-1}\)_._ 2. **End-point case:** _If at least one of_ \(p_{1}\) _or_ \(p_{2}\) _is_ \(1\)_, then_ \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}},\infty}\lesssim N\)_._ In Section 9 we will provide some examples towards the sharpness of the constants with respect to the parameter \(N\) in the theorems above. **Remark 2.6**.: _Indeed, in the proof of Theorem 2.1 we will need to consider the bilinear Kakeya maximal function over rectangles whose eccentricity is less than or equal to \(N\). We will denote such a maximal function by \(\mathcal{M}_{\mathcal{R}_{\leqslant N}}\). An analogue of Theorem 2.5 holds for \(\mathcal{M}_{\mathcal{R}_{\leqslant N}}\) with an additional constant of \(\log N\) on the operator norm of \(\mathcal{M}_{\mathcal{R}_{\leqslant N}}\). Due to notational inconvenience and repetition we will skip the details._ Finally, we describe a vector-valued result for the operator \(\mathcal{M}_{\mathcal{R}_{N}}\). This will be required in the proof of Theorem 2.1. **Theorem 2.7**.: _Let \(1<p_{1},p_{2}<\infty\), \(1\leqslant p_{3}<\infty\) and \(1<r_{1},r_{2}\leqslant\infty\), \(1\leqslant r_{3}\leqslant\infty\) satisfy \(\frac{1}{p_{3}}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\) and \(\frac{1}{r_{3}}=\frac{1}{r_{1}}+\frac{1}{r_{2}}\). Then for any \(\epsilon>0\), we have_ \[\left\|\left(\sum_{j}|\mathcal{M}_{\mathcal{R}_{N}}\left(f_{j},g_{j}\right)|^{r_{3}}\right)^{\frac{1}{r_{3}}}\right\|_{p_{3}}\lesssim N^{\epsilon}\left\|\left(\sum_{j}|f_{j}|^{r_{1}}\right)^{\frac{1}{r_{1}}}\right\|_{p_{1}}\left\|\left(\sum_{j}|g_{j}|^{r_{2}}\right)^{\frac{1}{r_{2}}}\right\|_{p_{2}}.\] The proof of the \(L^{p}(\mathbb{R})\times L^{p^{\prime}}(\mathbb{R})\to L^{1}(\mathbb{R})-\)boundedness of \(\mathcal{M}_{\mathcal{R}_{N}}\) in Theorem 2.5 is based on an interpolation trick between suitably chosen exponents. The idea is motivated by the linear counterpart from Cordoba [7] and Stromberg [27]. This approach yields sharp constants with respect to \(N\) in Theorem 2.5.
However, for the fixed scale maximal function \(\mathcal{M}_{\mathcal{R}_{\delta,N}}\) in Theorem 2.4, this method does not give sharp constant in \(N\). Indeed, the connection of \(\mathcal{M}_{\mathcal{R}_{N}}\) with linear Kakeya maximal function on product type functions as mentioned before, allows us to exploit the ideas of Tanaka [28] to prove better bounds in Theorem 2.4. This method is more direct and involves a certain counting argument for rectangles under consideration. ## 3. Basic framework for Theorem 2.1 In this section we develop the basic framework required to deal with general convex domains. This part involves new definitions, reduction of the problem to smooth domains and parametrization of the boundary \(\partial\Omega.\) We mostly follow Seeger and Ziesler [25] for this part. ### Reduction to domains with smooth boundary First, observe that using a standard dilation argument for bilinear multipliers, we may without loss of generality, assume that \[B(0,4)\subset\Omega\subset\overline{\Omega}\subset B(0,2^{M}),\] where \(M\geq 3\) is a fixed constant. Next, observe that the boundary \(\partial\Omega\) may not be smooth. At this point we invoke the approach used by Seeger and Ziesler [25] in the linear case. This allows us to reduce the problem to domains with smooth boundary. We approximate \(\partial\Omega\) by a sequence of smooth curves using polygons whose boundary is smoothened near the vertices. We require some preliminary definitions in the context of convex domains in order to perform this reduction. Given a point \(P\in\partial\Omega\), we say that a line \(\ell\) passing through \(P\) is a supporting line for \(\Omega\) at \(P\) if \(\Omega\) is contained in the closed half plane whose boundary is the line \(\ell\). Let \(T(\Omega,P)\) denote the set of all supporting lines for \(\Omega\) at \(P\). Note that if \(\partial\Omega\) is \(C^{1}-\)smooth, the tangent at \(P\) is the unique supporting line for \(\Omega\) at \(P\). For \(\delta>0\), consider the ball centered at \(P\) on \(\partial\Omega\) along \(\ell\) given by \[B(P,\ell,\delta)=\{X\in\partial\Omega:\ \text{dist}(X,\ell)<\delta\}.\] Denote the collection of such balls by \(\mathfrak{N}_{\delta}=\{B(P,\ell,\delta):\ P\in\partial\Omega,\ell\in\text{T} (\Omega,P)\}\). Let \(\text{N}(\Omega,\delta)\) be the minimum number of balls in \(\mathfrak{N}_{\delta}\) required to cover the boundary \(\partial\Omega\). The upper Minkowski dimension \(\kappa_{\Omega}\) of \(\Omega\) is defined by \[\kappa_{\Omega}=\limsup_{\delta\to 0}\frac{\log\text{N}(\Omega,\delta)}{\log \delta^{-1}}. \tag{3.1}\] Note that for any convex set \(\Omega\), we have \(0\leq\kappa_{\Omega}\leq\frac{1}{2}\). Further, \(\kappa_{\Omega}=0\) if \(\Omega\) is a convex polygon and \(\kappa_{\Omega}=\frac{1}{2}\) if \(\Omega\) is a smooth domain, for example, if \(\Omega\) is the unit ball, then \(\kappa_{\Omega}=\frac{1}{2}\). With these notions we are ready to invoke the approximation lemma from [25]. **Lemma 3.1**.: _[_25_]_ _There exists a sequence of domains \(\Omega_{n}\) whose boundary \(\partial\Omega_{n}\) is \(C^{\infty}-\)smooth and the Minkowski functional \(\rho_{n}\) corresponding to \(\Omega_{n}\) satisfy the following conditions._ 1. \(\Omega_{n}\subseteq\Omega_{n+1}\subset\Omega\) _and_ \(\Omega=\bigcup_{n}\Omega_{n}\)_._ 2. \(\rho(\xi)\leq\rho_{n+1}(\xi)\leq\rho_{n}(\xi)\) _with_ \(\rho_{n}(\xi)-\rho(\xi)\leq 2^{-n-1}\rho(\xi)\)_. 
In particular_ \(\lim_{n\to\infty}\rho_{n}(\xi)=\rho(\xi)\) _with uniform convergence on compact sets._ 3. _If_ \(\delta\geq 2^{-n+2}\)_, then_ \[N(\Omega_{n},2\delta)\lesssim N(\Omega,\delta).\]

Observe that it is enough to prove Theorem 2.1 for the domains \(\Omega_{n}\) as in the lemma above with bounds uniform in \(n\). Then, Theorem 2.1 for the domain \(\Omega\) follows using Fatou's lemma.

### Decomposition of the boundary \(\partial\Omega\)

Following the approach of Seeger and Ziesler [25] we consider the following parametrization of the smooth boundary \(\partial\Omega\).

**Lemma 3.2**.: _[_25_]_ _Let \(\{u_{p}\}_{p=1}^{2^{2M}}\) be the set of \(2^{2M}\) uniformly distributed unit vectors in \(\mathbb{R}^{2}\) and \(\mathfrak{G}_{u_{p}}=\{(\xi,\eta)\in\mathbb{R}^{2}:\langle(\xi,\eta),u_{p}\rangle\leqslant 0,\ |\langle(\xi,\eta),u_{p}^{\perp}\rangle|\leqslant 2\}\) be the half strip associated with \(u_{p}\). We can parametrize \(\partial\Omega\cap\mathfrak{G}_{u_{p}}\) by_ \[t\mapsto tu_{p}^{\perp}+\gamma(t)u_{p},\ -2\leqslant t\leqslant 2,\] _where \(\gamma:[-2,2]\to[-2^{M},-2]\) is a convex function with left and right derivatives \(\gamma_{L}^{\prime}\) and \(\gamma_{R}^{\prime}\) satisfying_ \[-2^{M-1}\leqslant\gamma_{L}^{\prime}(t)\leqslant\gamma_{R}^{\prime}(t)\leqslant 2^{M-1},\ -2\leqslant t\leqslant 2.\] _Moreover, for a supporting line \(\ell\) at the point \(P\in\partial\Omega\) and an outward unit normal vector \(\vec{n}\) we have_ \[\left\langle\frac{P}{|P|},\vec{n}\right\rangle\geqslant 2^{-M}. \tag{3.2}\]

Next, for a given \(\delta>0\), we decompose the boundary \(\partial\Omega\cap\mathfrak{G}_{u_{p}}\) into pieces such that the kernel corresponding to each piece is integrable and its growth is controlled by the covering number \(N(\Omega,\delta)\). As in [25] consider a partition \(\mathfrak{U}_{u_{p}}(\delta)=\{-1=a_{0}<a_{1}<\cdots<a_{Q_{u_{p}}}=1\}\) of \([-1,1]\) such that for \(j=0,\ldots,Q_{u_{p}}(\delta)-1\), we have \[(a_{j+1}-a_{j})(\gamma_{L}^{\prime}(a_{j+1})-\gamma_{R}^{\prime}(a_{j}))\leqslant\delta,\] and \[(t-a_{j})(\gamma_{L}^{\prime}(t)-\gamma_{R}^{\prime}(a_{j}))\leqslant\delta,\ \text{if}\ t>a_{j+1}.\] The following lemma gives control on the quantity \(Q_{u_{p}}(\delta)\) in terms of the covering number \(N(\Omega,\delta)\).

**Lemma 3.3**.: _[_25_]_ _There exists a constant \(C_{M}>0\) such that_ 1. \(Q_{u_{p}}(\delta)\leqslant C_{M}\delta^{-\frac{1}{2}}\)_._ 2. \(C_{M}^{-1}N(\Omega,\delta)\leqslant\sum_{p=1}^{2^{2M}}Q_{u_{p}}(\delta)\leqslant C_{M}N(\Omega,\delta)\log\delta^{-1}\)_._

We need to refine the partition further in order to obtain sharper estimates for the underlying kernels. For each fixed \(j\), consider points \(\{a_{j,\nu}:\ \nu=-2M-l,\ldots,2M+l\}\) such that * For any interval \(A_{j,\nu}=[a_{j,\nu},a_{j,\nu+1}]\), we have \(|A_{j,\nu}|\geqslant 2^{-5M}\delta\). * For any two consecutive intervals \(A_{j,\nu}\) and \(A_{j^{\prime},\nu^{\prime}}\), we have, (3.3) \[(t-s)(\gamma_{L}^{\prime}(t)-\gamma_{R}^{\prime}(s))\leqslant\delta,\ \text{if}\ t<s,\ \text{and}\ t,s\in A_{j,\nu}\cup A_{j^{\prime},\nu^{\prime}}.\]

## 4. Auxiliary results for Theorem 2.1

In this section we discuss some supporting results which will be required in proving the main result Theorem 2.1. The first lemma is well-known and is an easy consequence of Minkowski's integral inequality.
It says that integrability of the kernel of a bilinear multiplier operator is sufficient for it to be bounded from \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) to \(L^{p_{3}}(\mathbb{R})\) for \(p_{3}\geq 1\).

**Lemma 4.1**.: _Let \(S_{m}\) be a bilinear operator associated with the multiplier \(m\) defined as_ \[S_{m}(f,g)(x)=\int_{\mathbb{R}}\int_{\mathbb{R}}m(\xi,\eta)\hat{f}(\xi)\hat{g}(\eta)e^{2\pi ix(\xi+\eta)}d\xi d\eta.\] _Suppose \(\|\mathcal{F}^{-1}m\|_{L^{1}(\mathbb{R}^{2})}<\infty\). Then for \(p_{1},p_{2},p_{3}\geq 1,\;\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\), we have_ \[\|S_{m}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\lesssim\|\mathcal{F}^{-1}m\|_{1}.\] _Here the notation \(\mathcal{F}^{-1}\) stands for the inverse Fourier transform._

Next, we recall a lemma from [25] which provides us with local integrability estimates for the multipliers under consideration.

**Lemma 4.2**.: _[_25_]_ _Let \(h:[0,\infty)\to\mathbb{R}\) be an absolutely continuous function such that \(\lim\limits_{t\to\infty}h(t)=0\) and \(\|th^{\prime}(t)\|_{L^{1}[0,\infty)}<\infty\). Suppose the function_ \[F(\tau)=\int_{0}^{\infty}h^{\prime}(s)e^{is\tau}\;ds\] _satisfies \(|F(\tau)|+|F^{\prime}(\tau)|\lesssim(1+|\tau|)^{-2}\). Let \(A_{k}=B(0,2^{k})\backslash B(0,2^{k-1}),\;k\geq 1\). Then we have_ \[\|\mathcal{F}^{-1}(h\circ\rho)\|_{L^{1}(B(0,1))}\lesssim 1,\;\text{and}\] \[\|\mathcal{F}^{-1}(h\circ\rho)\|_{L^{1}(A_{k})}\lesssim k2^{-k}.\]

The following lemma provides pointwise estimates for the kernels that we encounter while proving Theorem 2.1.

**Lemma 4.3**.: _Let \(A_{k}\) be the annulus as above and \(l\geq 1\). Suppose \(K\) is a kernel defined on \(\mathbb{R}^{2}\) such that_ \[|K(x)|\leq\frac{a}{(1+|ax_{1}|)^{2}}\frac{1}{1+x_{2}^{2}},\ x=(x_{1},x_{2}),\] _for some \(a\geq 2^{-l}\). Then the following hold._ 1. _If_ \(H\) _is another kernel defined on_ \(\mathbb{R}^{2}\) _such that_ \(\|H\chi_{A_{k}}\|_{1}\leq k2^{l-k}\)_, then_ \[H*K(x)\chi_{\{|\cdot|>2^{10l}\}}(x)=\left(H\chi_{\{|\cdot|>2^{5l}\}}\right)*K(x)\chi_{\{|\cdot|>2^{10l}\}}(x)+L_{1}(x),\] _where_ \(\|L_{1}\|_{1}\lesssim 2^{-2l}\)_._ 2.
_If_ \(H\) _satisfies_ \(\|H\chi_{A_{k}}\|_{1}\lesssim 1\)_, then_ \[H*K(x)\chi_{\{|\cdot|\leq 2^{10l}\}}(x)=\left(H\chi_{\{|\cdot|\leq 2^{20l}\}}\right)*K(x)\chi_{\{|\cdot|\leq 2^{10l}\}}(x)+L_{2}(x),\] _where_ \(\|L_{2}\|_{1}\lesssim 2^{-9l}\)_._

Proof.: To establish the first identity, we write \[H*K(x)\chi_{\{|\cdot|>2^{10l}\}}(x) =\chi_{\{|\cdot|>2^{10l}\}}(x)\int_{\mathbb{R}^{2}}H(y)K(x-y)dy\] \[=\chi_{\{|\cdot|>2^{10l}\}}(x)\left(\int\limits_{|y|>2^{5l}}H(y)K(x-y)dy+\int\limits_{|y|\leq 2^{5l}}H(y)K(x-y)dy\right)\] \[=\left(H\chi_{\{|\cdot|>2^{5l}\}}\right)*K(x)\chi_{\{|\cdot|>2^{10l}\}}(x)+L_{1}(x).\] For the kernel \(L_{1}\), we have the estimate \[\left\|L_{1}\right\|_{1} \leq\int_{|x|>2^{10l}}\int_{|y|\leq 2^{5l}}|H(y)K(x-y)|dydx\] \[=\int_{|y|\leq 2^{5l}}|H(y)|\int_{|x|>2^{10l}}|K(x-y)|dxdy\] \[\leq\int_{|y|\leq 2^{5l}}|H(y)|dy\int_{|z|>2^{5l}}|K(z)|dz,\] where we have used that \(|z|=|x-y|>2^{5l}\) when \(|x|>2^{10l}\) and \(|y|\leq 2^{5l}\). For the integral involving the kernel \(H\), we have \[\int_{|y|\leq 2^{5l}}|H(y)|dy=\sum_{k=0}^{5l}\int_{A_{k}}|H(y)|dy\leq\sum_{k=0}^{5l}k2^{l-k}\leq l2^{l}.\] When \(|z|>2^{5l}\), either \(|z_{1}|>2^{5l-1}\) or \(|z_{2}|>2^{5l-1}\). If \(|z_{1}|>2^{5l-1}\), then \[\int_{|z|>2^{5l}}|K(z)|dz \leq\int_{|z_{1}|>2^{5l-1}}\frac{a}{(1+|az_{1}|)^{2}}dz_{1}\int_{\mathbb{R}}\frac{1}{1+z_{2}^{2}}dz_{2}\] \[\leq C\frac{1}{a}\int_{|z_{1}|>2^{5l-1}}\frac{1}{|z_{1}|^{2}}dz_{1}\] \[\lesssim 2^{-4l}.\] Similarly, when \(|z_{2}|>2^{5l-1}\) we get \[\int_{|z|>2^{5l}}|K(z)|dz\lesssim 2^{-5l}.\] Therefore, we obtain that \[\|L_{1}\|_{1}\lesssim l2^{l}\cdot 2^{-4l}\lesssim 2^{-2l}.\] We now treat the second identity in Lemma 4.3. \[H*K(x)\chi_{\{|\cdot|\leqslant 2^{10l}\}}(x) =\chi_{\{|\cdot|\leqslant 2^{10l}\}}(x)\left(\int\limits_{|y|\leqslant 2^{20l}}H(y)K(x-y)dy+\int\limits_{|y|>2^{20l}}H(y)K(x-y)dy\right)\] \[=\left(H\chi_{\{|\cdot|\leqslant 2^{20l}\}}\right)*K(x)\chi_{\{|\cdot|\leqslant 2^{10l}\}}(x)+L_{2}(x).\] For the kernel \(L_{2}\), we have the estimate \[\left\|L_{2}\right\|_{1} \leqslant\int_{|x|\leqslant 2^{10l}}\int_{|y|>2^{20l}}|H(y)K(x-y)|dydx\] \[=\int_{|y|>2^{20l}}|H(y)|\int_{|x|\leqslant 2^{10l}}|K(x-y)|dxdy\] \[=\sum_{k=20l}^{\infty}\int_{A_{k}}|H(y)|\int_{|x|\leqslant 2^{10l}}|K(x-y)|dxdy\] \[\leqslant\sum_{k=20l}^{\infty}\int_{A_{k}}|H(y)|\int_{|z|>2^{k-10l}}|K(z)|dzdy,\] where we have used that \(|z|=|x-y|>2^{k-10l}\) when \(|x|\leqslant 2^{10l}\) and \(|y|>2^{k-1}\). We know that \(\|H\|_{L^{1}(A_{k})}\leqslant C.\) Using the integral estimate on \(K\) as above, we get that \[\int_{|z|>2^{k-10l}}|K(z)|dz\lesssim 2^{-k+11l}.\] Therefore, we obtain that \[\|L_{2}\|_{1}\lesssim\sum_{k=20l}^{\infty}2^{-k+11l}\lesssim 2^{-9l}.\] This completes the proof of Lemma 4.3.

## 5. Proof of Theorem 2.1: Bilinear Bochner-Riesz means

Observe that in view of Lemma 3.1 in Section 3.1 and Fatou's lemma, it is enough to establish Theorem 2.1 for domains \(\Omega\) with \(C^{\infty}-\)smooth boundary, with implied bounds depending only on the \(C^{1}-\)parametrization of the boundary \(\partial\Omega\).
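Since the estimates of this section are governed by the covering number \(N(\Omega,\delta)\) and the exponent \(\kappa_{\Omega}\) from (3.1), we record a small numerical illustration (ours, purely for orientation) for the unit disc: the boundary ball at a point of tangency has angular half-width \(\arccos(1-\delta)\approx\sqrt{2\delta}\).

```python
import numpy as np

# A numerical illustration (ours) of (3.1) for the unit disc: each boundary
# ball B(P, l, delta) covers an arc of angular width 2*arccos(1 - delta),
# so N(Omega, delta) ~ (pi/sqrt(2)) * delta^{-1/2} and kappa_Omega = 1/2.
for delta in [1e-2, 1e-4, 1e-6, 1e-8]:
    width = 2 * np.arccos(1 - delta)           # angular width of one ball
    N_cov = int(np.ceil(2 * np.pi / width))    # balls needed to cover circle
    print(delta, N_cov, np.log(N_cov) / np.log(1 / delta))
# the last column decreases towards kappa_Omega = 1/2 as delta -> 0
```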
We shall complete the proof of Theorem 2.1 under the additional assumption that no portion of the boundary \(\partial\Omega\) is parallel to the coordinate axes. Observe that, since \(\Omega\) is a convex domain, the boundary \(\partial\Omega\) can turn parallel to the coordinate axes at most four times. This assumption will be removed at a later stage to complete the proof for general convex domains as considered in Theorem 2.1. Let \(\phi\in C_{c}^{\infty}(\mathbb{R}^{2})\) and \(\psi\in C_{c}^{\infty}(\mathbb{R})\) be such that \(\operatorname{supp}(\phi)\subset B(0,1)\), \(\phi(x)=1\) for \(x\in B(0,\frac{1}{2})\), \(\operatorname{supp}(\psi)\subset\{t\in\mathbb{R}:\frac{1}{2}\leqslant|t|\leqslant 2\}\), and \[(1-\rho(\xi,\eta))_{+}^{\lambda} =\phi(\xi,\eta)(1-\rho(\xi,\eta))_{+}^{\lambda}+\sum_{l=1}^{\infty}2^{-\lambda l}\,2^{\lambda l}\psi(2^{l}(1-\rho(\xi,\eta)))(1-\rho(\xi,\eta))_{+}^{\lambda}\] \[=m_{0}(\xi,\eta)+\sum_{l=1}^{\infty}2^{-\lambda l}m_{l}(\xi,\eta),\] where \(m_{l}(\xi,\eta)=2^{\lambda l}\psi(2^{l}(1-\rho(\xi,\eta)))(1-\rho(\xi,\eta))_{+}^{\lambda}\) for \(l\geqslant 1\). We shall prove suitable estimates for each of the multipliers \(m_{l}\) above, and we will decompose these pieces further. The parametrization of the boundary \(\partial\Omega\cap\mathfrak{G}_{u_{p}}\) as in Lemma 3.2 allows us to decompose the multiplier \(m_{l}\) into \(2^{2M}\) pieces. Let us use the notation from Lemma 3.2 here. Let \(S_{p}\) be the sector with its bisector passing through the vector \(u_{p}\) and having arc length \(2^{-2M+1}\) on the unit circle. Let \(b_{p}\in C_{c}^{\infty}(\mathbb{R}^{2})\) be a function supported in \(S_{p}\) such that \[m_{l}=\sum_{p=1}^{2^{2M}}m_{l}b_{p}.\] Next, we invoke the refinement of the boundary decomposition from Section 3.2 to decompose the multiplier further. For the vector \(u_{p}=e^{i\theta_{p}}\), consider the set of intervals \(\{A_{l,p,j,\nu}:\;j=1,\ldots,Q,\;\nu=-2M-l,\ldots,2M+l\}\) as obtained in Section 3.2. Let \(I_{l,p,j,\nu}^{*}\) denote the union of the two intervals containing \(a_{j,\nu}\) and \(I_{l,p,j,\nu}=\frac{2}{3}I_{l,p,j,\nu}^{*}\). Let \(\beta_{j,\nu}^{p}\in C_{c}^{\infty}(\mathbb{R})\) be the function supported in the interval \(I_{l,p,j,\nu}\) such that \[\sum_{j,\nu}\beta_{j,\nu}^{p}(t)=1,\;-1\leqslant t\leqslant 1,\;\text{and}\] \[\left|\frac{d^{n}}{dt^{n}}\beta_{j,\nu}^{p}(t)\right|\lesssim|I_{l,p,j,\nu}|^{-n},\;\text{for}\;n=1,2,3,4. \tag{5.1}\] This gives us the following decomposition of the multiplier \(m_{l}(\xi,\eta)\): \[m_{l}(\xi,\eta)=\sum_{p=1}^{2^{2M}}\sum_{j,\nu}m_{l}(\xi,\eta)b_{p}(\xi,\eta)\beta_{j,\nu}^{p}(\langle u_{p}^{\perp},(\xi,\eta)\rangle)=:\sum_{p=1}^{2^{2M}}\sum_{j,\nu}m_{l,p,j,\nu}(\xi,\eta).\] Denote \(K_{l,p,j,\nu}:=\mathcal{F}^{-1}(m_{l,p,j,\nu})\). We can write \[K_{l,p,j,\nu}:=\mathcal{F}^{-1}(m_{l,p,j,\nu})=\sum_{k=0}^{10l}K_{l,p,j,\nu}\chi_{A_{k}}+\sum_{k=10l+1}^{\infty}K_{l,p,j,\nu}\chi_{A_{k}}. \tag{5.2}\] Let us first estimate the terms with \(k>10l\). We have \[K_{l,p,j,\nu}(x)=\mathcal{F}^{-1}m_{l}*H_{l,p,j,\nu}(x),\] where \(H_{l,p,j,\nu}=\mathcal{F}^{-1}(b_{p}(\cdot)\beta_{j,\nu}^{p}(\langle u_{p}^{\perp},(\cdot)\rangle))\).
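The function \(h=\psi(2^{l}(1-t))\) used in the next step isolates the dyadic shell \(1-\rho\sim 2^{-l}\). As a sanity check of the decomposition of \((1-\rho)_{+}^{\lambda}\) above, the following one-dimensional computation (ours; a piecewise-linear bump stands in for the smooth one) reconstructs \(s^{\lambda}\), with \(s\) playing the role of \(1-\rho\).

```python
import numpy as np

# 1-d sanity check (ours) of the dyadic decomposition: with eta = 1 on [-1,1],
# supported in [-2,2], and psi(t) = eta(t) - eta(2t) supported in 1/2<=|t|<=2,
# the sum over l >= 1 of psi(2^l s) telescopes to eta(2s) for s > 0, so the
# phi-piece (here s >= 1/2) plus the shell pieces reproduces s^lambda exactly.
def eta(t):
    return np.clip(2.0 - np.abs(t), 0.0, 1.0)   # piecewise-linear bump

def psi(t):
    return eta(t) - eta(2 * t)

lam = 0.3
s = np.linspace(1e-9, 1.0, 100001)              # s stands for 1 - rho
total = (1 - eta(2 * s)) * s ** lam             # analogue of the m_0 piece
for l in range(1, 41):
    total += psi(2 ** l * s) * s ** lam         # analogue of 2^{-lam*l} m_l
print(np.max(np.abs(total - s ** lam)))         # ~ 1e-16 (machine precision)
```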
Lemma 4.2 applied to the function \(h=\psi(2^{l}(1-t))\) yields that \[\|\mathcal{F}^{-1}m_{l}\|_{L^{1}(A_{i})}\lesssim i2^{l-i},\ i\in\mathbb{N}.\] The estimate (5.1), along with an integration by parts argument applied to \(H_{l,p,j,\nu}\) twice, gives us \[|H_{l,p,j,\nu}(x)|\lesssim\frac{|I|}{(1+|I||\langle x,u_{p}\rangle|)^{2}}\frac{1}{1+|\langle x,u_{p}^{\perp}\rangle|^{2}},\] where we abbreviate \(|I|=|I_{l,p,j,\nu}|\). Observe that with this kernel estimate we can apply Lemma 4.3 (part 1) to get that \[\|\sum_{k=10l+1}^{\infty}K_{l,p,j,\nu}\chi_{A_{k}}\|_{L^{1}}\lesssim 2^{-3l}. \tag{5.3}\] This decay in \(l\) in the estimate above allows us to verify that the kernel is integrable. Therefore, for \(p_{1},p_{2},p_{3}\geq 1,\ \frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}\), we can apply Lemma 4.1 to conclude that the corresponding bilinear multiplier operator maps \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) into \(L^{p_{3}}(\mathbb{R})\) with its norm bounded by \[\left\|\sum_{l=1}^{\infty}2^{-\lambda l}\sum_{p=1}^{2^{2M}}\sum_{j,\nu}\sum_{k=10l+1}^{\infty}K_{l,p,j,\nu}\chi_{A_{k}}\right\|_{1} \lesssim\sum_{l=1}^{\infty}2^{-(\lambda+3)l}lQ\] \[\lesssim\sum_{l=1}^{\infty}2^{-(\lambda+3-\kappa_{\Omega}-\epsilon)l}l\lesssim 1,\] where we have used the fact that \(Q\leq 2^{(\kappa_{\Omega}+\epsilon)l}\) for any \(\epsilon>0\); see the definition (3.1). Therefore, we are left with estimating the bilinear operators corresponding to the kernels with \(k\leq 10l\) in (5.2). Observe that \[\beta_{j,\nu}^{p}\left(\frac{\langle u_{p}^{\perp},(\xi,\eta)\rangle}{\rho(\xi,\eta)}\right)=1,\ \text{for}\ (\xi,\eta)\in\text{supp}(m_{l,p,j,\nu}).\] We can write \[m_{l,p,j,\nu}(\xi,\eta)=m_{l}(\xi,\eta)\beta_{j,\nu}^{p}\left(\frac{\langle u_{p}^{\perp},(\xi,\eta)\rangle}{\rho(\xi,\eta)}\right)b_{p}(\xi,\eta)\beta_{j,\nu}^{p}(\langle u_{p}^{\perp},(\xi,\eta)\rangle).\] The kernel then takes the form \(K_{l,p,j,\nu}(x)=J_{l,p,j,\nu}*H_{l,p,j,\nu}(x)\), where \[J_{l,p,j,\nu}=\mathcal{F}^{-1}\left(m_{l}(\cdot)\,\beta_{j,\nu}^{p}\left(\frac{\langle u_{p}^{\perp},(\cdot)\rangle}{\rho(\cdot)}\right)\right).\] Let \(P^{1}_{l,p,j,\nu}\) and \(P^{2}_{l,p,j,\nu}\) be the projections of the support of the multiplier \(m_{l,p,j,\nu}\) onto the \(\xi\)-axis and \(\eta\)-axis respectively. For each \(i=1,2\), the intervals \(\{P^{i}_{l,p,j,\nu},\ j=1,\ldots,Q_{u_{p}}(2^{-l})\}\) have bounded overlap, independent of the parameter \(l\). Consider the Fourier projection operators defined by \(\hat{f}_{l,p,j,\nu}=\chi_{P^{1}_{l,p,j,\nu}}\hat{f}\) and \(\hat{g}_{l,p,j,\nu}=\chi_{P^{2}_{l,p,j,\nu}}\hat{g}\). This helps us rewrite the bilinear operator associated with the kernel \(K_{l,p,j,\nu}\) as follows: \[K_{l,p,j,\nu}*(f,g)(x)=K_{l,p,j,\nu}*(f_{l,p,j,\nu},g_{l,p,j,\nu})(x).\] Here we have used the notation \(K*(f,g)(x)=K*(f\otimes g)(x,x)\). In order to prove the required estimate on the kernel \(J_{l,p,j,\nu}\), we introduce a homogeneous coordinate system associated with the boundary \(\partial\Omega\), given by \[(s,\alpha)\mapsto(\xi,\eta)(s,\alpha)=s(u_{p}^{\perp}\alpha+u_{p}\gamma(\alpha)),\] where \(s=\rho(\xi,\eta)\) and \(\gamma\) is the map used in parametrizing the boundary as in Lemma 3.2. It is easy to verify that the Jacobian of this change of variables is given by \(s(\alpha\gamma^{\prime}(\alpha)-\gamma(\alpha))\).
Therefore, we can write the kernel \(J_{l,p,j,\nu}\) as \[J_{l,p,j,\nu}(R_{u_{p}}x)=\int\limits_{s=\frac{1}{2}}^{2}sm_{l}(s)\int\limits_{\alpha=-2^{-2M-2}}^{2^{-2M-2}}\beta_{j,\nu}^{p}(\alpha)e^{is(\alpha x_{1}+\gamma(\alpha)x_{2})}(\alpha\gamma^{\prime}(\alpha)-\gamma(\alpha))\;d\alpha ds.\] Let \(\eta\in C^{\infty}_{c}(\mathbb{R})\) be such that \(\operatorname{supp}(\eta)\subset B(0,2^{-2M-10})\) and \(\eta=1\) on \(B(0,2^{-2M-11})\). Define \[\Phi_{0}(x,\alpha)=\phi(|I_{l,p,j,\nu}|(x_{1}+x_{2}\gamma^{\prime}(\alpha)))\eta\left(\frac{x_{1}+x_{2}\gamma^{\prime}(\alpha)}{|x|}\right),\text{ and}\] \[\Phi_{n}(x,\alpha)=\left(\phi(2^{-n-1}|I_{l,p,j,\nu}|(x_{1}+x_{2}\gamma^{\prime}(\alpha)))-\phi(2^{-n}|I_{l,p,j,\nu}|(x_{1}+x_{2}\gamma^{\prime}(\alpha)))\right)\eta\left(\frac{x_{1}+x_{2}\gamma^{\prime}(\alpha)}{|x|}\right).\] We can write \[J_{l,p,j,\nu}(R_{u_{p}}x)\] \[=\sum\limits_{n=0}^{\infty}\int\limits_{s=\frac{1}{2}}^{2}sm_{l}(s)\int\limits_{\alpha=-2^{-2M-2}}^{2^{-2M-2}}\Phi_{n}(x,\alpha)\beta_{j,\nu}^{p}(\alpha)e^{is(\alpha x_{1}+\gamma(\alpha)x_{2})}(\alpha\gamma^{\prime}(\alpha)-\gamma(\alpha))\;d\alpha ds\] \[\quad+\int\limits_{s=\frac{1}{2}}^{2}sm_{l}(s)\int\limits_{\alpha=-2^{-2M-2}}^{2^{-2M-2}}\left[1-\eta\left(\frac{x_{1}+x_{2}\gamma^{\prime}(\alpha)}{|x|}\right)\right]\beta_{j,\nu}^{p}(\alpha)e^{is(\alpha x_{1}+\gamma(\alpha)x_{2})}(\alpha\gamma^{\prime}(\alpha)-\gamma(\alpha))\;d\alpha ds\] \[:=\sum\limits_{n=0}^{\infty}J_{l,p,j,\nu}^{n}(R_{u_{p}}x)+\tilde{J}_{l,p,j,\nu}(R_{u_{p}}x).\] Observe that due to the supports of the functions \(\phi\) and \(\eta\), only finitely many terms in the expression above contribute non-trivially. Indeed, we need to consider only \(C_{M}\log(1+|I_{l,p,j,\nu}||x|)\) many terms. The kernels \(J_{l,p,j,\nu}^{n}\) satisfy the following estimates.

**Lemma 5.1**.: _The following estimates hold true:_ \[|J^{0}_{l,p,j,\nu}(x)|\lesssim\int\limits_{\begin{subarray}{c}\alpha\in I^{\#}_{l,p,j,\nu}:\\ |\langle x,u_{p}\rangle+\langle x,u_{p}^{\perp}\rangle\gamma^{\prime}(\alpha)|\\ \leq|I_{l,p,j,\nu}|^{-1}\end{subarray}}\frac{2^{-l}\;d\alpha}{(1+2^{-l}|\langle x,u_{p}\rangle\alpha+\langle x,u_{p}^{\perp}\rangle\gamma(\alpha)|)^{4}}, \tag{5.4}\] \[|J^{n}_{l,p,j,\nu}(x)|\lesssim\int\limits_{\begin{subarray}{c}\alpha\in I^{\#}_{l,p,j,\nu}:\\ |\langle x,u_{p}\rangle+\langle x,u_{p}^{\perp}\rangle\gamma^{\prime}(\alpha)|\\ \sim 2^{n}|I_{l,p,j,\nu}|^{-1}\end{subarray}}\frac{(1+2^{-n}|\langle x,u_{p}^{\perp}\rangle||I_{l,p,j,\nu}|)|\gamma^{\prime\prime}(\alpha)|+|I_{l,p,j,\nu}|^{-1}}{|\langle x,u_{p}\rangle+\langle x,u_{p}^{\perp}\rangle\gamma^{\prime}(\alpha)|}\times\frac{2^{-l}\;d\alpha}{(1+2^{-l}|\langle x,u_{p}\rangle\alpha+\langle x,u_{p}^{\perp}\rangle\gamma(\alpha)|)^{4}}. \tag{5.5}\]

The proofs of these estimates follow by an integration by parts argument in the variables \(\alpha\) and \(s\). We refer to [25] for similar estimates. As a consequence of Lemma 5.1 we obtain the following integral estimates on the kernels.

**Lemma 5.2**.: _We have the following estimates for all \(n\in\mathbb{N}\) and \(t\geq 0\):_ \[\|J^{0}_{l,p,j,\nu}\|_{1}\lesssim 1, \tag{5.7}\] \[\|J^{n}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\lesssim 2^{-|t-l|}, \tag{5.8}\] \[\|\tilde{J}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\lesssim 1, \tag{5.9}\] \[\|K_{l,p,j,\nu}\|_{1}\lesssim l. \tag{5.10}\]
Proof.: Apply the change of variables given by \[v_{1}=2^{-l}(\langle x,u_{p}\rangle+\langle x,u_{p}^{\perp}\rangle\gamma^{\prime}(\alpha)),\;v_{2}=2^{-l}(\langle x,u_{p}\rangle\alpha+\langle x,u_{p}^{\perp}\rangle\gamma(\alpha)).\] Note that the Jacobian of this map is bounded by \(c2^{2l}\); therefore, we obtain that \[\|J^{0}_{l,p,j,\nu}\|_{1}\lesssim\int\limits_{\alpha\in I^{\#}_{l,p,j,\nu}}\iint\limits_{|v_{1}|\leq 2^{-l}|I_{l,p,j,\nu}|^{-1}}\frac{2^{l}\;dv_{1}dv_{2}d\alpha}{(1+|v_{2}|)^{4}}\lesssim 1.\] The estimate (5.8) follows from the same change of variables argument. Indeed, we have (5.11) \[\|J^{n}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\lesssim\int\limits_{\alpha\in I^{\#}_{l,p,j,\nu}}((1+2^{t-n}|I_{l,p,j,\nu}|)|\gamma^{\prime\prime}(\alpha)|+|I_{l,p,j,\nu}|^{-1})\iint\limits_{\begin{subarray}{c}|v_{1}|\sim 2^{n-l}|I_{l,p,j,\nu}|^{-1}\\ |(v_{1},v_{2})|\sim 2^{t-l}\end{subarray}}\frac{dv_{1}dv_{2}d\alpha}{|v_{1}|(1+|v_{2}|)^{4}}\] Using (3.3), we get that \(\int_{I^{*}_{l,p,j,\nu}}|I_{l,p,j,\nu}|\gamma^{\prime\prime}(\alpha)d\alpha\leqslant 2^{-l}\). Hence the integral in \(\alpha\) is dominated by a multiple of \((2^{t-n-l}+1)\). Moreover, \(\operatorname{supp}(J^{n}_{l,p,j,\nu})\cap A_{t}\neq\emptyset\) implies that \(2^{n}|I_{l,p,j,\nu}|^{-1}\leqslant 2^{-M}2^{t}\), whence \(|v_{2}|\gtrsim 2^{t-l}\) for the domain of integration in (5.11). Thus, we have \[\|J^{n}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\lesssim(2^{t-n-l}+1)\min\{2^{t-l},2^{3(l-t)}\},\] and the estimate (5.8) follows. The estimate (5.9) for \(\tilde{J}_{l,p,j,\nu}\) follows from a similar argument. We leave the details to the reader. Next, we prove (5.10). Consider \[\|K_{l,p,j,\nu}\|_{1}\lesssim \|K_{l,p,j,\nu}\chi_{\{|\cdot|\geqslant 2^{10l}\}}\|_{1}\] \[+\|J^{0}_{l,p,j,\nu}\|_{L^{1}}\|H_{l,p,j,\nu}\|_{1}\] \[+\sum_{t=0}^{\infty}\sum_{n=1}^{t}\|J^{n}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\|H_{l,p,j,\nu}\|_{1}\] \[+\|\tilde{J}_{l,p,j,\nu}*H_{l,p,j,\nu}\chi_{\{|\cdot|\leqslant 2^{10l}\}}\|_{1}.\] Observe that the first term above was already estimated in (5.3). The estimates for the second and third terms follow from (5.7) and (5.8) respectively, along with the integrability of the kernel \(H_{l,p,j,\nu}\). For the last term, we use the estimate (5.9) to get \[\|\tilde{J}_{l,p,j,\nu}*H_{l,p,j,\nu}\chi_{\{|\cdot|\leqslant 2^{10l}\}}\|_{1} \lesssim\sum_{t=0}^{20l}\|\tilde{J}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\|H_{l,p,j,\nu}\|_{1}+\sum_{t=20l}^{\infty}\|\tilde{J}_{l,p,j,\nu}\|_{L^{1}(A_{t})}\|H_{l,p,j,\nu}\|_{L^{1}(\{|\cdot|\geqslant 2^{t-1}\})}\] \[\lesssim l+\sum_{t=20l}^{\infty}2^{l-t}\lesssim l.\] With these estimates on the kernels, we can show that the corresponding bilinear multiplier operators are controlled pointwise by the bilinear Kakeya maximal function with eccentricity depending on the parameter \(l\). More precisely, we have the following.

**Lemma 5.3**.: _Let \(n\in\mathbb{N}\cup\{0\}\) and \(0\leqslant k\leqslant 20l\). The following pointwise dominations hold:_ \[\left|(J^{0}_{l,p,j,\nu}\chi_{\{|\cdot|\leqslant 2^{20l}\}})*(f,g)(x)\right| \lesssim\mathcal{M}_{\mathcal{R}_{\leqslant 2^{30l}}}(f,g)(x), \tag{5.12}\] \[\left|(J^{n}_{l,p,j,\nu}\chi_{A_{k}})*(f,g)(x)\right| \lesssim\mathcal{M}_{\mathcal{R}_{\leqslant 2^{30l}}}(f,g)(x), \tag{5.13}\] \[\left|(\tilde{J}_{l,p,j,\nu}\chi_{A_{k}})*(f,g)(x)\right| \lesssim\mathcal{M}_{\mathcal{R}_{\leqslant 2^{30l}}}(f,g)(x). \tag{5.14}\]
Proof.: **Proof of (5.12):** Recall the estimate (3.2) and observe that it implies that the angle between the tangent vector \((u_{p}+\gamma^{\prime}(\alpha)u_{p}^{\perp})\) and the perpendicular to the position vector \((\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp})\) is less than a constant, say \(C_{M}\), which is smaller than \(\frac{\pi}{2}\). Therefore, the parallelogram spanned by the vectors \((\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp})\) and \((u_{p}+\gamma^{\prime}(\alpha)u_{p}^{\perp})\) can be dominated by a rectangle of comparable area with sides parallel to \((\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp})\) and \((-\gamma(\alpha)u_{p}+\alpha u_{p}^{\perp})\). This observation along with the estimate (5.4) allows us to deduce that \[\big{|}(J^{0}_{l,p,j,\nu}\chi_{\{|\cdot|\leqslant 2^{20l}\}})(x)\big{|}\lesssim\int_{I^{*}_{l,p,j,\nu}}|I_{l,p,j,\nu}|^{-1}\sum_{t=0}^{20l}\frac{2^{-3t}}{|R_{\alpha}|}\chi_{R_{\alpha}}(x)d\alpha,\] where \(R_{\alpha}\) is the rectangle with sides parallel to \((\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp})\) and \((-\gamma(\alpha)u_{p}+\alpha u_{p}^{\perp})\) and side-lengths \(2^{t+l}\) and \(|I_{l,p,j,\nu}|^{-1}\) respectively. Since \(2^{-l}\lesssim|I_{l,p,j,\nu}|\), the estimate (5.12) follows.

**Proof of (5.13):** This involves the kernel \(J^{n}_{l,p,j,\nu}\), which satisfies the estimate (5.5). We decompose the integral over \(I^{*}_{l,p,j,\nu}\) in (5.5) into two pieces given by \(I^{1}=\{\alpha\in I^{*}_{l,p,j,\nu}:\ |\langle x,\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp}\rangle|\geqslant|\langle x,u_{p}+\gamma^{\prime}(\alpha)u_{p}^{\perp}\rangle|\}\) and \(I^{2}=I^{*}_{l,p,j,\nu}\backslash I^{1}\). This gives us \[\big{|}(J^{n}_{l,p,j,\nu}\chi_{A_{k}})(x)\big{|}\leqslant\mathfrak{I}_{1}(x)+\mathfrak{I}_{2}(x),\] where \(\mathfrak{I}_{i}(x)\) corresponds to (5.5) with the integral taken over \(I^{i}\) for \(i=1,2\). Note that for \(\mathfrak{I}_{1}(x)\), we have \[\mathfrak{I}_{1}(x)\lesssim\int_{I^{1}}((1+2^{k-n}|I_{l,p,j,\nu}|)|\gamma^{\prime\prime}(\alpha)|+|I_{l,p,j,\nu}|^{-1})\frac{\min\{2^{k-l},2^{3(l-k)}\}}{|R_{\alpha}|}\chi_{R_{\alpha}}(x)d\alpha,\] where \(R_{\alpha}\) is the rectangle with sides parallel to \((\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp})\) and \((-\gamma(\alpha)u_{p}+\alpha u_{p}^{\perp})\) and the corresponding side-lengths given by constant multiples of \(2^{k}\) and \(2^{n}|I_{l,p,j,\nu}|^{-1}\) respectively. In particular, the eccentricity of \(R_{\alpha}\) is bounded by a constant multiple of \(2^{30l}\). Also, we have that \[\int_{I^{*}_{l,p,j,\nu}}|I_{l,p,j,\nu}|\gamma^{\prime\prime}(\alpha)d\alpha\leqslant 2^{-l}.\] Therefore, the bilinear operator corresponding to the kernel \(\mathfrak{I}_{1}\) can be easily dominated by the bilinear Kakeya maximal function \(\mathcal{M}_{\mathcal{R}_{\leqslant 2^{30l}}}\). Next, consider the term \(\mathfrak{I}_{2}(x)\), which involves the integral over \(I^{2}=I^{*}_{l,p,j,\nu}\backslash I^{1}\). Note that for \(\alpha\in I^{2}\), we have \(2^{n}|I_{l,p,j,\nu}|^{-1}\sim|\langle x,u_{p}+\gamma^{\prime}(\alpha)u_{p}^{\perp}\rangle|\sim|x|\sim 2^{k}\). In particular, \(k\leqslant n+l\).
Thus, \[\mathfrak{I}_{2}(x)\lesssim\int_{I^{2}}((1+2^{k-n}|I_{l,p,j,\nu}|)|\gamma^{\prime\prime}(\alpha)|+|I_{l,p,j,\nu}|^{-1})\sum_{i=1}^{k}\min\{2^{i-l},2^{3(l-i)}\}\frac{1}{|R_{\alpha}|}\chi_{R_{\alpha}}(x)d\alpha,\] where \(R_{\alpha}\) is the rectangle with sides parallel to \((\alpha u_{p}+\gamma(\alpha)u_{p}^{\perp})\) and \((-\gamma(\alpha)u_{p}+\alpha u_{p}^{\perp})\) and side-lengths given by constant multiples of \(2^{i}\) and \(2^{k}\) respectively. This proves the required estimate.

**Proof of (5.14):** This follows using the arguments as in the case of \(\mathfrak{I}_{2}(x)\).

Recall that we are after the \(L^{p}-\)boundedness of the bilinear operators associated with the kernels in (5.2) for the terms with \(k\leqslant 10l\). We note that the required estimates for the terms \(\sum_{k=0}^{10l}((J_{l,p,j,\nu}\chi_{\{|\cdot|\geqslant 2^{20l}\}})*H_{l,p,j,\nu})\chi_{A_{k}}\) follow by employing Lemma 5.2 and Lemma 4.3. For the remaining terms, we observe that the quantity \(H_{l,p,j,\nu}*(f,g)\) can be dominated by the product of Hardy-Littlewood maximal functions. Thus, using Lemma 5.3, we have \[\left\|\sum_{l=1}^{\infty}2^{-\lambda l}\sum_{p=1}^{2^{2M}}\sum_{j,\nu}((J_{l,p,j,\nu}\chi_{\{|\cdot|\leqslant 2^{20l}\}})*H_{l,p,j,\nu})\chi_{\{|\cdot|\leqslant 2^{10l}\}}*(f_{l,p,j,\nu},g_{l,p,j,\nu})\right\|_{p_{3}}\] \[\lesssim \sum_{l=1}^{\infty}2^{-\lambda l}\sum_{p=1}^{2^{2M}}\left\|\sum_{j,\nu}\sum_{k=0}^{20l}(J_{l,p,j,\nu}\chi_{A_{k}})*(Mf_{l,p,j,\nu},Mg_{l,p,j,\nu})\right\|_{p_{3}}\] \[\lesssim \sum_{l=1}^{\infty}2^{-\lambda l}\sum_{p=1}^{2^{2M}}\left\|\sum_{j,\nu}\sum_{k=0}^{20l}l\,\mathcal{M}_{\mathcal{R}_{\leqslant 2^{30l}}}(Mf_{l,p,j,\nu},Mg_{l,p,j,\nu})\right\|_{p_{3}}\] \[\lesssim \sum_{l=1}^{\infty}2^{-\lambda l}l^{2}\sum_{p=1}^{2^{2M}}\left\|\sum_{j,\nu}\mathcal{M}_{\mathcal{R}_{\leqslant 2^{30l}}}(Mf_{l,p,j,\nu},Mg_{l,p,j,\nu})\right\|_{p_{3}}.\] Now by an application of the vector-valued boundedness of the Kakeya maximal function (Theorem 2.7) with \(\epsilon=\frac{\lambda}{60}\) (so that \((2^{30l})^{\epsilon}=2^{\frac{\lambda l}{2}}\)) and that of the Hardy-Littlewood maximal function (see Theorem 5.6.6 in [10]), the above term can be dominated by \[\sum_{l=1}^{\infty}2^{-\frac{\lambda l}{2}}l^{2}\sum_{p=1}^{2^{2M}}\left\|\left(\sum_{j,\nu}|Mf_{l,p,j,\nu}|^{2}\right)^{\frac{1}{2}}\right\|_{p_{1}}\left\|\left(\sum_{j,\nu}|Mg_{l,p,j,\nu}|^{2}\right)^{\frac{1}{2}}\right\|_{p_{2}}\] \[\lesssim \sum_{l=1}^{\infty}2^{-\frac{\lambda l}{2}}l^{2}\sum_{p=1}^{2^{2M}}\left\|\left(\sum_{j,\nu}|f_{l,p,j,\nu}|^{2}\right)^{\frac{1}{2}}\right\|_{p_{1}}\left\|\left(\sum_{j,\nu}|g_{l,p,j,\nu}|^{2}\right)^{\frac{1}{2}}\right\|_{p_{2}}\] \[\lesssim \sum_{l=1}^{\infty}2^{-\frac{\lambda l}{2}}l^{3}\|f\|_{p_{1}}\|g\|_{p_{2}}\lesssim\|f\|_{p_{1}}\|g\|_{p_{2}},\] where we have used Rubio de Francia's Littlewood-Paley inequality [23] for the collection of boundedly overlapping intervals \(\{P_{l,p,j,\nu}^{i},\;j=1,\ldots,Q_{u_{p}}(2^{-l})\}\) in the last step. This completes the proof of Theorem 2.1 under the assumption that no part of the boundary \(\partial\Omega\) is parallel to the coordinate axes. This assumption is easy to remove. Consider the case when a portion of the boundary is parallel to a coordinate axis. Observe that we can decompose the bilinear multiplier into "annuli" as before to obtain \(m=\sum_{l=0}^{\infty}m_{l}\).
Next, we consider a smooth decomposition of each \(m_{l}\) as \(m_{l}=m_{l}^{1}+m_{l}^{2}\), where \(m_{l}^{1}\) is supported in the union of at most four rectangles parallel to the axes. Since the symbol \(m_{l}^{1}\) is adapted to a rectangle, we can use the Hilbert transform in the \(\xi\) and \(\eta\) variables separately to deduce boundedness of the bilinear operator corresponding to \(m_{l}^{1}\). Finally, the case of \(m_{l}^{2}\) is dealt with as above. This completes the proof of Theorem 2.1.

**Remark 5.4**.: _We remark here that the \(L^{2}(\mathbb{R})\times L^{2}(\mathbb{R})\to L^{1}(\mathbb{R})-\)boundedness of the operator \(\mathcal{B}^{\lambda},\lambda>0,\) can be deduced with a simpler argument. Indeed, the estimate (5.10) and Lemma 4.1 imply that the operator associated to the multiplier \(m_{l,p,j,\nu}\) maps \(L^{2}(\mathbb{R})\times L^{2}(\mathbb{R})\) into \(L^{1}(\mathbb{R})\) with operator norm controlled by \(l\). Therefore, a simple use of the Cauchy-Schwarz inequality yields the desired boundedness result. For, consider_ \[\|\mathcal{B}^{\lambda}(f,g)\|_{1} \lesssim\sum_{l}2^{-\lambda l}l\sum_{p}\sum_{j,\nu}\|f_{l,p,j,\nu}\|_{2}\|g_{l,p,j,\nu}\|_{2}\] \[\lesssim\sum_{l}2^{-\lambda l}l\sum_{p}\left(\sum_{j,\nu}\|f_{l,p,j,\nu}\|_{2}^{2}\right)^{\frac{1}{2}}\left(\sum_{j,\nu}\|g_{l,p,j,\nu}\|_{2}^{2}\right)^{\frac{1}{2}}\] \[\lesssim \sum_{l=1}^{\infty}2^{-\lambda l}l^{2}\|f\|_{2}\|g\|_{2}\lesssim\|f\|_{2}\|g\|_{2}.\]

## 6. Proof of Theorem 2.4: Fixed scale bilinear Kakeya maximal function

First, note that by a standard dilation argument, for a triplet \((p_{1},p_{2},p_{3})\) satisfying the Holder relation \(\frac{1}{p_{3}}=\frac{1}{p_{1}}+\frac{1}{p_{2}},\) we have \[\|\mathcal{M}_{\mathcal{R}_{1,N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}=\|\mathcal{M}_{\mathcal{R}_{\delta,N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}.\] Therefore, we only need to prove Theorem 2.4 for \(\delta=1\). However, in the next estimate we work with arbitrary \(\delta>0,\) as it will be used later in the paper in this form.

**Proof of Banach case part (a):** Observe that it is enough to prove the following two estimates for a given rectangle \(R\in\mathcal{R}_{\delta,N}.\) 1. For \(1<s<\infty,\) we have that (6.1) \[\frac{1}{|R|}\int\limits_{R}|f(x-y_{1})||g(x-y_{2})|\ dy_{1}dy_{2}\lesssim M_{s}f(x)M_{s^{\prime}}g(x).\] 2. For \(s=1,\) we have that (6.2) \[\frac{1}{|R|}\int\limits_{R}|f(x-y_{1})||g(x-y_{2})|\ dy_{1}dy_{2}\lesssim\min\{\|g\|_{\infty}Mf(x),\|f\|_{\infty}Mg(x)\},\] with the implicit constants in both the inequalities above independent of \(R\). Here we have used the notation \(M_{s}f(x)=(M(|f|^{s})(x))^{\frac{1}{s}},\ s>0.\) Let us assume that the longest side of \(R\) makes an angle \(\theta_{0}\) with the \(x\)-axis. Due to the symmetry in \(f\) and \(g,\) we may, without loss of generality, assume that \(0<\theta_{0}\leqslant\frac{\pi}{4}\). We express the rectangle \(R\) as a star-shaped set \(R=\{(t\cos\theta,t\sin\theta):\ \theta\in[0,\frac{\pi}{2}]\cup[\pi,\frac{3\pi}{2}],\ 0\leqslant t<r(\theta)\}\), where \(r(\theta)\) describes the boundary of \(R\) in the direction \(\theta\).
By expressing the average over \(R\) in polar coordinates, we have \[\frac{1}{|R|}\int\limits_{R}|f(x-y_{1})||g(x-y_{2})|\ dy_{1}dy_{2}\] \[\lesssim \frac{1}{\delta^{2}N}\int\limits_{\theta=0}^{\frac{\pi}{2}}\int\limits_{t=0}^{r(\theta)}|f(x-t\sin\theta)||g(x-t\cos\theta)|t\ dtd\theta\] \[\leqslant \frac{1}{\delta^{2}N}\int\limits_{\theta=0}^{\frac{\pi}{2}}r^{2}(\theta)\left(\frac{1}{r(\theta)}\int\limits_{t=0}^{r(\theta)}|f(x-t\sin\theta)|^{s}\ dt\right)^{\frac{1}{s}}\left(\frac{1}{r(\theta)}\int\limits_{t=0}^{r(\theta)}|g(x-t\cos\theta)|^{s^{\prime}}\ dt\right)^{\frac{1}{s^{\prime}}}d\theta\] \[\leqslant \left(\frac{1}{\delta^{2}N}\int\limits_{\theta=0}^{\frac{\pi}{2}}r^{2}(\theta)d\theta\right)M_{s}f(x)M_{s^{\prime}}g(x)\] \[\leqslant \frac{1}{\delta^{2}N}\left(\int\limits_{\theta=0}^{\theta_{0}-\frac{C}{N}}(\delta\text{cosec}\,(\theta_{0}-\theta))^{2}d\theta+\int\limits_{\theta=\theta_{0}-\frac{C}{N}}^{\theta_{0}+\frac{C}{N}}(\delta N)^{2}\ d\theta+\int\limits_{\theta=\theta_{0}+\frac{C}{N}}^{\frac{\pi}{2}}(\delta\text{cosec}\,(\theta-\theta_{0}))^{2}d\theta\right)M_{s}f(x)M_{s^{\prime}}g(x)\] \[\lesssim M_{s}f(x)M_{s^{\prime}}g(x),\] where we have used the fact that \(r(\theta)\lesssim\delta N\) when \(\theta\) is the angle between the \(x\)-axis and the line passing through the origin and a point on the shorter side of \(R\). In the remaining cases we have \(r(\theta)\sim\delta|\text{cosec}\,(\theta-\theta_{0})|\). This completes the proof of the first inequality. The proof of the other inequality may be completed in the same manner.

**Proof of Banach case part (b):** For \(i\in\mathbb{Z}\), let \(I_{i}\) denote the interval \([i-\frac{1}{2},i+\frac{1}{2})\). Write \(\mathbb{R}=\bigcup\limits_{i\in\mathbb{Z}}I_{i}\). By the local integrability of \(f\) and \(g\), we can find for every interval \(I_{i}\) a rectangle \(R_{i}\in\mathcal{R}_{1,N}\) such that 1. \(\{(x,x),x\in I_{i}\}\cap R_{i}\neq\emptyset\), 2. \(\mathcal{M}_{\mathcal{R}_{1,N}}(f,g)(x)\leqslant\frac{2}{|R_{i}|}\int_{R_{i}}f(y_{1})g(y_{2})dy_{1}dy_{2},\quad\forall x\in I_{i}\). Let \(e_{i}=(e_{i,1},e_{i,2})\) denote the unit vector parallel to the longest side of \(R_{i}\). We organize the rectangles \(R_{i}\) into three collections with the help of the following sets: \[A_{1} =\{i\in\mathbb{Z}\mid\frac{1}{\sqrt{2}}<|e_{i,1}|\leqslant 1\},\] \[A_{2} =\{i\in\mathbb{Z}\mid 0\leqslant|e_{i,1}|<\frac{1}{2}\},\] \[\text{and}\quad A_{3} =\{i\in\mathbb{Z}\mid\frac{1}{2}\leqslant|e_{i,1}|\leqslant\frac{1}{\sqrt{2}}\}.\] Let \(Q_{j}=J_{1,j}\times J_{2,j}\) be the square in \(\mathbb{R}^{2}\), where \(J_{1,j}=(j_{1}-\frac{1}{2},j_{1}+\frac{1}{2})\) and \(J_{2,j}=(j_{2}-\frac{1}{2},j_{2}+\frac{1}{2}),\ j=(j_{1},j_{2})\in\mathbb{Z}^{2}.\) Define \[\gamma_{i}=\{j\in\mathbb{Z}^{2}\mid Q_{j}\cap R_{i}\neq\emptyset\}.\] The following lemma quantifies the intersection of the rectangles \(R_{i}\) when projected onto the coordinate axes.

**Lemma 6.1**.: _(Key lemma) Let \(h_{l,k}(y_{l}),l=1,2;k=1,2,3,\) be functions on \(\mathbb{R}\) defined as_ \[h_{l,k}(y_{l})=\sum_{i\in A_{k}}\sum_{j\in\gamma_{i}}\chi_{J_{l,j}}(y_{l}).\] _Then we have:_ 1. \(\|h_{l,k}\|_{\infty}\lesssim N\log N\) _for_ \(l,k=1,2\) _with_ \(l\neq k.\)__ 2. \(\|h_{l,l}\|_{\infty}\lesssim N\) _for_ \(l=1,2\)_._ 3. \(\|h_{l,3}\|_{\infty}\lesssim N\) _for_ \(l=1,2.\)__

Let us assume Lemma 6.1 for the moment and complete the proof of Theorem 2.4.
Consider \[\mathcal{M}_{\mathcal{R}_{1,N}}(f,g)(x) \leqslant\sum_{i\in\mathbb{Z}}\frac{2}{|R_{i}|}\int_{R_{i}}f(y_{1})g(y_{2})dy_{1}dy_{2}\chi_{I_{i}}(x)\] \[=\frac{2}{N}\sum_{k=1}^{3}\sum_{i\in A_{k}}\int_{R_{i}}f(y_{1})g(y_{2})dy_{1}dy_{2}\chi_{I_{i}}(x)\] \[\leqslant\frac{2}{N}\sum_{k=1}^{3}\sum_{i\in A_{k}}\sum_{j\in\gamma_{i}}\int_{J_{1,j}}f(y_{1})dy_{1}\int_{J_{2,j}}g(y_{2})dy_{2}\,\chi_{I_{i}}(x).\] This estimate along with Holder's inequality yields \[\int_{\mathbb{R}}|\mathcal{M}_{\mathcal{R}_{1,N}}(f,g)(x)|^{p_{3}}dx\] \[\leqslant \left(\frac{2}{N}\right)^{p_{3}}\sum_{k=1}^{3}\sum_{i\in A_{k}}\left(\sum_{j\in\gamma_{i}}\int_{J_{1,j}}f(y_{1})dy_{1}\int_{J_{2,j}}g(y_{2})dy_{2}\right)^{p_{3}}\] \[\leqslant \left(\frac{2}{N}\right)^{p_{3}}\sum_{k=1}^{3}\sum_{i\in A_{k}}\left(\sum_{j\in\gamma_{i}}\int_{J_{1,j}}f(y_{1})^{p_{1}}dy_{1}\right)^{\frac{p_{3}}{p_{1}}}\left(\sum_{j\in\gamma_{i}}\int_{J_{2,j}}g(y_{2})^{p_{2}}dy_{2}\right)^{\frac{p_{3}}{p_{2}}}\] \[\leqslant \left(\frac{2}{N}\right)^{p_{3}}\sum_{k=1}^{3}\left(\sum_{i\in A_{k}}\sum_{j\in\gamma_{i}}\int_{J_{1,j}}f(y_{1})^{p_{1}}dy_{1}\right)^{\frac{p_{3}}{p_{1}}}\left(\sum_{i\in A_{k}}\sum_{j\in\gamma_{i}}\int_{J_{2,j}}g(y_{2})^{p_{2}}dy_{2}\right)^{\frac{p_{3}}{p_{2}}}\] \[\leqslant \left(\frac{2}{N}\right)^{p_{3}}\sum_{k=1}^{3}\left(\int_{\mathbb{R}}\left(\sum_{i\in A_{k}}\sum_{j\in\gamma_{i}}\chi_{J_{1,j}}(y_{1})\right)f(y_{1})^{p_{1}}dy_{1}\right)^{\frac{p_{3}}{p_{1}}}\left(\int_{\mathbb{R}}\left(\sum_{i\in A_{k}}\sum_{j\in\gamma_{i}}\chi_{J_{2,j}}(y_{2})\right)g(y_{2})^{p_{2}}dy_{2}\right)^{\frac{p_{3}}{p_{2}}}.\] Invoking the estimates from Lemma 6.1 we get that \[\int_{\mathbb{R}}|\mathcal{M}_{\mathcal{R}_{1,N}}(f,g)(x)|^{p_{3}}dx\] \[\lesssim \frac{1}{N^{p_{3}}}\left(N^{\frac{p_{3}}{p_{1}}}(N\log N)^{\frac{p_{3}}{p_{2}}}\|f\|_{p_{1}}^{p_{3}}\|g\|_{p_{2}}^{p_{3}}+(N\log N)^{\frac{p_{3}}{p_{1}}}N^{\frac{p_{3}}{p_{2}}}\|f\|_{p_{1}}^{p_{3}}\|g\|_{p_{2}}^{p_{3}}+N^{\frac{p_{3}}{p_{1}}+\frac{p_{3}}{p_{2}}}\|f\|_{p_{1}}^{p_{3}}\|g\|_{p_{2}}^{p_{3}}\right)\] \[\lesssim N^{1-p_{3}}(\log N)^{\frac{p_{3}}{\min\{p_{1},p_{2}\}}}\|f\|_{p_{1}}^{p_{3}}\|g\|_{p_{2}}^{p_{3}}.\] This completes the proof of the Banach case.

**Proofs of non-Banach case:** This part follows easily using interpolation for bilinear operators. First, observe that for any rectangle \(R\in\mathcal{R}_{1,N}\), we can dominate the bilinear average over \(R\) by a bilinear average over a square containing \(R\) with side-length comparable to \(N\). This gives us \[\mathcal{M}_{\mathcal{R}_{1,N}}(f,g)(x)\leqslant NMf(x)Mg(x).\] Holder's inequality along with the weak-type \((1,1)\) bound for the Hardy-Littlewood maximal operator \(M\) yields the end-point result \(\|\mathcal{M}_{\mathcal{R}_{1,N}}\|_{L^{1}\times L^{1}\to L^{\frac{1}{2},\infty}}\lesssim N\). Finally, we obtain \((p_{1},p_{2},p_{3})\) boundedness of \(\mathcal{M}_{\mathcal{R}_{1,N}}\) in the non-Banach range \((\frac{1}{2}<p_{3}<1)\) by interpolating between the points \((1,\infty,1),(1,1,\frac{1}{2})\) and \((\infty,1,1)\). Note that we get the constant bounded by \(N^{\frac{1}{p_{3}}-1}.\) This completes the proof of Theorem 2.4 modulo Lemma 6.1, whose proof is given in the next section.

### Proof of Key Lemma 6.1

By the definition of \(h_{l,k}\), we know that it is constant on each of the intervals \(J_{l,j}\). Therefore, it is enough to show that \[h_{l,k}(0)\leqslant CN\log N\] for sufficiently large \(N\), where \(C\) is a constant independent of the choice of the rectangles \(R_{i}\) (with the corresponding bound \(CN\) in the cases (2) and (3)).
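Before carrying out the formal count, the following quick Monte-Carlo simulation (ours; the sampling scheme is only illustrative) makes the bound plausible: it counts the unit squares \(Q_{j}\) meeting \(\Gamma_{1}\cap R_{i}\) for the steepest \(1\times N\) rectangle through \((i,i)\) that just reaches the vertical strip \(\Gamma_{1}=[-\frac{1}{2},\frac{1}{2})\times\mathbb{R}\), i.e. with \(|e_{i,1}|\approx i/N\), and compares it, up to constants, with \(N/(i-2)\).

```python
import numpy as np

# Monte-Carlo illustration (ours) of the count behind Lemma 6.1: for the
# steepest 1 x N rectangle R_i through (i, i) reaching the strip
# Gamma_1 = [-1/2, 1/2) x R (long-side direction e with |e_1| = i/N), the
# number of unit squares Q_j meeting Gamma_1 and R_i tracks N/(i-2).
rng = np.random.default_rng(1)
N = 200
for i in [5, 20, 80, 160]:
    e = np.array([-i / N, -np.sqrt(1.0 - (i / N) ** 2)])   # unit direction
    e_perp = np.array([-e[1], e[0]])
    s = rng.uniform(0, N, 500000)          # along the long side, from (i, i)
    t = rng.uniform(-0.5, 0.5, 500000)     # across the short side
    pts = np.array([i, i], float)[:, None] + np.outer(e, s) + np.outer(e_perp, t)
    in_strip = np.abs(pts[0]) <= 0.5
    squares = {(int(a), int(b)) for a, b in np.floor(pts[:, in_strip].T)}
    print(i, len(squares), round(N / (i - 2), 1))   # observed vs. the bound
```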
Let \(\Gamma_{1}=[-\frac{1}{2},\frac{1}{2})\times\mathbb{R}\) and \(\Gamma_{2}=\mathbb{R}\times[-\frac{1}{2},\frac{1}{2})\). Then, \[\sum_{j\in\gamma_{i}}\chi_{J_{1,j}}(0)=\operatorname{card}\left(\left\{j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{1}\cap R_{i})\neq\emptyset\right\}\right)\] and \[\sum_{j\in\gamma_{i}}\chi_{J_{2,j}}(0)=\operatorname{card}\left(\left\{j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{2}\cap R_{i})\neq\emptyset\right\}\right).\] Note that we need to consider only rectangles \(R_{i}\) which intersect either \(\Gamma_{1}\) or \(\Gamma_{2}\), and the maximum side-length of the rectangles is \(N\). Thus, for the point \(y=(0,0)\), we only need to consider \(i\in[-2N,2N]\). By symmetry and the definition of \(h_{l,k}\), we have that \[h_{l,k}(0)\leqslant 2\sum_{\begin{subarray}{c}i\in A_{k}\\ 0\leqslant i\leqslant N\end{subarray}}\operatorname{card}\left(\left\{j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\right\}\right).\] Suppose \(R_{i}\cap\Gamma_{l}\neq\emptyset\); then the length of the projection of \(R_{i}\) on the \(y_{l}\)-axis is greater than \(i-1\). On the other hand, the length of the projection is always less than \(N|e_{i,l}|+1\). Therefore, \[i-1\leqslant N|e_{i,l}|+1,\quad\text{i.e.}\quad|e_{i,l}|^{-1}\leqslant\frac{N}{i-2}. \tag{6.3}\] Let \(L\) be a line parallel to \(e_{i}\); then the length of \(\Gamma_{l}\cap L\) is \(|e_{i,l}|^{-1}\). Since the width of \(R_{i}\) is \(1\), \(\Gamma_{l}\cap R_{i}\) is covered by at most \([|e_{i,l}|^{-1}]+1\) segments of length \(1\). Thus, we get the following estimate from (6.3): \[\text{card}\left(\big{\{}j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\big{\}}\right)\leqslant[|e_{i,l}|^{-1}]+2\leqslant\frac{N}{i-2}+2\lesssim\frac{N}{i-2}.\] Also, note that if \(i\in A_{l}\), \(|e_{i,l}|^{-1}\leqslant\sqrt{2}\) for \(l=1,2\), and for \(i\in A_{3}\), \(|e_{i,1}|^{-1},|e_{i,2}|^{-1}\leqslant 2\). Therefore, for \(l=1,2\) we have \[h_{l,l}(0) \leqslant 2\sum_{\begin{subarray}{c}i\in A_{l}\\ 0\leqslant i\leqslant N\end{subarray}}\text{card}\left(\big{\{}j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\big{\}}\right)\] \[\leqslant 2\sum_{i=0}^{N}3\lesssim N,\] and \[h_{l,3}(0) \leqslant 2\sum_{\begin{subarray}{c}i\in A_{3}\\ 0\leqslant i\leqslant N\end{subarray}}\text{card}\left(\big{\{}j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\big{\}}\right)\] \[\leqslant 2\sum_{i=0}^{N}4\lesssim N.\] When \(l\neq k\) and \(k\neq 3\), we obtain \[h_{l,k}(0)\] \[\leqslant 2\sum_{\begin{subarray}{c}i\in A_{k}\\ 0\leqslant i\leqslant N\end{subarray}}\text{card}\left(\big{\{}j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\big{\}}\right)\] \[\leqslant 2\left(\sum_{i=0}^{2}\text{card}\left(\big{\{}j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\big{\}}\right)+\sum_{\begin{subarray}{c}i\in A_{k}\\ 3\leqslant i\leqslant N\end{subarray}}\text{card}\left(\big{\{}j\in\mathbb{Z}^{2}|Q_{j}\cap(\Gamma_{l}\cap R_{i})\neq\emptyset\big{\}}\right)\right)\] \[\leqslant 2[3(N+1)+N\log N]\] \[\lesssim N\log N.\] This completes the proof.

## 7. Proof of Theorem 2.5: Bilinear Kakeya maximal function

The proof of Theorem 2.5 in the Banach case part (a) and in the non-Banach case can be completed using the corresponding arguments from the proof of Theorem 2.4. Therefore, we only need to prove the Banach case part (b). Let us record some of the estimates from these cases, as we will require them to prove the Banach case part (b).
We have the estimate \[\|{\mathcal{M}}_{{\mathcal{R}}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\lesssim A,\] for triplets \((p_{1},p_{2},p_{3})\) in each of the cases below. * \((p_{1},p_{2},p_{3})=(\frac{3s}{s+2},\frac{3s^{\prime}}{s^{\prime}+2},\frac{3}{4})\) with \(A=N^{\frac{1}{3}}\). * \((p_{1},p_{2},p_{3})=(s,\frac{3s^{\prime}}{s^{\prime}+2},\frac{3s}{3s+1})\) with \(A=N^{\frac{1}{3s}}\). * \((p_{1},p_{2},p_{3})=(\frac{3s}{s+2},s^{\prime},\frac{3s^{\prime}}{3s^{\prime}+1})\) with \(A=N^{\frac{1}{3s^{\prime}}}\).

Cordoba [7] and Stromberg [27] used an interpolation idea to deduce the logarithmic bounds in the \(L^{2}-\)estimate for the linear Kakeya maximal function. We develop an appropriate bilinear analogue of the same to prove our result. We state the interpolation result as a lemma. This may be of independent interest. The proof of the Banach case part (b) follows immediately by using this interpolation lemma with the \(L^{p}-\)estimates mentioned above.

**Lemma 7.1**.: _Let \(1<s<\infty\). Suppose \(T\) is a bi-sublinear operator satisfying_ \[\|T\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3},\infty}}\lesssim A,\] _for the following Holder indices \((p_{1},p_{2},p_{3})\):_ 1. \((\infty,\infty,\infty)\)_,_ \((\infty,s^{\prime},s^{\prime})\)_,_ \((s,\infty,s)\)_,_ \((s,s^{\prime},1)\)_,_ \((\infty,\frac{3s^{\prime}}{s^{\prime}+2},\frac{3s^{\prime}}{s^{\prime}+2})\)_, and_ \((\frac{3s}{s+2},\infty,\frac{3s}{s+2})\) _with_ \(A=1\)_._ 2. \((\frac{3s}{s+2},\frac{3s^{\prime}}{s^{\prime}+2},\frac{3}{4})\) _with_ \(A=N^{\frac{1}{3}}\)_._ 3. \((s,\frac{3s^{\prime}}{s^{\prime}+2},\frac{3s}{3s+1})\) _with_ \(A=N^{\frac{1}{3s}}\)_._ 4. \((\frac{3s}{s+2},s^{\prime},\frac{3s^{\prime}}{3s^{\prime}+1})\) _with_ \(A=N^{\frac{1}{3s^{\prime}}}\)_._

_Then, we have the following strong type estimate,_ \[\|T\|_{L^{s}\times L^{s^{\prime}}\to L^{1}}\lesssim\log N.\]

**Proof of Lemma 7.1:** We describe here the proof only for the case of \((2,2,1)\) boundedness, which corresponds to \(s=2\). The case of other values of \(s\) may be completed as indicated at the end of this proof. Let \(f,g\in L^{2}(\mathbb{R})\) and \(\lambda>0\). Without loss of generality we assume \(\|f\|_{2}=\|g\|_{2}=1\). Decompose \(f\) as \[f=f_{1}+f_{2}+f_{3},\mbox{ where}\] \[f_{1}=f\chi_{|f(x)|<\frac{\lambda^{\frac{1}{2}}}{4}},\ \ f_{2}=f\chi_{\frac{\lambda^{\frac{1}{2}}}{4}\leqslant|f(x)|\leq N\lambda^{\frac{1}{2}}},\ \ f_{3}=f\chi_{|f(x)|\geq N\lambda^{\frac{1}{2}}}.\] Similarly, we write \(g=g_{1}+g_{2}+g_{3}\).
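The computation below opens with the layer-cake formula \(\|T(f,g)\|_{1}=\int_{0}^{\infty}|\{x:|T(f,g)(x)|>\lambda\}|\,d\lambda\). As a quick numerical sanity check of this identity (ours, on a generic discretized function):

```python
import numpy as np

# Layer-cake sanity check (ours): ||F||_1 = int_0^infty |{ x : |F(x)| > s }| ds
# on a discretized example; the estimate below opens with this identity.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
F = np.exp(-np.abs(x)) * np.cos(3 * x)
lhs = np.sum(np.abs(F)) * dx
levels = np.linspace(0.0, np.abs(F).max(), 2000)
ds = levels[1] - levels[0]
rhs = sum(np.sum(np.abs(F) > s) * dx for s in levels) * ds
print(lhs, rhs)   # the two values agree closely
```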
Consider \[\|T(f,g)\|_{1}=\int_{0}^{\infty}|\{x\in\mathbb{R}:|T(f,g)(x)|>\lambda\}|\;d\lambda\leqslant\sum_{i,j=1}^{3}\int_{0}^{\infty}\left|\left\{x\in\mathbb{R}:|T(f_{i},g_{j})(x)|>\frac{\lambda}{9}\right\}\right|\;d\lambda.\] We estimate the representative terms; the remaining ones are either easier or follow by symmetric arguments using the other weak-type hypotheses in Lemma 7.1. The main term, corresponding to \(i=j=2\), is estimated using the \(L^{2}(\mathbb{R})\times L^{2}(\mathbb{R})\to L^{1,\infty}(\mathbb{R})\) bound of \(T\) together with the Cauchy-Schwarz inequality and Fubini's theorem: \[\int_{0}^{\infty}\left|\left\{x\in\mathbb{R}:|T(f_{2},g_{2})(x)|>\frac{\lambda}{9}\right\}\right|\;d\lambda \lesssim\int_{0}^{\infty}\frac{1}{\lambda}\|f_{2}\|_{2}\|g_{2}\|_{2}\;d\lambda\] \[\leqslant\left(\int_{\mathbb{R}}\int_{\lambda=\frac{|f(y_{1})|^{2}}{N^{2}}}^{16|f(y_{1})|^{2}}\frac{d\lambda}{\lambda}|f(y_{1})|^{2}\ dy_{1}\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}}\int_{\lambda=\frac{|g(y_{2})|^{2}}{N^{2}}}^{16|g(y_{2})|^{2}}\frac{d\lambda}{\lambda}|g(y_{2})|^{2}\ dy_{2}\right)^{\frac{1}{2}}\] \[\lesssim\log N.\] The term corresponding to \(i=1,j=3\) is estimated by using the \(L^{\infty}(\mathbb{R})\times L^{\frac{3}{2}}(\mathbb{R})\to L^{\frac{3}{2}}(\mathbb{R})-\)boundedness of \(T\) as follows: \[\int_{0}^{\infty}\left|\left\{x\in\mathbb{R}:|T(f_{1},g_{3})(x)|>\frac{\lambda}{9}\right\}\right|\ d\lambda \lesssim\int_{0}^{\infty}\frac{1}{\lambda^{\frac{3}{2}}}\|f_{1}\|_{\infty}^{\frac{3}{2}}\|g_{3}\|_{\frac{3}{2}}^{\frac{3}{2}}\ d\lambda\] \[\lesssim\int_{0}^{\infty}\frac{1}{\lambda^{\frac{3}{4}}}\int_{|g|\geqslant N\lambda^{\frac{1}{2}}}|g(y_{2})|^{\frac{3}{2}}\ dy_{2}d\lambda\] \[\lesssim \int_{\mathbb{R}}\int_{\lambda=0}^{\frac{|g(y_{2})|^{2}}{N^{2}}}\frac{d\lambda}{\lambda^{\frac{3}{4}}}|g(y_{2})|^{\frac{3}{2}}\ dy_{2}\] \[\lesssim 1.\] For the term with \(i=2,j=3\), the \(L^{2}(\mathbb{R})\times L^{\frac{3}{2}}(\mathbb{R})\to L^{\frac{6}{7}}(\mathbb{R})\) boundedness of \(T\) implies that \[\int_{0}^{\infty}\left|\left\{x\in\mathbb{R}:|T(f_{2},g_{3})(x)|>\frac{\lambda}{9}\right\}\right|\ d\lambda\] \[\lesssim N^{\frac{1}{7}}\int_{0}^{\infty}\frac{1}{\lambda^{\frac{6}{7}}}\|f_{2}\|_{2}^{\frac{6}{7}}\|g_{3}\|_{\frac{3}{2}}^{\frac{6}{7}}\ d\lambda\] \[\leqslant N^{\frac{1}{7}}\int_{0}^{\infty}\left(\frac{1}{\lambda}\int_{\frac{\lambda^{\frac{1}{2}}}{4}\leq|f|\leqslant N\lambda^{\frac{1}{2}}}|f(y_{1})|^{2}\ dy_{1}\right)^{\frac{3}{7}}\left(\frac{1}{\lambda^{\frac{3}{4}}}\int_{|g|\geqslant N\lambda^{\frac{1}{2}}}|g(y_{2})|^{\frac{3}{2}}\ dy_{2}\right)^{\frac{4}{7}}d\lambda\] \[\lesssim N^{\frac{1}{7}}\left(\int_{\mathbb{R}}\int_{\lambda=\frac{|f(y_{1})|^{2}}{N^{2}}}^{16|f(y_{1})|^{2}}\frac{d\lambda}{\lambda}|f(y_{1})|^{2}\ dy_{1}\right)^{\frac{3}{7}}\left(\int_{\mathbb{R}}\int_{\lambda=0}^{\frac{|g(y_{2})|^{2}}{N^{2}}}\frac{d\lambda}{\lambda^{\frac{3}{4}}}|g(y_{2})|^{\frac{3}{2}}\ dy_{2}\right)^{\frac{4}{7}}\] \[\lesssim 1.\] Finally, the term with \(i=j=3\) is estimated using the \(L^{\frac{3}{2}}(\mathbb{R})\times L^{\frac{3}{2}}(\mathbb{R})\to L^{\frac{3}{4}}(\mathbb{R})\) bound of \(T\).
Indeed, by an application of the Cauchy-Schwarz inequality we have \[\int_{0}^{\infty}\left|\left\{x\in\mathbb{R}:|T(f_{3},g_{3})(x)|>\frac{\lambda}{9}\right\}\right|\ d\lambda\] \[\lesssim N^{\frac{1}{4}}\int_{0}^{\infty}\frac{1}{\lambda^{\frac{3}{4}}}\|f_{3}\|_{\frac{3}{2}}^{\frac{3}{4}}\|g_{3}\|_{\frac{3}{2}}^{\frac{3}{4}}\ d\lambda\] \[\lesssim N^{\frac{1}{4}}\left(\int_{0}^{\infty}\frac{1}{\lambda^{\frac{3}{4}}}\int_{|f|\geqslant N\lambda^{\frac{1}{2}}}|f(y_{1})|^{\frac{3}{2}}\ dy_{1}d\lambda\right)^{\frac{1}{2}}\left(\int_{0}^{\infty}\frac{1}{\lambda^{\frac{3}{4}}}\int_{|g|\geqslant N\lambda^{\frac{1}{2}}}|g(y_{2})|^{\frac{3}{2}}\ dy_{2}d\lambda\right)^{\frac{1}{2}}\] \[\lesssim N^{\frac{1}{4}}\left(\int_{\mathbb{R}}\int_{\lambda=0}^{\frac{|f(y_{1})|^{2}}{N^{2}}}\frac{d\lambda}{\lambda^{\frac{3}{4}}}|f(y_{1})|^{\frac{3}{2}}\ dy_{1}\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}}\int_{\lambda=0}^{\frac{|g(y_{2})|^{2}}{N^{2}}}\frac{d\lambda}{\lambda^{\frac{3}{4}}}|g(y_{2})|^{\frac{3}{2}}\ dy_{2}\right)^{\frac{1}{2}}\lesssim 1.\] This completes the proof for \(s=2\). The case \(s\neq 2\) follows similarly. In this case we need to run the proof with the following decomposition \(f=f_{1}+f_{2}+f_{3}\) and \(g=g_{1}+g_{2}+g_{3}\), where \[f_{1}=f\chi_{|f(x)|<\frac{\lambda^{\frac{1}{s}}}{4}},\ \ f_{2}=f\chi_{\frac{\lambda^{\frac{1}{s}}}{4}\leqslant|f(x)|\leqslant N^{\frac{1}{s-1}}\lambda^{\frac{1}{s}}},\ \ f_{3}=f\chi_{|f(x)|\geqslant N^{\frac{1}{s-1}}\lambda^{\frac{1}{s}}},\ \ \text{and}\] \[g_{1}=g\chi_{|g(x)|<\frac{\lambda^{\frac{1}{s^{\prime}}}}{4}},\ \ g_{2}=g\chi_{\frac{\lambda^{\frac{1}{s^{\prime}}}}{4}\leqslant|g(x)|\leqslant N^{s-1}\lambda^{\frac{1}{s^{\prime}}}},\ \ g_{3}=g\chi_{|g(x)|\geqslant N^{s-1}\lambda^{\frac{1}{s^{\prime}}}}.\]

## 8. Proof of Theorem 2.7: Vector-valued extension of bilinear Kakeya maximal function

To prove Theorem 2.7, we employ arguments similar to those used in [20] to obtain analogous vector-valued inequalities for the bilinear maximal function defined in (10.1). First, observe that if \(\mathcal{M}_{\mathcal{R}_{N}}\) is bounded from \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) into \(L^{p_{3}}(\mathbb{R})\) for a Holder related triplet \((p_{1},p_{2},p_{3})\) with \(1<p_{1},p_{2}\leqslant\infty\), then it admits the vector-valued extension \[\mathcal{M}_{\mathcal{R}_{N}}:L^{p_{1}}(l^{p_{1}})(\mathbb{R})\times L^{p_{2}}(l^{p_{2}})(\mathbb{R})\to L^{p_{3}}(l^{p_{3}})(\mathbb{R}) \tag{8.1}\] with operator norm the same as that of \(\mathcal{M}_{\mathcal{R}_{N}}\) in the scalar case. By using the estimates (6.1) and (6.2) and Holder's inequality we have that \[\mathcal{M}_{\mathcal{R}_{N}}:L^{p}(l^{s})(\mathbb{R})\times L^{\infty}(l^{\infty})(\mathbb{R})\to L^{p}(l^{s})(\mathbb{R}), \tag{8.2}\] and \[\mathcal{M}_{\mathcal{R}_{N}}:L^{\infty}(l^{\infty})(\mathbb{R})\times L^{p}(l^{s})(\mathbb{R})\to L^{p}(l^{s})(\mathbb{R}), \tag{8.3}\] where \(1<p<\infty\) and \(1<s\leqslant\infty\). First, we interpolate between the estimates (8.2) and (8.3) to obtain the following boundedness for \(1<p_{1},p_{2},p_{3}<\infty\) and \(1<r_{1},r_{2},r_{3}\leqslant\infty\): \[\mathcal{M}_{\mathcal{R}_{N}}:L^{p_{1}}(l^{r_{1}})(\mathbb{R})\times L^{p_{2}}(l^{r_{2}})(\mathbb{R})\to L^{p_{3}}(l^{r_{3}})(\mathbb{R}). \tag{8.4}\]
Now for \(\epsilon>0\), we interpolate between (8.4) and (8.1) (with \(p_{3}=\frac{1}{1+\epsilon}\) and \(1<p_{1},p_{2}\leqslant\infty\)) to get that \[\left\|\left(\sum_{j}|\mathcal{M}_{\mathcal{R}_{N}}\left(f_{j},g_{j}\right)|^{r_{3}}\right)^{\frac{1}{r_{3}}}\right\|_{L^{p_{3}}(\mathbb{R})}\lesssim N^{\epsilon}\left\|\left(\sum_{j}|f_{j}|^{r_{1}}\right)^{\frac{1}{r_{1}}}\right\|_{L^{p_{1}}(\mathbb{R})}\left\|\left(\sum_{j}|g_{j}|^{r_{2}}\right)^{\frac{1}{r_{2}}}\right\|_{L^{p_{2}}(\mathbb{R})},\] where \(1<p_{1},p_{2}\leqslant\infty\), \(1\leqslant p_{3}<\infty\) and \(1<r_{1},r_{2}\leqslant\infty\), \(1\leqslant r_{3}\leqslant\infty\). In particular, we interpolate between \(L^{\frac{2}{1+\epsilon}}(l^{\frac{2}{1+\epsilon}})(\mathbb{R})\times L^{\frac{2}{1+\epsilon}}(l^{\frac{2}{1+\epsilon}})(\mathbb{R})\to L^{\frac{1}{1+\epsilon}}(l^{\frac{1}{1+\epsilon}})(\mathbb{R})\) and \(L^{q_{1}}(l^{s_{1}})(\mathbb{R})\times L^{q_{2}}(l^{s_{2}})(\mathbb{R})\to L^{q_{3}}(l^{s_{3}})(\mathbb{R})\) with \(1<q_{1},q_{2},q_{3}<\infty\) and \(1<s_{1},s_{2},s_{3}\leqslant\infty\). Observe that such triplets \((q_{1},q_{2},q_{3})\) and \((s_{1},s_{2},s_{3})\) exist. Indeed, for \(\theta\in(0,1)\) we have \[\frac{1}{p_{1}}=\frac{\theta}{\frac{2}{1+\epsilon}}+\frac{1-\theta}{q_{1}}.\] Then \[\frac{1}{q_{1}}=\frac{1}{1-\theta}\left(\frac{1}{p_{1}}-\frac{\theta(1+\epsilon)}{2}\right).\] Note that we need to make sure that \[0<\frac{1}{1-\theta}\left(\frac{1}{p_{1}}-\frac{\theta(1+\epsilon)}{2}\right)<1,\] or equivalently, \[\frac{\theta(1+\epsilon)}{2}<\frac{1}{p_{1}}<1-\theta+\frac{\theta(1+\epsilon)}{2}.\] We can choose \(\theta\) so that the condition above is satisfied. The choice of \(q_{2}\), \(s_{1}\) and \(s_{2}\) can be made similarly.

## 9. Examples for sharpness of constants

In this section, we provide examples to establish the sharpness of the dependence of the norm of \(\mathcal{M}_{\mathcal{R}_{N}}\) on the parameter \(N\) in Theorem 2.5.

**Proposition 9.1**.: _Let \((p_{1},p_{2},p_{3})\) be such that \(\frac{1}{p_{3}}=\frac{1}{p_{1}}+\frac{1}{p_{2}}.\) Then the following lower bounds on the operator norm \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\) hold._ 1. \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\gtrsim N^{\frac{1}{p_{3}}-1},\text{ for }p_{3}<1.\)__ 2. \(\|\mathcal{M}_{\mathcal{R}_{N}}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{1}}\gtrsim\log N.\)__

Proof.: Let \(f_{N}(x)=x^{-\frac{2}{p_{1}}}\chi_{\{x:3\leqslant x\leqslant N\}}(x)\) and \(g_{N}(x)=x^{-\frac{2}{p_{2}}}\chi_{\{x:3\leqslant x\leqslant N\}}(x)\). Note that \(\|f_{N}\|_{p_{1}}=\|g_{N}\|_{p_{2}}\simeq C\). Let \(6<x<N-1\) and consider the rectangle containing \((x,x)\), of dimensions \((x-3)\times\frac{x-3}{N}\), in the direction of the unit vector \((\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})\) with \((4,4)\) as the mid-point of the short side.
Then \[\mathcal{M}_{\mathcal{R}_{N}}(f_{N},g_{N})(x) \geqslant \frac{N}{(x-3)^{2}}\int_{4+\frac{x-3}{2\sqrt{2}N}}^{x+1-\frac{x-3}{2\sqrt{2}N}}\int_{y-\frac{x-3}{\sqrt{2}N}}^{y}f_{N}(y)g_{N}(z)dzdy\] \[\geqslant \frac{N}{(x-3)^{2}}\int_{4+\frac{x-3}{2\sqrt{2}N}}^{x+1-\frac{x-3}{2\sqrt{2}N}}\int_{y-\frac{x-3}{\sqrt{2}N}}^{y}\frac{1}{y^{\frac{2}{p_{1}}}z^{\frac{2}{p_{2}}}}dzdy\] \[\geqslant \frac{N}{(x-3)^{2}}\int_{4+\frac{x-3}{2\sqrt{2}N}}^{x+1-\frac{x-3}{2\sqrt{2}N}}\frac{x-3}{\sqrt{2}N}\frac{1}{y^{\frac{2}{p_{3}}}}dy\] \[= \frac{1}{\sqrt{2}(x-3)(1-\frac{2}{p_{3}})}\left(\left(x+1-\frac{x-3}{2\sqrt{2}N}\right)^{1-\frac{2}{p_{3}}}-\left(4+\frac{x-3}{2\sqrt{2}N}\right)^{1-\frac{2}{p_{3}}}\right)\] \[\gtrsim \frac{1}{x}.\] Therefore, \[\|\mathcal{M}_{\mathcal{R}_{N}}(f_{N},g_{N})\|_{p_{3}}^{p_{3}}\geqslant c\int_{6}^{N-1}\frac{dx}{x^{p_{3}}}\gtrsim\begin{cases}N^{1-p_{3}},&p_{3}<1\\ \log N,&p_{3}=1\end{cases}.\] This completes the proof.

### Remarks on (linear) Kakeya maximal operator acting on product type functions

In this section, we construct examples to show that the norm dependence of the linear Kakeya maximal functions in (2.2) and (2.3) on the parameter \(N\) is sharp even when we restrict the class of functions to the family of functions of product type. More precisely, we have the following.

**Theorem 9.2**.: _The following lower bounds hold for the operators \(M_{\mathcal{R}_{1,N}}\) and \(M_{\mathcal{R}_{N}}\) acting on product type functions._ 1. _There exists a function of the form_ \(f(x,y)=f_{1}(x)f_{2}(y)\) _such that_ \(\|M_{\mathcal{R}_{1,N}}f\|_{2}\gtrsim(\log N)^{\frac{1}{2}}\|f\|_{2}.\)__ 2. _There exists a function of the form_ \(f(x,y)=f_{1}(x)f_{2}(y)\) _such that_ \(\|M_{\mathcal{R}_{N}}f\|_{2}\gtrsim\log N\|f\|_{2}.\)__

Proof.: Consider the product type function \(f_{N}(y,z)=\frac{1}{(yz)^{\frac{1}{2}}}\chi_{1\leqslant y\leqslant N}(y)\chi_{1\leqslant z\leqslant N}(z)\). Note that \(\|f_{N}\|_{2}=\log N\). Let \(R_{k}\) be the rectangle of dimension \(1\times N\) parallel to the line \(z=\frac{k}{N}y\) and lying below the line \(z=\frac{k}{N}y\) with \(O=(0,0)\) and \(P_{k}=\left(\frac{N^{2}}{(N^{2}+k^{2})^{\frac{1}{2}}},\frac{Nk}{(N^{2}+k^{2})^{\frac{1}{2}}}\right)\) as its two vertices (see Figure 1).

Figure 1. The blue rectangle denotes the rectangle \(R_{k}\) with vertices \(O\) and \(P_{k}\) on the line \(l_{k}:z=\frac{k}{N}y\).

For \(x=(x_{1},x_{2})\in[0,N]^{2}\) satisfying \(\frac{k-1}{N}x_{1}\leqslant x_{2}<\frac{k}{N}x_{1},\ k=2,3,...,N\) and \(|x|\leqslant N\), we have \[M_{\mathcal{R}_{1,N}}(f_{N})(x)\geqslant\frac{1}{N}\int_{R_{k}}f_{N}(y,z)dydz\]
\[\geqslant\frac{1}{N}\int_{\frac{N+(N^{2}+k^{2})^{\frac{1}{2}}}{k}}^{\frac{N^{2}}{(N^{2}+k^{2})^{\frac{1}{2}}}}\int_{\frac{k}{N}y-\frac{(N^{2}+k^{2})^{\frac{1}{2}}}{N}}^{\frac{k}{N}y}\frac{N^{\frac{1}{2}}}{k^{\frac{1}{2}}y}dzdy\] \[\geqslant\frac{1}{(Nk)^{\frac{1}{2}}}\frac{(N^{2}+k^{2})^{\frac{1}{2}}}{N}\int_{\frac{2(N^{2}+k^{2})^{\frac{1}{2}}}{k}}^{\frac{N^{2}}{(N^{2}+k^{2})^{\frac{1}{2}}}}\frac{1}{y}dy\] \[\geqslant\frac{1}{(Nk)^{\frac{1}{2}}}\left[\log\left(\frac{N^{2}}{(N^{2}+k^{2})^{\frac{1}{2}}}\right)-\log\left(\frac{2(N^{2}+k^{2})^{\frac{1}{2}}}{k}\right)\right]\] \[=\frac{1}{(Nk)^{\frac{1}{2}}}\log\left(\frac{N^{2}k}{2(N^{2}+k^{2})}\right)\] \[\geqslant\frac{\log\frac{k}{4}}{(Nk)^{\frac{1}{2}}}.\] Now, \[\|M_{\mathcal{R}_{1,N}}(f_{N})\|_{2}^{2} \geqslant\int_{0}^{N}\int_{0}^{\pi/2}|M_{\mathcal{R}_{1,N}}(f_{N})(re^{i\theta})|^{2}rdrd\theta\] \[\geqslant\int_{0}^{N}\sum_{k=2}^{N}\int_{\theta=\arctan(\frac{k-1}{N})}^{\theta=\arctan(\frac{k}{N})}\left(\frac{\log\frac{k}{4}}{(Nk)^{\frac{1}{2}}}\right)^{2}rdrd\theta\] \[\gtrsim\sum_{k=2}^{N}\frac{N(\log\frac{k}{4})^{2}}{2k}\left(\arctan\left(\frac{k}{N}\right)-\arctan\left(\frac{k-1}{N}\right)\right)\] \[\gtrsim(\log N)^{3}.\] Next, we take \(x=(x_{1},x_{2})\in[1,N]^{2}\) such that \(4\leqslant|x|\leqslant N\). For each \(x\), we consider the rectangle \(R_{x}\) containing \(x\) of dimensions \(\frac{|x|-2}{N}\times(|x|-2)\) with one of its shorter sides touching the circle centered at the origin and of radius \(2\), and the longer side parallel to the line \(z=\frac{x_{2}}{x_{1}}y\) and lying below the line \(z=\frac{x_{2}}{x_{1}}y\). We note that the equations of the lines for the longer sides of the rectangle are \(z=\frac{x_{2}}{x_{1}}y\) and \(z=\frac{x_{2}}{x_{1}}y-\frac{|x|(|x|-2)}{x_{1}N}\), and the equations of the lines for the shorter sides are \(z=-\frac{x_{1}}{x_{2}}y+\frac{2|x|}{x_{2}}\) and \(z=-\frac{x_{1}}{x_{2}}y+\frac{|x|^{2}}{x_{2}}\). Thus, \[M_{\mathcal{R}_{N}}(f_{N})(x) \geqslant\frac{N}{(|x|-2)^{2}}\int_{R_{x}}f_{N}(y,z)dydz\] \[\geqslant\frac{N}{(|x|-2)^{2}}\int_{\frac{2x_{1}N+x_{2}(|x|-2)}{|x|N}}^{x_{1}}\int_{\frac{x_{2}}{x_{1}}y-\frac{|x|(|x|-2)}{x_{1}N}}^{\frac{x_{2}}{x_{1}}y}\frac{x_{1}^{\frac{1}{2}}}{x_{2}^{\frac{1}{2}}y}dzdy\] \[\geqslant\frac{N}{(|x|-2)^{2}}\frac{|x|(|x|-2)}{(x_{1}x_{2})^{\frac{1}{2}}N}\int_{\frac{2x_{1}N+x_{2}(|x|-2)}{|x|N}}^{x_{1}}\frac{1}{y}dy\] \[\geqslant\frac{1}{(x_{1}x_{2})^{\frac{1}{2}}}\left[\log\left(x_{1}\right)-\log\left(\frac{2x_{1}N+x_{2}(|x|-2)}{|x|N}\right)\right]\] \[=\frac{1}{(x_{1}x_{2})^{\frac{1}{2}}}\log\left(\frac{x_{1}|x|N}{2x_{1}N+x_{2}(|x|-2)}\right)\] \[\geqslant\frac{\log\frac{|x|}{4}}{(x_{1}x_{2})^{\frac{1}{2}}}.\] Therefore we have, \[\|M_{\mathcal{R}_{N}}(f_{N})\|_{2}^{2} \geqslant\int_{[1,N]^{2}}\frac{\left(\log\frac{|x|}{4}\right)^{2}}{x_{1}x_{2}}\;dx\] \[\geqslant\int_{2}^{N}\sum_{k=2}^{N}\int_{\theta=\arctan(\frac{k-1}{N})}^{\theta=\arctan(\frac{k}{N})}\frac{N\left(\log\frac{r}{4}\right)^{2}}{kr^{2}}rdrd\theta\] \[\geqslant\sum_{k=2}^{N}\left(\arctan\left(\frac{k}{N}\right)-\arctan\left(\frac{k-1}{N}\right)\right)\frac{N}{k}\int_{2}^{N}\frac{(\log\frac{r}{4})^{2}}{r}dr\] \[\gtrsim(\log N)^{4}.\]

## 10. Further discussions

In this section we initiate a discussion about connections of the bilinear Kakeya maximal function with other types of maximal functions in the bilinear setting. The aim of this discussion is to indicate some further questions that need to be investigated.
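Before turning to the directional variants, we record a quick Monte-Carlo check (ours; the sampling scheme is hypothetical and only illustrative) of the pointwise bound \(M_{\mathcal{R}_{1,N}}(f_{N})\gtrsim\log(k/4)/\sqrt{Nk}\) on \(R_{k}\) used in the proof of Theorem 9.2.

```python
import numpy as np

# Monte-Carlo check (ours) of the pointwise lower bound in Theorem 9.2:
# average f_N(y,z) = (yz)^{-1/2} 1_{[1,N]^2}(y,z) over the 1 x N rectangle
# R_k below the line z = (k/N) y, and compare with log(k/4)/sqrt(N k).
rng = np.random.default_rng(0)
N = 10 ** 4
for k in [50, 500, 5000]:
    e = np.array([N, k]) / np.hypot(N, k)      # long-side direction of R_k
    e_perp = np.array([-e[1], e[0]])
    s = rng.uniform(0, N, 500000)              # along the long side, from O
    t = rng.uniform(-1.0, 0.0, 500000)         # short side, below the line
    y, z = np.outer(e, s) + np.outer(e_perp, t)
    mask = (y >= 1) & (y <= N) & (z >= 1) & (z <= N)
    avg = np.sum(1.0 / np.sqrt(y[mask] * z[mask])) / len(s)  # (1/|R_k|)*integral
    print(k, avg, np.log(k / 4) / np.sqrt(N * k))
```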
## 10. Further discussions

In this section we initiate a discussion about connections of the bilinear Kakeya maximal function with other types of maximal functions in the bilinear setting. The aim of this discussion is to indicate some further questions that need to be investigated.

Let \(\Omega\) denote a set of vectors in \(\mathbb{R}^{2}\), and given a collection of rectangles \(\mathcal{R}\) we use the notation \(\mathcal{R}^{\Omega}\) to denote the collection of those rectangles in \(\mathcal{R}\) which have their longest side parallel to some \(\omega\in\Omega\). If not stated otherwise, the elements of \(\Omega\) will be unit vectors. In the linear case, when \(\mathfrak{F}\) is the collection of rectangles with longest side parallel to one of the vectors in \(\Omega\) and \(\operatorname{card}(\Omega)=N\), the operator \(M_{\mathfrak{F}}\) defined in (2.1) satisfies the bound \[\|M_{\mathfrak{F}}\|_{L^{2}(\mathbb{R}^{2})\to L^{2}(\mathbb{R}^{2})}\lesssim\log N.\] The above inequality was shown to hold by Stromberg [27] for uniformly distributed sets of directions; for arbitrary sets of directions it was resolved by Katz [12]. Moreover, for lacunary sets of directions, \(M_{\mathfrak{F}}\) was shown to be of weak type \((2,2)\) by Cordoba and Fefferman [8], and it was proved to be bounded on \(L^{p},\;1<p<\infty,\) in [22] using Fourier transform methods. We define the directional bilinear maximal operator \(\mathcal{M}_{\mathcal{R}^{\Omega}}\) by \[\mathcal{M}_{\mathcal{R}^{\Omega}}(f,g)(x)=\sup_{R\in\mathcal{R}^{\Omega}:(x,x)\in R}\frac{1}{|R|}\int_{R}|f(y_{1})||g(y_{2})|\;dy_{1}dy_{2}.\] Here we would like to refer the interested reader to Stromberg [27], Cordoba and Fefferman [8], Katz [12], and Nagel, Stein and Wainger [22] for some of the important results for the linear counterpart \[M_{\mathcal{R}^{\Omega}}f(x)=\sup_{R\in\mathcal{R}^{\Omega}:x\in R}\frac{1}{|R|}\int_{R}|f(y)|\;dy,\ x\in\mathbb{R}^{2}.\] Another important class of maximal functions in the bilinear theory, studied by Lacey [16], is defined as \[\mathcal{M}_{\alpha}(f,g)(x)=\sup_{t>0}\frac{1}{2t}\int_{-t}^{t}|f(x-s)g(x-\alpha s)|\;ds, \tag{10.1}\] where \(\alpha\in\mathbb{R}\). Observe that if \(\alpha=0,1,\) then \(L^{p}-\)estimates for \(\mathcal{M}_{\alpha}\) can be easily deduced from those for the Hardy-Littlewood maximal function. In the remaining cases, Lacey [16] proved that for \(\alpha\neq 0,1,\) the operator \(\mathcal{M}_{\alpha}\) maps \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\) into \(L^{p_{3}}(\mathbb{R})\) for \(\frac{2}{3}<p_{3}<\infty\) and \(\frac{1}{p_{3}}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\). It is well known that the maximal function \(\mathcal{M}_{\alpha}\) is intimately connected with the bilinear Hilbert transforms. Next, we observe that, with the use of the Lebesgue differentiation theorem, one can deduce that \[\mathcal{M}_{\alpha}(f,g)(x)\lesssim\mathcal{M}_{\mathcal{R}^{\Omega_{\alpha}}}(f,g)(x)\lesssim\mathcal{M}_{\alpha}(f,Mg)(x),\] where \(\Omega_{\alpha}=\{(1,\alpha)\}.\) This relation naturally gives rise to questions and possible methods to address issues related to the \(L^{p}-\)boundedness of both maximal functions.

**Remark 10.1**.: _We record a few observations highlighting the dependence on \(\alpha\) of the norm \(\|\mathcal{M}_{\alpha}\|_{L^{p_{1}}\times L^{p_{2}}\to L^{p_{3}}}\) when \((1,\alpha)\) is near the diagonal. It is evident from the example given in Proposition 9.1 that the operator \(\mathcal{M}_{\mathcal{R}^{\Omega_{1}}}\) fails to be bounded from \(L^{p}(\mathbb{R})\times L^{p^{\prime}}(\mathbb{R})\) into \(L^{1}(\mathbb{R})\).
A modification of the example, given below, allows us to show that \(\|\mathcal{M}_{\mathcal{R}^{\Omega_{\alpha_{N}}}}\|_{L^{p}\times L^{p^{\prime}}\to L^{1}},\) where \(\alpha_{N}=1-\frac{c}{N},\) grows logarithmically in \(N\). This shows that \(\|\mathcal{M}_{\alpha}\|_{L^{p}\times L^{p^{\prime}}\to L^{1}}\) is not uniformly bounded in a neighbourhood of \(\alpha=1\)._

**Observation:** Let \(\alpha_{N}=1-\frac{c}{N},\) where \(c\) is a small constant; then \(\|\mathcal{M}_{\mathcal{R}^{\Omega_{\alpha_{N}}}}\|_{L^{p}\times L^{p^{\prime}}\to L^{1}}\gtrsim\log N.\) The following example proves this assertion.

**Example 10.2**.: For \(N\in\mathbb{N}\) consider \(f_{N}(x)=x^{-\frac{1}{p}}\chi_{\{x:3\leqslant x\leqslant N\}}(x)\) and \(g_{N}(x)=x^{-\frac{1}{p^{\prime}}}\chi_{\{x:3\leqslant x\leqslant N\}}(x).\) Note that \(\|f_{N}\|_{p}^{p}\simeq\log N\) and \(\|g_{N}\|_{p^{\prime}}^{p^{\prime}}\simeq\log N\), so that \(\|f_{N}\|_{p}\|g_{N}\|_{p^{\prime}}\simeq\log N\). For \(2<x<N\), consider the rectangle \(R_{x}\) containing the point \((x,x)\), of dimensions \((x-1)\sqrt{1+\alpha_{N}^{2}}\times\frac{(x-1)\sqrt{1+\alpha_{N}^{2}}}{N}\), having its longest side in the direction of \(z=\alpha_{N}y\), with \((x,x)\) and \((1,1+\frac{c}{N}(x-1))\) as the two vertices on the longer side. Note that the long sides of \(R_{x}\) are given by \(z=\alpha_{N}y+\frac{c}{N}x-\frac{(x-1)\left(1+\alpha_{N}^{2}\right)}{N}\) and \(z=\alpha_{N}y+\frac{c}{N}x.\) Consider \[\mathcal{M}_{\mathcal{R}^{\Omega_{\alpha_{N}}}}(f_{N},g_{N})(x) \geqslant \frac{1}{|R_{x}|}\int_{1+\frac{N-c}{\sqrt{(N-c)^{2}+N^{2}}}}^{x}\int_{\alpha_{N}y+\frac{c}{N}x-\frac{(x-1)\left(1+\alpha_{N}^{2}\right)}{N}}^{\alpha_{N}y+\frac{c}{N}x}f_{N}(y)g_{N}(z)dzdy\] \[\geqslant \frac{1}{|R_{x}|}\int_{1+\frac{N-c}{\sqrt{(N-c)^{2}+N^{2}}}}^{x}\int_{\alpha_{N}y+\frac{c}{N}x-\frac{(x-1)\left(1+\alpha_{N}^{2}\right)}{N}}^{\alpha_{N}y+\frac{c}{N}x}\frac{1}{y^{\frac{1}{p}}(\alpha_{N}y+\frac{c}{N}x)^{\frac{1}{p^{\prime}}}}dzdy\] \[\geqslant \frac{1}{|R_{x}|}\int_{1+\frac{N-c}{\sqrt{(N-c)^{2}+N^{2}}}}^{x}\frac{(x-1)\left(1+\alpha_{N}^{2}\right)}{N}\frac{1}{\alpha_{N}y+\frac{c}{N}x}dy\] \[= \frac{1}{\alpha_{N}(x-1)}\int_{\alpha_{N}\left(1+\frac{N-c}{\sqrt{(N-c)^{2}+N^{2}}}\right)}^{\alpha_{N}x}\frac{1}{s+\frac{c}{N}x}ds\] \[= \frac{1}{\alpha_{N}(x-1)}\left[\log x-\log\left(\alpha_{N}\left(1+\frac{N-c}{\sqrt{(N-c)^{2}+N^{2}}}\right)+\frac{c}{N}x\right)\right]\] Since \(N\) is large and \(c\) is a fixed small constant, for \(2<x<N\) we see that \[\alpha_{N}\left(1+\frac{N-c}{\sqrt{(N-c)^{2}+N^{2}}}\right)+\frac{c}{N}x\simeq 1.\] Thus, we get that \[\mathcal{M}_{\mathcal{R}^{\Omega_{\alpha_{N}}}}(f_{N},g_{N})(x)\gtrsim\frac{\log x}{x-1}\simeq\frac{\log x}{x}.\] This implies that \[\|\mathcal{M}_{\mathcal{R}^{\Omega_{\alpha_{N}}}}(f_{N},g_{N})\|_{1}\gtrsim\int_{2}^{N}\frac{\log x}{x}\,dx=\frac{1}{2}\left((\log N)^{2}-(\log 2)^{2}\right)\gtrsim(\log N)^{2}.\] Combined with \(\|f_{N}\|_{p}\|g_{N}\|_{p^{\prime}}\simeq\log N\), this proves the assertion.

## Acknowledgement

Ankit Bhojak and Saurabh Shrivastava acknowledge the financial support from the Science and Engineering Research Board, Department of Science and Technology, Govt. of India, under the scheme Core Research Grant, file no. CRG/2021/000230. Surjeet Singh Choudhary is supported by CSIR (NET), file no. 09/1020(0182)/2019-EMR-I, for his Ph.D. fellowship.
2308.15186
A semiclassical model for charge transfer along ion chains in silicates
The transport of charge through the cation layers sandwiched between the tetrahedra-octahedra-tetrahedra layers has been observed in fossil tracks and in experiments in the layered silicate mica muscovite. A classical model for the propagation of anharmonic vibrations along the cation chains has been proposed based on first principles and empirical functions. In that model, several propagating entities have been found, such as kinks or crowdions and breathers, both with or without wings, the latter for specific velocities and energies. Crowdions are equivalent to moving interstitials and transport electric charge if the moving particle is an ion, but they also imply the movement of mass, which was not observed in the experiments. Breathers, being just vibrational entities, do not transport charge. In this work, we present a semiclassical model obtained by adding a quantum particle, an electron or a hole, to the previous model. We present the construction of the model based on the physics of the system. In particular, the strongly nonlinear vibronic interaction between the nuclei and the extra electron or hole is essential to explain the localized charge transport, which is not compatible with the adiabatic approximation. The formation of vibrational localized charge carriers breaks the lattice symmetry group in a similar fashion to the Jahn-Teller effect, providing a new stable dynamical state. We study the properties and the coherence of the model through numerical simulations from initial conditions obtained by tail analysis and other means. We observe that although the charge spreads from an initial localization in a lattice at equilibrium, it can be confined basically to a single particle when coupled to a chaotic quasiperiodic breather. This is consistent with the observation that the experiments imply that a population of charge is formed due to the decay of unstable potassium isotopes.
Juan F R Archilla, Jānis Bajārs, Yusuke Doi, Masayuki Kimura
2023-08-29T10:06:36Z
http://arxiv.org/abs/2308.15186v3
# A semiclassical model for charge transfer along ion chains in silicates

###### Abstract

The transport of charge through the cation layers sandwiched between the tetrahedra-octahedra-tetrahedra layers has been observed in fossil tracks and in experiments in the layered silicate mica muscovite. A classical model for the propagation of anharmonic vibrations along the cation chains has been proposed based on first principles and empirical functions. In that model, several propagating entities have been found, such as kinks or crowdions and breathers, both with or without wings, the latter for specific velocities and energies. Crowdions are equivalent to moving interstitials and transport electric charge if the moving particle is an ion, but they also imply the movement of mass, which was not observed in the experiments. Breathers, being just vibrational entities, do not transport charge. In this work, we present a semiclassical model obtained by adding a quantum particle, an electron or a hole, to the previous model. We present the construction of the model based on the physics of the system. In particular, the strongly nonlinear vibronic interaction between the nuclei and the extra electron or hole is essential to explain the localized charge transport, which is not compatible with the adiabatic approximation. The formation of vibrational localized charge carriers breaks the lattice symmetry group in a similar fashion to the Jahn-Teller effect, providing a new stable dynamical state. We study the properties and the coherence of the model through numerical simulations from initial conditions obtained by tail analysis and other means. We observe that although the charge spreads from an initial localization in a lattice at equilibrium, it can be confined basically to a single particle when coupled to a chaotic quasiperiodic breather. This is consistent with the observation that the experiments imply that a population of charge is formed due to the decay of unstable potassium isotopes.

## 1 Introduction

Tracks in mica muscovite were observed thanks to the special properties of the material: it can be exfoliated into very thin sheets, which are also semitransparent and therefore allow for direct observation. Some of those dark tracks were due to positively charged particles such as positrons, protons, or antipions [1, 2, 3]. Many other tracks ran along the potassium hexagonal lattice layer and were therefore attributed to quasi-one-dimensional lattice excitations called quodons [4, 5]. These were observed in an experiment where alpha particles were sent onto one side of a monocrystal, and the ejection of an atom from the other side, along the close-packed directions of the lattice, was subsequently observed [6]. Fossil tracks were most probably produced by the recoil of potassium ions after beta decay, which is 99% electron emission, leaving behind a positive charge. Therefore, it was deduced that quodons could transport electric charge [7, 8], which opened the possibility of the experimental measurement of electric current. This was achieved by bombarding a mica monocrystal with alpha particles, which were expected to produce a large number of nonlinear excitations, and measuring the current in the absence of an electric field, a phenomenon called _hyperconductivity_. There was an initial peak of current that after some seconds would diminish to the current transported by the flux of alpha particles [9].
This was interpreted as the nonlinear excitations trapping the accumulated electric charge left behind by electron beta emission. When this reservoir was exhausted, the only charge available was the one provided by the alpha particles. More experiments were done with other silicates [10] and with other materials, as a test was developed to identify hyperconductivity and to separate quodon currents from Ohmic currents in semiconductors or conductors [11, 12]. The authors recommend a recent review on the subject [13].

A classical model for a cation chain in muscovite was developed based on first principles and empirical potentials. Interestingly, crowdions or lattice kinks were found with energies of 26 eV, which is below the energy provided by the nucleus recoil after beta emission and above the energy needed to eject an atom, and is therefore consistent with the mica ejection experiment [14, 15]. Kinks with other energies were also found, but with wings; these are also called nanopterons [16, 17]. Interestingly, a crowdion is a moving interstitial, which within an ionic crystal implies the movement of electric charge, making it a candidate for hyperconductivity. There are, however, observations of primary fossil tracks which scatter into many other fainter ones that should have much smaller energies, implying that less energetic nonlinear excitations, such as breathers, are also of interest. Breathers are nonlinear localized solutions that also carry an internal vibration [18]. They are well described mathematically [19], and there are methods to construct numerically exact ones [20]. They can also appear in systems with long-range interaction [21], such as alpha-helix proteins [22].

Breathers, also known as _intrinsic localized modes_ (ILMs), were found in 3D in Si [23]. The theory of ILMs in 3D molecular dynamics was developed in Ref. [24], where it was found that they appear after X-ray recoil in molecular dynamics of ionic crystals but also in metals, such as Ni, Nb, and Fe. It was demonstrated that ILM frequencies can lie within gaps in the phonon band or above it. The ILMs were highly mobile, with energies of the order of 1 eV. ILMs were later found in other bcc metals, such as V and W [25], in fcc crystals, such as Cu [26, 27], and in hcp Be [28]. ILMs were also found in covalent crystals with the diamond structure, such as the insulator C and the semiconductors Si and Ge [26, 27]. They were also constructed in graphene, both with classical [29] and ab initio molecular dynamics [30]. To summarize, it has been demonstrated that breathers or ILMs can appear in many materials with different electric behavior and with many different crystal structures.

The spectral theory of breathers in the moving frame was developed in Ref. [31]. In the same work, it was applied to the muscovite model, and traveling exact breathers with small energy were found. A related phenomenological model for muscovite was also used, with the property that it was very easy to obtain traveling breathers in two dimensions [32]. The theory of exact breathers in the moving frame was extended to two and more dimensions using that model as an example [33]. The theory was also extended to a variation of the latter model with the addition of a quantum particle. Numerical methods that conserve the charge probability were developed to deal with the difference between the time scales of the charge and the atoms [34]. Their spectral properties were deduced and described in Ref. [35].
In this paper, we construct a semiclassical model for the specific system of potassium chains in mica muscovite, built from first principles and empirical potentials in Ref. [15], by adding a quantum particle. In many aspects, the model is similar to the phenomenological model used in Refs. [34, 35], but it is more complicated, as the diagonal terms of the quantum charge Hamiltonian are not constant but correspond to the interaction with the other electric charges in the crystal. The vibronic interaction between the nuclei and the extra electron or hole is strongly nonlinear, with the consequence that the tunneling of an electron or hole between nearest-neighbor ions is only probable when the nuclei are close enough, and the transition probability increases strongly as they approach. The consequence is the formation of quodons, dynamical states that transport electric charge in a localized manner, breaking the lattice discrete translation invariance, as happens with the spontaneous symmetry breaking in the Jahn-Teller or pseudo Jahn-Teller effect [36, 37]. Certainly, the space of quodons has to keep the lattice translation invariance, but given the low probability for the formation of quodons, their population at any given time should also break the translational invariance.

The paper is structured as follows: after the introduction, the model is described in Section 2, including the hole or electron potential; the Hamiltonian and dynamical equations are then obtained in Section 3, and linearized in Section 4. Section 5 presents the results of numerical integration for different initial variables, such as localized traveling and stationary trial solutions, and a localized charge in a system at equilibrium; a breather rebounding off a charge and a kink with an extra charge are also observed, and finally, a charge trapped by a chaotic quasiperiodic breather. The main part of the article finishes with the conclusions. There are also two appendices: Appendix A describes the transformation of the semiclassical system into a real canonical Hamiltonian one, and Appendix B describes methods for numerical integration that preserve the charge probability at each integration step.

## 2 Model

We propose a tight-binding model for a positive charge, a hole, or an electron, which we will call a charge, with \(Q=1\) for the hole and \(Q=-1\) for the electron. The model is an extension of the model already used for muscovite by Archilla et al. [14, 15, 17, 38]. The ket \(|n\rangle\) represents the state in which the extra charge is located at the lattice position \(n\), with \(\langle n|\) the corresponding bra or adjoint operator and \(\langle m|n\rangle=\delta_{m,n}\). An extra charge state is given by \(|\phi(t)\rangle=\sum_{n}c_{n}(t)|n\rangle\), where \(c_{n}(t)\) is the time-dependent probability amplitude for the extra charge to be located at site \(n\). We will omit in what follows the explicit time dependence of \(|\phi\rangle\) and \(c_{n}\) when convenient. The adjoint operator to \(|\phi\rangle\) is \(\langle\phi|=\sum_{n}c_{n}^{*}\langle n|\), where \(c_{n}^{*}\) is the complex conjugate of \(c_{n}\) and \(|c_{n}|^{2}=c_{n}^{*}c_{n}\) is the probability of finding the extra charge at site \(n\) in the state \(|\phi\rangle\). The total probability is one, as there is exactly one electron or hole in the system, i.e.: \[\sum_{n=1}^{N}|c_{n}|^{2}=1.
\tag{1}\]

The cations K\({}^{+}\) are subjected both to an on-site potential \(U(u_{n})\), which represents the interaction of K\({}^{+}\) with the ions of the surrounding lattice except for the K\({}^{+}\) ions in the same row, and to the interatomic interaction with the latter, which is explicitly included as \(V(u_{n}-u_{n-1})=V_{C}(u_{n}-u_{n-1})+V_{Z}(u_{n}-u_{n-1})\), where \(V_{C}\) is the Coulomb repulsion, and \(V_{Z}\), a Yukawa-type potential, is a simplification of the ZBL potential [39] and corresponds to the repulsion between nuclei screened by the electron cloud. The variables \(u_{n}\) represent the separation of the \(n\)-th ion from its equilibrium position. We will use scaled units convenient for the modeled system: \(u_{L}=a=5.19\,\mathrm{\AA}\) is the equilibrium distance between potassium ions; \(u_{E}\) is the Coulomb energy corresponding to two units of charge \(e\) at distance \(u_{L}\), i.e., \(u_{E}=k_{e}e^{2}/a\simeq 2.77\,\mathrm{eV}\); the unit of mass is the mass of a potassium atom, \(u_{M}=m_{\mathrm{K}}=39.1\,\mathrm{amu}\); and the unit of time becomes a derived quantity: \(u_{T}=(u_{M}u_{L}^{2}/u_{E})^{1/2}=(m_{K}a^{3}/k_{e}e^{2})^{1/2}\simeq 0.2\,\) ps. The unit of angular frequency is equivalent to 5 Trad/s or \(u_{f}=1/(2\pi u_{T})\simeq 0.8\,\)THz\(\simeq 26.7\,\)cm\({}^{-1}\). In the scaled units, the interaction potential energies become: \[V = V_{C}+V_{Z} \tag{2}\] \[V_{C}(u_{n}-u_{n-1}) = \frac{1}{1+u_{n}-u_{n-1}}\] (3) \[V_{Z}(u_{n}-u_{n-1}) = \frac{B}{1+u_{n}-u_{n-1}}\exp(-\frac{1+u_{n}-u_{n-1}}{\rho})=\] (4) \[\frac{C}{1+u_{n}-u_{n-1}}\exp(-\beta(u_{n}-u_{n-1}))\,,\] with \(B=184.1\), \(\rho=0.05690\), \(\beta=1/\rho=17.58\), and \(C=B\exp(-\beta)=4.285\times 10^{-6}\). The on-site potential is obtained with the use of empirical potentials and electrostatic potentials [40], considering the interaction with the ions in the layers above, the first one composed of O\({}^{-2}\), and the second of a mixed species between Si and Al with a positive charge \(+3.1\). The resulting potential is a Fourier series that can be truncated at the fourth term: \[U=\sum_{m=0}^{4}v_{m}\cos(2\pi mx),\quad\mbox{with} \tag{5}\] \[\{v_{m}\}=\{2.4474,-3.3490,1.0997,-0.2302,0.0321\}\,. \tag{6}\] The on-site potential \(U\) is represented among other potentials in Fig. 1. It has a potential barrier of 20 eV, as obtained with molecular dynamics [41], and a small-amplitude frequency of 110 cm\({}^{-1}\), as observed experimentally [42]. It is soft for \(x\lesssim 0.3a\) and hard for larger distances, as corresponds to a bounded system. Both the interaction and the on-site potentials are described in detail in Refs. [15, 31]. In the following, we consider for the first time some of the consequences of adding an extra unit of charge as a hole or an electron.

### Hole or electron potential

Localized energy transport in muscovite has been observed experimentally [6], and it has been deduced that dark tracks are produced by positive charge [7, 8]. Subsequently, charge transport has been observed experimentally in muscovite [9] and other silicates [10, 11, 12, 13]. Different tracks suggest different types of carriers [43]. In this article, we attempt to model the transport of positive or negative charge attached to the K\({}^{+}\) ions and coupled with the lattice. An extra charge will appear due to the \(\beta^{-}\) decay of \({}^{40}\)K [44], in which the nucleus transforms into \({}^{40}\)Ca with an extra proton. The ion becomes Ca\({}^{++}\), with the extra hole migrating to the neighboring ion.
The far less frequent \(\beta^{+}\) decay and electron capture will transform \({}^{40}\)K into \({}^{40}\)Ar, and the ion will become Ar\({}^{0}\), with an electron that can migrate to neighboring K\({}^{+}\) ions, transforming them into neutral K [8]. An extra charge at site \(n\) will experience the electrostatic interaction with the surrounding ions, given by \(U_{Q}(u_{n})\), and also with the other K\({}^{+}\) in the same row, given also by \(V_{C}(u_{n}-u_{n-1})\) in (3). \(U_{Q}\) is different from \(U\) in (6), because the short-range interaction is already taken into account, and therefore only the extra electrostatic interaction of the extra charge has to be considered [40]. Considering the interaction with the oxygen ions in the immediate layers above and below, and with the Si-Al mixed species in the following layer with charge \(+3.1\), leads to a potential also described by a Fourier series truncated at the fourth harmonic. We denote it \(U_{Q}\), with \(U_{h}=U_{Q}\) for a hole and \(U_{e}=-U_{h}\) for an extra electron. They are given by: \[U_{h} = \sum_{m=0}^{4}h_{m}\cos(2\pi mx),\quad\text{with}\] \[\{h_{m}\} = \{-0.6160,0.6941,-0.0930,0.0167,-0.0018\}\,, \tag{7}\] \[U_{Q} = QU_{h}\,.\]

Figure 1: (**Left**) Substrate potential \(U_{K}\) experienced by a potassium ion K\({}^{+}\); \(U_{h}\), the electric potential experienced by a positive hole h\({}^{+}\); and \(U_{K}+U_{h}\), the potential experienced by the double cation K\({}^{++}\). \(U_{K}\) is a sum of electrical and Buckingham terms [40]. (**Right**) \(U_{K}\) is also represented with \(U_{e}\), the potential experienced by an extra electron e\({}^{-}\), and \(U_{K}+U_{e}\), the potential experienced by the neutral atom K\({}^{0}\). The latter is therefore due only to the Buckingham potentials in Ref. [40]. The unit of energy is \(u_{E}\simeq 2.77\,\)eV.

Figure 1-Left shows the different potentials. Note that \(U_{h}\) has a maximum at the equilibrium position for a hole (\(Q=+1\)), because it is energetically favorable for the hole to move closer to one of the negative oxygen ions. On the contrary, for an electron, \(Q=-1\), the potential energy \(U_{e}\) has a minimum. The potential \(U_{h}\) has a maximum at \(x=0\) and a minimum between lattice sites at \(x=0.5\), with a potential well of \(\simeq-3.74\,\)eV, which diminishes the height of the potential barrier. The potential \(U_{e}=-U_{h}\) has the opposite properties. The potential \(V_{C}(u_{n}-u_{n-1})\) is equal to (3), as it is the Coulomb repulsion between an extra charge at site \(n\) and the nearest K\({}^{+}\) ions. In this case, the interaction of the extra charge with the K\({}^{+}\) ions in the same row does not include a Yukawa or ZBL potential, because there is no extra nucleus. Therefore, the Hamiltonian operator for the extra charge will be \[\hat{H}_{Q}=\sum_{n}E_{n}|n\rangle\langle n|-J_{n,n+1}|n\rangle\langle n+1|-J_{n,n-1}|n\rangle\langle n-1|\,. \tag{8}\] The transfer integrals \(J_{n,n-1}=J_{n-1,n}\) are related to the probability of a transition from the state \(|n-1\rangle\) to the state \(|n\rangle\) and vice versa. The term \(E_{n}\) can be obtained easily as the expected value of the charge Hamiltonian, \(\langle n|\hat{H}_{Q}|n\rangle\), when the nondiagonal terms \(J_{n,n-1}\) are zero. It will be composed of the classical energy of the extra charge at site \(n\), that is, the electrostatic interaction with the lattice, plus the electrostatic interaction with the nearest \(\mathrm{K}^{+}\) ions.
Then: \[E_{n}=QU_{h}(u_{n})+\frac{Q}{1+u_{n}-u_{n-1}}+\frac{Q}{1+u_{n+1}-u_{n}}-2Q+E_{0}\,. \tag{9}\] We have subtracted the hole electrostatic energy at equilibrium, \(-2Q\), and added a reference value \(E_{0}\), so that \(E_{n}=E_{0}\) at the equilibrium distance. The value of \(E_{0}\) has no physical consequences, and it will generally be taken as zero; however, some values may be more convenient than others for numerical integration and for obtaining periodic solutions [35]. The terms \(J_{n-1,n}\) have to be negligible at the equilibrium distance and become large only at the distances where the electronic clouds of the two K\({}^{+}\) ions interact. Therefore, a reasonable assumption is that they are exponentials with a decay rate similar to that of the ZBL repulsion between nuclei. That is, \[J_{n-1,n}=J_{n,n-1}=J_{0}\exp(-\alpha(1+u_{n}-u_{n-1}))=I_{0}\exp(-\alpha(u_{n}-u_{n-1}))\,. \tag{10}\] A first guess is \(\alpha=\beta\), as both terms are consequences of the overlapping of the electron shells. In principle, \(I_{0}\) is the same for a hole or an electron, but this assumption might be revised. Note that \(I_{0}\) is the only parameter for which we do not have an approximate value at the moment. We expect to deduce it from the band structure and experimental mobilities in muscovite, but at this stage it will be taken as an adjustable parameter. Also, the value of \(\alpha\), initially equal to \(\beta\), might have to be reconsidered. Figure 2 shows the interchange of probability when the particles approach, a consequence of the functional form of the transfer integrals.

Figure 2: Displacements (blue lines) and charge probability (red lines) for a system after an initial compression of particles 15 and 16 to a distance of 0.4. It brings about oscillations of the particles in antiphase for some time. The interchange of probability when the ions approach can be seen, as well as the high frequency of the charge transfer.

## 3 Hamiltonian and dynamical equations

We obtain the dynamical equations for the charge amplitudes from the Schrödinger equation \({\rm i}\hbar\,\partial/\partial t\,|\phi\rangle=\hat{H}_{Q}|\phi\rangle\) by collecting together the coefficients of the basis states \(|n\rangle\). The reduced Planck constant in scaled units will be denoted \(\tau=\hbar/(u_{E}u_{T})\), where \(u_{E}\) and \(u_{T}\) are the scaled units of energy and time; therefore \(\tau=0.0011968\). Then: \[{\rm i}\tau\dot{c}_{n}=\left[QU_{h}(u_{n})+\frac{Q}{1+u_{n}-u_{n-1}}+\frac{Q}{1+u_{n+1}-u_{n}}-2Q+E_{0}\right]c_{n}-\left[J_{n,n-1}c_{n-1}+J_{n,n+1}c_{n+1}\right]\,. \tag{11}\] The equations of motion for the variables \(u_{n}\) are obtained from the Hamiltonian as \(\dot{p}_{n}=-\partial H_{tot}/\partial u_{n}\) and \(\dot{u}_{n}=\partial H_{tot}/\partial p_{n}=p_{n}\), where \(H_{tot}\) is the Hamiltonian obtained as \[H_{tot}=H_{lat}+\langle\phi|H_{Q}|\phi\rangle. \tag{12}\] The first component of the Hamiltonian is the lattice classical Hamiltonian: \[H_{lat} =\sum_{n}\frac{1}{2}p_{n}^{2}+U(u_{n})+V(u_{n}-u_{n-1})\,, \tag{13}\] with \(V\) and \(U\) given in (2) and (6). The second component of the Hamiltonian is the expected value of the charge Hamiltonian in a generic state \(|\phi\rangle=\sum_{n}c_{n}|n\rangle\), which is given by: \[H_{Q}=\langle\phi|\hat{H}_{Q}|\phi\rangle = \sum_{n}E_{n}c_{n}^{*}c_{n}-\left[J_{n,n+1}c_{n}^{*}c_{n+1}+J_{n,n-1}c_{n}^{*}c_{n-1}\right]\,, \tag{14}\] with \(E_{n}\) from (9) and \(J_{n,n+1}\) from (10).
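For concreteness, the following Python sketch collects the model ingredients of Eqs. (2)-(10) in the scaled units defined above. The paper does not provide code; this is our illustrative implementation, the function names are ours, and the treatment of the chain ends is our choice.

```python
import numpy as np

# Constants from Eqs. (2)-(7), in scaled units (u_L = a, u_E = k_e e^2/a, u_M = m_K)
BETA = 17.58                        # beta = 1/rho
C = 184.1 * np.exp(-BETA)           # C = B exp(-beta) ~ 4.285e-6
V_M = [2.4474, -3.3490, 1.0997, -0.2302, 0.0321]    # v_m, Eq. (6)
H_M = [-0.6160, 0.6941, -0.0930, 0.0167, -0.0018]   # h_m, Eq. (7)

def V(d):
    """Interatomic potential V_C + V_Z, Eqs. (3)-(4); d = u_n - u_{n-1}."""
    return (1.0 + C * np.exp(-BETA * d)) / (1.0 + d)

def U(x, coef):
    """Truncated Fourier series of Eqs. (5) and (7)."""
    return sum(c * np.cos(2.0 * np.pi * m * x) for m, c in enumerate(coef))

def E_diag(u, Q=+1, E0=0.0):
    """Diagonal charge energies E_n of Eq. (9) (interior sites only, for brevity)."""
    E = np.full(len(u), np.nan)
    for n in range(1, len(u) - 1):
        E[n] = (Q * U(u[n], H_M) + Q / (1.0 + u[n] - u[n - 1])
                + Q / (1.0 + u[n + 1] - u[n]) - 2.0 * Q + E0)
    return E

def J(u, I0, alpha=BETA):
    """Transfer integrals J_{n,n+1}, Eq. (10); alpha = beta is the text's first guess."""
    return I0 * np.exp(-alpha * np.diff(u))
```

As a consistency check, `U(0.5, V_M) - U(0.0, V_M)` evaluates to about 7.16 in units of \(u_{E}\simeq 2.77\) eV, reproducing the \(\sim 20\) eV on-site barrier quoted above.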
### Final equations

Calculating the derivatives of \(E_{n}\) and \(J_{n,n+1}\), and collecting together the different terms, we obtain the dynamical equations for \(u_{n}\): \[\ddot{u}_{n}=-U^{\prime}(u_{n})-QU_{h}^{\prime}(u_{n})|c_{n}|^{2}\] \[-\frac{1}{(1+u_{n+1}-u_{n})^{2}}\left[1+C\exp(-\beta(u_{n+1}-u_{n}))+Q|c_{n}|^{2}+Q|c_{n+1}|^{2}\right]\] \[+\frac{1}{(1+u_{n}-u_{n-1})^{2}}\left[1+C\exp(-\beta(u_{n}-u_{n-1}))+Q|c_{n}|^{2}+Q|c_{n-1}|^{2}\right]\] \[+\frac{C\beta\exp(-\beta(u_{n}-u_{n-1}))}{1+u_{n}-u_{n-1}}-\frac{C\beta\exp(-\beta(u_{n+1}-u_{n}))}{1+u_{n+1}-u_{n}}\] \[+\alpha I_{0}\exp(-\alpha(u_{n+1}-u_{n}))(c_{n+1}^{*}c_{n}+c_{n}^{*}c_{n+1})\] \[-\alpha I_{0}\exp(-\alpha(u_{n}-u_{n-1}))(c_{n}^{*}c_{n-1}+c_{n-1}^{*}c_{n})\,. \tag{15}\] Note that the last two lines are real. The dynamical equations for the extra charge are given by: \[{\rm i}\tau\dot{c}_{n}=\left[QU_{h}(u_{n})+\frac{Q}{1+u_{n}-u_{n-1}}+\frac{Q}{1+u_{n+1}-u_{n}}-2Q+E_{0}\right]c_{n}\] \[-I_{0}\exp(-\alpha(u_{n}-u_{n-1}))c_{n-1}-I_{0}\exp(-\alpha(u_{n+1}-u_{n}))c_{n+1}\,. \tag{16}\] These equations can be written in real form and also as canonical Hamiltonian equations, as explained in Appendix A.

## 4 Linearization

We expand the terms in the dynamical equations (15)-(16) using \(1/(1+x)\simeq 1-x+x^{2}\) and \(1/(1+x)^{2}\simeq 1-2x+3x^{2}\), and we can neglect the ZBL potential because the displacements \(u_{n}\) are small and \(C\simeq 10^{-6}\), \(\beta C\simeq 10^{-4}\). We obtain the linearized dynamical equations: \[\ddot{u}_{n} = -\omega_{0}^{2}u_{n}+2(u_{n+1}+u_{n-1}-2u_{n})\,, \tag{17}\] \[{\rm i}\tau\dot{c}_{n} = E_{0}c_{n}-I_{0}[c_{n+1}+c_{n-1}]\,. \tag{18}\] The frequency of the lattice homogeneous oscillations, \(\omega_{0}\), is given by \[\omega_{0}=(-\sum_{m=1}^{4}(2\pi m)^{2}v_{m})^{1/2}\simeq 4.4800\,. \tag{19}\] A value of \(E_{0}\neq 0\) implies only a shift of the \(c_{n}\) frequencies by \(E_{0}/\tau\), which can be convenient for integration purposes but has no physical consequences, since the products \(c_{n}c_{m}^{*}\) that appear in the dynamical equations are invariant with respect to a global frequency shift [35]. Note that the variables \(u_{n}\) and \(c_{n}\) become decoupled in the linear limit. The dispersion relations are independent and are given by \[\omega^{2} = \omega_{0}^{2}+4c_{s}^{2}\sin^{2}(q/2)\,,\quad\mbox{for the variables}\ u_{n}\,; \tag{20}\] \[\omega = \frac{E_{0}}{\tau}-\frac{2I_{0}}{\tau}\,\cos(q)\,,\quad\mbox{for the variables}\ c_{n}\,. \tag{21}\] The constant \(c_{s}=\sqrt{2}\) in (20) is the sound velocity in the lattice system without on-site potential; it is written as a symbol for comparison with other scalings. The second equation (21), multiplied by the scaled Planck constant \(\tau\), provides the charge energy [45], i.e.: \[H_{Q}=\tau\omega=E_{0}-2I_{0}\,\cos(q)\,. \tag{22}\]

## 5 Simulation tests

In this section, we test some physically interesting initial conditions and observe the result of the integration of the full system, and some of its properties, to check the proposed model. We limit ourselves to simulations for an extra hole, that is, \(Q=+1\) in the previous sections. The preferred numerical methods are those that preserve the physical properties of the system at each integration step, in particular charge probability conservation. They are described in Appendix B.

### Extended solutions

Linear solutions of the linearized equations are extended plane waves \(c_{n}=\frac{1}{\sqrt{N}}\exp({\rm i}[qn-\omega t])\).
Substitution in (17) and (18) leads to \(\ddot{u}_{n}=0\) and \(\tau\omega=E_{0}-2I_{0}\cos(q)\), so that the charge Hamiltonian and frequency are \(H_{h}=\tau\omega=E_{0}-2I_{0}\cos(q)\) and \(\omega=E_{0}/\tau-2\frac{I_{0}}{\tau}\cos(q)\), as seen above. The lattice Hamiltonian is zero because \(u_{n}=0\) and \(p_{n}=0\). Note that the unit of energy in scaled variables is exactly \(u_{E}=k_{e}e^{2}/a\), the electrostatic potential energy of a unit charge at the lattice unit distance. So an extra charge would provide twice that amount if the charge were localized at a single site, but this is diminished for the extended solution. The velocity of the waves in \(c_{n}\) should be the phase velocity, as there is, in principle, a single plane wave, that is, \(V_{teo}=\frac{\omega}{q}=-\frac{2}{q}\frac{I_{0}}{\tau}\cos(q)\). The physical reason for the lattice to remain frozen is that the charge density is constant, \(|c_{n}|^{2}=1/N\), so each ion is subjected to opposite repulsive forces of the same modulus from its two neighbors, which cancel: \(\ddot{u}_{n}=-Q|c_{n+1}|^{2}+Q|c_{n-1}|^{2}=-Q/N+Q/N=0\). These solutions are somewhat trivial, since nothing happens; however, they are very useful as a coherence check.

### Traveling localized trial functions

We propose the trial function \(c_{n}=A_{0}\exp(-\xi|n-V_{b}t|)\exp(\mathrm{i}[qn-\omega t])\), that is, \(c_{n}=A_{0}\exp(-\xi[n-V_{b}t])\exp(\mathrm{i}[qn-\omega t])\) for \(n-V_{b}t>0\), with \(\xi\) changed to \(-\xi\) when \(n-V_{b}t<0\). It is easy to see that the time derivative is well defined at \(n-V_{b}t=0\), and we will check the coherence of this definition below. By substitution in Eq. (16) and collecting together the real and imaginary coefficients, we obtain: \[H_{h}=\tau\omega=-2I_{0}\cosh(\xi)\cos(q)\,, \tag{23}\] \[\tau\xi V_{b}=2I_{0}\sinh(\xi)\sin(q)\,. \tag{24}\] We observe that changing \(\xi\) to \(-\xi\) does not change the above equations, and therefore they are valid for both tails of the trial function. Also, note that \(V_{b}\) and \(\sin(q)\) have the same sign. See Fig. 3. The trial function \(c_{n}\) is not an exact solution, and therefore it spreads. Simulation times should be of the order of the theoretical period \(T_{\mathrm{teo}}=2\pi/\omega\), with \(\omega=H_{h}/\tau\) as above. Nevertheless, these trial functions are a simple way of testing the equations and the simulation code, and of gaining insight into the physics of the system. It is remarkable how well the ansatz works for the tails of an actual solution.

### Stationary localized trial functions

There are two stationary trial functions \(c_{n}\): if \(V_{b}=0\), then \(\sin(q)=0\), so \(q=0\) or \(q=\pm\pi\) (the same physical wavevector). For \(q=0\), \(H_{h}=-2I_{0}\cosh(\xi)\), and for \(q=\pm\pi\), \(H_{h}=2I_{0}\cosh(\xi)\). For large values of \(I_{0}/\tau\), such as 100 or 10, the charge probability spreads rapidly; for values such as \(I_{0}/\tau=1\), the charge couples with the lattice, with \(T_{h}=2.97\) larger than \(T_{0}=1.4\), meaning that the lattice evolves faster than the charge. Interestingly, there is a phenomenon of self-localization: a localized vibration of \(u_{n}\) develops at \(n_{0}\), the particle with the most charge at \(t=0\), affecting the two neighbors, and at the same time the charge probability becomes more concentrated in that same particle than at the beginning, as can be seen in Fig. 6.
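Appendix B describes the probability-conserving integrators actually used in these tests. Purely as an illustration of the conservation property they must satisfy, the sketch below advances the charge amplitudes of Eq. (11) with a Crank-Nicolson (Cayley-transform) step, which is exactly norm-preserving for a Hermitian charge Hamiltonian. This scheme is our example, not necessarily the one of Appendix B, and it reuses the helpers `E_diag` and `J` from the earlier sketch.

```python
import numpy as np

def charge_matrix(u, I0, alpha, Q=+1, E0=0.0):
    """Tridiagonal matrix of Eq. (8): diagonal E_n, off-diagonal -J_{n,n+1}.
    Boundary E_n (NaN in E_diag) are set to 0 here for simplicity."""
    H = np.diag(np.nan_to_num(E_diag(u, Q, E0)))
    Joff = J(u, I0, alpha)
    for n in range(len(u) - 1):
        H[n, n + 1] = H[n + 1, n] = -Joff[n]
    return H

def charge_step(c, H, h, tau):
    """One Crank-Nicolson step of i*tau*dc/dt = H c.
    c -> (I + i h H/(2 tau))^{-1} (I - i h H/(2 tau)) c is unitary for
    Hermitian H, so sum_n |c_n|^2 is conserved to machine precision."""
    A = 1j * (h / (2.0 * tau)) * H
    I = np.eye(len(c))
    return np.linalg.solve(I + A, (I - A) @ c)
```

In practice one would exploit the tridiagonal structure rather than using a dense solve, and the lattice variables \(u_{n},p_{n}\) would be advanced between charge steps; schemes adapted to the large separation between the charge and lattice time scales (\(\tau\ll 1\)) are discussed in Appendix B and Ref. [34].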
Figure 4: (**Left**) Localized wave obtained with initial conditions \(u=0\), \(p=0\), \(c_{n}=A\exp(-\xi|n|)\exp({\rm i}qn)\), with \(q=\pi/3\), \(N=64\), \(I_{0}/\tau=100\), \(\xi=0.32\). Integration parameters: \(h=6\times 10^{-5}\), \(T_{\rm end}=0.23\) and \(4000\) steps. Results: graphic velocity \(V_{graph}\simeq 186\), \(V_{\rm teo}=\frac{2I_{0}}{\tau}\frac{\sinh(\xi)}{\xi}\sin(q)=176\), \(H_{h,teo}=-2I_{0}\cosh(\xi)\cos(q)=-0.126\), \(H_{h,num}(0)=-0.1131\). (**Right**) FFT of \(c_{n}\) together with the theoretical phonon band \(\omega=-\frac{2I_{0}}{\tau}\cos(q)\). Note: \(H_{h}\) loses \(0.006\%\) of its energy to the lattice after \(T_{\rm end}=4T_{\rm teo}\). The trial solution spreads despite the small interaction with the lattice, due to the hopping probability.

Figure 5: Energy evolution obtained with initial conditions \(u=0\), \(p=0\), \(c_{n}=A\exp(-\xi|n|)\exp({\rm i}qn)\), with \(q=\pi/3\), \(N=64\), \(I_{0}/\tau=0.1127\), \(\alpha=12.45\), and integration step \(h=0.01\). \(T_{0}=1.4\) is the period of decoupled lattice small oscillations. The lattice very quickly gets energy from the charge. (**Left**) First 5 periods, with \(T\simeq T_{0}\). (**Right**) Last 60 periods, with the same main period and mean charge/lattice energy, corresponding to a charge energy of \(0.015\) or \(40\,{\rm meV}\). The process is accompanied by a small increase in localization and a moderate rupture of the monotony of the decreasing pattern.

### Stationary charge

The simplest initial conditions are provided by the lattice at equilibrium, \(u=0\), \(p=0\), and the location of a charge at site \(n\), i.e., \(|c_{m}|^{2}=\delta_{n,m}\), with \(a_{n}=1,b_{n}=0\). Any other combination of \(a_{n}\) and \(b_{n}\) that keeps the probability is equivalent. We observe that the charge probability does not spread until quite high values of \(I_{0}/\tau\); actually, to obtain a fast spread we need \(I_{0}/\tau=10\). This is consistent with muscovite actually being an insulator. Figure 7 shows this spread.

### Breather rebounding off a charge

With a simple pattern, it is possible to produce non-exact breathers. If we locate an extra charge in their vicinity, the breather rebounds, while the charge keeps its localized position. Initially, a symmetric oscillation of the particles neighboring the charge develops. See Fig. 8.

### Kink with an extra charge

Note that for kinks the particles become very close and the energy can change very rapidly; therefore, a smaller step \(h\) might be necessary. Kinks are produced without charge at energies of \(26.2\,\,\mathrm{eV}\) [15, 17]; with \(u_{E}\simeq 2.77\,\mathrm{eV}\), the velocity to be provided to a single particle should be about \(V_{b}=\sqrt{2\times 26.2/2.77}\simeq 4.35\) in scaled units. Locating the charge with \(c_{15}=1\) and \(p_{15}=4.4\), with \(I_{0}/\tau=0.01\), we can test the system. With step \(h=0.001\) the energy is not conserved, while the charge is always conserved due to our numerical method. So we use \(h=10^{-4}\) and obtain a kink for the lattice variables. The charge probability is divided: one part, \(67\%\), remains located at the initial particle, and a smaller one travels with the kink, as can be seen in Fig. 9. There is always a small probability left behind at each particle, which is lost by the kink. The process depends heavily on \(I_{0}/\tau\).
For \(I_{0}/\tau=0.1\), both the charge and the lattice vibration remain trapped, but increasing the initial momentum to \(p_{15}=6\), the kink reappears, traveling with \(12\%\) of the charge probability. In this case, the charge is not dispersed, which is more favorable than \(I_{0}/\tau=0.01\). Note that in an ionic crystal the movement of an ion implies the movement of electric charge by itself; in this case, the charge transported is larger, \(+2e\). This might be consistent with the thick lines of primary quodons.

Figure 6: (**Left**) Approximate oscillating self-localized mode obtained with initial conditions \(u=0\), \(p=0\), \(c_{n}=A\exp(-\xi|n|)\exp(\mathrm{i}qn)\), with \(q=0\), \(N=64\), \(I_{0}/\tau=1\), \(\xi=0.32\). Integration parameters: \(h=1.4\times 10^{-3}\), \(T_{\mathrm{end}}\simeq 30\) and \(213090\) steps. Results: \(H_{h,teo}=-2I_{0}\cosh(\xi)=-0.0252\), \(H_{h,num}(0)=-0.0226\); it then oscillates between -0.02 and -0.05, with corresponding oscillations of the lattice Hamiltonian. (**Right**) Self-localization of the charge density.

Figure 7: Charge spread in a lattice initially at rest, \(c_{16}=1\). Parameters: \(h=10^{-3}\), \(N=1000\), \(I_{0}/\tau=10\). The probability is divided into three. Any of the traveling probabilities corresponds to traveling charge without much perturbation of the lattice.

Figure 8: Approximate breather generated with \([u_{9},\ldots,u_{13}]=[0.0183\), -0.0501, 0.0370, \(-0.0106\), \(0.0011]\), \([p_{9},\ldots,p_{13}]=[0.0692\), 0.0056, -0.1636, 0.1448, -0.0848], \(c_{20}=1\). The charge \(c_{n}\) is in red, and the lattice coordinates are amplified 10 times. Parameters: \(h=0.01\), \(N=10000\), \(I_{0}/\tau=0.1\).

### Chaotic breather with an extra charge

We have found chaotic breathers [46] with an extra charge that are quasi-periodic in the lattice variables and also in the charge amplitude or probability. This is an interesting possibility, as it provides a mechanism for trapping energy and charge during certain times. The pattern is close to the Page mode, that is, a site with maximum amplitude whose nearest neighbors oscillate with smaller amplitude in opposite phase. This is a breather with high energy, 4.5 eV in total: 5 eV corresponding to the lattice and -0.5 eV to the charge. It is presented in Fig. 10-Left with the parameters \(\alpha=12.45\) and \(I_{0}/\tau=0.1117\). The particle with initial probability one loses around 0.015, and then there is a small interchange with neighboring particles that is recovered, but in a nonperiodic way, as can be seen in Fig. 10-Right. The phase-space trajectories of the charge amplitude variables \(a_{n}\) and \(b_{n}\), and of the lattice displacements and momenta, are represented in Fig. 11. Only the variables of the core particle and its nearest neighbors are represented, for clarity. This entity is called a chaotic breather, or chaobreather for short. From the physical point of view, it represents a different form of energy and charge localization, as observed in the hyperconductivity experiments described in Sect. 1, where alpha particles initially bring about a current peak, showing that a reservoir of charge has been mobilized.

Figure 9: (**Left**) A kink produced with initial momentum \(p_{15}=4.4\) (\(E_{K}=0.5p^{2}\simeq 26.8\) eV) and a localized charge with \(c_{15}=1\). A kink is produced, and the charge probability density \(\left|c_{n}\right|^{2}\) (in red) is partially left at the initial particle and partially travels with the kink.
Parameters: \(h=10^{-4}\), \(I_{0}/\tau=0.01\), \(T_{\rm end}=5\) and \(N=64\) particles. Dashed lines are reference lines for both the charge and the lattice variables. Results: \(\simeq 27\%\) of the probability is carried initially by the kink, diminishing to \(\simeq 9\%\) at time \(t=5\). A stable probability of 67% is left at the initial site. (**Right**) A small amount of charge is left stable at each particle after the passage of the kink. For \(I_{0}/\tau=0.1\), the same kick does not produce a kink; for a larger kick of \(p_{16}=6\), it does. A smaller probability of 12% travels with the charge, but with no observable dispersion. Larger values of \(I_{0}/\tau\) need further study, as there are bursts of increase of the lattice Hamiltonian and of the (negative) charge Hamiltonian, although the total Hamiltonian is conserved.

Figure 11: Phase-space plots of the chaotic breather with an extra charge, as described in Section 5.7. In blue, the core particle; in red and green, the two nearest neighbors. (**Left**) Real and imaginary parts of the charge amplitude, \(a_{n}\) and \(b_{n}\). (**Right**) Lattice coordinate and momentum, \(u_{n}\) and \(p_{n}\). See text. Parameters: \(I_{0}/\tau=0.1117\), \(\alpha=12.45\), integration step \(h=10^{-5}\).

Figure 10: Quasi-periodic chaotic breather with an extra charge. (**Left**) The coordinates of particle 15 and its two nearest neighbors are shown, with apparent but not exact periodicity. The charge probability, represented as color, remains localized at the initial particle after an initial small spread to the nearest neighbors. (**Right**) The real part of the charge amplitude shows the chaotic behaviour, with \(a_{n}\simeq 1\) recurring but not periodically. Parameters: \(I_{0}/\tau=0.1117\), \(\alpha=12.45\), integration step \(h=10^{-5}\).

## 6 Conclusions

In this work, we have presented a model for charge transport along K\({}^{+}\) chains in silicates mediated by nonlinear excitations. The motivation for this work is the experimental observation of hyperconductivity, the phenomenon of charge transport in the absence of an electric field when one side of the crystal is bombarded with alpha particles. The model relies heavily upon the work of previous publications, but, for the first time, an electric charge is considered through a quantum Hamiltonian. The vibronic interaction between ions and an extra electron or hole, described by the transfer integral, is strongly nonlinear, increasing as neighboring ions become closer and thereby enhancing the probability of charge transmission. This allows for the existence of localized charge states that break the discrete translational invariance of the lattice. These states, when mobile, are the proposed charge carriers in hyperconductivity experiments. Most of the parameters are obtained through physical deduction, although some are not yet well known, particularly the transfer integral. Extensive work on that subject is being done, and it will be published elsewhere. In this work, we analyze the coherence of the model, its behavior, and its spectra for different initial conditions based on different ansätze, obtained from the tail analysis of extended and exponentially localized profiles, isolated charges, and other means. We have found interesting phenomena, such as the self-localization of some stationary solutions and the trapping of a charge by a chaotic breather. Obtaining exact traveling solutions is the object of present research, as well as the estimation of the missing parameters.
Although in this work we have developed the model both for holes and for electrons, we have limited the simulations to holes, as they are the best candidate for the phenomenon of hyperconductivity, because 99% of the \({}^{40}\)K decays leave a positive charge behind. New developments may require a modification or refinement of our model, but, at present, it seems physically sound. At present, the propagation of the charge has not been achieved, owing to the large number of frequencies that appear because of the different amplitudes of the particle oscillations. We expect to solve this problem both numerically and conceptually for the physical system. That work will be reported in future publications.

## Acknowledgements

JFRA thanks projects MICINN PID2019-109175GB-C22 and VII PPIT-US 2023. He also acknowledges the Universities of Osaka and Latvia for their hospitality. JB acknowledges support from PostDocLatvia grant No. 1.1.1.2/VIAA/4/20/617. YD acknowledges support from grant JSPS Kakenhi (C) No. 19K03654. MK acknowledges support from grant JSPS Kakenhi (C) No. 21K03935.
2306.12345
The Effect of Noise on the Emergence of Continuous Norms and its Evolutionary Dynamics
We examine the effect of noise on societies of agents using an agent-based model of evolutionary norm emergence. Generally, we see that noisy societies are more selfish, smaller and discontented, and are caught in rounds of perpetual punishment preventing them from flourishing. Surprisingly, despite the detrimental effect of noise on the population, it does not seem to evolve away. We carry out further analysis and provide reasons for why this may be the case. Furthermore, we claim that our framework, which evolves the noise/ambiguity of norms, may be a new way to model the tight/loose framework of norms, suggesting that despite ambiguous norms' detrimental effect on society, evolution does not favour clarity.
Stavros Anagnou, Daniel Polani, Christoph Salge
2023-06-21T15:41:49Z
http://arxiv.org/abs/2306.12345v2
# The Effect of Noise on the Emergence of Continuous Norms and its Evolutionary Dynamics

###### Abstract

We examine the effect of noise on societies of agents using an agent-based model of evolutionary norm emergence. Generally, we see that noisy societies are more selfish, smaller and discontented, and are caught in rounds of perpetual punishment preventing them from flourishing. Surprisingly, despite the detrimental effect of noise on the population, it does not seem to evolve away. In fact, in some cases it seems the level of noise increases. We carry out further analysis and provide reasons for why this might be the case. Furthermore, we claim that our framework, which evolves the noise/ambiguity of norms, is a new way to model the tight/loose framework of norms, suggesting that despite ambiguous norms' detrimental effect on society, evolution does not favour clarity.

## Introduction

The social world is replete with norms, an important aspect of organising societies. Social norms reduce the degrees of freedom in the actions of individuals, making them more predictable and stabilising societies (FeldmanHall and Shenhav, 2019). Norms also enable unrelated agents to manage shared resources (Mathew et al., 2013), thereby extending cooperation beyond genetic relatives (Richerson et al., 2016).

Norm emergence is usually studied with discrete behaviours. Game theory tends to consider moral behaviour to be composed of discrete actions: cooperate and defect (Axelrod, 1986), hawk and dove (Smith, 1982), stag and hare (Skyrms, 2003). Other examples of discrete norms include political party affiliation, coordinating or not coordinating (Lewis, 1969; McElreath et al., 2003), or adopting a given behaviour, e.g. a possession norm (Epstein and Axtell, 1996; Flentge et al., 2001). We know, however, that norms are not always this discrete, and a large number of norms exist on a continuous spectrum of behaviour, e.g. what amount is acceptable to take from a shared resource, how fast you walk, how close you stand next to someone during a conversation (Kelly and Setman, 2021). These have received much less attention in terms of modelling (Le and Boyd, 2007), with the exception of continuous opinions as modelled in the closely related field of opinion dynamics (Flache et al., 2017).

Previous work using continuous behaviour includes the continuous iterated prisoner's dilemma (Le and Boyd, 2007) (on a scale from 0, complete defection, to 1, cooperation). In general, cooperation in continuous dilemmas is less stable, and it is harder for cooperation to invade a population of defectors (Le and Boyd, 2007). Bendor et al. (1991) investigated how noise affected the success of fixed strategies in a continuous prisoner's dilemma. They showed that populations of generous strategies were more successful because generous strategies avoided spiraling into rounds of mutual recrimination in noisy environments. Going beyond continuous game theory, Aubert-Kato et al. (2015) investigated the emergence of frugal and greedy behaviours in an embodied version of a dilemma where agents varied in how long they exploited a food source - the longer an agent exploits the food source, the more selfish it is. Michaeli and Spiro (2015) showed how "liberal" and "conservative" punishment regimes can affect the polarisation of a continuous opinion. Further, previous work on the iterated prisoner's dilemma by Ashlock et al. (2006) showed that even small differences in implementation, e.g. representation choice, can lead to significantly different dynamics.
We intend to combine these previous elements to investigate the effects of noise on the emergence of continuous social norms. We investigate this in an evolutionary agent-based simulation, comparing agent societies with deterministic and probabilistic behaviours to see if noise significantly changes the dynamics of the society, i.e. norm emergence and other properties of agent societies. To achieve this, we evolve three continuous norms. Uniquely, we also evolve the level of noise on each of these properties. This allows us to investigate the effect of noise on a continuous model of norm emergence, which is of use to modelers considering whether or not to include noise in their models. Further, by making the amount of noise a variable that is available to evolution, it allows us to study the evolutionary dynamics of noise.

We define criteria for norm emergence in a continuous system and show that our deterministic societies obey these criteria. We find that deterministic societies tend to be less selfish, less hypocritical and less discontented, with agents sharing resources more effectively and sanctioning each other less. In contrast, noisy societies tend to fall into perpetual punishment of each other despite the abundance of resources. This raises the question: if noise is detrimental to the agent society, why does it not evolve away? We show that there does not seem to be an evolutionary pressure to eliminate noise and offer some reasons as to why this may be the case. Further, we suggest our model offers some insight into thinking about the evolution of loose and tight societies. The tightness-looseness framework looks at cultures in terms of the strength of their cultural norms (number and clarity) and the strength of punishment when a norm is violated. Tight cultures have stronger norms and punishments, and loose cultures have more vague norms with less harsh punishments. This framework provides insights into the function of norms, with cultures tightening in response to threats, making them better at dealing with them. Our model expands the existing work by considering the noise inherent in looser societies that have more vague or ambiguous social norms (Gelfand et al., 2006; Roos et al., 2015; Pan et al., 2021).

## Model and Experiments

The following section introduces a multi-agent model we developed to study the effect additive noise has on continuous norm emergence. We study two experimental conditions in the model. In the deterministic case, the behavior of each agent is defined by three internal, continuous variables (Bite Size (B), Sanction Threshold (T) and Sanction Strength (S)). In the probabilistic case, we add Gaussian noise to those variables each time they are used to determine behavior. The strength (standard deviation) of this added noise is defined for each of the three internal values by another three agent-specific values, respectively, i.e. BN, TN and SN. The simulation can be separated into different steps, as visualized in Fig. 1, which are defined as follows:

### Initialisation

At the beginning of the simulation, we create 100 agents and set the resource level to 1000 units. Each agent's internal values for B, S and T are initialized to uniformly random values between \(0.0\) and \(1.0\). For the probabilistic model, each agent's noise values (BN, TN and SN) are initialized between \(0.0\) and \(0.5\). Each agent's energy level is set to 10. After initialization, the simulation proceeds in rounds.
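The paper does not publish source code; the following Python sketch is our minimal reading of the initialisation step just described, with hypothetical names. Setting the three noise values to zero recovers the deterministic condition.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    B: float        # Bite Size
    T: float        # Sanction Threshold
    S: float        # Sanction Strength
    BN: float       # noise (std dev) on B; 0.0 in the deterministic condition
    TN: float       # noise on T
    SN: float       # noise on S
    energy: float = 10.0

def init_population(n_agents=100, probabilistic=True):
    def noise():
        return random.uniform(0.0, 0.5) if probabilistic else 0.0
    return [Agent(B=random.uniform(0.0, 1.0),
                  T=random.uniform(0.0, 1.0),
                  S=random.uniform(0.0, 1.0),
                  BN=noise(), TN=noise(), SN=noise())
            for _ in range(n_agents)]

agents, resources = init_population(), 1000.0  # initial shared resource level
```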
Each round has a different, randomized order of all agents, and each of the following steps is performed in that order.

### Eat

When it is their turn, each agent tries to consume resources according to their Bite Size (B). This value is added to their internal energy and removed from the global resource level. If there are no resources left, the agent gets no energy. The higher the value, the greedier/more selfish the agent is compared to other agents. If all agents eat with a higher Bite Size, the environment will not be able to support as many agents, thereby exhibiting 'tragedy of the commons' dynamics (Hardin, 1968).

### Sanction

During their turn, each agent can observe the actually consumed resources of the 10 previous agents. Each agent checks whether any of those agents consumed more than its own internal Sanction Threshold (T); if so, it sanctions them. In other words, T is the amount of deviance an agent tolerates before punishing another agent, i.e. what an agent finds acceptable. Sanctioning means the agent reduces the other agent's internal energy by its own Sanction Strength (S), and it also pays a sanctioning cost of \(0.1\times S\), which is subtracted from its own energy level.

### Metabolise

Each agent has their energy level reduced by \(0.01\) during each round.

Figure 1: Flow diagram describing the stages of the agent-based simulation.
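Continuing the sketch above, one round of the eat, sanction and metabolise steps could look as follows. Again this is our hypothetical code: whether the steps are interleaved within each agent's turn, as we do here for brevity, or run as separate passes is an implementation detail the text leaves open, and the clamping of noisy draws at zero is our assumption. Death and reproduction, described next, follow after these steps.

```python
def noisy(mean, sd):
    # Gaussian draw used in the probabilistic condition; sd = 0 recovers
    # the deterministic value. Clamping at 0.0 is our assumption.
    return max(0.0, random.gauss(mean, sd)) if sd > 0.0 else mean

def run_round(agents, resources):
    random.shuffle(agents)                  # new random order every round
    consumed = []                           # actually consumed amounts, in turn order
    for i, agent in enumerate(agents):
        # Eat: take a (possibly noisy) bite, limited by what is left.
        bite = min(noisy(agent.B, agent.BN), resources)
        resources -= bite
        agent.energy += bite
        consumed.append(bite)
        # Sanction: punish any of the 10 previous eaters whose consumption
        # exceeds this agent's (possibly noisy) threshold.
        for j in range(max(0, i - 10), i):
            if consumed[j] > noisy(agent.T, agent.TN):
                strength = noisy(agent.S, agent.SN)
                agents[j].energy -= strength
                agent.energy -= 0.1 * strength   # cost of sanctioning
        agent.energy -= 0.01                     # metabolism
    return resources
```

A full tick would then be `resources = run_round(agents, resources)` followed by the death/reproduction step and the addition of 100 resource units.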
We explored further parameter settings not reported here (varied agent metabolism, cost of sanctions), which produced results similar to those in this paper. Note that the deterministic condition can be seen as a special case of the probabilistic condition, where the three noise parameters are very close to 0 for all agents. ## Results ### Continuous Norm Emergence First we want to answer the question: is this a model for the emergence of continuous norms? Usually in discrete models there is an arbitrary threshold, such as 80 percent of the population must possess a behaviour for it to be considered a norm (Savarimuthu and Cranefield, 2011). Since our behaviours are continuous, and it is not clear what it would mean for two agents to have the same behaviour, we define criteria on how to assess norm emergence in a continuous context.

Figure 2: The average value of each trait in the population plotted over time. Deterministic (left) and probabilistic (right). Individual runs are plotted as coloured lines and the average of those runs is plotted as a black line. N = 34 per condition.

1. The behaviour converges and stabilises: Do traits decrease in variance across the population from where they began, and do the average behaviours stabilise? This would be indicated by a decreasing value for the variance of a given behaviour across the agent population, and a lack of change of the average behaviour over time.
2. The behaviour the population stabilises at is arbitrary across runs to a certain extent: This criterion ensures the resultant behaviour is not fully due to environmental scaffolding, i.e. cases where the behaviour is the only rational/viable action given the environmental constraints; e.g. a population-level preference for walking over a bridge as opposed to walking across lava would not be considered a norm (Westra and Andrews, 2022). This would be indicated by repeated simulations stabilizing at different average values. Note that this requires some level of randomness in the simulation, with different seeds.

Further to our stipulations, we clarify that we are talking about norms under the general definition of normative regularities: "A socially maintained pattern of behavioral conformity within a community." (Westra and Andrews, 2022). We take this definition rather than a rule-based one, which requires higher cognitive capacities such as language (required to express the rule) (Kelly and Setman, 2021). Further, we think it is a wider framework that encompasses a broader range of phenomena that are of interest. This permissive, "bottom-up" approach may help us reveal the building blocks of normative cognition (Westra and Andrews, 2022; de Waal and Ferrari, 2010). In Fig. 2, we see that in our deterministic simulation, after about 200 ticks, all agent traits manage to settle on a particular value. This value is arbitrary to a certain extent, with different runs settling at different values. This is important as it means that the behaviour is indeed a norm and not just a product of environmental scaffolding, i.e. the only rational action given the environmental circumstances (Westra and Andrews, 2022). In the probabilistic cases, it seems that norms are a lot more volatile, with average values in flux. Further, if we look at the population-level variances of each trait in Fig. 3, we see that Bite Size converges onto a much lower level than it began at, thereby satisfying our criteria for norm emergence, as the population converges on a shared behaviour.
On the other hand, the population-level variance of Sanction Threshold and Sanction Strength does not always decrease, so it is harder to make a case for the population converging on a norm for the latter two traits. This effect doesn't happen as much in the noisy case (Fig. 3), although there is some convergence for Sanction Threshold and Sanction Strength. However, this could be explained by the fact that probabilistic populations tend to be much smaller than deterministic ones (Fig. 4), which could be decreasing the variance through random drift.

Figure 3: The population-level variance (the variance of a trait in each population) of each trait plotted over time. Deterministic (left) and probabilistic (right). Individual runs are plotted as coloured lines and the average of those runs is plotted as a black line. N = 34 per condition.

### Comparison Deterministic vs. Probabilistic Model Probabilistic agent societies are generally more selfish and have less stable cooperation. Fig. 4 shows that deterministic societies tend to have lower Bite Sizes than probabilistic ones, meaning the societies tend to be less selfish as they are all consuming less. Further, the norm seems to be more stable in the deterministic condition, with many cases in the probabilistic condition that initially settled on low Bite Sizes breaking out into higher Bite Sizes. Particularly striking is that in 100 runs none of the deterministic runs broke away, suggesting very stable norms in the deterministic populations and volatile norms in the noisy populations. Further, when this experiment was done with an initial noise range of \([0.0,1.0]\) as opposed to \([0.0,0.5]\), the average value of Bite Size went up to \(0.9\), indicating very high levels of selfishness. Values for the other two norm traits (Sanction Threshold and Sanction Strength) are comparable between probabilistic and deterministic societies, but as with Bite Size, the runs in the probabilistic version are more volatile (Fig. 2). The probabilistic populations are dramatically smaller (Fig. 4), with populations of 20 compared to thousands in the deterministic case. Only a handful of the probabilistic populations manage to reach comparable population levels to the deterministic populations. Probabilistic societies are more hypocritical (Fig. 4). We defined hypocrisy as an agent's Sanction Threshold being lower than its own Bite Size (\(T_{self}<B_{self}\)). After an initial increase at the beginning of the simulation, both deterministic and probabilistic populations see a sharp decrease. However, rates of hypocrisy are much lower in the deterministic case (around \(0\)) compared to the probabilistic case (around \(0.05\)), and the noisy runs are generally more volatile, with numerous runs breaking out into high numbers of hypocrites.

Figure 4: Various agent and population properties plotted over time. Deterministic (left) and probabilistic (right). Individual runs are plotted as coloured lines and the average of those runs is plotted as a black line. N = 100 per condition.

Initially, probabilistic and deterministic societies also have similar levels of norm convergence, as measured by trait variance, which decreases initially and stays at around \(0.1\). However, as in Fig. 2, this level of convergence is less stable in the probabilistic condition. Further, since the populations are much smaller in the probabilistic condition, this may bias the population-level variance metric, so we should not read too much into this result.
Further, the small populations mean genetic/cultural drift effects could overpower subtle selection pressures. Despite the small populations, however, there is still a significant amount of births/deaths (data not shown here), suggesting selection is still happening. To address the effects of small populations in the future, we will use a fixed population with replacement instead of a dynamic one. But for now, since we are interested in seeing the effect of noise on the size of the populations as well, we will keep it as is. Strangely, the reason for low populations in the probabilistic condition is not a lack of resources. To see the real reason, we look at the amount of energy lost due to punishment of the agents (Fig. 4). This plot shows the amount of energy lost, either as damage from sanctions or as the cost of executing sanctions. In the deterministic case, we see a spike in sanctions at the onset of the simulation as agents punish each other due to the diversity of norms. The society then converges toward a norm as the sanctioning decreases. In the probabilistic case, it seems the societies do not adequately manage to converge most of the time, resulting in this period of adjustment never ending, leading to a discontented society marred by perpetual punishment. This may be the reason agents raise their Bite Size in order to better protect themselves against punishment; in a high-noise society with hypocrites, you will likely be punished anyway, so there is no rational reason to be generous and keep a low Bite Size. It is important here to mention that our simulation is not a case of limited resources; in fact, the regrowth rate of the simulation is quite high, so as to assess the dynamics of noise in plentiful conditions. In conclusion, probabilistic populations are more selfish, hypocritical, discontented and less stable, while the deterministic populations manage to reach drastically higher population sizes with the same amount of resources. ### Noise Evolution Given that noise in our model seems to be detrimental to a society overall, and noting that the deterministic model is just a special case of the probabilistic model, we would expect that our models evolve the noise away. But if we look at the development of the average value for the three added noise parameters, we do not see a decrease of noise (Fig. 5). On the contrary, if we just look at the first 500 steps, the average noise seems to slowly increase. This could in part be explained by a random walk, since we initialize between \([0.0,0.5]\) but allow for noise to evolve to the full range of \([0.0,1.0]\). If there were no evolutionary pressure, we would expect the noise parameters to drift towards \(0.5\). To investigate the evolutionary dynamics of noise, we looked at how the standard deviation evolved over time (Fig. 5). We see that, apart from small deviations, the average noise does not really seem to increase, with the individual runs looking like random walks and the average not really changing. To further investigate why this is happening, and to make sure this lack of evolution is not due to the shortness of the runs, we did the following. We first ran a sample of 34 runs for each condition for 5000 time steps (10x longer than previous experiments) and then compared noisy runs that were successful (i.e. reached populations larger than 5000, which is comparable to the deterministic case) with those that were not successful (i.e. where the population collapsed or stayed at low numbers).
### Interpreting the Evolution of Noise In the longer runs, we see that noise seems to slightly increase for Bite Size for all runs (Fig. 6, left panels). But if we look at only the successful runs (right panels), it seems that on average their noise values do not really change over time. For some of the runs, this might not be so much due to evolutionary adaptation, but due to the fact that the runs started with low noise to begin with and happened to be better at surviving. This being said, a minor proportion of successful runs had an initial increase in noise but eventually settled at low noise values.

Figure 5: The average standard deviation (noise) for each trait plotted over time. Individual runs are plotted as coloured lines and the average of those runs is plotted as a black line. N = 100 per condition.

It seems then that for Bite Size, successful runs (those that reach higher populations) are runs that, by chance, started at low noise levels. If we look at the noise for Sanction Threshold (tolerance) (TN), we see a similar pattern: a slight increase in noise overall if we look at all of the runs, but the majority of the successful runs start at low noise and remain there. So although the graphs show that we could reduce noise by selecting populations by their overall performance, noise level does not seem to decrease when selection occurs on a per-agent basis. In contrast to the other two traits, Sanction Strength noise (SN) seems to increase in both successful and unsuccessful runs, in spite of the fact that noise is initialised at [0, 0.5] (Fig. 6). Specifically, there seems to be a region between 0.6 and 0.8 where the most successful runs seem to settle, with unsuccessful runs above and below this region. One could interpret this as a selection pressure favouring an intermediate level of Sanction Strength noise; however, more experiments with runs starting at a wider range of noise would be needed to confirm this. Taken together, these results seem to suggest that although noise makes agent populations less successful, there does not seem to be an evolutionary process that reduces noise; in fact, in the case of Sanction Strength, evolution may increase noise. ## Discussion We present a model of continuous norm emergence to show the effects of noise on the dynamics of norm emergence and the simulation more generally. We first showed that deterministic societies satisfy our criteria for norm emergence:
* 1: That they converge on behaviours compared to the start of the simulation; and
* 2: That the norm is somewhat arbitrary, meaning it is not the only rational action given the circumstances, i.e. not a product of environmental scaffolding (Westra and Andrews, 2022).

Second, we showed that deterministic societies, compared to noisy ones, are more able to settle on norms, distribute resources effectively (altruism), and are less hypocritical, less discontented and more stable in these properties over time. In contrast, noisy societies do not prosper because they have high levels of sanctioning not seen in deterministic societies, falling into rounds of perpetual punishment. This result goes against the "mad man theory" hypothesis, where an adversary in a negotiation is more likely to stand down if they think their opponent (in our simulation, the agent sanctioning) is unpredictable, so as to avoid provoking them (McManus, 2019). In our simulation, it appears that noisy punishment only seems to incentivise agents to increase their provoking behavior (breaking norms by increasing Bite Size).
This occurs because they will be punished whether they obey a norm or not, so they might as well consume as much as they can to increase their chances of survival.

Figure 6: The average standard deviation (noise) for each trait plotted over time. All runs (left) and only runs with \(populations>5000\) (right). Individual runs are plotted as coloured lines and the average of those runs is plotted as a black line. N = 34 per condition.

The detrimental effects of noisy sanctions may, however, be due to the fact that both the punishment behaviours (S and T) and the eating behaviour (B) are noisy; further analysis adding noise to only one of the behaviours, e.g. B or T, would be needed to confirm this. The widespread instability of noisy societies raised the question: if noise is detrimental, wouldn't it evolve away? Through further analysis, we showed that although populations with low levels of noise ended up being more successful (reaching higher populations), there wasn't an evolutionary trend toward reducing noise; in fact, in some cases there seemed to be an evolutionary pressure to increase noise. We offer some explanations why this may be happening. Firstly, for noise to be selected against, there might need to be group selection: as deterministic societies tend to be much larger, they would be able to outcompete smaller, noisy groups. Secondly, although high levels of noise are detrimental to the group, there might be a benefit to the individual of having noise; perhaps it enables them to avoid punishments. Further, one agent lowering their noise would not necessarily benefit them enough to dominate the population if the rest of society still has high levels of noise. A further contribution of this paper is that it may offer a way to model/think about the evolution of cultural tightness/looseness (Gelfand et al., 2006), which is defined by 1. the strength of sanctioning (tolerance to deviance from norms) and 2. the strength of social norms (number and clarity). Tight cultures have stronger norms and punishments and loose cultures have vaguer norms with less harsh punishments (Gelfand et al., 2006; Roos et al., 2015). We refine 1. by differentiating between tolerance to deviation (i.e. Sanction Threshold) and the strength of punishment when someone deviates (i.e. Sanction Strength). Further, our model could be a new way to model the evolution of clarity of social norms using noise (2). Finally, computer models studying tightness and looseness assume discrete behaviours; we relax this assumption by grounding our study in a continuous modelling framework (Pan et al., 2021). Taking this lens on our simulation, we could claim that clarity of norms (tightness) fails to evolve despite there being selective pressure against vagueness. This is a peculiar finding, as it would imply that although societies with vague rules are at a disadvantage, ambiguity persists. ### Future Work In our future work we would like to add the following extensions. Currently the regrowth rate of the shared resource is static. We could vary this and make the resource growth dynamic by having "seasons" and see the resultant dynamics. Further, we only studied vertical cultural transmission but didn't include horizontal transmission, where individuals in the same generation copy each other's strategies. In contrast to other models of tightness and looseness, our model has three norms instead of one; we could add more norms and analyse the interplay between the "tightness" and "looseness" of different norms.
Finally, to further study the evolutionary dynamics of noise, we should compare different combinations of traits with and without noise, e.g. have a noisy punishment threshold but deterministic bite size and punishment strength. ## Acknowledgements We would like to thank Niki Papadogiannaki, Imran Khan and the anonymous reviewers for their helpful comments and feedback. Stavros Anagnou is supported by a studentship from the University of Hertfordshire.
2306.00999
Multi-Unitary Complex Hadamard Matrices
We analyze the set of real and complex Hadamard matrices with additional symmetry constraints. In particular, we link the problem of existence of maximally entangled multipartite states of $2k$ subsystems with $d$ levels each to the set of complex Hadamard matrices of order $N=d^k$. To this end, we investigate possible subsets of such matrices which are dual, strongly dual ($H=H^{\rm R}$ or $H=H^{\rm\Gamma}$), two-unitary ($H^R$ and $H^{\Gamma}$ are unitary), or $k$-unitary. Here $X^{\rm R}$ denotes reshuffling of a matrix $X$ describing a bipartite system, and $X^{\rm \Gamma}$ its partial transpose. Such matrices find several applications in quantum many-body theory, tensor networks and classification of multipartite quantum entanglement and imply a broad class of analytically solvable quantum models in $1+1$ dimensions.
Wojciech Bruzda, Grzegorz Rajchel-Mieldzioć, Karol Życzkowski
2023-05-30T20:11:18Z
http://arxiv.org/abs/2306.00999v2
# Multi-Unitary Complex Hadamard Matrices ###### Abstract We analyze the set of real and complex Hadamard matrices with additional symmetry constraints. In particular, we link the problem of existence of maximally entangled multipartite states of \(2k\) subsystems with \(d\) levels each to the set of complex Hadamard matrices of order \(N=d^{k}\). To this end, we investigate possible subsets of such matrices which are dual, strongly dual (\(H=H^{\rm R}\) or \(H=H^{\Gamma}\)), two-unitary (\(H^{R}\) and \(H^{\Gamma}\) are unitary), or \(k\)-unitary. Here \(X^{\rm R}\) denotes reshuffling of a matrix \(X\) describing a bipartite system, and \(X^{\Gamma}\) its partial transpose. Such matrices find several applications in quantum many-body theory, tensor networks and classification of multipartite quantum entanglement and imply a broad class of analytically solvable quantum models in \(1+1\) dimensions. ## 1 Introduction ### Complex Hadamard Matrices A square matrix \(H\) with \(\pm 1\) entries is said to be a (real) Hadamard matrix [1], if its rows (or columns) are mutually orthogonal. This definition can be generalized by expanding the entries of \(H\) to the unit circle on the complex plane. Such matrices are called complex Hadamard matrices and they are the main subject of this paper. We write \[H\in\mathbb{H}(N)\iff\forall\,j,k:|H_{jk}|=1\text{ and }HH^{\dagger}=N\, \mathbb{I}, \tag{1}\] where \(\mathbb{H}(N)\) denotes the set of complex Hadamard matrices (CHM). These matrices, unitary up to rescaling, are extensively used in contemporary theoretical physics and mathematics [2, 3, 4, 5]. Every \(H\in\mathbb{H}(N)\) is invariant with respect to multiplication by monomial unitary matrices \(M=PD\), where \(P\) is a permutation matrix and \(D\) is a unitary diagonal matrix [6]. In other words, both matrices \(H\) and \(H^{\prime}=MHM^{\prime}\) belong to the same orbit and are called equivalent, written \(H\simeq H^{\prime}\). The classification of all distinct orbits of CHM for a fixed dimension \(N\), completed by Haagerup in the cases \(N=2,3,4,5\) [6], remains open for \(N\geqslant 6\) [7, 8, 9]. Sometimes it is convenient to express a matrix \(H\) in the dephased (normalized) form, in which its first row and first column consist of ones. The remaining submatrix of order \((N-1)^{2}\) is called the core of \(H\). In the class of CHM of size \(N\) one distinguishes a proper subset \(\mathrm{B}\mathbb{H}(N,q)\subset\mathbb{H}(N)\), called Butson matrices [10, 11]. Given an integer \(q>1\), we write that \(B\in\mathrm{B}\mathbb{H}(N,q)\), if all entries of \(B\) are \(q^{\mathrm{th}}\) roots of unity, i.e. \(B_{jk}^{q}=1\) for any \(j\) and \(k\). A comprehensive study of Butson-type matrices and their monomial equivalence classification in low dimensions has been performed in Ref. [12]. An Appendix to that work, called Butson Home, is presented as the online catalog [13] and contains precalculated Butson matrices provided in the logarithmic form. In the following sections, these prefabricated units will serve as an input to the procedures exploring the sets of dual or multi-unitary Hadamard matrices. Throughout the paper, the symbol \(N\) shall denote the square dimension \(N=d^{2}\) for \(d\geqslant 2\), unless stated otherwise, and we mainly focus on the two cases \(d=3\) and \(d=4\). ### Subsets of Complex Hadamard Matrices We introduce several proper subsets of \(\mathbb{H}(N)\), which will be subjected to investigation in the following sections.
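As a quick numerical companion to definition (1), the following sketch (assuming NumPy; all names are illustrative) checks the two defining conditions, using the Fourier matrix \(F_N\), a standard Butson matrix in \(\mathrm{B}\mathbb{H}(N,N)\), as an example.

```python
import numpy as np

def is_chm(H, tol=1e-10):
    """Check the defining conditions of Eq. (1): unimodular entries
    and H H^dagger = N * Identity."""
    N = H.shape[0]
    unimodular = np.allclose(np.abs(H), 1.0, atol=tol)
    orthogonal_rows = np.allclose(H @ H.conj().T, N * np.eye(N), atol=tol)
    return unimodular and orthogonal_rows

def fourier(N):
    # Fourier matrix F_N with entries exp(2*pi*i*j*k / N).
    idx = np.arange(N)
    return np.exp(2j * np.pi * np.outer(idx, idx) / N)

assert is_chm(fourier(3)) and is_chm(fourier(9))
```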
Any Hadamard matrix \(H\) of size \(N=d^{2}\) can be written as a tensor \(H_{jk}=H_{ab;cd}\). We define the operations of reshuffling and partial transpose as \(H_{ab;cd}^{\mathrm{R}}=H_{ac;bd}\) and \(H_{ab;cd}^{\Gamma}=H_{ad;cb}\), respectively. Note that both operations, in general, do not preserve unitarity. We say that \(U\in\mathbb{U}(N)\) is a dual unitary matrix if both \(U\) and (independently) \(U^{\mathrm{R}}\) are unitary. Similarly, \(U\) is called \(\Gamma\)-dual unitary if both \(U\) and \(U^{\Gamma}\) are unitary.1 When restricted to the set of (complex) Hadamard matrices, we define two subsets \(\mathbb{H}^{\mathrm{R}}(N)\) and \(\mathbb{H}^{\Gamma}(N)\) containing the aforementioned matrices. Special cases where \(U=U^{\mathrm{R}}\) or \(U=U^{\Gamma}\) (strong duality) will be considered in Section 2. Furthermore, if the matrix \(U\) remains unitary after both operations, reshuffling, \(U^{\mathrm{R}}\in\mathbb{U}(N)\), and partial transpose, \(U^{\Gamma}\in\mathbb{U}(N)\), we call it a 2-unitary matrix (sometimes written two-unitary). Such matrices in the set \(\mathbb{H}(N)\) are elements of the subset denoted by \(\mathbb{H}^{2}(N)\). A natural extension of 2-unitarity is \(k\)-unitarity, defined for a system consisting of \(2k\) parties, each with \(d\) internal levels (\(d\) being the dimension of a local Hilbert space), see Section 4 and Ref. [14]. Hence, a \(k\)-unitary matrix of order \(d^{k}\) preserves unitarity regardless of which rearrangement of the multi-index of \(U\) has been applied. Hadamard matrices which are \(k\)-unitary will belong to the set denoted by \(\mathbb{H}^{\mathrm{k}}(N)\), and we will pay special attention to \(k=2\). Footnote 1: Sometimes we interchangeably use the notion of dual unitarity and R-dual unitarity, but \(\Gamma\)-duality shall always be written explicitly. The following inclusion relations hold: \[\mathbb{H}^{\mathrm{k}}(N)\subset\mathbb{H}^{\mathrm{R}}(N)\subset\mathbb{H}(N )\subset\sqrt{N}\mathbb{U}(N). \tag{2}\] For any \(q>2\), a similar structure inside the Butson class can be written, \[\mathrm{B}\mathbb{H}^{\mathrm{k}}(N,q)\subset\mathrm{B}\mathbb{H}^{\mathrm{R} }(N,q)\subset\mathrm{B}\mathbb{H}(N,q)\subset\mathbb{H}(N)\subset\sqrt{N} \mathbb{U}(N). \tag{3}\] In both relations the sets \(\mathbb{H}^{\mathrm{R}}(N)\) and \(\mathrm{B}\mathbb{H}^{\mathrm{R}}(N,q)\) can be replaced by \(\mathbb{H}^{\Gamma}(N)\) and \(\mathrm{B}\mathbb{H}^{\Gamma}(N,q)\), respectively. As will be shown, none of the sets \(\mathbb{H}^{\mathrm{R}}(N)\), \(\mathbb{H}^{\Gamma}(N)\) and \(\mathbb{H}^{\mathrm{k}}(N)\) is empty, and it is rather easy to obtain such matrices in dimensions \(N=9\) and \(N=16\), both numerically and analytically. Numerical methods used in this paper are described in Appendix A. A straightforward criterion to confirm duality and 2-unitarity of a matrix is checking that the appropriate matrices remain unitary after rearrangements of their entries. Instead, one can simply calculate the associated linear entropy of any matrix \(X\) of size \(d^{2}\), defined as \[S(X)=\frac{d^{2}}{d^{2}-1}\left(1-\frac{\mathrm{Tr}(XX^{\dagger}XX^{\dagger})} {\mathrm{Tr}^{2}(XX^{\dagger})}\right)\in[0,1]. \tag{4}\] By construction, for any unitary \(U\in\mathbb{U}(d^{2})\) one has \(S(U)=1\). For brevity, we are going to write the triplet \((S(U),S(U^{\mathrm{R}}),S(U^{\Gamma}))\) as \(S(U)=(a,b,c)\), where \(a\), \(b\), \(c\) correspond to the entropies of \(U\), \(U^{\mathrm{R}}\) and \(U^{\Gamma}\), respectively.
Hence, \(S(U)=(1,1,1)\) shall denote a 2-unitary matrix, while \(S(U)=(1,1,c)\) and \(S(U)=(1,b,1)\) with \(b,c\in[0,1)\) stand for strictly dual (R-dual) and \(\Gamma\)-dual matrices \(U\), respectively. ### Two-unitary Matrices of order \(d^{2}\) and Local Unitary Invariants In the set of 2-unitary matrices defined in Sec. 1.2 (not necessarily in the Hadamard subset) we distinguish special operations that preserve the property of 2-unitarity. Similarly to the idea of Hadamard orbits, we say that two 2-unitary matrices \(X\) and \(X^{\prime}\) of order \(d^{2}\) are locally unitarily equivalent, if there exist four unitary matrices \(U_{1}\), \(U_{2}\), \(U_{3}\) and \(U_{4}\) in \(\mathbb{U}(d)\), such that \((U_{1}\otimes U_{2})X(U_{3}\otimes U_{4})=X^{\prime}\), written \(X\simeq_{\mathrm{LU}}X^{\prime}\). This means that the matrix \(X\in\mathbb{U}(d^{2})\) describing a four-partite system in Hilbert space \(\mathcal{H}_{d^{2}}\), see Eq. (35), composed of \(d\)-dimensional subsystems, does not change qualitatively if subjected to local rotations in the four subspaces \(\mathcal{H}_{d}\). Note that LU-equivalence differs from the standard Hadamard equivalence and, in general, Hadamard monomial operations strongly affect the properties of 2-unitarity. Given \(d\geqslant 2\), a natural question appears: how many LU-equivalence classes exist in the set of 2-unitary matrices? The problem is investigated in Ref. [15]. In order to illustrate the idea, consider the following permutation matrix of order nine, \[P_{9}=[1,9,5,6,2,7,8,4,3], \tag{5}\] where each number represents the position of unity in consecutive columns of \(P_{9}\). One can easily confirm that for any four three-dimensional unitary matrices \(U_{j}\in\mathbb{U}(3)\), the matrix \((U_{1}\otimes U_{2})P_{9}(U_{3}\otimes U_{4})\) is 2-unitary. In other words, the linear entropy achieves its maximal values for every realignment of \(P_{9}\), i.e. \(S(P_{9})=(1,1,1)\). In this case it can be proven [15] that there exists only one LU-equivalence class, so that the matrix \(P_{9}\) is its simplest representative. In dimension \(d=4\), there are at least two LU-equivalence classes in the set of 2-unitary matrices. The first one is generated by the following permutation matrix [16], \[P_{16}=[1,16,6,11,15,2,12,5,8,9,3,14,10,7,13,4], \tag{6}\] where each number is encoded like in Eq. (5), corresponding to the case of \(P_{9}\). The explicit form of a representative of the other class, which is orthogonal up to rescaling, is the matrix \(O_{16}\) given in Eq. (7). ## 2 Dual and Self-Dual Hadamard matrices of order \(d^{2}\) A unitary operator \(U\) for which the reshuffled matrix \(U^{\rm R}\) is also unitary is called dual unitary. These operators gained broad interest due to their utility in modeling the dynamics of, and exactly solving, nonintegrable many-body systems [17, 18, 19, 20, 21, 22]. The concept of duality becomes useful for applications in the theory of quantum cellular automata in \(1+1\) and \(2+1\) dimensions [23]. Dual unitary matrices play a crucial role in the search for absolutely maximally entangled states [24], as the hypothetical solution in the form of a 2-unitary matrix lies on the intersection of two "dual" subspaces. The duality is usually defined in the literature by the operation of reshuffling. Another, somewhat symmetric, definition uses the transformation defined by partial transposition with respect to the \(1^{\rm st}\) or \(2^{\rm nd}\) subsystem, \({\rm T}_{1}\) or \({\rm T}_{2}\), respectively [25].
Since, given \(X\), one has \((X^{{\rm T}_{1}})^{{\rm T}_{2}}=(X^{{\rm T}_{2}})^{{\rm T}_{1}}=X^{\rm T}\), we restrict our considerations only to \(\Gamma\equiv{\rm T}_{2}\). Concrete representations of dual unitaries can be immediately constructed. Canonical examples include: the identity operator, the CNot gate, or the permutation matrix \(P_{9}\), determined by Eq. (5). One can also easily confirm that several Butson matrices [13] share the property of R- or \(\Gamma\)-duality. Let us focus on more complicated objects. We recall the quantized "cat map" [26], in a form borrowed from Ref. [27], \[\big{(}G_{N}(a,b,c)\big{)}_{jk}\equiv\exp\Big{\{}\frac{i\pi}{N}\left(aj^{2}+ bk^{2}+cjk\right)\Big{\}}, \tag{8}\] for \((a,b,c)\in\mathbb{R}^{3}\) and \(1\leqslant j,k\leqslant N=d^{2}\) with \(d\geqslant 3\). Several properties of such a map are straightforwardly related to the notion of R-duality and multi-unitarity. We present, without proofs, the most important observations, which can be easily confirmed by analyzing the singular value decomposition of \(G_{N}\). **Observation 1**.: _For any \(d\geqslant 3\) and any integer \(m\geqslant 1\) such that \(m\not\equiv 0\;({\rm mod}\;3)\), the matrix \(G_{N}(m,2m,-2m)\) defined in Eq. (8) represents a \(2\)-unitary matrix of order \(N=d^{2}\)._ **Observation 2**.: _For all odd values of \(d\in\{3,5,7,...\}\) and for any integer \(m\geqslant 1\) such that \(m\not\equiv 0\,(\,{\rm mod}\;3)\), the matrix \(G_{N}(m,2m,-2m)\) defined in Eq. (8) represents a \(2\)-unitary Hadamard matrix of order \(N=d^{2}\)._ **Observation 3**.: _For all even values of \(d\in\{4,6,8,...\}\) and for any integer \(m\geqslant 1\) such that \(m\not\equiv 0\,(\,{\rm mod}\,3)\), the matrix \(G_{N}(m,2m,-2m)\) defined in Eq. (8) represents a dual unitary Hadamard matrix of order \(N=d^{2}\)._ Similar structures can be obtained also for triplets other than the case \((m,2m,-2m)\) with \(m\in\mathbb{N}\setminus 3\mathbb{N}\) mentioned above. The list of such examples of complex Hadamard matrices of the Butson class includes: \[G_{9}(-4,-8,-4) \in\mathrm{B}\mathbb{H}(9,9), \tag{9}\] \[\{\,G_{9}(-7,-8,-8),\,G_{9}(-8,-7,-8),\,G_{9}(-5,2,-2)\,\} \in\mathrm{B}\mathbb{H}(9,18),\] (10) \[G_{25}(-6,-4,-2) \in\mathrm{B}\mathbb{H}(25,25),\] (11) \[G_{25}(4,1,2) \in\mathrm{B}\mathbb{H}(25,50),\] (12) \[G_{49}(1,-1,2) \in\mathrm{B}\mathbb{H}(49,98),... \tag{13}\] In order to complete the picture, we shall mention the strong duality with respect to both operations, separately R or \(\Gamma\). We call a matrix \(X\) of order \(N=d^{2}\) strongly dual (or self-dual), if \(X=X^{\rm R}\). A similar definition pertains to strong \(\Gamma\)-duality (or self \(\Gamma\)-duality), if \(X=X^{\Gamma}\). Both these properties, however, are volatile and they are not preserved along LU-equivalence orbits. Suppose \(X=X^{\rm R}\) or \(X=X^{\Gamma}\); then, in general, \[Y=(U_{1}\otimes U_{2})X(U_{3}\otimes U_{4})\quad\Longrightarrow\quad Y\neq Y^{ \rm R}\text{ or }Y\neq Y^{\Gamma} \tag{14}\] for \(U_{j}\in\mathbb{U}(d)\). They are not preserved along Hadamard equivalence orbits either. This makes such matrices very specific points in the set \(\mathbb{H}(N)\). A currently open problem is whether it is possible to define special (restricted) orbits for self-dual matrices. It is not difficult to obtain self-R-dual and self-\(\Gamma\)-dual matrices both numerically and analytically.
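The duality and 2-unitarity claims above are easy to probe numerically. Below is a minimal sketch (assuming NumPy; names are illustrative) of reshuffling, partial transpose and the linear entropy of Eq. (4), applied to the cat map of Eq. (8); per Observation 2 with \(d=3\) and \(m=1\), the entropy triplet should come out as \(S=(1,1,1)\).

```python
import numpy as np

def reshuffle(X, d):
    # (X^R)_{ab;ce} = X_{ac;be}, with row index j = d*a + b, column k = d*c + e.
    T = X.reshape(d, d, d, d)
    return np.transpose(T, (0, 2, 1, 3)).reshape(d * d, d * d)

def partial_transpose(X, d):
    # (X^Gamma)_{ab;ce} = X_{ae;cb}.
    T = X.reshape(d, d, d, d)
    return np.transpose(T, (0, 3, 2, 1)).reshape(d * d, d * d)

def linear_entropy(X):
    # Eq. (4); equals 1 iff X is unitary up to rescaling.
    n = X.shape[0]
    XX = X @ X.conj().T
    return n / (n - 1) * (1 - np.trace(XX @ XX).real / np.trace(XX).real ** 2)

def cat_map(N, a, b, c):
    # Eq. (8) with indices 1 <= j, k <= N.
    j = np.arange(1, N + 1)
    phase = a * j[:, None] ** 2 + b * j[None, :] ** 2 + c * np.outer(j, j)
    return np.exp(1j * np.pi * phase / N)

d = 3
G = cat_map(d * d, 1, 2, -2)   # Observation 2 with m = 1
triplet = [linear_entropy(M) for M in (G, reshuffle(G, d), partial_transpose(G, d))]
assert np.allclose(triplet, 1.0)   # a 2-unitary Hadamard matrix of order 9
```

Note that the entropy is invariant under rescaling, so the Hadamard matrix \(G\) can be tested directly, without dividing by \(\sqrt{N}\).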
One particular symmetric example, fished out from the Butson Home collection [13], consists of a Butson matrix \(B\) and a diagonal matrix \(D\) given in Eqs. (15) and (16),2 Footnote 2: Every \(\exp\{\cdot\}\) function defining matrices should be understood as acting elementwise. so that we have \[C=DBD^{\dagger}=\exp\left\{\frac{2i\pi}{3}\left[\begin{array}{ccc|ccc|ccc} \cdot&\cdot&\cdot&\cdot&\cdot&2&1&\cdot&1&2\\ \cdot&\cdot&\cdot&\cdot&1&\cdot&2&2&\cdot&1\\ \cdot&\cdot&\cdot&2&1&\cdot&1&2&\cdot&\cdot\\ \hline&\cdot&1&2&\cdot&\cdot&\cdot&\cdot&2&1\\ \cdot&\cdot&2&2&\cdot&2&2&\cdot&2&1\\ \cdot&\cdot&1&1&1&1&\cdot&2&1\\ \hline&\cdot&2&1&\cdot&1&2&\cdot&\cdot&\cdot\\ 2&1&\cdot&\cdot&1&2&1&1&1\\ 1&\cdot&2&\cdot&1&2&2&2&2\end{array}\right]\right\}\in\mathrm{B}\mathbb{H}^{2}(9,3). \tag{17}\] Matrix \(C\) is unitary and after dephasing it reveals the original Butson matrix \(B\). By direct inspection, one can prove that both \(C^{\mathrm{R}}\) and \(C^{\Gamma}\) are permutation equivalent to \(B\). This implies that \(C\) is a 2-unitary Hadamard matrix of order 9 with \(S(C)=(1,1,1)\). The collection of Butson matrices for \(N=16\) contains exactly 5 and 1786763 records corresponding to the cardinalities of the sets \(\mathrm{B}\mathbb{H}(16,2)\) and \(\mathrm{B}\mathbb{H}(16,4)\), respectively. Let the symbol \(B_{[k]}\) denote the matrix encoded at the \(k^{\mathrm{th}}\) place (offset) in \(\mathrm{B}\mathbb{H}(16,q)\) in [13]. We consider just two initial cases from the latter (bigger) class, which also includes the smaller one. The first observation is rather trivial: the following product of \(B_{[1]}\) and the permutation matrix \(P_{16}\) is 2-unitary, \[B_{[1]}P_{16}=\exp\left\{i\pi\left[\begin{array}{cccc|cccc|cccc|cccc} \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot& \cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&1&\cdot&1&1&\cdot&1&\cdot&\cdot&1&\cdot&1&\cdot&1&\cdot&1&\cdot\\ \cdot&1&\cdot&1&\cdot&\cdot&1&1&\cdot&\cdot&1&\cdot&1&\cdot&\cdot\\ \cdot&1&\cdot&1&\cdot&\cdot&1&\cdot&1&\cdot&1&\cdot&1&\cdot&\cdot\\ \cdot&1&\cdot&1&\cdot&\cdot&\cdot&1&\cdot&1&\cdot&1&\cdot&1&\cdot\\ \cdot&1&\cdot&1&\cdot&\cdot&1&\cdot&\cdot&1&\cdot&\cdot&1&\cdot\\ \hline&\cdot&1&\cdot&1&\cdot&1&\cdot&1&\cdot&\cdot&1&\cdot&\cdot&1\\ \cdot&\cdot&\cdot&1&1&\cdot&\cdot&1&\cdot&\cdot&\cdot&\cdot&1&1\\ \cdot&\cdot&\cdot&1&1&\cdot&1&\cdot&\cdot&\cdot&\cdot&1&1&\cdot\\ \cdot&1&\cdot&1&\cdot&1&\cdot&1&\cdot&1&\cdot&\cdot&1&\cdot\\ \cdot&\cdot&\cdot&\cdot&1&1&\cdot&1&\cdot&1&\cdot&\cdot&\cdot&\cdot\end{array} \right]\right\}\in\mathrm{B}\mathbb{H}^{2}(16,2), \tag{18}\] without introducing any additional modifications or diagonal factors, where \(P_{16}\) is defined in Eq. (6). Lastly, consider the \(8^{\mathrm{th}}\) matrix from the catalog [13] of \(\mathrm{B}\mathbb{H}(16,4)\), assisted by two diagonal matrices \(D_{L}\) and \(D_{R}\) of Eqs. (20) and (21), each admitting an additional dependence on a single parameter \(\alpha_{j}\in[0,1)\), with \(\omega=\exp\{2i\pi/3\}\), such that \(Y_{16}^{(2)}(\alpha_{1},\alpha_{2})=D_{L}(\alpha_{1})B_{[8]}P_{16}D_{R}(\alpha_ {2})\in\mathbb{H}^{2}(16)\). Due to the fact that the final matrix depends on two parameters, it does not necessarily belong to the Butson class. In other words, \(Y_{16}^{(2)}(\alpha_{1},\alpha_{2})\in\mathrm{B}\mathbb{H}(16,4)\) only for particular values of \(\alpha_{1}\) and \(\alpha_{2}\). Diagonal matrices (20) and (21) are not the only possible solutions and it is possible to find entirely different pairs of \((D_{L},D_{R})\) that bring \(B_{[8]}P_{16}\) to \(\mathbb{H}^{2}(16)\).
However, in this paper we are not going to solve the problem of the full classification and uniqueness of the solutions. A similar remark concerns all diagonal matrices presented throughout the paper. Note that the presence of \(P_{16}\) in both cases is crucial. The set \(\mathrm{B}\mathbb{H}(16,4)\) was probed at random, but no matrix \(B_{[k]}\), for hundreds of different values of \(1\leqslant k\leqslant 1786763\), could be brought to a 2-unitary one with the help of two diagonal matrices and without additional multiplication by \(P_{16}\), as in the above examples. ### Analytical Results Inferred from CHM Beyond the \(\mathrm{B}\mathbb{H}\) Class We recall the matrix of order 9 constructed by Karlsson [30], \[K_{9}^{(2)}(\zeta)=\mathrm{circ}\left[\begin{array}{ccc|ccc|ccc}1&x&x&y&u&w&y& w&u\\ x&1&x&w&y&u&u&y&w\\ x&x&1&u&w&y&w&u&y\end{array}\right], \tag{22}\] which is symmetric and block circulant, with circulant blocks (BCCB). This means that the full form of \(K_{9}^{(2)}\) is obtained by a right-circulant shift of the three blocks in (22). It admits a two-parametric non-affine family depending on two conjugated pairs of real parameters encoded in a single complex parameter \(\zeta\), which can be concisely written as \[\left.\begin{array}{c}x\\ y\end{array}\right\}=\frac{1}{4}(1+\zeta)\left(1\pm i\sqrt{\frac{16}{|1+\zeta |^{2}}-1}\right),\qquad\left.\begin{array}{c}u\\ w\end{array}\right\}=\frac{1}{4}(1-\zeta)\left(1\pm i\sqrt{\frac{16}{|1-\zeta |^{2}}-1}\right), \tag{23}\] for \(\zeta\in\mathcal{D}=\big{\{}z\in\mathbb{C}:|1-z|\leqslant 4\big{\}}\cap\big{\{}z \in\mathbb{C}:|1+z|\leqslant 4\big{\}}\setminus\{\mp 1\}\), which guarantees that both square roots remain real. Our first observation is that for any value of \(\zeta\in\mathcal{D}\), one obtains a 2-unitary complex Hadamard matrix, \(Y=K_{9}^{(2)}(\zeta)P_{9}\in\mathbb{H}^{2}(9)\), where \(P_{9}\) is defined in Eq. (5). Another example involves two diagonal matrices \[D_{L} =\exp\Big{\{}\mathrm{diag}\Big{[}\frac{2\pi i}{3}\left(0,1,1,1,1, 1,0,0,2\right)\Big{]}\Big{\}}\quad\text{and} \tag{24}\] \[D_{R} =\exp\Big{\{}\mathrm{diag}\Big{[}\frac{2\pi i}{3}\left(0,1,1,1,0, 1,2,2,1\right)\Big{]}\Big{\}}, \tag{25}\] which, upon application to the tensor product \(F_{3}\otimes F_{3}\), yield a 2-unitary matrix, \(D_{L}(F_{3}\otimes F_{3})D_{R}\in\mathbb{H}^{2}(9)\). These two matrices allow us to construct a 4-parameter family of 2-unitary matrices, \(D_{L}F_{9}^{(4)}(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})D_{R}\in\mathbb{H }^{2}(9)\), with arbitrary values of the affine parameters \(\alpha_{j}\), \(j\in\{1,2,3,4\}\). In both cases, the proof is straightforward and exploits the observation that all matrices used in the definition of 2-unitarity are indeed complex Hadamard ones or, equivalently, the vector of linear entropies is (maximally) flat. In contrast, so far, no similar examples were found for the other known matrices: \(B_{9}^{(0)}\), \(N_{9}^{(0)}\), \(S_{9}^{(0)}\), or \(Y_{9}^{(0)}\) [31, 32, 9]. All these matrices are characterized by vanishing defect and do not admit any internal parameterization. This suggests the following statement. **Conjecture 1**.: _Only non-isolated complex Hadamard matrices of order \(N=d^{2}\) provide two-unitary structures._ It can be intuitively understood since \(2\)-unitarity requires a sort of flexibility in the matrix. Isolated structures have rigidly fixed elements and might not provide enough degrees of freedom to allow for additional internal rearrangements. Moreover, in the class \(\mathrm{B}\mathbb{H}(16,4)\) discussed by Lampio et al.
[13], there are exactly \(7978\) matrices with a vanishing value of the defect (and very likely there are many others which are implicitly isolated, i.e. they are isolated despite the fact that their defect is non-zero). Several such matrices were picked at random from the set, and none of them reflected properties of \(2\)-unitarity when assisted with two diagonal matrices and two different permutations. This behavior significantly supports Conjecture 1. Investigation of all known \(16\)-dimensional matrices from the Catalog [29], namely \[\Big{\{}H_{16A}\simeq\bigotimes_{j=1}^{4}H_{2},\ H_{16B,C,D,E},\ F_{4}\otimes F _{4},\ S_{8}\otimes H_{2},\ F_{8}\otimes H_{2},\ S_{16}^{(11)},\ F_{16}^{(17)} \Big{\}}, \tag{26}\] did not reveal any \(2\)-unitary elements belonging to \(\mathbb{H}^{2}(16)\). However, this does not imply that they do not exist. It might still happen that a particular choice of permutation matrices, \(X\to P_{L}XP_{R}\), will bring one of these matrices to the desired form. Finally, for \(N=16\), the modified Sinkhorn algorithm (supplied with a random-Gaussian seed) finds hundreds of different examples that belong to \(\mathbb{H}^{2}(16)\). At this stage it is hard to distinguish and classify them. Some of these examples form families that can be written analytically. One additional family is presented in Appendix C. ### Local unitary construction of \(2\)-Unitary Hadamard matrices Let us recall once again the permutation matrix \(P_{9}\) of order \(9\), defined in Eq. (5). The local unitary equivalence relation (LU-equivalence) allows us to act on \(P_{9}\) locally, obtaining \[Q=(U_{1}\otimes U_{2})P_{9}(U_{3}\otimes U_{4}), \tag{27}\] where the four \(U_{j}\in\mathbb{U}(3)\) are unitary matrices of order three. The resulting matrix \(Q\in\mathbb{U}(9)\) preserves the property of being \(2\)-unitary, but does not need to belong to \(\mathbb{H}(9)\); for instance, \[(F_{3}\otimes F_{3})P_{9}(F_{3}\otimes F_{3})\not\in\mathbb{H}(9). \tag{28}\] Only some special choices of \(U_{j}\) provide \(2\)-unitary Hadamard matrices. The most general form reads \[\big{(}M_{a}F_{3}^{m_{1}}M_{b}\otimes M_{c}F_{3}^{m_{2}}M_{d}\big{)}P_{9}\big{(} M_{e}F_{3}^{m_{3}}M_{f}\otimes M_{g}F_{3}^{m_{4}}M_{h}\big{)}\in\mathbb{H}^{2}(9), \tag{29}\] where \(m_{j}\in\{0,1\}\), \(m_{1}+m_{2}+m_{3}+m_{4}=2\), and \(M_{x}\) for \(x\in\{a,b,...,h\}\) are eight general monomial matrices of the form \(M_{x}=D_{N}P_{N}\) with arbitrarily chosen unitary diagonal (\(D_{N}\)) and permutation (\(P_{N}\)) matrices of order \(N\). In other words, exactly two of the positions denoted by the ellipsis "..." in \[\big{(}M_{a}...M_{b}\otimes M_{c}...M_{d}\big{)}P_{9}\big{(}M_{e}...M_{f} \otimes M_{g}...M_{h}\big{)} \tag{30}\] should be filled by the Fourier matrix \(F_{3}\), while the other two by \(\mathbb{I}_{3}\), leaving \(\binom{4}{2}=6\) such possibilities in total. Similarly, one can write \[\big{(}M_{a}...M_{b}\otimes M_{c}...M_{d}\big{)}P_{16}\big{(}M_{e}...M_{f} \otimes M_{g}...M_{h}\big{)}\in\mathbb{H}^{2}(16), \tag{31}\] where the ellipses are placeholders for two Fourier matrices \(F_{4}(\alpha_{1})\) and \(F_{4}(\alpha_{2})\) and two copies of \(\mathbb{I}_{4}\). This time we obtain six two-parametric families, each depending on \((\alpha_{1},\alpha_{2})\in[0,1)^{\times 2}\). Since the orthogonal matrix \(O_{16}\) (defined in Eq. (7)) is not a permutation matrix, the situation is slightly different.
There are only four one-parametric families, each depending on a single parameter \(\alpha\), \[\big{(}M_{a}...M_{b}\otimes M_{c}...M_{d}\big{)}O_{16}\big{(}M_{e}...M_{f} \otimes M_{g}...M_{h}\big{)}\in\mathbb{H}^{2}(16), \tag{32}\] where the ellipses are placeholders for a single Fourier matrix \(F_{4}(\alpha)\) and three copies of \(\mathbb{I}_{4}\). In particular, the "simplest" possible 2-unitary real Hadamard matrix reads \[H=(F_{2}\otimes F_{2}\otimes\mathbb{I}_{4})O_{16}, \tag{33}\] where \(F_{2}\) denotes the standard Hadamard matrix \(H_{2}\). Consider a special dimension \(d=4k\) with \(k\in\mathbb{N}\setminus\{0\}\). In all such cases, \(\text{AME}(4,d)\) states are represented by permutation matrices \(P_{d^{2}}\) [33]. Assuming the Hadamard conjecture is true [34, 35], we can also pick a real Hadamard matrix \(H_{d}\). We have **Proposition 1**.: _Let \(H_{d}\) be a real Hadamard matrix of order \(d=4k\) for \(k\geqslant 1\). Then there exists a \(2\)-unitary Hadamard matrix,_ \[H_{d^{2}}=(H_{d}\otimes H_{d})P_{d^{2}}\in\mathbb{H}^{2}(d^{2}). \tag{34}\] That is, the tensor square of a real Hadamard matrix acting on a permutational representation of an \(\text{AME}(4,d)\) state brings it to the 2-unitary form. The veracity of this observation is an immediate consequence of the construction. Moreover, this can be generalized to the complex domain by introducing monomial matrices as shown in Eqs. (29) and (31), where the real Hadamard matrices \(H_{d}\) can be replaced by appropriate complex counterparts. Note that for \(d=6\), neither Prop. 1 nor the quantum cat map (8) can provide any example of a 2-unitary matrix in \(\mathbb{H}(36)\). We will return to this observation in the next section in the context of some particular quantum states. ## 4 Multi-unitary Matrices and Absolutely Maximally Entangled States This section contains information about multi-unitary matrices in relation to quantum entanglement and a special class of quantum states. Also, we provide a physical motivation to investigate Hadamard matrices with a special internal structure. Quantum entanglement is a property of Nature that describes strong correlations between physical systems at the sub-atomic level. A primer on this very special resource can be found in [36] and references therein. By definition, entanglement needs at least two interacting systems to emerge. Of special interest are interactions inside many-body systems, which provide practical tools for the coming era of quantum computing: quantum protocols, error-correcting codes, and quantum cryptography all require quantum entanglement. The mathematical description of entanglement involves the Hilbert space formalism. In the commonly accepted interpretation, a composite quantum system is modeled by a tensor product \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes...\), where each \(\mathcal{H}_{X}\) corresponds to the individual subspace of each party. Even in the simplest case of a pure state \(|\Psi\rangle\), the question about its factorization into smaller components is a complex and open problem. In other words, if \(|\Psi\rangle\neq|\psi_{A}\rangle\otimes|\psi_{B}\rangle\otimes...\), where \(|\psi_{X}\rangle\in\mathcal{H}_{X}\), we call it an entangled state; otherwise it is separable. Note that we restrict our attention to pure states only [37]. For mixed quantum states, the mechanism is similar [38], but beyond the scope of this paper.
In a general scenario, multipartite entanglement of a physical system is described by the pair \((M,d)\), with \(M\) being the number of parties (subsystems) and \(d\) the number of levels that each system admits. We assume an equal number of degrees of freedom for each system, \(\mathcal{H}_{A}=\mathcal{H}_{B}=...=\mathcal{H}_{d}\) for \(d\geqslant 2\), as well as an even number of parties \(M=2k\). A pure state \(|\Psi\rangle\) of such a system is called absolutely maximally entangled (AME) if it maximizes the entanglement among all equal bipartitions, that is, tracing out \(k=M/2\) subsystems should leave the state \(\rho_{k}=\mathrm{Tr}_{k}\rho_{M}\) in the maximally mixed form, \(\rho_{k}=\mathbb{I}_{d^{k}}/d^{k}\), where \(\rho_{M}=|\Psi\rangle\langle\Psi|\) is the density matrix of the entire system. In an alternative approach, a pure state \(|\Psi\rangle\) is expressed by means of a tensor of coefficients. Suppose \(|\Psi\rangle\) is expanded in the product basis of \(\mathcal{H}=\bigotimes\mathcal{H}_{d}\) as \[|\Psi\rangle=\sum_{i_{1}}^{d}\sum_{i_{2}}^{d}\cdots\sum_{i_{M}}^{d}\mathcal{T} _{i_{1}i_{2}...i_{M}}\bigotimes_{j=1}^{M}|i_{j}\rangle. \tag{35}\] A tensor \(\mathcal{T}\) with \(M\) indices can be reshaped into a matrix \(U\in\mathbb{U}(d^{k})\) by appropriate combinations of the multi-index \(\mathcal{J}=(j_{1}j_{2}...j_{M})\). In general, there are \(\frac{1}{2}\binom{M}{M/2}\) such rearrangements, excluding global transpositions. If the matrix \(U\) remains unitary after every possible rearrangement of \(\mathcal{J}\), it is called \(k\)-unitary. In particular, if the system contains \(M=4\) subsystems divided pairwise onto two parts, \(k=M/2=2\), we recover the well-known notion of 2-unitarity described in Sec. 1.2. The concept of tensors with those properties, called perfect tensors, was introduced in the context of solvable models for the bulk/boundary correspondence [39]. An AME state corresponding to a \(k\)-unitary permutation matrix \(P_{d^{k}}\) is called a state with minimal support. The other extreme cases, determined by \(k\)-unitary Hadamard matrices, are called AME states with maximal support. A short list of multi-unitary matrices representing AME states of particular configurations is presented below. * There are no AME\((4,2)\) states [40], which means that there are no 2-unitary matrices of size 4. * A 3-unitary Butson matrix \(H_{8}\) corresponds to the AME(6,2) state of six qubits [14], \[H_{8}=\left[\begin{array}{cc|cc|cc|cc}-&-&-&+&-&+&+&+\\ -&-&-&+&+&-&-&-\\ \hline-&+&+&-&-&+&-&-\\ +&+&-&+&-&+&-&-\\ \hline-&+&-&-&-&-&+&-\\ +&-&+&+&-&-&+&-\\ \hline+&-&-&-&+&+&+&-\\ +&-&-&-&-&-&-&+\end{array}\right]\in\mathrm{B}\mathbb{H}^{3}(8,2)\Longleftrightarrow \mathrm{AME}(6,2).\] (36) * Let \(p\) be a prime number. There exist 3-unitary matrices \(B\in\mathrm{B}\mathbb{H}^{3}(p^{3},p)\). In particular, there exists a Butson-type matrix corresponding to the AME\((6,3)\) state of six qutrits [41]. * A 3-unitary matrix \(H_{64}=P_{64}H_{4}^{\otimes 3}\in\mathbb{H}^{3}(64)\) implies the existence of the state AME\((6,4)\) of six ququarts [41]. The matrix \(P_{64}\) is an appropriate 64-dimensional permutation matrix. * There are no 4-unitary matrices of order 16, which follows from the non-existence of the state AME\((8,2)\) [42]. A collection of AME states, indicating their (non-)existence, together with additional brief commentary, is presented in the online service maintained by Huber and Wyderka [43].
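As an illustration of Proposition 1 and the correspondence between 2-unitary Hadamard matrices and AME states, the following sketch (assuming NumPy; the helper functions are repeated from the earlier sketch for self-containedness) builds the permutation \(P_{16}\) of Eq. (6) and certifies that \((H_{4}\otimes H_{4})P_{16}\) is a 2-unitary Hadamard matrix, the case \(d=4\) of Eq. (34).

```python
import numpy as np

def perm_matrix(cols):
    # Convention of Eqs. (5)-(6): the k-th entry is the (1-based) row
    # holding the unity in column k.
    N = len(cols)
    P = np.zeros((N, N))
    P[np.array(cols) - 1, np.arange(N)] = 1.0
    return P

def linear_entropy(X):
    n = X.shape[0]
    XX = X @ X.conj().T
    return n / (n - 1) * (1 - np.trace(XX @ XX).real / np.trace(XX).real ** 2)

def reshuffle(X, d):
    return np.transpose(X.reshape(d, d, d, d), (0, 2, 1, 3)).reshape(d * d, d * d)

def partial_transpose(X, d):
    return np.transpose(X.reshape(d, d, d, d), (0, 3, 2, 1)).reshape(d * d, d * d)

P16 = perm_matrix([1, 16, 6, 11, 15, 2, 12, 5, 8, 9, 3, 14, 10, 7, 13, 4])
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H4 = np.kron(H2, H2)            # a real Hadamard matrix of order d = 4
Y = np.kron(H4, H4) @ P16       # Eq. (34) for d = 4

# All three entropies equal 1: Y is 2-unitary with +/-1 entries, i.e. a real
# 2-unitary Hadamard matrix corresponding to an AME(4,4) state.
for M in (Y, reshuffle(Y, 4), partial_transpose(Y, 4)):
    assert np.isclose(linear_entropy(M), 1.0)
```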
An open problem concerning the existence of a particular structure is related to the recently discovered "golden" AME(4,6) state of four quhexes [24, 44]. This truly quantum state, represented by a \(2\)-unitary matrix of order \(N=36\) with a startling internal structure, has an intrinsically complex nature, with phases given by \(20^{\text{th}}\) roots of unity. There are no real Hadamard matrices of order six, hence this dimension does not apply to formula (34). However, the question whether there exists a complex Hadamard matrix of order \(36\) representing an \(\text{AME}(4,6)\) state remains unsolved. ### Strongly \(2\)-Unitary Matrices Finally, we complete the classification by considering strong \(2\)-unitarity. A matrix \(X\) is said to be strongly \(2\)-unitary (or self \(2\)-unitary) if \(X=X^{\Gamma}=X^{\text{R}}\). This can be generalized to strong \(k\)-unitary matrices, which remain equal to one another, and unitary, after any possible rearrangement of the multi-index (cf. previous section). **Proposition 2**.: _In the set of complex Hadamard matrices of any order \(N=d^{2}\) for \(d\geqslant 2\), there are no strongly \(2\)-unitary matrices._ Proof.: The case of \(d=2\) is excluded by Theorem 1 in Ref. [40]. In the general case, the second column of \(X_{d^{2}}=X_{d^{2}}^{\text{R}}=X_{d^{2}}^{\Gamma}\) is, by construction, repeated at the \((d+1)^{\text{th}}\) position. Hence, the matrix cannot be of full rank. ## 5 Conclusions and Open Problems In this paper we explore the set of (complex) Hadamard matrices with a special structure, related to absolutely maximally entangled states and multi-unitary matrices. The set of Hadamard matrices is rich enough to contain matrices which are (self-)dual (R-dual), \(\Gamma\)-dual and two-unitary. This allows us to present several constructive analytic examples and even continuous multiparametric families of such objects. One exceptional case is strong \(k\)-unitarity, which is excluded just by construction. In the simplest case, \(k=2\), there is no unitary matrix \(X\) for which \(X=X^{\text{R}}=X^{\Gamma}\). A full classification of the above-mentioned matrices currently seems out of reach. However, while working on the project, we identified several problems to be addressed in the future, which are interesting from both the physical and the mathematical viewpoint: 1. Analyze the special class of Hadamard matrices for which the operations \({}^{\text{R}}\) and \({}^{\Gamma}\) correspond to global permutation matrices. An example of such a matrix is the Karlsson family \(K_{9}^{(2)}\). 2. Describe the subclass of Hadamard matrices of size \(N=d^{2}\) which can be written as a tensor product, \(H=A\otimes B\), where \(A\) and \(B\) are some (not necessarily Hadamard) matrices. 3. Classify all operations that do not spoil \(k\)-unitarity. For example, for \(K_{9}^{(2)}\), such a transformation is \(P_{9}K_{9}^{(2)}\), where \(P_{9}\) is defined in Eq. (5). 4. Prove or disprove the conjecture that there exists a \(36\)-dimensional real or complex Hadamard matrix which corresponds to an \(\text{AME}(4,6)\) state. ## 6 Acknowledgements We acknowledge several fruitful discussions with Dardo Goyeneche and Suhail Ahmad Rather. We also wish to thank the participants of the Hadamard 2022 conference [45], which took place at the Jagiellonian University in Krakow at the turn of June and July 2022, during which the problems presented in this paper were first raised.
WB and KZ are supported by the Foundation for Polish Science under the Team-Net Project No. POIR.04.04.00-00-17C1/18-00, and QuantERA Project No. 2021/03/Y/ST2/00193. GRM is supported by ERC AdG NOQIA; Ministerio de Ciencia y Innovation Agencia Estatal de Investigaciones (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTERA DYNAMITE PCI2022-132919, Proyectos de I+D+I "Retos Colaboracion" QUSPIN RTC2019-007196-7); MICIIN with funding from European Union NextGenerationEU (PRTR-C17.I1) and by Generalitat de Catalunya; Fundacio Cellex; Fundacio Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU (PASQuanS2.1, 10113690); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 - NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie-Sklodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 ("La Caixa" Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them. ## Appendix A Numerical techniques In pursuit of numerical examples, we used two different methods to obtain either a matrix \(Y\) or a pair of unitary diagonal matrices which bring a certain matrix \(X\) into the form \(Y=D_{L}XD_{R}\) with the desired properties. Let \(D_{L}\) and \(D_{R}\) be two unitary diagonal matrices, each depending on \(N\) real phases, \(\alpha=(\alpha_{1},...,\alpha_{N})\) and \(\beta=(\beta_{1},...,\beta_{N})\), respectively. The first method is a random walk over the phases \(\alpha\) and \(\beta\). Define a function \(\mathcal{Z}\) of \(2N\) real variables: \[\mathcal{Z}(\alpha_{1},...,\alpha_{N};\beta_{1},...,\beta_{N})=\chi\big{(}D_{ L}(\alpha)XD_{R}(\beta)\big{)}, \tag{37}\] where \[\chi(U)=|S(U)-1|+|S(U^{\Gamma})-1|+|S(U^{\rm R})-1| \tag{38}\] with \(S\) being the linear entropy defined in Eq. (4) for any matrix, in particular for \(U\in\mathbb{U}(N)\), while \(X\) in Eq. (37) represents some element from \(\mathbb{H}(N)\) for \(N=9\) or \(N=16\). For many \(X\in\mathbb{H}(N)\) or \(X\in\mathrm{B}\mathbb{H}(N,q)\) the minimizing procedure quickly converges, which is equivalent to \(\mathcal{Z}\to 0\). The possibility of fixing some phases allows the recovery of analytical forms of \(D_{L}\) and \(D_{R}\). It is also easy to modify the function \(\chi(U)\) in order to search for dual or self-dual CHM. When one needs to obtain a single matrix from the set \(\mathbb{H}^{2}(N)\) or \(\mathbb{H}^{\rm R}(N)\), another numerical recipe should be applied. Recall the Sinkhorn algorithm, which was originally designed to generate bistochastic matrices [46, 47].
When one needs to obtain a single matrix from the set \(\mathbb{H}^{2}(N)\) or \(\mathbb{H}^{\rm R}(N)\), another numerical recipe should be applied. Recall the Sinkhorn algorithm, which was originally designed to generate bistochastic matrices [46, 47]. Convergence of this alternating combination of disjoint operations is assured under strict mathematical conditions [48]; however, the flexibility of this method has proved useful in a wide range of applications. Recently, a modified version of this procedure was successfully used in the search for the very special quantum states describing multipartite entanglement [24], and it assisted the discovery of a series of new complex Hadamard matrices [9] in several dimensions \(N\geqslant 8\). Here we present yet another modification, adapted for \(2\)-unitary CHM.

Consider a map \(\mathcal{M}_{S}:\mathbb{C}^{N^{2}}\rightarrow\mathbb{H}(N)\) defined by an iterative procedure. We start with a matrix \(X_{0}\) (a seed) with entries drawn from the random Gaussian distribution, \(\forall\,j,k:(X_{0})_{jk}\in\mathcal{N}(0,1)\), and every subsequent iteration consists of four operations: 1) normalization of entries, 2) polar decomposition (\(\mathrm{P_{d}}\)), 3) reshuffling (\(\mathrm{R}\)), and 4) partial transpose (\(\Gamma\)). This can be written compactly with the help of an intermediate matrix \(T_{n}\) as

\[\mathcal{M}_{S}:\begin{cases}(T_{n})_{jk}&=\dfrac{(X_{n})_{jk}}{|(X_{n})_{jk}|},\\ \\ X_{n+1}&=\left(\mathrm{P_{d}}(T_{n})\right)^{\mathrm{\Gamma R}}\quad\text{for}\quad n\geqslant 1.\end{cases} \tag{39}\]

In many cases, as \(n\rightarrow\infty\), for any random seed \(X_{0}\), we obtain \(Y=\mathcal{M}_{S}(X_{0})\) being a 2-unitary CHM of dimension \(N=9\) or \(N=16\).

## Appendix B Examples of Self-Dual (R-Dual) and Self \(\Gamma\)-Dual CHM

A four-parametric self-R-dual unitary family stemming from the Fourier matrix \(F_{9}^{(4)}\) [7] can be obtained with two diagonal matrices,

\[D_{L} =\mathrm{diag}\Big\{1,1,1,1,1,1,e^{2i\pi\alpha},e^{2i\pi\alpha},e^{2i\pi\alpha}\Big\}:\alpha\in[0,1), \tag{40}\]
\[D_{R} =\mathrm{diag}\Big\{1,1,1,1,a\omega^{2},b\omega^{4},1,c\omega^{4},d\omega^{8}\Big\}:\omega=\exp\{i\pi/9\}, \tag{41}\]

as \(Y=D_{L}(\alpha)F_{9}^{(4)}(a,b,c,d)D_{R}\in\mathbb{H}^{\mathrm{R}}(9)\). The matrix \(D_{R}\) depends on the original parameters \(\{a,b,c,d\}\) involved in the Fourier matrix, while \(D_{L}\) depends on an additional parameter which does not affect self-duality. The entropy is \(S(Y)=(1,1,f(a,b,c,d))\), where \(f\) is a function of the independent parameters. With appropriate diagonal unitary matrices \(D_{L}\) and \(D_{R}\), it is possible to recover numerous other similar examples, including \(D_{L}(F_{3}\otimes F_{3})D_{R}\), \(D_{L}(F_{4}(a)\otimes F_{4}(a))D_{R}\), \(D_{L}H_{16A,B}D_{R}\), and \(D_{L}(F_{2}\otimes F_{8}^{(5)}(a,b,c,d,e))D_{R}\), which belong to \(\mathbb{H}^{\mathrm{R}}(9)\) or \(\mathbb{H}^{\mathrm{R}}(16)\).

Similarly, if the reshuffling operation is replaced by the partial transpose, one can discover the following matrices. The isolated Butson matrix of order \(N=9\),

\[B_{9}^{(0)}=\exp\left\{\frac{i\pi}{3}\left[\begin{array}{ccc|ccc|ccc}.&.&.&.&.&.&.&.&.\\.&5&3&2&5&3&2&1&5\\.&3&3&.&1&5&4&1&3\\ \hline.&2&2&.&2&4&4&4&.\\.&5&1&.&3&3&4&3&1\\.&3&5&2&3&5&2&5&1\\ \hline.&.&2&4&2&.&2&4&4\\.&3&3&4&5&1&.&3&1\\.&1&5&4&3&3&.&1&3\\ \end{array}\right]\right\}\in\mathrm{B}\mathbb{H}(9,6) \tag{42}\]

becomes self \(\Gamma\)-dual when transformed by two diagonal unitary matrices of the form \(D_{L}=\mathrm{diag}\{1,1,1,1,\omega,1,1,1,\omega\}\) and \(D_{R}=\mathrm{diag}\{1,1,1,1,\omega,1,1,\omega,\omega^{2}\}\) with \(\omega=\exp\{2i\pi/3\}\). That is, \(Y=D_{L}B_{9}^{(0)}D_{R}\) satisfies \(S(Y)=(1,\frac{20}{27},1)\) and \(Y=Y^{\Gamma}\in\mathrm{B}\mathbb{H}^{\Gamma}(9,6)\).
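The iteration (39) is equally short to prototype. Below is a minimal sketch, reusing `reshuffle` and `partial_transpose` from the previous snippet; it assumes that the superscript \(\Gamma\mathrm{R}\) means reshuffling followed by the partial transpose, and lets scipy's polar decomposition play the role of \(\mathrm{P_{d}}\):

```python
import numpy as np
from scipy.linalg import polar

def sinkhorn_2unitary(d, iters=2000, seed=None):
    # Modified Sinkhorn map M_S of Eq. (39): a random Gaussian seed is
    # driven towards a 2-unitary CHM by alternating the four operations.
    rng = np.random.default_rng(seed)
    N = d * d
    X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    for _ in range(iters):
        T = X / np.abs(X)      # 1) entrywise (unimodular) normalization;
                               #    entries are almost surely nonzero
        U, _ = polar(T)        # 2) project onto the closest unitary matrix
        X = partial_transpose(reshuffle(U, d), d)  # 3) R, then 4) Gamma
    return X
```

Whether a run has converged to a genuinely 2-unitary matrix can be monitored with the function `chi` defined above.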
Note that in this class of matrices, the entropy ceases to be flat on the central component. Another isolated example can be constructed out of the well-known matrix \(N_{9}^{(0)}\) found by Beauchamp and Nicoara [31]. Here, we present its equivalent form

\[N_{9}^{(0)}\simeq\left[\begin{array}{rrrrrrrrr}1&1&1&1&1&1&1&1&1\\ 1&y&y^{2}&-1&-y&y&y^{3}&y^{3}&y\\ 1&y^{2}&y^{4}&-y&y^{2}&-y^{3}&y^{4}&y^{2}&1\\ 1&-y^{4}&-y^{3}&y^{3}&-y^{3}&-y^{2}&-y^{4}&-y^{2}&-1\\ 1&-1&y^{2}&-1/y&1&1&y&y^{2}&1/y\\ 1&y^{3}&-y&-1&y^{2}&y^{3}&y&y&y\\ 1&y&1&-1/y&y^{2}&y^{2}&-1&1&1/y\\ 1&y^{4}&y^{2}&-y&y^{4}&y^{2}&y^{2}&-y^{2}&1\\ 1&y^{3}&y^{4}&-y^{3}&y^{4}&y^{2}&y^{3}&y^{2}&-1\end{array}\right],\ \ y=-\frac{1}{4}+i\frac{\sqrt{15}}{4}. \tag{43}\]

Two diagonal matrices

\[D_{L} =\mathrm{diag}\{1,1,1,1,-y^{4},-y^{3},1,y,1\}, \tag{44}\]
\[D_{R} =\mathrm{diag}\{1,1,1,1,-1,-y,-y^{3},\xi,\xi y\}\quad:\quad\xi=\frac{7}{2^{7}}+i\frac{33\sqrt{15}}{2^{7}} \tag{45}\]

bring \(N_{9}^{(0)}\) to \(Y=D_{L}N_{9}^{(0)}D_{R}\), so that \(S(Y)=(1,\frac{22259}{31104},1)\) and \(Y=Y^{\Gamma}\in\mathbb{H}^{\Gamma}(9)\). The fact that we need to introduce the rather uncommon unimodular number \(\xi\) remains one of the mysteries of complex Hadamard matrices.

Not only isolated matrices provide self \(\Gamma\)-dual structures in \(\mathbb{H}(9)\). Karlsson's matrix \(K_{9}^{(2)}(\zeta)\) with \(\zeta=3\), \(D_{L}=D_{R}=\mathrm{diag}\{1,1,1,1,\omega^{2},\omega,\omega,\omega^{2},1\}\) and \(\omega=\exp\{2i\pi/3\}\) defines a self \(\Gamma\)-dual matrix \(Y=D_{L}K_{9}^{(2)}(3)D_{L}\) such that \(S(Y)=(1,\frac{3}{4},1)\) and \(Y=Y^{\Gamma}\in\mathbb{H}^{\Gamma}(9)\).

The tensor product of two Fourier matrices \(F_{3}\otimes F_{3}\) can be turned into a self \(\Gamma\)-dual matrix,

\[Y(\alpha)=\left(\mathrm{diag}\{1,1,e^{2i\pi\alpha}\}\otimes\mathbb{I}_{3}\right)(F_{3}\otimes F_{3}), \tag{46}\]

to form a one-parametric family such that \(Y(\alpha)=Y^{\Gamma}(\alpha)\) and \(S(Y(\alpha))=(1,0,1)\) for \(\alpha\in[0,1)\). Similarly, one has \(Y(a_{1},a_{2})=F_{4}(a_{1})\otimes F_{4}(a_{2})\in\mathbb{H}^{\Gamma}(16)\) without any additional tuning by diagonal matrices; \(Y(a_{1},a_{2})=Y^{\Gamma}(a_{1},a_{2})\) and \(S(Y(a_{1},a_{2}))=(1,0,1)\) for all \(a_{1},a_{2}\in[0,1)\).
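The tensor-product claims are easy to confirm numerically. A small sketch, assuming the unnormalized Fourier matrix \((F_{N})_{jk}=e^{2\pi ijk/N}\) and reusing `partial_transpose` from the snippet in Appendix A; since \(F_{N}\) is symmetric, self \(\Gamma\)-duality of \(F_{N}\otimes F_{N}\) follows for every \(N\):

```python
import numpy as np

def fourier(N):
    # Unnormalized Fourier matrix, (F_N)_{jk} = exp(2*pi*i*j*k/N)
    j = np.arange(N)
    return np.exp(2j * np.pi * np.outer(j, j) / N)

# Eq. (46): Y(alpha) = (diag{1,1,e^{2 i pi alpha}} (x) I_3)(F_3 (x) F_3)
alpha = 0.37
D = np.diag([1, 1, np.exp(2j * np.pi * alpha)])
Y = np.kron(D, np.eye(3)) @ np.kron(fourier(3), fourier(3))
assert np.allclose(Y, partial_transpose(Y, 3))   # Y = Y^Gamma

# The same mechanism works for every tensor square of a Fourier matrix.
for N in range(2, 17):
    Y = np.kron(fourier(N), fourier(N))
    assert np.allclose(Y, partial_transpose(Y, N))
```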
Finally, the following list contains self \(\Gamma\)-dual CHM obtained by the tensor product construction of two Fourier matrices \(F_{N}\) for \(5\leqslant N\leqslant 16\), including possible parameterizations:

\[Y_{25} =F_{5}^{(0)}\otimes F_{5}^{(0)}, \tag{47}\]
\[Y_{36}(\alpha_{1},\alpha_{2}) =F_{6}^{(2)}(\alpha_{1},\alpha_{2})\otimes F_{6}^{(2)}(0,0), \tag{48}\]
\[Y_{49} =F_{7}^{(0)}\otimes F_{7}^{(0)}, \tag{49}\]
\[Y_{64}(\alpha_{1},...,\alpha_{5}) =F_{8}^{(5)}(\alpha_{1},...,\alpha_{5})\otimes F_{8}^{(5)}(0,0,0,0,0), \tag{50}\]
\[Y_{81}(\alpha_{1},...,\alpha_{4};\alpha_{5},\alpha_{6}) =F_{9}^{(4)}(\alpha_{1},...,\alpha_{4})\otimes F_{9}^{(4)}(\alpha_{5},0,0,\alpha_{6}), \tag{51}\]
\[Y_{100}(\alpha_{1},...,\alpha_{4}) =F_{10}^{(4)}(\alpha_{1},...,\alpha_{4})\otimes F_{10}^{(4)}(0,0,0,0), \tag{52}\]
\[Y_{121} =F_{11}^{(0)}\otimes F_{11}^{(0)}, \tag{53}\]
\[Y_{144A}(\alpha_{1},...,\alpha_{9};\alpha_{10}) =F_{12A}^{(9)}(\alpha_{1},...,\alpha_{9})\otimes F_{12A}^{(9)}(\alpha_{10},0,0,0,0,0,0,0,0), \tag{54}\]
\[Y_{144B}(\alpha_{1},...,\alpha_{9}) =F_{12B}^{(9)}(\alpha_{1},...,\alpha_{9})\otimes F_{12B}^{(9)}(0,0,0,0,0,0,0,0,0), \tag{55}\]
\[Y_{144C}(\alpha_{1},...,\alpha_{9};\alpha_{10}) =F_{12C}^{(9)}(\alpha_{1},...,\alpha_{9})\otimes F_{12C}^{(9)}(\alpha_{10},0,0,0,0,0,0,0,0), \tag{56}\]
\[Y_{144D}(\alpha_{1},...,\alpha_{9}) =F_{12D}^{(9)}(\alpha_{1},...,\alpha_{9})\otimes F_{12D}^{(9)}(0,0,0,0,0,0,0,0,0), \tag{57}\]
\[Y_{169} =F_{13}^{(0)}\otimes F_{13}^{(0)}, \tag{58}\]
\[Y_{196}(\alpha_{1},...,\alpha_{6}) =F_{14}^{(6)}(\alpha_{1},...,\alpha_{6})\otimes F_{14}^{(6)}(0,0,0,0,0,0), \tag{59}\]
\[Y_{225}(\alpha_{1},...,\alpha_{8}) =F_{15}^{(8)}(\alpha_{1},...,\alpha_{8})\otimes F_{15}^{(8)}(0,0,0,0,0,0,0,0), \tag{60}\]
\[Y_{256}(\alpha_{1},...,\alpha_{17}) =F_{16}^{(17)}(\alpha_{1},...,\alpha_{17})\otimes F_{16}^{(17)}(0,...,0). \tag{61}\]

In each case, \(\alpha_{j}\in[0,1)\), \(S(Y_{N})=(1,0,1)\) and \(Y_{N}=Y_{N}^{\Gamma}\). Moreover, in general, for each dimension \(N\), the matrix \(Y_{N^{2}}:=F_{N}(\vec{0})\otimes F_{N}(\vec{0})\) is self \(\Gamma\)-dual, where \(\vec{0}\) represents the vector of parameters \(\alpha_{j}\) set to \(0\) for all \(j\), provided a given Fourier matrix admits additional parameterization.

## Appendix C Family of \(2\)-unitary CHM of order \(N=16\)

The Sinkhorn algorithm produces several numerical examples of \(2\)-unitary complex Hadamard matrices in the set \(\mathbb{H}^{2}(16)\). Many of them admit free parameters and can be expressed analytically. Here we present a single, one-parameter affine family in the form of its core of size \(15\),

\[\mathrm{core}\Big(T_{16}^{(1)}(a)\Big)= \tag{62}\]
\[\left[\begin{array}{rrrrrrrrrrrrrrr}-1&-ia&a&-a&ia&1&-1&1&-1&-a&ia&1&-1&a&-ia\\ i&a&-a&a&-a&-i&-1&-i&-1&a&-a&i&1&a&-a\\ 1&-1&-1&-1&-1&1&1&1&1&-1&-1&1&-1&-1\\ \hline-1&1&-1&-1&1&1&1/a&i/a&-ia&a&-1/a&-i/a&ia&-a\\ 1&ia&a&-a&-ia&-1&-1&-i&i&-ia&-i&i&ia&a\\ -i&-a&-a&a&a&i&-1&-1&i&ia&-ia&1&-i&ia&-ia\\ -1&-1&1&-1&-1&-1&-1/a&-ia&a&1/a&i/a&ia&-ia\\ \hline-i&-ib&-b&ia&ia&ia&-ia&i&-1&b&ib&-ia&ia&-ia&ia\\ -1&ia&-a&ia&a&-i&-1&-a&ia&i&i&-ia&i&-ia&a\\ i&-a&ia&ia&ia&-1&-i&i&1&a&-a&-1&-i&-ia&-ia\\ -i&ib&b&i&ia&-ia&-ia&ia&i&-1&-b&-ib&ia&-ia&-ia&ia\\ \hline 1&1&1&-ia&-a&1/a&-i/a&-i/a&ia&a&-1&-1&-1&-1\\ -1&-ia&a&-ia&a&i&i&-i&-i&ia&-a&-1&1&-a&ia\\ i&a&-a&ia&ia&1&i&-1&-i&-ia&-ia&-i&-1&-a&a\\ 1&-1&-1&-ia&-a&-1/a&i/a&ia&a&-1&-1&1&1\end{array}\right],\]

where \(b=a^{2}\), while \(a=\exp\{2i\pi p_{1}\}\) is a unimodular parameter depending on \(p_{1}\in[0,1)\).
The matrix \(Y_{16}^{(1)}(a)\equiv D_{L}(a)T_{16}^{(1)}(a)D_{R}(a)\), with \(\omega=\exp\{i\pi/6\}\) and \[D_{L}(a) =\text{diag}\big{\{}1,1,1,1,1,1,-1,1,1,a,-a,-1,1,-1,-1,1\big{\}}, \tag{63}\] \[D_{R}(a) =\text{diag}\big{\{}1,1,1,1,1,-i,-ia^{2},-a^{2},\omega,\omega^{4},\omega^{7},\omega^{10},-i,-1,i,1\big{\}}, \tag{64}\] forms a 2-unitary family in \(\mathbb{H}^{2}(16)\). Several other examples can be found on GitHub [28].
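Candidates such as \(Y_{16}^{(1)}(a)\) can be certified along the following lines; the sketch below assumes the Hadamard normalization \(YY^{\dagger}=N\,\mathbb{I}\) for a CHM of order \(N=d^{2}\), and reuses the reshape-based `reshuffle` and `partial_transpose` from the snippet in Appendix A:

```python
import numpy as np

def is_2unitary_chm(Y, d, tol=1e-10):
    # A 2-unitary CHM of order N = d*d has unimodular entries and stays
    # proportional to a unitary after reshuffling and partial transpose.
    N = d * d
    if not np.allclose(np.abs(Y), 1.0, atol=tol):
        return False
    return all(np.allclose(V @ V.conj().T, N * np.eye(N), atol=tol)
               for V in (Y, reshuffle(Y, d), partial_transpose(Y, d)))
```

For instance, `is_2unitary_chm(Y, 4)` tests a candidate of order \(16\).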
2304.07657
Identities for vacillating tableaux via growth diagrams
We give bijective proofs using Fomin's growth diagrams for identities involving numbers of vacillating tableaux that arose in the representation theory of partition algebras or are inspired by such identities.
Christian Krattenthaler
2023-04-15T23:44:05Z
http://arxiv.org/abs/2304.07657v2
# Identities for vacillating tableaux via growth diagrams ###### Abstract. We give bijective proofs using Fomin's growth diagrams for identities involving numbers of vacillating tableaux that arose in the representation theory of partition algebras or are inspired by such identities. Key words and phrases:Growth diagrams, Robinson-Schensted correspondence, standard Young tableaux, oscillating tableaux, vacillating tableaux, set partitions, Stirling numbers of the second kind, integer partitions 2020 Mathematics Subject Classification: Primary 05A15; Secondary 05A17 05A19 05E10 ## 1. Introduction Recently, there has been a resurgence of interest in algorithmic bijections involving vacillating tableaux, due to Halverson and Lewandowski [8], which produce combinatorial proofs of identities that arise in the representation theory of partition algebras. Here, given a positive integer \(K\), a _vacillating tableau_ of length \(K\) from the (integer) partition \(\lambda\) to the partition \(\mu\) (see Section 2 for all definitions) is a sequence of partitions \(\lambda=\lambda^{0}\supset\lambda^{1}\subset\lambda^{2}\supset\cdots\subset\lambda^{2K}=\mu\), where \(\lambda^{i}\) and \(\lambda^{i+1}\) differ by exactly one cell for all \(i\). We write \(m_{\mu}^{\lambda}(K)\) for the number of vacillating tableaux from \(\lambda\) to \(\mu\) and, as usual, \(f^{\lambda}\) for the number of _standard Young tableaux_ of shape \(\lambda\). Using this notation, the identities considered in [8] are

\[n^{k}=\sum_{\lambda\vdash n}f^{\lambda}m_{(n)}^{\lambda}(k),\quad\text{for }n\geq 1, \tag{1.1}\]

and

\[B_{2k}=\sum_{\lambda\vdash n}\left(m_{(n)}^{\lambda}(k)\right)^{2}=m_{(n)}^{(n)}(2k),\quad\text{for }n\geq 2k, \tag{1.2}\]

where \(B_{m}\) is the \(m\)-th _Bell number_ (the number of all set partitions of an \(m\)-element set). (The expression in the centre results trivially from the right-hand side by cutting the vacillating tableau from \((n)\) to \((n)\) halfway into two vacillating tableaux.) As a matter of fact, Martin and Rollet [10] had proven the more general identity

\[\sum_{l=1}^{n}S(k,l)=m_{(n)}^{(n)}(k),\quad\text{for }n\geq 1, \tag{1.3}\]

via a (rather impenetrable) recursively built bijection. (See Appendix A for a completely worked out special case of the identity.) Here, the number \(S(k,l)\) is a _Stirling number of the second kind_, giving the number of all partitions of a \(k\)-element set into \(l\) blocks. Clearly, the identity (1.2) is the special case of (1.3) where we replace \(k\) by \(2k\). It is the article [1], in which Berikkyzy et al. study fine properties of Halverson and Lewandowski's algorithm, which drew my attention to [8]. The mentioned algorithm is an extremely elegant deletion-insertion algorithm that proves (1.1) bijectively. Now, whenever I see (Robinson-Schensted) "insertion", my immediate reaction is: there must also be a presentation of the algorithm in terms of Fomin's _growth diagrams_ [4, 5, 6] (see [11, 12], [13, Sec. 5.2] and [14, Sec. 7.13] for non-technical expositions)! On my search in the literature to find out whether this had already been explained, I discovered that I had commented on that issue in [9] (which I had completely forgotten1):

Footnote 1: I believe that the article [8] had been brought to my attention by a referee.
\(\ldots\) _we point out that \(\ldots\) Robinson-Schensted like algorithms between set partitions and oscillating sequences of (integer) partitions have also been constructed by Halverson and Lewandowski, with the completely different motivation of explaining combinatorial identities arising from the representation theory of the partition algebra. Halverson and Lewandowski provide both the insertion/deletion and the growth diagram presentation of the algorithms. However, in their considerations, Greene's theorem does not play any role._

In retrospect: mostly correct, but not entirely. There are two deletion-insertion algorithms in [8]: one that proves (1.1), and one that proves (1.2). First, Halverson and Lewandowski do provide a growth diagram description of the latter algorithm, but not of the former. Second, the growth diagram description of the latter algorithm is not quite the "right one" in the sense that it does not extend to a bijective proof of the more general identity (1.3). Furthermore, by missing these points, I also missed that, by looking at the above identities from the growth diagram perspective, there are more identities to be discovered in this context. The purpose of the present article is to make up for these oversights.

After fixing notations and recalling the concept of growth diagrams in the next section in order to be self-contained (largely copied from [9]), we present a growth diagram bijection proving (1.1) in Section 3. At this point, a disclaimer is in order: as opposed to [9], which provides growth diagram versions of the bijections in [3], here I do not claim that our bijection proving (1.1) is a growth diagram version of Halverson and Lewandowski's bijection. Rather, it seems that the two are not related in any simple way. See the end of Section 3 for more specific comments. On the other hand, they share a limiting property which is investigated by Berikkyzy et al. in [1] for the algorithm of Halverson and Lewandowski. See Theorem 3 in the same section. With this perspective in mind, one realises that (1.1) can be generalised to

\[n^{k}=\sum_{\lambda\vdash n}f^{\lambda}m^{\lambda}_{\mu}(k),\quad\text{for }n\geq 1, \tag{1.4}\]

where \(\mu\) is _any_ fixed partition of \(n\). The corresponding bijective proof using growth diagrams is presented in Section 4. (The identity (1.4) could also be proved by an appropriate adaptation of the deletion-insertion algorithm of Halverson and Lewandowski.) We move on in Section 5 to give a growth diagram bijection proving (1.3). That proof provides the inspiration for variations of that identity. In Section 6, we present a growth diagram bijection proving

\[S(k,n)+S(k,n-1)=m_{(n)}^{(1^{n})}(k),\quad\text{for }n\geq 1, \tag{1.5}\]

where \(1^{n}\) is short for a sequence of \(n\) 1's. (See Appendix B for a completely worked out special case of the identity.) Moreover, in Section 7, we argue that we have

\[\sum_{l=1}^{n-2}S(k,l)+\sum_{l=1}^{n-1}l^{2}S(k,l)+(n-1)^{2}S(k,n)=m_{(n-1,1)}^{(n-1,1)}(k),\quad\text{for }n\geq 2. \tag{1.6}\]

(See Appendix C for a completely worked out special case of the identity.) Obviously, more identities of this type could be derived in the same spirit. We emphasise that, once one has understood the growth-diagram machinery, all these proofs are _one-picture proofs_: the growth diagram picture makes the corresponding identities immediately "obvious".

**2. Definitions and notation.** We start by fixing the standard partition notation (cf. e.g. [14, Sec. 7.2]).
A _partition_ is a weakly decreasing sequence \(\lambda=(\lambda_{1},\lambda_{2},\dots,\lambda_{\ell})\) of positive integers. This also includes the _empty partition_ \(()\), denoted by \(\emptyset\). To each partition \(\lambda\), one associates its _Ferrers diagram_ (also called _Ferrers shape_), which is the left-justified arrangement of squares with \(\lambda_{i}\) squares in the \(i\)-th row, \(i=1,2,\dots\). If \(n=\lambda_{1}+\lambda_{2}+\dots+\lambda_{\ell}\), then we say that \(\lambda\) is a partition of (size) \(n\), and we write \(\lambda\vdash n\). We define a _partial order_ \(\subseteq\) on partitions by containment of their Ferrers diagrams. The _union_ \(\mu\cup\nu\) of two partitions \(\mu\) and \(\nu\) is the partition which arises by forming the union of the Ferrers diagrams of \(\mu\) and \(\nu\). Thus, if \(\mu=(\mu_{1},\mu_{2},\dots)\) and \(\nu=(\nu_{1},\nu_{2},\dots)\), then \(\mu\cup\nu\) is the partition \(\lambda=(\lambda_{1},\lambda_{2},\dots)\), where \(\lambda_{i}=\max\{\mu_{i},\nu_{i}\}\) for \(i=1,2,\dots\). The _intersection_ \(\mu\cap\nu\) of two partitions \(\mu\) and \(\nu\) is the partition which arises by forming the intersection of the Ferrers diagrams of \(\mu\) and \(\nu\). Thus, if \(\mu=(\mu_{1},\mu_{2},\dots)\) and \(\nu=(\nu_{1},\nu_{2},\dots)\), then \(\mu\cap\nu\) is the partition \(\rho=(\rho_{1},\rho_{2},\dots)\), where \(\rho_{i}=\min\{\mu_{i},\nu_{i}\}\) for \(i=1,2,\dots\). The _conjugate of_ a partition \(\lambda\) is the partition \(\lambda^{\prime}=(\lambda^{\prime}_{1},\dots,\lambda^{\prime}_{\lambda_{1}})\) where \(\lambda^{\prime}_{j}\) is the length of the \(j\)-th column in the Ferrers diagram of \(\lambda\).

Given a partition \(\lambda=(\lambda_{1},\lambda_{2},\dots,\lambda_{\ell})\), a _standard_ (_Young_) _tableau of shape_ \(\lambda\) is a left-justified arrangement of the integers \(1,2,\dots,\lambda_{1}+\lambda_{2}+\dots+\lambda_{\ell}\) with \(\lambda_{i}\) entries in row \(i\), \(i=1,2,\dots\), such that the entries along rows and columns are increasing. By considering the sequence of partitions (Ferrers shapes) \((\lambda^{i})_{i\geq 0}\), where \(\lambda^{i}\) is the shape formed by the entries of a standard tableau \(T\) which are at most \(i\), \(i=0,1,2,\dots\), one sees that standard tableaux of shape \(\lambda\) are in bijection with sequences \(\emptyset=\lambda^{0}\subset\lambda^{1}\subset\dots\subset\lambda^{n}=\lambda\), where \(\lambda^{i-1}\) and \(\lambda^{i}\) differ by exactly one square for all \(i\).

_Growth diagrams_ are certain labellings of arrangements of cells. The arrangements of cells which we need here are arrangements which are left-justified (that is, they have a straight vertical left boundary), bottom-justified (that is, they have a straight horizontal bottom boundary), and whose rows and columns are "without" holes, that is, if we move along the top-right boundary of the arrangement, we always move either to the right or to the bottom. Figure 1.a shows an example of such a cell arrangement. We fill some cells of such an arrangement \(C\) with crosses \(X\) such that every row and every column contains at most one \(X\). See Figure 1.b for an example. Next, the corners of the cells are labelled by partitions such that the following two conditions are satisfied:

(C1) A partition is either equal to its right neighbour or smaller by exactly one square, the same being true for a partition and its top neighbour.
(C2) A partition and its right neighbour are equal if and only if in the column of cells of \(C\) below them there appears no \(X\) and if their bottom neighbours are also equal to each other. Similarly, a partition and its top neighbour are equal if and only if in the row of cells of \(C\) to the left of them there appears no \(X\) and if their left neighbours are also equal to each other.

See Figure 2 for an example. (More examples can be found in Figures 4-12.) There, we use a short notation for partitions. For example, \(11\) is short for \((1,1)\). Indeed, the filling represented in Figure 2 is the same as the one in Figure 1.b. Diagrams which obey the conditions (C1) and (C2) are called _growth diagrams_.

We are interested in growth diagrams which obey the following (_forward_) _local rules_ (see Figure 3).

(F1) If \(\rho=\mu=\nu\), and if there is no cross in the cell, then \(\lambda=\rho\).
(F2) If \(\rho=\mu\neq\nu\), then \(\lambda=\nu\).
(F3) If \(\rho=\nu\neq\mu\), then \(\lambda=\mu\).
(F4) If \(\rho,\mu,\nu\) are pairwise different, then \(\lambda=\mu\cup\nu\).
(F5) If \(\rho\neq\mu=\nu\), then \(\lambda\) is formed by adding a square to the \((k+1)\)-st row of \(\mu=\nu\), given that \(\mu=\nu\) and \(\rho\) differ in the \(k\)-th row.
(F6) If \(\rho=\mu=\nu\), and if there is a cross in the cell, then \(\lambda\) is formed by adding a square to the first row of \(\rho=\mu=\nu\).

Thus, if we label all the corners along the left and the bottom boundary by empty partitions (which we shall always do in this paper), these rules allow one to determine all other labels of corners uniquely. It is not difficult to see that the rules (F5) and (F6) are designed so that one can also work one's way in the other direction, that is, given \(\lambda,\mu,\nu\), one can reconstruct \(\rho\) _and_ the filling of the cell. The corresponding (_backward_) _local rules_ are:

(B1) If \(\lambda=\mu=\nu\), then \(\rho=\lambda\).
(B2) If \(\lambda=\mu\neq\nu\), then \(\rho=\nu\).
(B3) If \(\lambda=\nu\neq\mu\), then \(\rho=\mu\).
(B4) If \(\lambda,\mu,\nu\) are pairwise different, then \(\rho=\mu\cap\nu\).
(B5) If \(\lambda\neq\mu=\nu\), then \(\rho\) is formed by deleting a square from the \((k-1)\)-st row of \(\mu=\nu\), given that \(\mu=\nu\) and \(\lambda\) differ in the \(k\)-th row, \(k\geq 2\).
(B6) If \(\lambda\neq\mu=\nu\), and if \(\lambda\) and \(\mu=\nu\) differ in the first row, then \(\rho=\mu=\nu\).

In case (B6) the cell is filled with a cross. In all other cases the cell is left empty. Thus, given a labelling of the corners along the top-right boundary of a cell arrangement, one can algorithmically reconstruct the labels of the other corners of the cells _and_ the filling by working one's way to the left and to the bottom. These observations lead to the following theorem.

**Theorem 1.** Let \(C\) be an arrangement of cells. The fillings of \(C\) with the property that every row and every column contains at most one \(X\) are in bijection with labellings \((\emptyset=\lambda^{0},\lambda^{1},\ldots,\lambda^{k}=\emptyset)\) of the corners of cells appearing along the top-right boundary of \(C\), where \(\lambda^{i-1}\) and \(\lambda^{i}\) differ by at most one square, and \(\lambda^{i-1}\subseteq\lambda^{i}\) if \(\lambda^{i-1}\) and \(\lambda^{i}\) appear along a horizontal edge, whereas \(\lambda^{i-1}\supseteq\lambda^{i}\) if \(\lambda^{i-1}\) and \(\lambda^{i}\) appear along a vertical edge.
Moreover, \(\lambda^{i-1}\subsetneqq\lambda^{i}\) if and only if there is an \(X\) in the column of cells of \(C\) below the corners labelled by \(\lambda^{i-1}\) and \(\lambda^{i}\), and \(\lambda^{i-1}\supsetneqq\lambda^{i}\) if and only if there is an \(X\) in the row of cells of \(C\) to the left of the corners labelled by \(\lambda^{i-1}\) and \(\lambda^{i}\).

In addition to its local description, the bijection of the above theorem also has a _global_ description. The latter is a consequence of a theorem of Greene [7] (see also [2, Theorems 2.1 and 3.2]). In order to formulate the result, we need the following definitions: a _NE-chain_ of a filling is a sequence of \(X\)'s in the filling such that any \(X\) in the sequence is above and to the right of the preceding \(X\) in the sequence. Similarly, a _SE-chain_ of a filling is a sequence of \(X\)'s in the filling such that any \(X\) in the sequence is below and to the right of the preceding \(X\) in the sequence.

**Theorem 2.** Given a growth diagram on a cell arrangement with empty partitions labelling all the corners along the left boundary and the bottom boundary of the cell arrangement, the partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\) labelling corner \(c\) satisfies the following two properties:

* For any \(k\), the maximal cardinality of the union of \(k\) NE-chains situated in the rectangular region to the left and below of \(c\) is equal to \(\lambda_{1}+\lambda_{2}+\cdots+\lambda_{k}\).
* For any \(k\), the maximal cardinality of the union of \(k\) SE-chains situated in the rectangular region to the left and below of \(c\) is equal to \(\lambda^{\prime}_{1}+\lambda^{\prime}_{2}+\cdots+\lambda^{\prime}_{k}\), where \(\lambda^{\prime}\) denotes the partition conjugate to \(\lambda\).

In particular, \(\lambda_{1}\) is the length of the longest NE-chain in the rectangular region to the left and below of \(c\), and \(\lambda^{\prime}_{1}\) is the length of the longest SE-chain in the same rectangular region.

**3. Bijective proof of (1.1).** We apply Theorem 1 to the cell arrangement which has row lengths \(n,n+1,\ldots,n+k,n+k,\ldots,n+k\) (from top to bottom), where \(n+k\) occurs \(n\) times. Figure 4 shows such a cell arrangement for \(n=6\) and \(k=3\). (The reader should ignore the crosses and labellings at this point.) We consider _exclusively_ fillings of this cell arrangement which have the following two properties:

(1) There is _exactly_ one cross in each row and in each column.
(2) In the last \(n\) rows of the cell arrangement (this is the part of the cell arrangement in which all row lengths are \(n+k\); in the figure it is separated from the upper part by a thick line) the crosses form a NE-chain.

It should be observed that Properties (1) and (2) together imply that the cross in the right-most column of the cell arrangement must occur at the top of the column. See Figure 4 for an example of such a filling. By Theorems 1 and 2, the forward growth diagram construction yields a bijection between the above fillings and sequences of partitions (read along the top-right boundary of the cell arrangement) of the form

\[\emptyset=\lambda^{0}\subset\lambda^{1}\subset\cdots\subset\lambda^{n} \tag{3.1a}\]
\[\supset\lambda^{n+1}\subset\lambda^{n+2}\supset\lambda^{n+3}\subset\cdots\supset\lambda^{n+2k-1}\subset\lambda^{n+2k}=(n) \tag{3.1b}\]
\[\supset(n-1)\supset\cdots\supset(1)\supset\emptyset, \tag{3.1c}\]

where successive partitions in this sequence differ by exactly one square.
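Before unpacking this boundary sequence, it may help to see the forward rules in executable form. The following is a minimal illustrative Python sketch (not taken from the original article), with partitions encoded as weakly decreasing tuples and \(\rho,\mu,\nu,\lambda\) at the bottom-left, bottom-right, top-left and top-right corners of a cell. Applied to a permutation filling of a square arrangement, it recovers the Robinson-Schensted shape, in accordance with Theorem 2:

```python
def pad(lam, n):
    return lam + (0,) * (n - len(lam))

def trim(lam):
    while lam and lam[-1] == 0:
        lam = lam[:-1]
    return lam

def add_cell(lam, row):
    lam = list(pad(lam, row + 1))
    lam[row] += 1
    return trim(tuple(lam))

def forward(rho, mu, nu, cross):
    # local rules (F1)-(F6)
    if rho == mu == nu:
        return add_cell(rho, 0) if cross else rho   # (F6) / (F1)
    if rho == mu:
        return nu                                   # (F2)
    if rho == nu:
        return mu                                   # (F3)
    if mu != nu:                                    # (F4): union of shapes
        n = max(len(mu), len(nu))
        return trim(tuple(max(a, b) for a, b in zip(pad(mu, n), pad(nu, n))))
    k = next(i for i in range(len(mu)) if pad(rho, len(mu))[i] != mu[i])
    return add_cell(mu, k + 1)                      # (F5)

def grow(filling):
    # filling[i][j] is True for a cross in row i (from bottom), column j
    rows, cols = len(filling), len(filling[0])
    lab = [[() for _ in range(cols + 1)] for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            lab[i + 1][j + 1] = forward(lab[i][j], lab[i][j + 1],
                                        lab[i + 1][j], filling[i][j])
    return lab

perm = (3, 1, 2)   # columns of the crosses, rows read from the bottom
f = [[perm[i] == j + 1 for j in range(3)] for i in range(3)]
print(grow(f)[3][3])   # (2, 1): the longest NE-chain has length 2
```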
In other words, the images under the forward growth diagram construction of the fillings satisfying Properties (1) and (2) decompose into an increasing sequence from the empty partition to \(\lambda:=\lambda^{n}\) (the part in (3.1a); it corresponds to a standard tableau of shape \(\lambda=\lambda^{n}\)), followed by a vacillating tableau of length \(k\) from \(\lambda=\lambda^{n}\) to \((n)\) (the part in (3.1b) together with \(\lambda^{n}\) from the previous line), followed by a completely determined decreasing sequence from \((n)\) to the empty partition (the part in (3.1c)). In particular, it is Property (2) combined with Theorem 2 which implies that the last \(n+1\) partitions are completely determined as indicated above. Conversely, again using Theorems 1 and 2, by putting such a sequence along the top-right boundary of the cell arrangement and applying the backward growth diagram construction, one obtains a filling with Properties (1) and (2). Hence, we see that the fillings satisfying Properties (1) and (2) are in bijection with pairs \((T,V)\), where \(T\) is a standard tableau of some shape \(\lambda\) of size \(n\), and where \(V\) is a vacillating tableau of length \(k\) from \(\lambda\) to \((n)\). Clearly, the number of these pairs is exactly the sum on the right-hand side of (1.1).

It remains to count the fillings of the above cell arrangements. The key observation is that, once the crosses are filled in the first \(k\) rows of the cell arrangement (in Figure 4 this is the part above the thick line), then the filling is already completely determined since, by Property (1), there must be exactly one cross in each row and in each column, and, by Property (2), these crosses must form a NE-chain. So, the remaining question is: how many ways are there to put crosses in the first \(k\) rows? There are \(n\) possibilities to put a cross in the first row (which has length \(n\)). Once that cross has been placed, since by Property (1) there can be only one cross in a column, there remain \(n\) possibilities to place a cross in the second row (which has length \(n+1\)). Etc. Thus, in total we have \(n^{k}\) possibilities to place crosses in the first \(k\) rows, explaining the left-hand side of (1.1). \(\qquad\square\)

The example in Figure 4 illustrates this bijection. The filling there gets mapped to the pair

\[\left(\ \begin{array}{ccc}1&2&3\\ 4&5&\\ 6&&\end{array}\ ,\ \ 321\supset 32\subset 42\supset 41\subset 51\supset 5\subset 6\ \right).\]

As we pointed out in the introduction, our growth diagram bijection does not seem to be related to the deletion-insertion bijection of Halverson and Lewandowski in [8] in any simple way. Indeed, Halverson and Lewandowski realise the left-hand side of (1.1) combinatorially as sequences \((i_{1},i_{2},\ldots,i_{k})\in\{1,2,\ldots,n\}^{k}\), and use these data for their deletion-insertion procedure. Obviously, there are several ways to extract such a sequence from our fillings. Since we agreed that the placement of crosses in the first \(k\) rows determines all other crosses uniquely by Properties (1) and (2), the most straightforward way to read off such a sequence would be to define \(i_{1}\) to be the number of the column in which we find the cross in the first row, to define \(i_{2}\) to be the number of the column in which we find the cross in the second row not counting column \(i_{1}\), etc. In the example of Figure 4, we would read off the sequence \((3,4,2)\).
(The reader should observe that this is the reversal of the sequence that is used as an example in Figure 3 of [8]; yet, the images of the bijections here and in [8] seem completely unrelated.) The explanation for the unrelatedness of the two bijections lies perhaps in the fact that Halverson and Lewandowski use jeu de taquin to define their deletion. While there is a growth diagram description of jeu de taquin (cf. [14, Appendix 1 of Chapter 7]), we do not use it here, which seems to make the two bijections incomparable. Nevertheless, they share one property that is in the focus of [1]: a limiting property. In order to understand what is meant by this, let us introduce the following notation: given a partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{\ell})\), define the truncation \(\lambda^{*}:=(\lambda_{2},\ldots,\lambda_{\ell})\). Then we have the following result.

**Theorem 3.** Let \((i_{1},i_{2},\ldots,i_{k})\) be a given sequence in \(\{1,2,\ldots,n\}^{k}\). Let

\[(n)=\lambda^{n}\supset\lambda^{n+1}\subset\lambda^{n+2}\supset\lambda^{n+3}\subset\cdots\supset\lambda^{n+2k-1}\subset\lambda^{n+2k}=(n)\]

be the image of \((i_{1},i_{2},\ldots,i_{k})\) (under the identification with placements of crosses of cell arrangements that was explained just above) under the growth diagram bijection described in the proof of (1.1) in this section. Then

\[\emptyset=(\lambda^{n})^{*}\supset(\lambda^{n+1})^{*}\subset(\lambda^{n+2})^{*}\supset(\lambda^{n+3})^{*}\subset\cdots\supset(\lambda^{n+2k-1})^{*}\subset(\lambda^{n+2k})^{*}=\emptyset \tag{3.2}\]

is always the same for \(n\geq\max\{i_{1},i_{2}+1,\ldots,i_{k}+k-1\}\).

Remark. The vacillating tableau (3.2) is called the limiting vacillating tableau in [1].

Proof of Theorem 3. This is completely obvious in view of Theorem 2. Once the NE-chain in the lower part of the cell arrangement (the last \(n\) rows) "takes over" (meaning that it is longer than \(k\)), the first component of the partitions read along the top-right boundary of the triangular part of the cell arrangement will be equal to the length of that chain, while all other components are smaller, and are determined, again via Theorem 2, by the crosses in the first \(k\) rows, which in their turn correspond to the sequence \((i_{1},i_{2},\ldots,i_{k})\). \(\Box\)

**4. Bijective proof of (1.4).** The proof of (1.4) is identical with the one of (1.1), with one notable difference. Since, now, along the staircase part of the cell arrangement we want to read a vacillating tableau from \(\lambda\) to \(\mu\) (with \(\mu\) not necessarily equal to \((n)\)), we must modify the arrangement of crosses in the lower part of the cell arrangement. To be precise, given a partition \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{\ell})\) of size \(n\), we consider _exclusively_ fillings of the cell arrangement as in the proof of (1.1) which have the following two properties:

(1') There is _exactly_ one cross in each row and in each column.
(2') In rows \(k+n,k+n-1,\ldots,k+n-\mu_{1}+1\) (the bottom-most \(\mu_{1}\) rows of the cell arrangement), the crosses form a NE-chain; in rows \(k+n-\mu_{1},k+n-\mu_{1}-1,\ldots,k+n-\mu_{1}-\mu_{2}+1\), the crosses form another NE-chain, and all of them must lie strictly to the left of the crosses in the lowest \(\mu_{1}\) rows that were already discussed; \(\ldots\) ; in rows \(k+n-\mu_{1}-\cdots-\mu_{\ell-1},k+n-\mu_{1}-\cdots-\mu_{\ell-1}-1,\ldots,k+n-\mu_{1}-\cdots-\mu_{\ell-1}-\mu_{\ell}+1=k+1\), the crosses also form a NE-chain and lie strictly to the left of the crosses in the lower rows that were already discussed.

See Figure 5 for an example of such a filling, where \(n=6\), \(k=3\), and \(\mu=(3,2,1)\). Theorem 2 implies that, if we apply the forward growth diagram construction, then along the right border we read (from bottom to top) the partitions

\[\emptyset\subset(1)\subset\cdots\subset(\mu_{1})\subset(\mu_{1},1)\subset\cdots\subset(\mu_{1},\mu_{2})\\ \subset\cdots\subset(\mu_{1},\ldots,\mu_{\ell-1},1),\ldots,(\mu_{1},\ldots,\mu_{\ell-1},\mu_{\ell})=\mu,\]

that is, first the first component of the partitions grows up to \(\mu_{1}\), then the second component grows up to \(\mu_{2}\),..., and finally the last component grows up to \(\mu_{\ell}\), so that we end in \(\mu\). See Figure 5, where \(\mu=(3,2,1)\).

Consequently, one sees that, by the growth diagram construction, fillings satisfying Properties (1') and (2') are in bijection with sequences of partitions (read along the top-right boundary of the cell arrangement) of the form

\[\emptyset=\lambda^{0}\subset\lambda^{1}\subset\cdots\subset\lambda^{n} \tag{4.1a}\]
\[\supset\lambda^{n+1}\subset\lambda^{n+2}\supset\lambda^{n+3}\subset\cdots\supset\lambda^{n+2k-1}\subset\lambda^{n+2k}=\mu \tag{4.1b}\]
\[\supset(\mu_{1},\ldots,\mu_{\ell}-1)\supset\cdots\supset(\mu_{1},\ldots,\mu_{\ell-1})\supset\cdots\supset(\mu_{1})\supset\cdots\supset(1)\supset\emptyset, \tag{4.1c}\]

where successive partitions in this sequence differ by exactly one square. Here again, these sequences decompose into three parts, namely an increasing sequence from the empty partition to \(\lambda:=\lambda^{n}\) (the part in (4.1a), corresponding to a standard tableau of shape \(\lambda=\lambda^{n}\)), followed by a vacillating tableau of length \(k\) from \(\lambda=\lambda^{n}\) to \(\mu\) (the part in (4.1b) together with \(\lambda^{n}\) from the previous line), followed by a completely determined decreasing sequence from \(\mu\) to the empty partition (the part in (4.1c)). Hence, we see that the fillings satisfying Properties (1') and (2') are in bijection with pairs \((T,V)\), where \(T\) is a standard tableau of some shape \(\lambda\) of size \(n\), and where \(V\) is a vacillating tableau of length \(k\) from \(\lambda\) to \(\mu\). Clearly, the number of these pairs is exactly the sum on the right-hand side of (1.4), while the number of fillings is equal to \(n^{k}\) for the same reason as in the proof of (1.1). \(\quad\Box\)

Figure 5 shows an example of the above bijection where the filling gets mapped to the pair

\[\left(\ \begin{array}{cc}1&2\\ 3&5\\ 4&\\ 6&\end{array}\ ,\ \ 2211\supset 221\subset 2211\supset 221\subset 222\supset 221\subset 321\ \right).\]

**5. Bijective proof of (1.3).** We choose again the cell arrangement from the proof of (1.1), that is, the arrangement with row lengths \(n,n+1,\ldots,n+k,n+k,\ldots,n+k\) (from top to bottom), where \(n+k\) occurs \(n\) times.
Figure 6 shows such a cell arrangement for \(n=3\) and \(k=10\). The fillings that we consider here are different, though. More precisely, we consider _exclusively_ fillings of this cell arrangement which have the following three properties:

(1) There is _exactly_ one cross in each row and in each column.
(2) In the last \(n\) rows of the cell arrangement (this is the part of the cell arrangement in which all row lengths are \(n+k\); in the figure it is separated from the upper part by a thick horizontal line) the crosses form a NE-chain.
(3) In the first \(n\) columns of the cell arrangement (this is the part of the cell arrangement in which all column lengths are \(n+k\); in the figure it is separated from the right part by a thick vertical line) the crosses form a NE-chain.

It should be observed that Properties (1) and (2) together imply that the cross in the right-most column of the cell arrangement must occur at the top of the column, and that Properties (1) and (3) imply that the cross in the top row of the arrangement must be in the right-most cell of that row. See Figure 6 for an example of such a filling. By Theorems 1 and 2, the forward growth diagram construction yields a bijection between the above fillings and sequences of partitions (read along the top-right boundary of the cell arrangement) of the form

\[\emptyset\subset(1)\subset\cdots\subset(n)=\lambda^{n} \tag{5.1a}\]
\[\supset\lambda^{n+1}\subset\lambda^{n+2}\supset\lambda^{n+3}\subset\cdots\supset\lambda^{n+2k-1}\subset\lambda^{n+2k}=(n) \tag{5.1b}\]
\[\supset(n-1)\supset\cdots\supset(1)\supset\emptyset, \tag{5.1c}\]

where successive partitions in this sequence differ by exactly one square. In other words, the images under the forward growth diagram construction of the fillings satisfying Properties (1)-(3) decompose into a completely determined increasing sequence from the empty partition to \((n)=\lambda^{n}\) (the part in (5.1a)), followed by a vacillating tableau of length \(k\) from \((n)\) to \((n)\) (the part in (5.1b) together with \((n)\) from the previous line), followed by a completely determined decreasing sequence from \((n)\) to the empty partition (the part in (5.1c)). In particular, it is Properties (2) and (3) combined with Theorem 2 which imply that the first \(n+1\) and the last \(n+1\) partitions are completely determined as indicated above.

By definition, the above sequences are counted by \(m_{(n)}^{(n)}(k)\). It remains to determine the number of fillings satisfying Properties (1)-(3). The first observation is that such a filling will have a certain number, say \(n-l\) for some integer \(l\) with \(1\leq l\leq n\), of successive crosses along the main diagonal of the cell arrangement. More precisely, there will be crosses in rows and columns \(i\), \(i=1,2,\ldots,n-l\) (counted from bottom-left). Further \(l\) crosses will have to be placed in columns \(n-l+1,\ldots,n-1,n\) such that, together with the aforementioned \(n-l\) crosses along the main diagonal, they form a NE-chain, and further \(l\) crosses will have to be placed in rows \(n-l+1,\ldots,n-1,n\) such that, together with the aforementioned \(n-l\) crosses along the main diagonal, they form a NE-chain. In our example in Figure 6, there is just \(1=3-2\) cross in the bottom-left of the main diagonal, so that \(l=2\). We now concentrate on the triangular region, \(\Delta\) say, consisting of rows and columns \(n+1,n+2,\ldots,n+k\) (again counted from bottom-left).
In Figure 6, this is the region to the right of and above the thick lines. We have isolated that region in Figure 7. The \(l\) crosses in columns \(n-l+1,\ldots,n-1,n\) discussed above occupy certain rows. (One of the crosses must necessarily be placed at the end of the top row of the cell arrangement.) These \(l\) rows must not be occupied by crosses in \(\Delta\). Similarly, the \(l\) crosses in rows \(n-l+1,\ldots,n-1,n\) discussed above occupy certain columns. (One of the crosses must necessarily be placed at the top of the right-most column of the cell arrangement.) These \(l\) columns must not be occupied by crosses in \(\Delta\). On the other hand, we must place exactly one cross in each of the remaining \(k-l\) rows and \(k-l\) columns. See Figures 6 and 7 (where \(n=3\), \(k=10\), and \(l=2\)).

The configuration of crosses in \(\Delta\) may be interpreted in a one-to-one fashion as a set partition \(\pi\) of \(\{1,2,\ldots,k\}\). Indeed, if there is a cross in row \(i\) of \(\Delta\) (counted from the top) and in column \(j\) of \(\Delta\) (counted from the left), then we declare \(i\) and \(j\) to be in the same block of \(\pi\). Thus, the configuration of crosses in Figure 7 corresponds to the partition

\[\{\{1,2,3,4,5,7\},\{6,8,9,10\}\}. \tag{5.2}\]

This is indeed a one-to-one correspondence, the inverse mapping being defined by ordering the numbers in each block of the partition by size and placing a cross in row \(i\) and column \(j\) whenever \(i\) and \(j\) are successive elements in a(n ordered) block (with \(i<j\)). The proof of (1.3) can now be completed by observing that, under the above described correspondence, configurations of \(k-l\) crosses in \(\Delta\) are in bijection with partitions of \(\{1,2,\ldots,k\}\) with \(l\) blocks. This explains the left-hand side of (1.3) since the Stirling number \(S(k,l)\) equals the number of these partitions. \(\Box\)

The bijection of the above proof is illustrated in Figure 6, mapping the partition in (5.2) to the vacillating tableau

\[3,2,3,2,3,2,3,2,3,2,21,2,21,2,3,2,3,2,3,2,3,\]

and vice versa.

**6. Bijective proof of (1.5).** The proof of (1.5) is analogous to the one of (1.3) in the previous section. The only difference is that, here, Property (3) of the fillings gets replaced by

(3') In the first \(n\) columns of the cell arrangement (this is the part of the cell arrangement in which all column lengths are \(n+k\)) the crosses form a SE-chain.

As a consequence, there are two possibilities for the placement of the crosses in the bottom-left square of the cell arrangement consisting of rows and columns \(1,2,\ldots,n\) (counted from bottom-left): either there are no crosses in that square, or there is exactly one cross, which is placed in row \(1\) and column \(n\).
Figure 8 provides an example for the former case, in which \(n=3\) and \(k=10\), and where the partition

\[\{\{1,2\},\{3,4,5,7\},\{6,8,9,10\}\}\]

gets mapped to the vacillating tableau

\[111\supset 11\subset 21\supset 11\subset 111\supset 11\subset 21\supset 11\subset 21\supset 11\\ \subset 21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3,\]

while Figure 9 shows an example for the latter case, in which again \(n=3\) and \(k=10\), and where the partition

\[\{\{1,2,6,8,9,10\},\{3,4,5,7\}\}\]

is mapped to

\[111\supset 11\subset 21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\\ \supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3.\]

Clearly, here Theorem 2 implies that the sequences that we read off along the top-right boundary of the cell arrangement have the form

\[\emptyset\subset(1)\subset(1,1)\subset\cdots\subset(1,1,\ldots,1)=\lambda^{n} \tag{6.1a}\]
\[\supset\lambda^{n+1}\subset\lambda^{n+2}\supset\lambda^{n+3}\subset\cdots\supset\lambda^{n+2k-1}\subset\lambda^{n+2k}=(n) \tag{6.1b}\]
\[\supset(n-1)\supset\cdots\supset(1)\supset\emptyset, \tag{6.1c}\]

where successive partitions in this sequence differ by exactly one square. We leave the details to the reader. \(\qquad\square\)

**7. Sketch of bijective proof of (1.6).** We proceed as in the previous proofs of (1.3) and (1.5). Here, we consider fillings of the cell arrangement with row lengths \(n,n+1,\ldots,n+k,n+k,\ldots,n+k\) (from top to bottom) which have the following properties:

(1) There is _exactly_ one cross in each row and in each column.
(2) The crosses in rows \(k+1,k+2,\ldots,k+n-1\) of the cell arrangement form a NE-chain, but the cross in row \(k+n\) (the last row) does not extend this NE-chain.
(3) The crosses in columns \(2,3,\ldots,n\) of the cell arrangement form a NE-chain, but the cross in the first column does not extend this NE-chain.

By Theorems 1 and 2, the forward growth diagram construction yields a bijection between the above fillings and sequences of partitions (read along the top-right boundary of the cell arrangement) of the form

\[\emptyset\subset(1)\subset(1,1)\subset(2,1)\subset\cdots\subset(n-1,1)=\lambda^{n} \tag{7.1a}\]
\[\supset\lambda^{n+1}\subset\lambda^{n+2}\supset\lambda^{n+3}\subset\cdots\supset\lambda^{n+2k-1}\subset\lambda^{n+2k}=(n-1,1) \tag{7.1b}\]
\[\supset(n-2,1)\supset\cdots\supset(2,1)\supset(1,1)\supset(1)\supset\emptyset, \tag{7.1c}\]

where successive partitions in this sequence differ by exactly one square. In other words, the images under the forward growth diagram construction of the fillings satisfying Properties (1)-(3) decompose into a completely determined increasing sequence from the empty partition to \((n-1,1)=\lambda^{n}\) (the part in (7.1a)), followed by a vacillating tableau of length \(k\) from \((n-1,1)\) to \((n-1,1)\) (the part in (7.1b) together with \((n-1,1)\) from the previous line), followed by a completely determined decreasing sequence from \((n-1,1)\) to the empty partition (the part in (7.1c)). In particular, it is Properties (2) and (3) combined with Theorem 2 which imply that the first \(n+1\) and the last \(n+1\) partitions are completely determined as indicated above.

By definition, the above sequences are counted by \(m_{(n-1,1)}^{(n-1,1)}(k)\). It remains to determine the number of fillings satisfying Properties (1)-(3). The set of these fillings decomposes into three pairwise disjoint subsets according to three structurally different possibilities to place the crosses.
Figure 9: Growth diagram bijection for (1.5), second case

These three possibilities are exemplified in Figures 10-12, respectively. The meaning of the thick lines in the figures is the same as in Figures 6-9.

The first possibility (see Figure 10, where \(n=5\) and \(k=4\)) is to have a cross in the last row and the second column of the arrangement, a cross in the next-to-last row and the first column, and crosses along the main diagonal in rows and columns \(i\), for \(i=3,4,\ldots,n-l\) (counted from bottom-left), for some integer \(l\) with \(1\leq l\leq n-2\), while further \(l\) crosses are placed in columns \(n-l+1,\ldots,n-1,n\) such that, together with the aforementioned \(n-l-1\) crosses in columns \(2,3,\ldots,n-l\), they form a NE-chain, and further \(l\) crosses are placed in rows \(n-l+1,\ldots,n-1,n\) such that, together with the aforementioned \(n-l-1\) crosses in rows \(2,3,\ldots,n-l\), they form a NE-chain. (In Figure 10, we have \(l=2\).) The number of these fillings is given by the first term on the left-hand side of (1.6).

The second possibility (see Figure 11, where \(n=5\) and \(k=4\)) is to have crosses along the main diagonal in rows and columns \(i\), for \(i=2,3,\ldots,n-l+1\) (counted from bottom-left), for some integer \(l\) with \(1\leq l\leq n-1\), while further \(l\) crosses are placed in columns \(n-l+2,\ldots,n-1,n\) such that, together with the aforementioned \(n-l\) crosses in columns \(2,3,\ldots,n-l+1\), they form a NE-chain, further \(l\) crosses are placed in rows \(n-l+2,\ldots,n-1,n\) such that, together with the aforementioned \(n-l\) crosses in rows \(2,3,\ldots,n-l+1\), they form a NE-chain, and finally a cross is placed in the first column within the first \(k\) columns, and a cross is placed in the last row within the last \(k\) columns. (In Figure 11, we have \(l=3\).) The number of these fillings is given by the second term on the left-hand side of (1.6). The multiplicative factor \(l^{2}\) has its explanation in the freedom to place the "special crosses" in the first column and the last row in relation to the two sets of "further \(l\) crosses" mentioned above.

Figure 11: Growth diagram bijection for (1.6), second case

The third (and last) possibility (see Figure 12, where \(n=3\) and \(k=6\)) is to place no crosses into the bottom-left square consisting of the first \(n\) columns and last \(n\) rows, to place a NE-chain of crosses in columns \(2,3,\ldots,n\) avoiding that square, together with a cross in the first column that does not extend this NE-chain, and to place a NE-chain of crosses in rows \(2,3,\ldots,n\) (counted from bottom) avoiding that square, together with a cross in the last row that does not extend this NE-chain. The number of these fillings is given by the third term on the left-hand side of (1.6). Here, the multiplicative factor is \((n-1)^{2}\) (and not \(n^{2}\)) since there is one option less for the relative arrangement of the crosses in the first \(n\) columns and last \(n\) rows. We leave the details to the reader. \(\qquad\Box\)

## Appendix A

Here we work out the special case of (1.3) where \(n=3\) and \(k=5\). We have \(S(5,1)=1\), \(S(5,2)=15\), and \(S(5,3)=25\). Hence, on the left-hand side of (1.3) we obtain \(S(5,1)+S(5,2)+S(5,3)=41\).
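All counts in this and the following appendices can be cross-checked mechanically. The following illustrative Python sketch (not part of the original article) encodes partitions as weakly decreasing tuples, counts vacillating tableaux \(m_{\mu}^{\lambda}(k)\) by a dynamic program, and verifies the special cases of (1.3), (1.5) and (1.6) worked out here:

```python
from collections import Counter

def down(lam):
    # all partitions obtained from lam by removing one cell
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            mu = lam[:i] + (lam[i] - 1,) + lam[i + 1:]
            out.append(mu if mu[-1] else mu[:-1])
    return out

def up(lam):
    # all partitions obtained from lam by adding one cell
    return [lam[:i] + (lam[i] + 1,) + lam[i + 1:]
            for i in range(len(lam)) if i == 0 or lam[i - 1] > lam[i]] + [lam + (1,)]

def m(start, end, k):
    # number of vacillating tableaux of length k from `start` to `end`
    cur = Counter({start: 1})
    for _ in range(k):
        mid = Counter()
        for lam, c in cur.items():
            for mu in down(lam):
                mid[mu] += c
        cur = Counter()
        for mu, c in mid.items():
            for nu in up(mu):
                cur[nu] += c
    return cur[end]

def S2(k, l):
    # Stirling numbers of the second kind
    if k == 0:
        return 1 if l == 0 else 0
    return l * S2(k - 1, l) + S2(k - 1, l - 1) if l else 0

assert m((3,), (3,), 5) == sum(S2(5, l) for l in (1, 2, 3)) == 41   # (1.3)
assert m((3,), (1, 1, 1), 5) == S2(5, 3) + S2(5, 2) == 40           # (1.5)
assert m((2, 1), (2, 1), 3) == 18                                   # (1.6)
```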
The 41 vacillating tableaux of length 5 from (3) to (3) that must exist according to the right-hand side of (1.3) are

\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 111\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\supset 2\subset 3\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\supset 2\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\supset 11\subset 21\supset 2\subset 3\]
\[3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 111\supset 11\subset 21\supset 2\subset 3\]

## Appendix B

Here we work out the special case of (1.5) where \(n=3\) and \(k=5\). We have \(S(5,2)=15\) and \(S(5,3)=25\). Hence, on the left-hand side of (1.5) we obtain \(S(5,2)+S(5,3)=40\). The 40 vacillating tableaux of length 5 from (3) to \((1,1,1)\) that must exist according to the right-hand side of (1.5) are

\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\supset 11\subset 111\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 111\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\supset 11\subset 111\]
\[3\supset 2\subset 3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 111\supset 11\subset 111\]
11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 1\supset 11\supset 11\supset 11\supset 11\supset 11\supset 11\supset 1\supset 11\supset 11\supset 11\supset 11\supset 1\supset 11\supset 11\supset 1\supset 11\supset 11 [MISSING_PAGE_POST] \(3\supset 2\subset 21\ \[3\supset 2\subset 21\supset 11\subset 21\supset 11\subset 111\supset 11\subset 111 \supset 11\subset 111\] \[3\supset 2\subset 21\supset 11\subset 111\supset 11\subset 111\supset 11\subset 111 \supset 11\subset 111\] ## Appendix C Here we work out the special case of (1.6) where \(n=3\) and \(k=3\). We have \(S(3,1)=1\), \(S(3,2)=3\), and \(S(3,3)=1\). Hence, on the left-hand side of (1.6) we obtain \(S(3,1)+1^{2}\cdot S(3,1)+2^{2}\cdot S(3,2)+(3-1)^{2}S(3,3)=18\). The 18 vacillating tableaux of length 3 from \((2,1)\) to \((2,1)\) that must exist according to the right-hand side of (1.6) are \[21\supset 2\subset 21\supset 2\subset 21\supset 2\subset 21\] \[21\supset 2\subset 21\supset 2\subset 21\supset 11\subset 21\] \[21\supset 2\subset 21\supset 11\subset 21\supset 2\subset 21\] \[21\supset 11\subset 21\supset 2\subset 21\supset 2\subset 21\] \[21\supset 2\subset 21\supset 11\subset 21\supset 11\subset 21\] \[21\supset 11\subset 21\supset 2\subset 21\supset 11\subset 21\] \[21\supset 11\subset 21\supset 11\subset 21\supset 11\subset 21\] \[21\supset 11\subset 21\supset 11\subset 21\supset 11\subset 21\] \[X21\supset 2\subset 3\supset 2\subset 21\supset 2\subset 21\] \[21\supset 2\subset 21\supset 2\subset 3\supset 2\subset 21\] \[21\supset 2\subset 3\supset 2\subset 3\supset 2\subset 21\] \[Y21\supset 2\subset 3\supset 2\subset 21\supset 11\subset 21\] \[Z21\supset 11\subset 21\supset 2\subset 3\supset 2\subset 21\] \[U21\supset 2\subset 21\supset 11\subset 111\supset 11\subset 21\] \[21\supset 11\subset 111\supset 11\subset 21\supset 2\subset 21\] \[21\supset 11\subset 111\supset 11\subset 21\supset 11\subset 21\] \[21\supset 11\subset 21\supset 11\subset 111\supset 11\subset 21\]
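The three counts above (41, 40 and 18) can be checked mechanically. The sketch below is our own illustration, assuming the convention visible in the lists: for \(n=3\), each step of a vacillating tableau removes one box from a partition of 3 (giving a partition of 2) and then adds one box back, so the counts are entries of powers of a small transfer matrix.

```python
# Transfer-matrix check of the counts 41, 40 and 18 quoted in the appendices.
# States are the partitions of 3, ordered as [(3), (2,1), (1,1,1)]; M[i][j]
# counts the remove-one-box/add-one-box moves taking partition i to partition j.
import numpy as np

M = np.array([[1, 1, 0],    # (3)     -> (2)          -> (3) or (2,1)
              [1, 2, 1],    # (2,1)   -> (2) or (1,1) -> (3), (2,1) twice, (1,1,1)
              [0, 1, 1]])   # (1,1,1) -> (1,1)        -> (2,1) or (1,1,1)

M5 = np.linalg.matrix_power(M, 5)
M3 = np.linalg.matrix_power(M, 3)
print(M5[0, 0])  # tableaux of length 5 from (3) to (3):       41
print(M5[0, 2])  # tableaux of length 5 from (3) to (1,1,1):   40
print(M3[1, 1])  # tableaux of length 3 from (2,1) to (2,1):   18
```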
2303.03496
Wind Turbine Gearbox Fault Detection Based on Sparse Filtering and Graph Neural Networks
The wind energy industry has been experiencing tremendous growth and confronting the failures of wind turbine components. Wind turbine gearbox malfunctions are particularly prevalent and lead to the most prolonged downtime and highest cost. This paper presents a data-driven gearbox fault detection algorithm based on high-frequency vibration data using graph neural network (GNN) models and sparse filtering (SF). The approach can take advantage of the comprehensive data sources and the complicated sensing networks. The GNN models, including basic graph neural networks, gated graph neural networks, and gated graph sequential neural networks, are used to detect the gearbox condition from knowledge-based graphs formed using wind turbine information. Sparse filtering is used as an unsupervised feature learning method to accelerate the training of the GNN models. The effectiveness of the proposed method was verified on practical experimental data.
Jinsong Wang, Kenneth A. Loparo
2023-03-06T21:08:07Z
http://arxiv.org/abs/2303.03496v1
# Wind Turbine Gearbox Fault Detection Based on Sparse Filtering and Graph Neural Networks ###### Abstract The wind energy industry has been experiencing tremendous growth and confronting the failures of wind turbine components. Wind turbine gearbox malfunctions are particularly prevalent and lead to the most prolonged downtime and highest cost. This paper presents a data-driven gearbox fault detection algorithm based on high-frequency vibration data using graph neural network (GNN) models and sparse filtering (SF). The approach can take advantage of the comprehensive data sources and the complicated sensing networks. The GNN models, including basic graph neural networks, gated graph neural networks, and gated graph sequential neural networks, are used to detect the gearbox condition from knowledge-based graphs formed using wind turbine information. Sparse filtering is used as an unsupervised feature learning method to accelerate the training of the GNN models. The effectiveness of the proposed method was verified on practical experimental data. Keywords: Wind energy, fault detection, graph neural network, sparse filtering ## 0 Introduction Wind market trend data for 2021 shows that the global market has an installed base of 590 GW delivering clean energy to consumers, a 10% increase compared with 2020 [1]. China and the United States have the largest wind energy capacities at 211,392 MW and 96,433 MW, respectively [1]. Average turbine size (rotor diameter and hub height) and capacity (nameplate) are continuing a long-term growth trend. Comparing utility-scale wind turbines in 2010 and 2018, in 2018 the average rotor diameter is 116 m and the average hub height is 88 m (a 35% increase), while the average nameplate capacity is 2.4 MW (a 60% increase) [1]. If current trends continue, wind energy can save consumers $280 billion, reduce greenhouse gas emissions by 12.3 gigatons, preserve 260 million gallons of water, increase tax revenue by $3.2 billion, and support 600,000 jobs by 2050 [2]. Wind energy reliability issues are a result of the rapidly growing market and wind turbine technology development. Higher wind energy reliability can improve operation and maintenance (O&M) costs, capacity factors, levelized cost, and grid interconnection [3]. The failure of wind turbine components is a critical reliability issue; downtime caused by component failures has harmful effects on economic and operational aspects [4]. Gearbox malfunctions have the most prolonged downtime, the most expensive O&M costs, and the most substantial impact on grid operations [5]. Gearbox vibration is monitored to assist with wind turbine health management and avoid malfunctions [6]. The Internet of Things (IoT) has been an attractive technology for real-time monitoring of valuable devices. IoT-based wind turbine Prognostics and Health Management (PHM) is defined as a lifecycle support system, in which (1) wind turbine data is collected for mission-critical and cost-sensitive components, for example, the gearbox; (2) fault detection, diagnosis, and prognosis are implemented by processing and modeling these data; and (3) a decision support system interprets these outcomes into actionable operational and maintenance strategies [7]. By 2030, IoT-based wind turbine technologies such as PHM [7] have the potential to increase wind turbine production by 25% and reduce cost by 50% [8].
However, IoT-based PHM systems can be challenging: they require sensing and vibration condition monitoring, analysis of oil debris, and accurate and reliable fault detection, diagnosis, and prediction algorithms to improve decision-making [7]. On the other hand, there are data-driven approaches that use the sensing data collected from IoT systems for fault detection of wind turbines. Neural-network-based approaches have been widely used for fault detection, resulting in high performance in some applications. Samanta presented a genetic algorithm-based artificial neural network for fault detection and diagnosis [9]. In Samanta's work, time-domain statistical features are extracted from the raw vibration data obtained under different loading and operating conditions. This data was then used as input to a multilayer perceptron neural network, and a genetic algorithm (GA) is used for feature selection to optimize the classification accuracy. The GA-based ANN achieved 100% classification accuracy, which was better than without the GA and comparable to a GA-based support vector machine (SVM). Amar et al. proposed an ANN-based detection and diagnosis system using frequency-domain features extracted by the Fast Fourier Transform (FFT) [10]. Feature selection from the FFT spectral images was based on a 2D averaging filter and a binary image. Saravanan et al. proposed an ANN-based fault detection system using the discrete wavelet transform (DWT) [11]. A Gaussian Process (GP) is used for wind turbine condition monitoring to identify operational anomalies [12]. An artificial neural network (ANN)-based condition monitoring approach using data from a supervisory control and data acquisition system is proposed in [13]. Random Forests and XGBoost are combined to establish a data-driven wind turbine fault detection framework [14]. A denoising autoencoder (DAE) is also used to develop a multivariate data-driven fault detection framework [15]. Ensemble empirical mode decomposition (EEMD) and independent component analysis (ICA) are integrated to conduct wind turbine gearbox diagnosis [16]. In order to select the optimal variables, a PCA-based approach is proposed [17]. Compressed sensing is proposed to identify impulsive features [18]. These previous works used a typical fault detection and diagnosis process: signal preprocessing, feature extraction and selection, and classification. Lei et al. propose an intelligent fault diagnosis method for mechanical vibration data with unsupervised feature learning [19]. This method contains two stages: unsupervised feature extraction and classification. Sparse filtering [20] was implemented by an unsupervised two-layer neural network based on an optimized sparsity distribution of the raw data for feature extraction. The features were then classified using SoftMax regression [21]. As compared to traditional ANN-based models, Lei's work reduced the prior knowledge and feature extraction expertise that is required. The model had efficiency similar to CNN-based models using comprehensive vibration data but at reduced computational cost. Zhang et al. proposed an approach for wind turbine fault detection based on Gaussian Processes (GP) [22] and bootstrap-based ensemble neural networks (BENN) [23] to produce early prediction of the health conditions, with high accuracy when applied to gearbox oil temperature and generator winding voltage datasets [24].
However, hidden domain knowledge of the wind turbine, such as the structural information of the sensing data, and the features (or the number of features) are still manually selected. A comprehensive review of vibration-based fault diagnosis of wind turbine gearboxes is given by Wang et al. [25]. To address the issues and opportunities discussed above, this paper presents a hybrid approach of sparse filtering and GNN (SF-GNN) for wind turbine fault detection. In general, SF-GNN identifies the gearbox health condition from a knowledge-based input graph that efficiently processes high-frequency sensor data via the acceleration of sparse filtering, ontologically describes the wind turbine and gearbox, and generates either a single output for the condition of an individual component or sequential outputs with additional semantic information about the conditions. Three GNN models are deployed in this paper: basic graph neural networks [26][27], Gated Graph Neural Networks (GG-NNs) [28], and Gated Graph Sequence Neural Networks (GGS-NNs) [28]. All three GNN-based models are trained in a supervised manner for node classification. The input graph is modeled by a knowledge-based structure that explicitly incorporates wind turbine terminology, sensors, and operating conditions. The motivation to use GNN-based models is to classify the target node by considering complete wind turbine information; that is, the composition of the wind turbine and the structure of the sensors are utilized to provide robust and reasonable results. In a typical sensor network, a large number of sensors are mounted and organized as a complex network. For example, each component of a wind turbine gearbox is monitored by one or more sensors for comprehensive operating-state tracking. Detecting fault and malfunction events based on an individual sensor or a subset of sensors is time-consuming and suffers from low accuracy due to the massive data sources. Therefore, this paper claims that deep neural networks with graph input, containing a group of sensors structured by relation types (such as hierarchical and causal), outperform methods that only consider individual sensors. Additionally, the literature reviewed in this paper highlights prediction models whose input is an individual sensor data source, or a group of sources without relations, so that the detection outcomes are adapted to a fixed operating situation. However, fault events can have one or more causes. For example, if only a data source from the mid shaft is applied for fault detection, the fault is implicitly assumed to be a defect of that shaft, instead of considering the comprehensive causes as in a real-world problem. The motivation to use sparse filtering is to improve the efficiency of the GNN-based models, specifically to reduce the computational cost of feature learning. The data source used in this work is high-frequency vibration data, which is more sensitive to noise than typical data sources. It is therefore a challenge to use such raw data as input to a deep network, let alone to organize multiple such sources in a relational graph, and sparse filtering is applied to address this issue. It is an unsupervised learning method that drives sparsity in the feature matrix of the ultra-high-frequency sensor signals, standardizes the features so that they receive equal activation, and qualifies the signal data for use with GNNs.
The contributions of this paper are: (1) a data-driven approach, SF-GNN (based on sparse filtering and graph neural networks), is proposed for vibration-based wind turbine gearbox fault detection; (2) a knowledge-based graph of the ontological wind turbine gearbox and sensor data is developed; (3) sequential outputs provide semantic detection results for single or multiple defect occurrences; (4) SF-GNN improves accuracy and computing time; and (5) to the best of our knowledge, this is the first paper to use GNN-based methods for wind turbine fault detection. The remainder of the paper is organized as follows: Section II introduces the SF-GNN models; the main GNN-based methods are reviewed and their significance is explained. Section III introduces the experimental setup, and the performance of the SF-GNN models is investigated with comparisons to neural-network-based models. The effect of sparse filtering on performance improvement and the impact of GNN-based fault detection are also discussed. Section IV provides a summary and conclusions. ## 1 Methods ### General Graph Neural Networks The graph neural network (GNN) is a neural-network-based approach for representation learning and classifying nodes in a graph [27]. Representation learning is an approach to simplify a complex graph structure. The graph of a complex system, for example a biological network or a group of social media users, can be very large and complicated. Features with structural information are extracted and interpreted from the graph and then used for the intended application, such as prediction and classification [29]. Traditionally, structural information is extracted using hand-engineered approaches. In contrast, representation learning maps the original graph network to a low-dimensional space that can be used to infer and present the graph. This process is called node embedding or node labeling [29]. Representation learning accelerates graph-based applications to learn and encode structural information in a simplified way. Concisely, GNNs provide node embedding and node classification. In this work, a GNN is used as a supervised node classification method. The input graph is the knowledge-based graph structure of the wind turbine and the target node is one of the health condition nodes. In the embedding stage, given the graph \(G=(V,E)\), each node in \(V\) is mapped to a low-dimensional space. Each node representation \(x_{v}^{(t)}\) at timestep \(t\) is defined by \(f_{w}^{t}\), which is implemented as a recurrent neural network. The embedding \(x_{v}^{(t=0)}\) is randomly initialized and, in the absence of node attribute labels, each iteration updates the representation as [26][28]: \[x_{v}^{(t)}=f_{w}^{t}\big(l_{v},l_{CON},l_{v^{\prime}},x_{v^{\prime}}^{(t-1)}\big). \tag{1}\] The GNN is based on a recursive approach where information (labels) from neighbor nodes and edges is aggregated, and the network \(f_{w}^{t}\) is decomposed as the sum of per-edge aggregation functions \(f_{agt}\): \[f_{Embed}\big(l_{v},l_{CON},l_{v^{\prime}},x_{v^{\prime}}^{(t)}\big)=\sum_{v^{\prime}_{in}}f_{agt}\big(l_{v},l_{v^{\prime}\to v},l_{v^{\prime}},x_{v^{\prime}}^{(t-1)}\big)+\sum_{v^{\prime}_{out}}f_{agt}\big(l_{v},l_{v\to v^{\prime}},l_{v^{\prime}},x_{v^{\prime}}^{(t-1)}\big). \tag{2}\]
The \(f_{agt}\) are defined by a neural network [21] with a configuration of labels and a non-linear activation function, and a recursion for updating the trainable parameters \(w\) and \(b\) as follows: \[x_{v}^{(t)}=\sum_{v^{\prime}\in V(v)}f_{agt}\big[W^{(l_{v},l_{v\to v^{\prime}},l_{v^{\prime}})}x_{v^{\prime}}^{(t-1)}+b^{(l_{v},l_{v\to v^{\prime}},l_{v^{\prime}})}\big]. \tag{3}\] Once the final embedding space is computed, the second stage of node classification is defined by the neural network \(g_{w}^{t}\) [21]: \[y_{v}^{(t)}=g_{w}^{t}(x_{v}^{(t)},l_{v}). \tag{4}\] ### The Gated Graph Neural Networks The gated graph sequence neural network (GGS-NN) is a GNN-based approach using a modified gated graph neural network (GG-NN) [28]. In the GNN, neighbor node information is aggregated by one shared neural network across layers. When the complexity of the input graph increases, overfitting of the GNN parameters and the computational cost of training by backpropagation, together with the vanishing gradient problem, become problematic [29]. The GG-NN addresses these issues by performing a recurrent update with similar gating mechanisms. The GG-NN combines "information aggregation + RNN" [29] with a fixed number of representation-learning steps, unrolling the recurrence using backpropagation through time, and with specific node information as the initial input. The GGS-NN is an extension of the GG-NN in which multiple GG-NNs are used to perform predictions of: (1) the output of the current step, and (2) the initialization for the next step [28]. In the initialization step, each node \(v\in V\) is annotated with a real-valued feature \(F^{(v)}\in\mathbb{R}^{D}\), and the state vector is initialized from the features. In this work, features are optimized using sparse filtering. In the propagation step, the graph is unrolled to the fixed step while the nodes aggregate information from neighbors. The aggregation function is the same as in the GNN, but the general propagation model is replaced by the Gated Recurrent Unit (GRU) [30], which updates the hidden state depending on the aggregation and the previous state. In the output step, node-level output is computed as in the GNN; the GGS-NN is used to produce an output sequence describing the wind turbine's operating condition, including component type and name, sensor, and final health state. ### Sparsity Optimization Sparse filtering is an unsupervised learning method with the objective of making the feature matrix sparse [20]. Each feature in column (\(d\)) and row (\(i\)) of the feature matrix is defined as: \[F_{i}^{d}=W^{T}x^{d} \tag{5}\] where \(W\) denotes the weight matrix and \(x^{d}\) denotes the \(d\)-th input from the training set \(\{x^{d}\}_{d=1}^{D}\), \(x^{d}\in\mathbb{R}^{N\times 1}\). The sparse filtering iteration begins with normalization of each feature across the input data using the \(L_{2}\) norm: \[L_{2}(F_{i})=F_{i}/\|F_{i}\|_{2}. \tag{6}\] Each feature vector of the normalized data is then normalized using the \(L_{2}\) norm, \[L_{2}(F^{d})=F^{d}/\|F^{d}\|_{2}. \tag{7}\] The normalized features are then minimized in the \(L_{1}\) norm: \[\min\ \sum_{d=1}^{D}\left\|L_{2}(F^{d})\right\|_{1}. \tag{8}\] The learning performance is measured according to three properties of the features: population sparsity, lifetime sparsity, and high dispersal [20]. Population sparsity requires that the number of non-zero active elements be minimal, i.e., high sparseness of the features for each input.
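As a concrete illustration, the objective (5)-(8) can be written in a few lines of NumPy. This is a minimal sketch under our own assumptions: the soft absolute value, the toy data shapes, and the function name are illustrative, not details prescribed by the text.

```python
# Minimal sketch of the sparse-filtering objective of eqs. (5)-(8).
import numpy as np

def sparse_filtering_loss(W, X, eps=1e-8):
    """X: (N, D) matrix holding D input segments of length N;
    W: (N, K) weight matrix for K learned features."""
    F = W.T @ X                                        # eq. (5): K x D feature matrix
    F = np.sqrt(F ** 2 + eps)                          # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # eq. (6): per-feature L2 norm
    F = F / np.linalg.norm(F, axis=0, keepdims=True)   # eq. (7): per-sample L2 norm
    return np.abs(F).sum()                             # eq. (8): L1 sparsity penalty

# Toy usage with random data and weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 500))   # 500 toy segments, 64 samples each
W = rng.normal(size=(64, 100))   # 100 features: the single hyperparameter
print(sparse_filtering_loss(W, X))
```

In practice \(W\) would be minimized over this loss with an off-the-shelf gradient-based optimizer, and the learned features passed on to the GNN stage.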
The sparsity of the input \(x^{d}\) is defined as the \(L_{1}\) norm of \(F^{d}\). During the \(L_{2}\) normalization process, the feature is projected onto the surface of the unit ball. Then, by the \(L_{1}\) minimization, the sparseness of the feature is improved. Lifetime sparsity describes the quality of the sparsity of a feature, which is expected to be discriminative with a high potential for selectivity [31]. The sparsity of feature \(F_{i}\) is defined as the \(L_{1}\) norm of \(F_{i}\). A significantly high level of sparseness can achieve lifetime sparsity. High dispersal describes the statistical property that features are expected to have similar activation. The activation similarity is measured by the variance over all features; lower values of variance indicate higher dispersal. The \(L_{2}\) norm, based on Euclidean distance, measures the variance [31], and normalized features share equal activation. Sparse filtering can deal with high-dimensional inputs gracefully and is very easy to use, for it has only one hyperparameter, the number of features to learn. ## 2 The Proposed Method An SF-GNN is developed for wind turbine fault detection based on graph neural networks accelerated by sparse filtering. The core concept of the approach is to use the GNNs to classify the vibration data, labeled as normal or faulty, within a graph structure, with the aim of producing single and sequential detection outcomes. The approach includes three modules: graph identification, feature learning, and fault detection. ### Graph Identification A graph is a type of data structure that describes relationships and interactions between individual entities. The edges and nodes of a graph define the directed graph structure \(G=(V,E)\), where \(V\) denotes the nodes \(v\in V\), and \(E\) denotes the edges with direction from node \(v\) to \(v^{\prime}\), \(e=(v,v^{\prime})\in V\times V\) [29]. In this paper, the graph structure includes the following configurations for each node and edge: labels and neighbors. Labels are assigned to the nodes and edges. The node label \(l_{v}\) describes the features of each entity, for example, the health conditions of the wind turbine components. The edge label \(l_{CON}\) describes the connection between two entities: \(l_{v\to v^{\prime}}\) indicates an outgoing connection (\(v\to v^{\prime}\)) and \(l_{v^{\prime}\to v}\) an incoming connection (\(v^{\prime}\to v\)). Neighbors are nodes that connect with node \(v\) and carry labels \(l_{v^{\prime}}\); \(v^{\prime}_{in}\) and \(v^{\prime}_{out}\) are incoming and outgoing nodes, respectively. The label and neighbor configurations above are used in the GNN [28]. The graph is formed as a knowledge-based structure that is adapted and modified from the wind energy ontology developed by Kucuk et al. [32]. In the knowledge-based graph, the labels and neighbors include the following specifications of the three hierarchical levels: the wind turbine structural components, the sensors and data, and the conditions; and the four relationships: "is-a", "has", "measures", and "causes". Level 1 is modeled as the terminology nodes associated with the wind turbine components, including the gearbox elements and the bearing arrangements. These nodes are connected by "has". Level 2 is modeled as the data nodes that define the sensors mounted on the gearbox components and the corresponding measured data. They are connected to the sensor type nodes by "is-a" and to the data type nodes by "measures".
Level 3 is modeled as the state nodes that define the health conditions of the wind turbine components; these are connected by "causes". The meta nodes, for example the data type and the component type, are used to define the attributes of the nodes. ### SF-GNN The SF-GNN is a two-stage fault detection model. The input of the model is defined by a knowledge-based structure of the wind turbine, including the components, the sensor network, the health conditions, and the connection relationships (_Graph Identification_). The first stage is feature learning by sparse filtering, where the high-frequency raw vibration data corresponding to the sensor nodes is preprocessed (_Feature Learning_). The second stage is wind turbine fault detection using GNN-based models. The SF-GNN includes two of the models: the basic graph neural network and the gated graph neural network. The basic GNN generates a single detection output that reflects the general health condition of the wind turbine gearbox.

Figure 1: The knowledge-based structure. This figure presents a general sense of the graph identification. The actual input graph is explained in the experiment section.

The GGS-NN generates either single or sequential detection output(s). The sequential outputs reflect detailed health conditions of the components, with type and name, sensor, and final health state. The general framework of the SF-GNN is presented in Fig. 2. ## 3 Experiments ### Experiment Setup Experiments are conducted to demonstrate the node classification efficiency of the GNN-based approaches and the effect of SF-based feature learning. In the first part of the experiment, the single-output accuracy of the GNN and GG-NN is investigated with comparisons to a multilayer perceptron [21] and SF-based Softmax regression [19]. In the second part, the sequence-output accuracy of the GGS-NN is investigated using raw vibration data, SF-based features, time- and frequency-domain features [9], and graph inputs. For the implementation, sparse filtering and the other GNN-based approaches are adapted and modified from available models [33][34][35][36]. The activation function is log-sigmoid [37] and the loss function is cross entropy [37]. The hyperparameters are selected as: hidden layer size = 16; learning rate fixed at 0.001; dropout = 0.2; and strength of \(L_{2}\) regularization = 0.05. For the training procedure, the optimization uses Kingma et al.'s method [38]. The parameter (weight and bias) initialization uses Glorot et al.'s method [39]. The maximum epoch number is 1,000. An early stopping criterion is used to terminate computations if there is no improvement in the loss function during the first 20 epochs. The datasets are divided according to: 60% training, 20% testing, and 20% development. Both normal and defective datasets of 8 sensors are used for training. That is, the ground-truth data of the healthy (normal) condition datasets are used for the method evaluations, including a generalization check of SF-GNN. For evaluation, the confusion matrix [40] is used to quantify the classification accuracy, and the accuracy is defined as the ratio of the true positives and true negatives over the total set of classifications. ### Data Description Wind turbine vibration data from the National Renewable Energy Laboratory (NREL) is used for the experiments [41]. The test turbine is a three-bladed, stall-controlled, upwind machine with a rated power of 750 kW; the generator operates at 1200 rpm or 1800 rpm nominal.
Two gearboxes (one "healthy" and one "damaged") are tested under the operating conditions: main shaft speed at 22.09 rpm, nominal high-speed shaft at 1800 rpm, 50% of rated power. The descriptions of the accelerometers are shown in Table I. The vibration data (m/s\({}^{2}\)) are collected at 40 kHz for 10 minutes by the accelerometers mounted on the gearboxes. The data are labeled as "healthy" and "damaged" according to the conditions of the two gearboxes. Each 40-kHz dataset is divided into segments, where each data segment is 0.1 seconds in length with 4,000 samples per segment. There are 50,000 segments in total, and these segments are considered the raw (vibration) data. In this dataset, the defects of the wind turbine are caused by oil-loss events. The fault types are summarized and labeled for the targeted node classification task. Vibratory acceleration data at high frequency are used in this paper. A disturbance of the acceleration indicates defect occurrence on the wind turbine gearbox. Accordingly, the faults of the wind turbine are detectable through vibration-based methods. The input knowledge-based graph is formed based on the description provided previously. The graph includes 21 terminology nodes, 8 data nodes, 2 state nodes, and 4 meta nodes, with three different edge types. 8 sensor signals are considered in the experiment and correspond to the data nodes. ### Results and Discussions \begin{table} \begin{tabular}{c c} \hline Sensor Name & Description \\ \hline AN3 & Ring gear radial 6 o'clock \\ AN4 & Ring gear radial 12 o'clock \\ AN5 & LS-SH radial \\ AN6 & IMS-SH radial \\ AN7 & HS-SH radial \\ AN8 & HS-SH upwind bearing radial \\ AN9 & HS-SH downwind bearing radial \\ AN10 & Carrier downwind radial \\ \hline \end{tabular} \end{table} Table 1: The sensor description Figure 2: The framework of SF-GNN. Module 1 identifies the input graph with wind turbine knowledge and raw sensor data. Module 2 performs feature learning on the raw sensor data within the input graph, which is stage 1 of SF-GNN. Module 3 initiates the GNN models and generates the single detection output by the basic GNN and GG-NN and the sequential output by the GGS-NN, which is stage 2 of SF-GNN. \(\bullet\) The Effect of Sparse Filtering: The only hyperparameter in sparse filtering is the input dimension. Four different input dimensions are tested, as shown in Table II: 50, 100, 300, and 500. For dimensions 100, 300, and 500, the models do not have significantly different accuracy; however, the running time does increase as the dimension increases. The 300-dimension test provides more detailed features than the 100-dimension test. The detailed features include redundant active entities, which incurs extra running time and overfitting. The 100-dimension test can distinguish one active entity (the highest peak) and a few others with moderate running time. Therefore, the 100-dimension inputs are used for the experiments. * Single Output Results: As presented in Table III, the combined approach of sparse filtering and GNN-based models achieves higher accuracy with increased computational cost. From the detection results, the SF-learned features with the GNN-based models efficiently outperform the raw and manual features (Feature-GNN and Feature-GG-NN) but still have greater computational cost than MLP and SF-SoftMax, which are baseline models with feature inputs but without graph structure inputs; therefore, they perform detection with less computing time.
* Sequential Output Results: The GGS-NN learns the nodes and their relationships from the entire input graph. The output sequence includes node predictions of components, data types, and the health state, and includes the relationship (edge) predictions of "has", "is-a", "measures", and "causes". Semantically, for example, the level 1 outputs are "the gearbox has a ring gear" and "the ring gear has AN3"; the level 2 outputs are "AN3 is a sensor" and "AN3 measures vibration"; and the level 3 output is "AN3 causes fault operations", meaning the fault is located on the component where AN3 is mounted. This experiment includes two test cases: one fault signal and two fault signals. As presented in Table IV, the two cases achieve similar accuracy. Compared to the overall accuracy of the basic GGS-NN at 87.27%, which is comparable to the work of Li et al. [28], sparse filtering achieves an accuracy of 90.73%. For the sequence results, we observe: (1) SF-learned features perform better than raw inputs to the GGS-NN; (2) sequential outputs have lower accuracy than single outputs. Sparse filtering is a problem-specific feature learning method; in the single output case the GG-NN concentrates on state node prediction depending on the vibration signal data, so sparse filtering has a direct impact on GG-NN performance. Node annotation for initialization can also impact accuracy: during the GGS-NN procedure, annotated nodes predicted by the GG-NN models have unstable performance compared to the pre-set and fixed annotations in the single-output case; therefore, lower accuracy occurs in the case of sequential outputs. \(\bullet\) The Impact of the GNN-Based Approaches: According to the performance of the wind turbine fault detection system using the graph models, the GNN-based approaches are the most effective and have the greatest promise. Specifically, graph-based inputs can integrate related knowledge of the configuration and operation of the wind turbine, the sensor network, and the data sources. This integration of information helps to interpret and manage how data from different sensors and wind turbines can effectively be used in developing learning algorithms and systems for fault detection, diagnosis, and prognosis. The results of sequential detection indicate that GNN-based approaches can effectively use multiple factors for fault detection and produce comprehensive analytical results for IoT applications, such as using "big" data from wind turbine experiments to develop a fault detection and diagnosis system that can improve wind turbine operational reliability and transmission grid reliability and resiliency. ## 4 Conclusion A GNN-based wind turbine fault detection method is proposed in this paper using experimental data from NREL. This work deploys two methods: GNNs and sparse filtering. SF-GNN identifies the gearbox health condition from a knowledge-based input graph that ontologically describes the wind turbine and gearbox, and it generates a single output for the condition of an individual component as well as sequential outputs with additional semantic information about the conditions. Sparse filtering is deployed to sparsify the feature matrix of the high-frequency sensor signals, standardize the features so that they receive equal activation, and qualify the signal data for use with GNNs.
Compared with the original GNN applications, this work handles a complex input graph with multiple edge and node types and more sensitive, higher-frequency signals, and it gains accuracy and efficiency through the acceleration provided by sparse filtering. As the experiments demonstrate, the GNN-based approaches can efficiently detect the component faults using both single and sequential output detection strategies. The GGS-NN can successfully produce a logical and reasonable fault detection sequence using data from a single sensor or from multiple sensors. Sparse filtering provides significant improvements in the single output cases but only modest improvements in the sequential output cases. Future work is necessary to improve accuracy, reduce the computational burden, and further explore the GGS-NN-based method. We have observed in our application that GNN-based methods can be improved by including representation learning using raw data instead of incorporating a separate feature learning stage.

\begin{table} \begin{tabular}{c c c} \hline Preprocessing & 1 fault signal & 2 fault signals \\ \hline SF & 90.73 & 90.36 \\ Raw & 87.27 & 88.17 \\ Feature & 90.04 & 89.56 \\ \hline \end{tabular} \end{table} Table 4: The sequence output accuracies (\%)

\begin{table} \begin{tabular}{c c c c c} \hline Input Dimension & GNN & GG-NN & GGS-NN & Running time (s) \\ \hline 50 & 74.24 & 88.25 & 82.57 & \(<\) 60 \\ 100 & 92.33 & 93.74 & 90.73 & \(<\) 60 \\ 300 & 91.72 & 94.14 & 90.12 & 62 \\ 500 & 92.41 & 93.62 & 90.83 & 78 \\ \hline \end{tabular} \end{table} Table 2: The effect of SF on GNN-based models

\begin{table} \begin{tabular}{c c c} \hline Models & Accuracy (\%) & Running time (s) \\ \hline SF-GNN & 92.33 & 54 \\ SF-GG-NN & **93.74** & 47 \\ Feature-GNN & 83.54 & 58 \\ Feature-GG-NN & 89.28 & 52 \\ SF-SoftMax & 89.92 & **38** \\ \hline \end{tabular} \end{table} Table 3: The single output accuracies
2304.08564
Entanglement entropy of the proton in coordinate space
We calculate the entanglement entropy of a model proton wave function in coordinate space by integrating out degrees of freedom outside a small circular region $\bar A$ of radius $L$, where $L$ is much smaller than the size of the proton. The wave function provides a nonperturbative distribution of three valence quarks. In addition, we include the perturbative emission of a single gluon and calculate the entanglement entropy of gluons in $\bar A$. For both, quarks and gluons we obtain the same simple result: $S_E =-\int\frac{dx}{\Delta x}\, N_{L^2}(x)\log[N_{a^2}(x)]$, where $a$ is the UV cutoff in coordinate space and $\Delta x$ is the longitudinal resolution scale. Here $N_{S}(x)$ is the number of partons (of the appropriate species) with longitudinal momentum fraction $x$ inside an area $S$. It is related to the standard parton distribution function (PDF) by $N_S(x)=\frac{S}{A_p}\, \Delta x\, F(x)$, where $A_p$ denotes the transverse area of the proton.
Adrian Dumitru, Alex Kovner, Vladimir V. Skokov
2023-04-17T19:01:45Z
http://arxiv.org/abs/2304.08564v1
# Entanglement entropy of the proton in coordinate space ###### Abstract We calculate the entanglement entropy of a model proton wave function in coordinate space by integrating out degrees of freedom outside a small circular region \(\bar{A}\) of radius \(L\), where \(L\) is much smaller than the size of the proton. The wave function provides a nonperturbative distribution of three valence quarks. In addition, we include the perturbative emission of a single gluon and calculate the entanglement entropy of gluons in \(\bar{A}\). For both quarks and gluons we obtain the same simple result: \(S_{E}=-\int\frac{dx}{\Delta x}\,N_{L^{2}}(x)\log[N_{a^{2}}(x)]\), where \(a\) is the UV cutoff in coordinate space and \(\Delta x\) is the longitudinal resolution scale. Here \(N_{S}(x)\) is the number of partons (of the appropriate species) with longitudinal momentum fraction \(x\) inside an area \(S\). It is related to the standard parton distribution function (PDF) by \(N_{S}(x)=\frac{S}{A_{p}}\,\Delta x\,F(x)\), where \(A_{p}\) denotes the transverse area of the proton. ###### Contents * I Introduction * II Laying the groundwork * II.1 The valence quark Fock state * II.2 A model wave function * III The reduced density matrix and entanglement entropy of a three quark system * III.1 The density matrix for a small disc * III.2 Entanglement entropy * III.3 What is entangled here? * IV Including the \(|qqqg\rangle\) Fock state * IV.1 \(\Psi_{qqqg}\) at order \(g\) and \(\Psi_{qqq}\) at order \(g^{2}\) * IV.2 First perturbative correction to the density matrix * V Entanglement entropy of the perturbative density matrix * V.1 Entanglement entropy of quarks * V.2 Entanglement entropy of the gluon * VI Discussion * A Shannon entropy of a probability density function for a continuous degree of freedom * B Calculating traces of powers of \(\rho\) * C Checking traces Introduction The rapid advent of quantum science in recent years provides strong motivation for asking new types of questions in many areas of inquiry, including high energy nuclear and particle physics. In particular, there is an ongoing vigorous discussion about the relevance of entanglement (and the associated entanglement entropy) in the context of particle production in high energy hadronic collisions [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. The initial discussion by Kharzeev and Levin [8] is framed in the context of entanglement of the degrees of freedom inside a small area of the proton actually probed in a DIS experiment, with the rest of the degrees of freedom in the proton wave function and, in particular, with soft modes of the gluon field responsible for confinement. It was suggested that the entropy of this entanglement translates into the Boltzmann entropy of particles produced in the collision. Some model calculations have been performed to probe this picture [9; 12; 13; 14; 15], and it has also been subjected to an experimental test [24]. However, no direct calculation of entanglement entropy in coordinate space has so far been reported in the literature. The aim of this manuscript is to fill this gap. Of course, such a calculation requires knowledge of the wave function of the proton, and needless to say, the exact proton wave function is not known. Nevertheless, several simple model wave functions that provide the distribution of valence quarks at large \(x\) and low resolution \(Q^{2}\) have been used in QCD phenomenology over the years with reasonable success, e.g. refs.
[25; 26; 27; 28]. These quark wave functions can be improved by including a perturbative gluon component, as described in ref. [29], and used in ref. [30] to compute DIS structure functions at high energy, and in ref. [22] to study entanglement of momentum-space degrees of freedom over the whole area of the proton. In this paper our main goal is to derive expressions for the density matrix and entropy of a small "hole" in the proton in such a setup. For numerical estimates we will use one specific light-cone valence quark model wave function from refs. [25; 26]. The idea of our calculation is very straightforward. We divide the transverse area of the proton into a small disc \(\bar{A}\) and its complement \(A\), and integrate out all degrees of freedom in \(A\). The result is the reduced density matrix \(\rho_{\bar{A}}\) which contains complete information for the calculation of any observable localized in \(\bar{A}\). We then calculate the von Neumann entropy of \(\rho_{\bar{A}}\). Even before including the perturbative gluon component, the result is nontrivial. The entropy in this case is associated with different numbers of quarks that can reside inside \(\bar{A}\). Note that the total number of valence quarks in the model wave function is fixed (three), nevertheless the wave function carries finite probabilities of finding different numbers of quarks inside \(\bar{A}\). Integrating over \(A\) therefore generates a reduced density matrix which spans Hilbert subspaces with different occupation numbers, \(n\). The von Neumann entropy arises precisely due to nonvanishing eigenvalues of \(\rho_{\bar{A}}\) in subspaces with different \(n\). Note that in the simple case when the total number of partons is fixed, the reduced density matrix is diagonal in the particle number basis _by fiat_. This follows immediately since in reducing the density matrix we trace over \(A\), and thus calculate matrix elements between states which have equal numbers of partons in \(A\). For wave functions that do not preserve the number of partons we expect, in general, that \(\rho_{\bar{A}}\) would not be diagonal in the \(n\) basis. Thus, including a perturbative gluon emission may lead to such a nondiagonal \(\rho_{\bar{A}}\). As it turns out, in the first order of perturbation theory this does not happen due to the fact that in the valence part of the wave function the color and spatial degrees of freedom are not entangled with each other. We first perform the calculation in the way described above for the valence wave function that contains three quarks only. We next include a one-gluon state which is generated by the first order perturbative correction. Here, for simplicity we modify our procedure somewhat, i.e. we trace over the quark degrees of freedom in the whole wave function, and only then do we generate \(\rho_{\bar{A}}\) by reducing over the gluon degrees of freedom in \(A\). We then calculate the entanglement entropy of the resulting density matrix, which now has the meaning of entropy of gluons inside \(\bar{A}\). This paper is structured as follows. In Section II we prepare our tools for performing the calculation in coordinate space and describe the model wave function for valence quarks. In Section III we calculate the reduced density matrix \(\rho_{\bar{A}}\) and the entropy for a small disc \(\bar{A}\) in this model. Here "small" means small relative to the nonperturbative scale which determines the spatial size of the model wave function.
We discuss the dependence of the entropy on the area of \(\bar{A}\) in this regime. In Section IV we include an additional perturbatively emitted gluon in the wave function, and again calculate the reduced density matrix (in the way described above) and discuss its properties. The entanglement entropy is also calculated in sec. V. In both cases (quarks and gluons) the entanglement entropy can be written in a very suggestive form in terms of the PDF of the appropriate parton species, eq. (94). Finally, in Section VI we discuss our results and their possible relation to the suggestion of ref. [8]. ## II Laying the groundwork In the following we denote any three-vector \(p\) as \(p=(p^{+},\vec{p})\), where \(p^{+}\) and \(\vec{p}\) are longitudinal and transverse components of the vector respectively. We will be using a mixed representation for the wave function where coordinate space is used to represent the transverse degrees of freedom, and momentum space for the longitudinal ones. In this mixed representation we denote a state of the proton at center of mass (COM) position \(\vec{R}=0\) and longitudinal momentum \(P^{+}\) by \(|\vec{R}=0,P^{+}\rangle\). This condensed notation does not reference the wave function for the internal degrees of freedom, i.e. the coordinates, color and spin states of the constituents, which we will specify in a short while. The coordinate space proton state vector is related to the momentum space state vector through (see e.g. ref. [31]) \[|\vec{R},P^{+}\rangle={\cal N}\int_{\vec{P}}e^{i\vec{P}\cdot\vec{R}}\,|\vec{P},P^{+}\rangle\, \tag{1}\] where \(\vec{P}\) is the transverse momentum of the proton, and the integration measure is \[\int_{\vec{P}}\equiv\int\frac{{\rm d}^{2}P}{(2\pi)^{2}}. \tag{2}\] The normalization factor is determined from the condition \(|{\cal N}|^{2}\,\int_{\vec{P}}=1\). A proton centered at \(\vec{R}=0\) is then \[|\vec{R}=0,P^{+}\rangle={\cal N}\int_{\vec{P}}\,|\vec{P},P^{+}\rangle. \tag{3}\] We employ the standard normalization of the momentum space states: \[\langle K\,|\,P\rangle=16\pi^{3}\,P^{+}\,\delta(P^{+}-K^{+})\,\delta^{2}(\vec{P}-\vec{K}) \tag{4}\] which leads to the following normalization of the mixed-space state vector \[\langle\vec{R}=0,P^{\prime+}\,|\,\vec{R}=0,P^{+}\rangle=4\pi\,P^{+}\,\delta(P^{+}-P^{\prime+}). \tag{5}\] The density operator for this state is \[\hat{\rho}=|\vec{R}=0,P^{+}\rangle\ \langle\vec{R}=0,P^{+}|. \tag{6}\] In the following we will be calculating matrix elements of \(\hat{\rho}\) between states of the partonic (Fock) Hilbert space \[\rho_{\alpha\alpha^{\prime}}=\langle\alpha^{\prime}|\vec{R}=0,P^{+}\rangle\ \langle\vec{R}=0,P^{+}|\alpha\rangle\, \tag{7}\] where \(\alpha\) denotes a collection of "labels" (such as the LC momentum fractions \(x_{i}\), coordinates and color indices) assigned to the basis vectors of the Fock space. ### The valence quark Fock state We start by considering states that contain three valence quarks only. In the model described below the color and spatial degrees of freedom are not entangled, i.e. the wave function is a direct product of the color and spatial state vectors. In this case, for the spatial wave function \(\alpha=\{x_{i},\vec{r}_{i}\}\) refers to the quark LC momentum fractions and their transverse coordinates.
The state vector \(|P^{+},\vec{P}\rangle\) of a proton made of \(N_{c}\) "valence" quarks is written as \[|P\rangle=\sum_{h_{i}}\int\limits_{[0,1]^{N_{c}}}[{\rm d}x_{i}]\int[{\rm d}^{2}k_{i}]\,\Psi\left(k_{i},h_{i}\right)\ \frac{1}{\sqrt{N_{c}!}}\sum_{i_{1}\ldots i_{N_{c}}}\epsilon_{i_{1}\cdots i_{N_{c}}}\,|p_{1},i_{1},h_{1};\cdots;p_{N_{c}},i_{N_{c}},h_{N_{c}}\rangle\, \tag{8}\] where \[[{\rm d}x_{i}]=\delta\left(1-\sum_{i}x_{i}\right)\ \prod_{i}\frac{{\rm d}x_{i}}{2x_{i}}\, \tag{9}\] \[\left[{\rm d}^{2}k_{i}\right]=(2\pi)^{3}\,\delta\left(\sum_{i}\vec{k}_{i}\right)\ \prod_{i}\frac{{\rm d}^{2}k_{i}}{(2\pi)^{3}}. \tag{10}\] Here \(k_{i}=(k_{i}^{+},\vec{k}_{i})\) denote the momenta of the \(i\)-th quark in the transverse rest frame of the proton, and \(\vec{p}_{i}=\vec{k}_{i}+x_{i}\vec{P}\). The space-helicity wave function \(\Psi\left(k_{i},h_{i}\right)\) is symmetric under exchange of any two quarks while the state is antisymmetric in color space. In what follows we will mainly focus on the spatial wave function and trace out spin-flavor and color degrees of freedom. We can now write the proton state in terms of the quark Fock space states \[|\vec{R}=0,P^{+}\rangle={\cal N}\int_{\vec{P}}\,\int[{\rm d}x_{i}]\int[{\rm d}^{2}k_{i}]\,\Psi\left(k_{i}\right)\ |p_{1};\,p_{2};\,p_{3}\rangle\ \, \tag{11}\] where we have omitted the quark (and proton) spins, for simplicity. Summing up, we integrate over the Galilean-invariant "internal" quark transverse momenta subject to the constraint that they add up to zero, and then over the COM transverse momentum \(\vec{P}\), which is also the momentum of the proton. Analogously, the three-quark coordinate space state with the quarks located at \(\vec{r}_{i}\) and carrying LC momentum fractions \(x_{i}\) is constructed as: \[|x_{1},\vec{r}_{1};\,x_{2},\vec{r}_{2};\,x_{3},\vec{r}_{3}\rangle={\cal N}\int_{Q}\int[{\rm d}^{2}q_{i}]\,e^{-i\sum(\vec{q}_{i}+x_{i}\vec{Q})\cdot\vec{r}_{i}}\,|x_{i},\vec{q}_{i}+x_{i}\vec{Q}\rangle. \tag{12}\] Equation (12) can be extended to four (and more) particles simply by adding labels for the momentum fraction and transverse position/momentum of the additional particle to the state vector, and including the momentum of the additional particle in the integration measure eq. (10). The overlap of the proton state with the state of three quarks localized at fixed transverse coordinates is given by \[\langle\vec{R}=0,P^{+}|x_{i},\vec{r}_{i}\rangle=|{\cal N}|^{2}\,\int_{P,Q}\int[{\rm d}y_{i}]\int[{\rm d}^{2}k_{i}]\,\int[{\rm d}^{2}q_{i}]\,e^{-i\sum(\vec{q}_{i}+x_{i}\vec{Q})\cdot\vec{r}_{i}}\,\Psi^{*}(y_{i},\vec{k}_{i})\ \prod_{i}\langle y_{i},\vec{k}_{i}+y_{i}\vec{P}\,|\,x_{i},\vec{q}_{i}+x_{i}\vec{Q}\rangle=|{\cal N}|^{2}\,(2\pi)^{3}\,\delta(1-\sum x_{i})\ \delta(\sum x_{i}\vec{r}_{i})\,\int[{\rm d}^{2}q_{i}]\,e^{-i\sum\vec{q}_{i}\cdot\vec{r}_{i}}\,\Psi^{*}(x_{i},\vec{q}_{i})\, \tag{13}\] where we used eqs. (4), (9), (10). Note that the overlap does not vanish only for states with the COM located at the origin, \(\sum x_{i}\vec{r}_{i}=0\), just as for the proton, cf. eq. (11) in [31]; or ref. [32] for the analogous case of a \(q\bar{q}\) dipole. Also, the LC momentum fractions of the quarks must sum up to one.
Since only such states contribute to the proton density matrix, and we included the constraints on the longitudinal momentum fractions/the transverse momenta in the integration measure (9,10), a matrix element of the properly normalized density matrix is given by \[\rho_{\alpha\alpha^{\prime}}=\frac{\langle\vec{R}=0,P^{+}|\alpha^{\prime}\rangle}{|{\cal N}|^{2}\,(2\pi)^{3}\,\delta(1-\sum x_{i}^{\prime})\ \delta(\sum x_{i}^{\prime}\vec{r}_{i}^{\prime})}\ \frac{\langle\alpha|\vec{R}=0,P^{+}\rangle}{|{\cal N}|^{2}\,(2\pi)^{3}\,\delta(1-\sum x_{i})\ \delta(\sum x_{i}\vec{r}_{i})} \tag{14}\] \[=\int[{\rm d}^{2}q_{i}]\,e^{i\sum\vec{q}_{i}\cdot\vec{r}_{i}}\,\int[{\rm d}^{2}q_{i}^{\prime}]\,e^{-i\sum\vec{q}_{i}^{\prime}\cdot\vec{r}_{i}^{\prime}}\,\Psi^{*}(x_{i}^{\prime},\vec{q}_{i}^{\prime})\ \Psi(x_{i},\vec{q}_{i}) \tag{15}\] \[=\Psi^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\ \Psi(x_{i},\vec{r}_{i})\, \tag{16}\] where \(\alpha=\{x_{i},\vec{r}_{i}|\,\sum x_{i}=1,\sum x_{i}\vec{r}_{i}=0\}\) and \(\alpha^{\prime}=\{x_{i}^{\prime},\vec{r}_{i}^{\prime}|\,\sum x_{i}^{\prime}=1,\sum x_{i}^{\prime}\vec{r}_{i}^{\prime}=0\}\) denote two sets of LC momentum fractions and transverse quark positions. Here in the last step we used the definition (B.4) of ref. [31] for the coordinate space LC wave functions: \[\Psi(x_{i},\vec{r}_{i})=\int[{\rm d}^{2}q_{i}]\,e^{i\sum\vec{q}_{i}\cdot\vec{r}_{i}}\,\Psi(x_{i},\vec{q}_{i}). \tag{17}\] The normalization of the coordinate space wave function will be obtained later in eq. (31) from the requirement that the trace of the density matrix \({\rm tr}\,\hat{\rho}=1\). For the model wave function considered here (see below) the color degrees of freedom of the above density matrix could be restored simply by multiplying by the normalized color space matrix \(\frac{1}{3!}\,\epsilon_{i_{1}i_{2}i_{3}}\,\epsilon_{i_{1}^{\prime}i_{2}^{\prime}i_{3}^{\prime}}\). ### A model wave function Our main goal here is to obtain general expressions for the reduced density matrix in a transverse region \(\overline{A}\) of the proton and to estimate the entropy associated with this density matrix (which we do in sec. III.2). For this we require an explicit expression for the three-quark wave function \(\Psi_{qqq}\). We employ a simple model due to Schlumpf and Brodsky [25; 26], \[\Psi\left(x_{i},\vec{k}_{i}\right)\sim\sqrt{x_{1}x_{2}x_{3}}\ e^{-{\cal M}^{2}/2\beta^{2}};\qquad{\cal M}^{2}=\sum\frac{\vec{k}_{i}^{2}+m_{q}^{2}}{x_{i}}\,. \tag{18}\] Here \({\cal M}^{2}\) is the invariant mass squared of the non-interacting three-quark system [33], i.e. the sum of the quark LC energies multiplied by \(P^{+}\). The non-perturbative parameters \(m_{q}=0.26\) GeV and \(\beta=0.55\) GeV have been fixed in Refs. [25; 26] to match empirical properties of the proton at low energy and low resolution. Note that \(\beta\) is of order \(N_{c}=3\) times the root-mean-square valence quark transverse momentum in the proton. This Gaussian wave function can be easily transformed to position space.
One obtains (up to normalization) \[\Psi\left(x_{i},\vec{r}_{i}\right)\sim F(x_{1},x_{2},x_{3})\ e^{-\frac{1}{2}\,a_{13}\,\beta^{2}\,r_{13}^{2}}\ e^{-\frac{1}{2}\,a_{23}\,\beta^{2}\,r_{23}^{2}}\ e^{b\,\beta^{2}\,\vec{r}_{13}\cdot\vec{r}_{23}} \tag{19}\] with \[\vec{r}_{ij}\equiv\vec{r}_{i}-\vec{r}_{j}\,\] \[F(x_{1},x_{2},x_{3})=(2\pi\beta^{2})^{2}\,\frac{(x_{1}x_{2}x_{3})^{3/2}}{(2\pi)^{6}}\ e^{-\frac{m_{q}^{2}}{2\beta^{2}}\sum\frac{1}{x_{i}}}\,\] \[a_{23}=x_{2}\,(1-x_{2})\,\] \[a_{13}=x_{1}\,(1-x_{1})\,\] \[b=x_{1}\,x_{2}. \tag{20}\] One can easily verify that this is symmetric under the exchange of any two quarks, \((x_{i},\vec{r}_{i})\leftrightarrow(x_{j},\vec{r}_{j})\); \(i,j=1,2,3\). ## III The reduced density matrix and entanglement entropy of a three quark system We can now construct a reduced density matrix by tracing over a subset of degrees of freedom. Here we are interested in the reduced density matrix that determines observables localized to a small circle in the center of the proton. To find this density matrix we have to trace over the region \(A\) of the proton which is the outside of the circle in question. In other words we have to integrate over the transverse positions and LC momentum fractions of all quarks located in \(A\). ### The density matrix for a small disc First we note that the Hilbert space inside the disc \(\bar{A}\) is a direct sum of Hilbert spaces of zero, one, two and three particles. In addition, it is obvious that since we are tracing over \(A\), the reduced density matrix does not contain off diagonal elements that connect states with different particle numbers. The reduced density matrix therefore can be represented as a block diagonal matrix of the form \[\rho_{\overline{A}}=\begin{pmatrix}\rho_{0}&0&0&0\\ 0&\rho_{1}&0&0\\ 0&0&\rho_{2}&0\\ 0&0&0&\rho_{3}\end{pmatrix} \tag{21}\] Note that the various blocks along the diagonal are density matrices over Hilbert spaces of different dimensionality. To calculate \(\rho_{0}\) we place all quarks in \(A\), \[\rho_{0}=\int[{\rm d}x_{i}]\int[{\rm d}^{2}r_{i}]\ \Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\Theta_{A}(\vec{r}_{3})\ |\Psi(x_{i},\vec{r}_{i})|^{2}. \tag{22}\] Here, \[[{\rm d}^{2}r_{i}]={\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,{\rm d}^{2}r_{3}\,\delta(\sum x_{i}\vec{r}_{i})\, \tag{23}\] and \(\Theta_{A}(\vec{r})=1\) if \(\vec{r}\in A\) and \(0\) otherwise. This is a pure dimensionless (by the normalization condition in eq. (31) below) number giving the probability that in our wave function no quarks reside in \(\overline{A}\). The second block \(\rho_{1}\) of (21) is the probability density that only one of the quarks is localized in \(\overline{A}\) while the other two are localized in \(A\). Tracing over \(A\) we have to set \(\vec{r}_{1}=\vec{r}_{1}^{\,\prime}\in A\) and \(\vec{r}_{2}=\vec{r}_{2}^{\,\prime}\in A\), so by virtue of the COM constraint we also have \(\vec{r}_{3}=\vec{r}_{3}^{\,\prime}\), with \(\vec{r}_{3}\in\overline{A}\), so \(\rho_{1}\) is diagonal in coordinate space: \[(\rho_{1})_{\alpha\alpha}=3\int\frac{{\rm d}x_{1}{\rm d}x_{2}}{8x_{1}x_{2}x_{3}}\,\delta\left(1-\sum x_{i}\right)\int{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\,|\Psi(x_{i},\vec{r}_{i})|^{2}\,\,,\qquad(\vec{r}_{3}\in\overline{A}). \tag{24}\] The matrix indices here are \(\alpha=\{x_{3},\vec{r}_{3}\}\), defined over the domain \(0\leq x_{3}\leq 1\) and \(\vec{r}_{3}\in\overline{A}\).
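As an aside, \(\rho_{0}\) in eq. (22) is straightforward to estimate numerically. The following is a schematic Monte Carlo sketch under our own assumptions (the disc radius, the sample size, and the sampling strategy are illustrative choices, not prescribed here). It draws quark configurations from the probability density defined by \(|\Psi|^{2}\) for the model (18)-(20), so the overall normalization of the wave function cancels:

```python
# Schematic Monte Carlo estimate of rho_0, eq. (22): the probability that no
# valence quark sits inside the disc |r| < L, for the Schlumpf-Brodsky model.
import numpy as np

beta, m_q = 0.55, 0.26          # GeV, the parameters quoted in the text
L = 0.3 / 0.1973                # illustrative disc radius: 0.3 fm in GeV^-1
rng = np.random.default_rng(1)

def sample_x(n):
    """Rejection-sample (x1, x2, x3) from the marginal weight
    x1*x2*x3 * exp(-(m_q/beta)^2 * sum 1/x_i), which follows from |Psi|^2
    after integrating out the transverse coordinates (the Gaussian integral
    contributes a factor 1/(x1*x2*x3))."""
    out = []
    wmax = (1 / 27) * np.exp(-9 * (m_q / beta) ** 2)   # maximum at x_i = 1/3
    while len(out) < n:
        x1, x2 = rng.uniform(size=2)
        x3 = 1 - x1 - x2
        if x3 <= 0:
            continue
        w = x1 * x2 * x3 * np.exp(-(m_q / beta) ** 2 * (1/x1 + 1/x2 + 1/x3))
        if rng.uniform() < w / wmax:
            out.append((x1, x2, x3))
    return out

hits = 0
xs = sample_x(20000)
for x1, x2, x3 in xs:
    # |Psi|^2 is Gaussian in (r13, r23); each Cartesian component has inverse
    # covariance 2*beta^2*[[a13, -b], [-b, a23]], cf. eqs. (19)-(20)
    A = 2 * beta**2 * np.array([[x1*(1-x1), -x1*x2], [-x1*x2, x2*(1-x2)]])
    cov = np.linalg.inv(A)
    r13, r23 = np.empty(2), np.empty(2)
    for c in range(2):                      # x and y components independently
        r13[c], r23[c] = rng.multivariate_normal([0, 0], cov)
    r3 = -(x1 * r13 + x2 * r23)             # COM constraint: sum x_i r_i = 0
    r1, r2 = r13 + r3, r23 + r3
    if min(np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)) > L:
        hits += 1
print("rho_0 ~", hits / len(xs))
```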
## III The reduced density matrix and entanglement entropy of a three quark system

We can now construct a reduced density matrix by tracing over a subset of degrees of freedom. Here we are interested in the reduced density matrix that determines observables localized to a small circle in the center of the proton. To find this density matrix we have to trace over the region \(A\) of the proton which is the outside of the circle in question. In other words we have to integrate over the transverse positions and LC momentum fractions of all quarks located in \(A\). ### The density matrix for a small disc First we note that the Hilbert space inside the disc \(\overline{A}\) is a direct sum of Hilbert spaces of zero, one, two and three particles. In addition, it is obvious that since we are tracing over \(A\), the reduced density matrix does not contain off diagonal elements that connect states with different particle numbers. The reduced density matrix therefore can be represented as a block diagonal matrix of the form \[\rho_{\overline{A}} = \begin{pmatrix}\rho_{0}&0&0&0\\ 0&\rho_{1}&0&0\\ 0&0&\rho_{2}&0\\ 0&0&0&\rho_{3}\end{pmatrix} \tag{21}\] Note that the various blocks along the diagonal are density matrices over Hilbert spaces of different dimensionality. To calculate \(\rho_{0}\) we place all quarks in \(A\), \[\rho_{0}=\int[{\rm d}x_{i}]\int[{\rm d}^{2}r_{i}]\ \Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\Theta_{A}(\vec{r}_{3})\ |\Psi(x_{i},\vec{r}_{i})|^{2}. \tag{22}\] Here, \[[{\rm d}^{2}r_{i}]={\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,{\rm d}^{2}r_{3}\,\delta(\sum x_{i}\vec{r}_{i})\, \tag{23}\] and \(\Theta_{A}(\vec{r})=1\) if \(\vec{r}\in A\) and \(0\) otherwise. By the normalization condition in eq. (31) below, this is a dimensionless number giving the probability that in our wave function no quarks reside in \(\overline{A}\). The second block \(\rho_{1}\) of (21) is the probability density that only one of the quarks is localized in \(\overline{A}\) while the other two are localized in \(A\). Tracing over \(A\) we have to set \(\vec{r}_{1}=\vec{r}_{1}^{\,\prime}\in A\) and \(\vec{r}_{2}=\vec{r}_{2}^{\,\prime}\in A\), so by virtue of the COM constraint we also have \(\vec{r}_{3}=\vec{r}_{3}^{\,\prime}\), with \(\vec{r}_{3}\in\overline{A}\), so \(\rho_{1}\) is diagonal in coordinate space: \[(\rho_{1})_{\alpha\alpha}=3\int\frac{{\rm d}x_{1}{\rm d}x_{2}}{8x_{1}x_{2}x_{3}}\,\delta\left(1-\sum x_{i}\right)\int{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\,|\Psi(x_{i},\vec{r}_{i})|^{2}\,\,,\qquad(\vec{r}_{3}\in\overline{A}). \tag{24}\] The matrix indices here are \(\alpha=\{x_{3},\vec{r}_{3}\}\), defined over the domain \(0\leq x_{3}\leq 1\) and \(\vec{r}_{3}\in\overline{A}\). Clearly, the dimensionalities of \(\rho_{1}\) and \(\rho_{0}\) are different. While \(\rho_{0}\) is dimensionless and has the meaning of probability, \(\rho_{1}\) has dimension \(1/r^{2}\) and has the meaning of probability density. To construct a probability from \(\rho_{1}\) we would have to multiply it by the "lattice spacing" in the transverse coordinate space, \(a^{2}\), and in fact also by the elementary length in the longitudinal momentum space, \(\Delta x\). If we take this route, the integration over the coordinate \(\vec{r}_{3}\) and the momentum fraction \(x_{3}\) will have to be performed with the dimensionless measure \({\rm d}^{2}r_{3}/a^{2}\,\,{\rm d}x_{3}/\Delta x\). For the discussion of the density matrix itself this is not crucial since a calculation of the average of any observable involves integration over \(x\) and \(r_{i}\) and the minimal area cancels in the product of the probability density and the integration measure. However when we calculate von Neumann entropies \(S_{E}\) this becomes important, since we need to define a dimensionless _probability_ in order to take its logarithm. In fact it is also crucial to work with a dimensionless density matrix when we calculate the trace of any nontrivial (not first) power of \(\rho\). Since the index on \(\rho_{1}\) is continuous, the density matrix is infinitely dimensional. We therefore expect its individual matrix elements to vanish in the strict continuum limit (for vanishing \(a^{2}\) and \(\Delta x\)) as the first power of \(a^{2}\Delta x\). When calculating \({\rm tr}\,\rho_{1}\) this smallness of the matrix elements is compensated by the integration over \(\vec{r}_{3},\,\,x_{3}\). However when we calculate \({\rm tr}\,\rho_{1}^{N}\), the diagonal matrix elements now vanish as \((a^{2}\Delta x)^{N}\), while there is still only a single integral over \(\vec{r}_{3},\,\,x_{3}\) involved in calculating the trace. Therefore \({\rm tr}\,\rho_{1}^{N}\rightarrow_{a,\Delta x\to 0}(a^{2}\Delta x)^{N-1}\), and it is imperative to keep the lattice spacing finite in order to obtain any physical information about \({\rm tr}\,\rho_{1}^{N}\) beyond the trivial fact that it vanishes in the continuum limit. We will therefore introduce the lattice spacing in the definition of the density matrix and will henceforth write \[(\rho_{1})_{\alpha\alpha}=3\Delta x\,a^{2}\int\frac{{\rm d}x_{1}{\rm d}x_{2}}{8x_{1}x_{2}x_{3}}\,\delta\left(1-\sum x_{i}\right)\int{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\,|\Psi(x_{i},\vec{r}_{i})|^{2}\,\,,\qquad(\vec{r}_{3}\in\overline{A})\,\,, \tag{25}\] with the understanding that the trace is taken with respect to the measure \(\frac{{\rm d}^{2}r_{3}}{a^{2}}\frac{{\rm d}x_{3}}{\Delta x}\Theta_{\overline{A}}(\vec{r}_{3})\). Furthermore, we included the \(x_{3}\)-dependent part of the integration measure (9), i.e. the factor \(1/x_{3}\), in the definition of \(\rho_{1}\) so that the trace is given by the \(x_{3}\)-independent integration measure \(\frac{{\rm d}x_{3}}{\Delta x}\). One can easily understand why this is necessary by considering the classical Shannon entropy of a probability density distribution, see appendix A.
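The scaling argument above is easy to illustrate in a toy model. A minimal Python sketch (our own illustration, not the proton density matrix): a diagonal density matrix with a continuous index, discretized with "lattice spacing" \(a\), has eigenvalues of order \(a\), so \({\rm tr}\,\rho^{N}\) scales as \(a^{N-1}\):

```python
import numpy as np

# Toy illustration: a diagonal density matrix rho(x,x) = p(x) with a
# continuous index x, discretized on a grid of spacing a. The dimensionless
# eigenvalues p(x)*a are O(a), and tr rho^N ~ a^(N-1).
def trace_power(N, a):
    x = np.arange(a / 2, 1.0, a)   # grid on [0, 1]
    p = 6.0 * x * (1.0 - x)        # normalized density: int_0^1 p(x) dx = 1
    lam = p * a                    # eigenvalues, sum(lam) ~ 1
    return np.sum(lam ** N)

for a in (1e-2, 1e-3, 1e-4):
    # tr rho = 1, while tr rho^2 / a -> int p(x)^2 dx = 1.2, independent of a
    print(a, trace_power(1, a), trace_power(2, a) / a)
```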
The third block \(\rho_{2}\) corresponds to the configuration where two of the quarks are located in \(\overline{A}\) while the third one is located in \(A\): \[(\rho_{2})_{\alpha\alpha^{\prime}}=3\,(a^{2}\Delta x)^{2}\,\frac{\Psi^{*}(\alpha^{\prime},x_{3},\vec{r}_{3})}{x_{3}\sqrt{8x_{1}^{\prime}x_{2}^{\prime}x_{3}}}\,\frac{\Psi(\alpha,x_{3},\vec{r}_{3})}{x_{3}\sqrt{8x_{1}x_{2}x_{3}}}\,\,, \tag{26}\] with \(\alpha=\{x_{1},\vec{r}_{1},\,x_{2},\vec{r}_{2}\}\) and \(\alpha^{\prime}=\{x_{1}^{\prime},\vec{r}_{1}^{\,\prime},\,x_{2}^{\prime},\vec{r}_{2}^{\prime}\}\). Note that there is no integral over the coordinate of the third quark in this equation. This is due to the fact that the COM constraint rigidly determines \(\vec{r}_{3},x_{3}\) for given coordinates and longitudinal momenta of the first two quarks as \[x_{3}=1-x_{1}-x_{2}=1-x_{1}^{\prime}-x_{2}^{\prime},\qquad\vec{r}_{3}=-(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2})/x_{3}=-(x_{1}\vec{r}_{1}^{\,\prime}+x_{2}\vec{r}_{2}^{\,\prime})/x_{3}\,\,. \tag{27}\] The matrix indices \(\alpha,\alpha^{\prime}\) are defined over the domain where these relations are satisfied with \(0\leq x_{3}\leq 1\) and \(\vec{r}_{1},\vec{r}_{2},\vec{r}_{1}^{\,\prime},\vec{r}_{2}^{\prime}\in\overline{A}\), \(\vec{r}_{3}\in A\). We have again introduced the lattice spacing into the definition of a matrix element of \(\rho_{2}\) to make it dimensionless. The factor \(3\) in (26) arises since any one of the three quarks can reside in \(A\). The trace of \(\rho_{2}\) on the subspace with two particles is defined as \[{\rm tr}\,\rho_{2} = \int\frac{{\rm d}x_{1}}{\Delta x}\frac{{\rm d}x_{2}}{\Delta x}\,\int\frac{{\rm d}^{2}r_{1}}{a^{2}}\frac{{\rm d}^{2}r_{2}}{a^{2}}\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,\Theta(x_{3})\,\Theta_{A}(\vec{r}_{3})\,(\rho_{2})_{\alpha\alpha} \tag{28}\] \[= 3\int[{\rm d}x_{i}]\,\int[{\rm d}^{2}r_{i}]\,\Theta_{A}(\vec{r}_{3})\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,\,|\Psi(x_{i},\vec{r}_{i})|^{2}\,\,,\] where the lattice spacing cancels between the matrix element and the integration measure. Once again, the trace operation does not involve any Jacobians which depend on \(x_{1}\) and \(x_{2}\). Finally, the fourth block corresponds to all three quarks in \(\overline{A}\): \[(\rho_{3})_{\alpha\alpha^{\prime}}=\frac{(a^{2}\,\Delta x)^{2}}{x_{3}^{\prime}\sqrt{8x_{1}^{\prime}x_{2}^{\prime}x_{3}^{\prime}}}\,\frac{1}{x_{3}\sqrt{8x_{1}x_{2}x_{3}}}\,\,\Psi^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\Psi(x_{i},\vec{r}_{i}). \tag{29}\] Here \(\alpha=\{x_{1},\vec{r}_{1},x_{2},\vec{r}_{2}\}\), and similarly for \(\alpha^{\prime}\). These indices are defined over the domain \(0\leq x_{1},x_{2},x_{1}^{\prime},x_{2}^{\prime}\leq 1\) with \(0\leq x_{3}=1-x_{1}-x_{2}\leq 1\), \(0\leq x_{3}^{\prime}=1-x_{1}^{\prime}-x_{2}^{\prime}\leq 1\); and \(\vec{r}_{1},\vec{r}_{1}^{\prime},\vec{r}_{2},\vec{r}_{2}^{\prime},\vec{r}_{3},\vec{r}_{3}^{\prime}\in\overline{A}\), with \(\vec{r}_{3}=-(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2})/x_{3}\), \(\vec{r}_{3}^{\prime}=-(x_{1}^{\prime}\vec{r}_{1}^{\prime}+x_{2}^{\prime}\vec{r}_{2}^{\prime})/x_{3}^{\prime}\). We have again introduced the lattice spacing in this definition so that the elements of \(\rho_{3}\) are dimensionless, although these factors cancel in the trace of an arbitrary power of \(\rho_{3}\).
To take the trace in this block we calculate \[\operatorname{tr}\rho_{3} =\int\frac{\mathrm{d}x_{1}}{\Delta x}\frac{\mathrm{d}x_{2}}{\Delta x}\int\frac{\mathrm{d}^{2}r_{1}}{a^{2}}\frac{\mathrm{d}^{2}r_{2}}{a^{2}}\,\Theta(x_{3})\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,\Theta_{\overline{A}}(\vec{r}_{3})\,(\rho_{3})_{\alpha\alpha}\] \[=\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\left|\Psi(x_{i},\vec{r}_{i})\right|^{2}\,\prod\Theta_{\overline{A}}(\vec{r}_{i}). \tag{30}\] Putting this all together we obtain that the total trace of the density matrix is1 Footnote 1: Note that on account of the permutation symmetry of the wave function, the following equality holds when multiplied by \(|\Psi(\vec{r}_{1},\vec{r}_{2},\vec{r}_{3})|^{2}\) under the integral : \(\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\Theta_{A}(\vec{r}_{3})+3\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\Theta_{\overline{A}}(\vec{r}_{3})+3\Theta_{A}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,\Theta_{\overline{A}}(\vec{r}_{3})+\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,\Theta_{\overline{A}}(\vec{r}_{3})=1\). \[\operatorname{tr}\rho_{\overline{A}}=\rho_{0}+\operatorname{tr}\rho_{1}+\operatorname{tr}\rho_{2}+\operatorname{tr}\rho_{3}=\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\left|\Psi(x_{i},\vec{r}_{i})\right|^{2}=1\,. \tag{31}\] The normalization of the coordinate space wave function is determined from this relation. In Appendix B we present expressions for calculating traces of powers of \(\rho\) which illustrate explicitly the need to introduce the "lattice spacing" in our calculation. ### Entanglement entropy We now discuss the von Neumann entropy associated with tracing the pure state \(\left|\vec{R}=0,P^{+}\right\rangle\left\langle\vec{R}=0,P^{+}\right|\) over the area \(A\): \[S_{\text{vN}}=-\lim_{\epsilon\to 0}\frac{\operatorname{tr}\left(\rho_{\bar{A}}\right)^{1+\epsilon}-1}{\epsilon}. \tag{32}\] Because we performed a partial trace over a _pure state_, this entropy represents a measure for the entanglement of the degrees of freedom remaining in \(\overline{A}\) with those from region \(A\), which have been traced out. We discuss the nature of entanglement in more detail in the following sec. III.3. Using the expressions from Appendix B for \(N=1+\epsilon\) and expanding to linear order in \(\epsilon\) this gives \[-S_{\text{vN}} =\rho_{0}\log\rho_{0}+\operatorname{tr}\rho_{3}\log\operatorname{tr}\rho_{3}\] \[\quad+3\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\Theta_{\overline{A}}(\vec{r}_{3})\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,|\Psi(x_{i},\vec{r}_{i})|^{2}\] \[\qquad\qquad\qquad\times\log\left(3\Delta x\,a^{2}\int[\mathrm{d}y_{i}]\,[\mathrm{d}^{2}s_{i}]\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\delta(x_{3}-y_{3})\,\Theta_{A}(\vec{s}_{1})\,\Theta_{A}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right)\] \[\quad+3\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\Theta_{A}(\vec{r}_{3})\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,|\Psi(x_{i},\vec{r}_{i})|^{2}\] \[\qquad\qquad\qquad\times\log\left(3\Delta x\,a^{2}\int[\mathrm{d}y_{i}]\,[\mathrm{d}^{2}s_{i}]\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\delta(x_{3}-y_{3})\,\Theta_{\overline{A}}(\vec{s}_{1})\,\Theta_{\overline{A}}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right)\, \tag{33}\] where we used \(\operatorname{tr}\rho_{\overline{A}}=1\).
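The replica-type limit in eq. (32) can be sanity-checked on any finite-dimensional density matrix, where it must reproduce the familiar \(-\sum\lambda\log\lambda\). A short numerical sketch (our own illustration):

```python
import numpy as np

# Check eq. (32) on a random finite density matrix: the replica limit
# reproduces the von Neumann entropy -sum(lam * log(lam)).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
rho = A @ A.conj().T
rho /= np.trace(rho).real           # normalized density matrix, tr rho = 1

lam = np.linalg.eigvalsh(rho)       # real, positive eigenvalues
S_direct = -np.sum(lam * np.log(lam))

eps = 1e-7
S_replica = -(np.sum(lam ** (1 + eps)) - 1.0) / eps
print(S_direct, S_replica)          # agree up to O(eps)
```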
This is a rather formal expression, and to understand some of its properties we will consider the dependence of the entropy on the area \(\overline{A}\) of the cutout. When the region \(\overline{A}\) shrinks to a point we of course expect the entropy to vanish2. Indeed, all the terms in eq. (33) vanish, except \(\rho_{0}\) which approaches 1: for vanishingly small area the probability to find zero particles inside is unity. Taking the area small but nonvanishing, for a circular cutout with radius \(L\), we have: Footnote 2: The same is true if \(\overline{A}\) encompasses the entire transverse space. \[\frac{\partial S_{\rm vN}^{(0)}}{\partial L} =-\frac{\partial\rho_{0}}{\partial L}=2\pi L\,\int{\rm d}x\,I(x)\qquad\qquad\qquad({\rm for}\,L\to 0) \tag{34}\] \[I(x) =3\int[{\rm d}y_{i}]\,\delta(y_{3}-x)\int{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\,\delta(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2})\,\,|\Psi(y_{1},\vec{r}_{1};y_{2},\vec{r}_{2};y_{3},\vec{0})|^{2}. \tag{35}\] For the model wave function from sec. II.2 we obtain the numerical estimate \[\frac{1}{\beta}\,\frac{\partial S_{\rm vN}^{(0)}}{\partial L}=2\pi\,L\beta\,\cdot 0.534(1). \tag{36}\] There is an additional contribution of order \(\sim L^{2}\) to the entropy. It is due to the third term in eq. (33) which originates from \(\rho_{1}\). Again, for a circular cutout \(\overline{A}\) of radius \(L\) centered at the origin, we have \[\frac{\partial S_{\rm vN}^{(1)}}{\partial L}=2\pi L\,\int{\rm d}x\,I(x)\,\log\frac{1}{a^{2}\,(\Delta x)\,I(x)}\,\qquad({\rm for}\,L\to 0). \tag{37}\] For small "lattice spacing" \(a\) the logarithm in this expression is large, and this is in fact the dominant contribution to the derivative of the entropy with respect to \(L\). For example, for the model wave function from sec. II.2, and for a fairly coarse resolution of transverse position and longitudinal momentum, \(\Delta x\,(a\beta)^{2}=0.1\), we obtain the numerical estimate \[\frac{1}{\beta}\,\frac{\partial S_{\rm vN}^{(1)}}{\partial L}=2\pi\,L\beta\,\cdot 1.21(1). \tag{38}\] Thus, for small \(L^{2}\) the leading contribution to the entropy is \[S_{\rm vN}^{(1)}=-\pi L^{2}\,\int{\rm d}x\,I(x)\,\log[a^{2}\,(\Delta x)I(x)]. \tag{39}\] This can be rewritten in a more transparent way if we notice that \(I(x)\) as defined in (35) is nothing but the density of quarks with longitudinal momentum \(x\) in the proton \(I(x)=F(x)/A_{p}\), where \(A_{p}\) is the transverse area of the proton and \(F(x)\) is the quark PDF. We then have \[S_{\rm vN}^{(1)}=-\frac{\pi L^{2}}{A_{p}}\,\int{\rm d}x\,F(x)\,\log[\frac{a^{2}}{A_{p}}\,(\Delta x)F(x)]\,. \tag{40}\] The dependence of the entropy on the lattice spacing is easily understood. Since \(\rho_{1}\) is a matrix with continuous index, we expect its eigenvalues to be small, i.e. of order \(a^{2}\), while the number of nonvanishing eigenvalues is large, \(O(1/a^{2})\). For such a matrix with a large number of small eigenvalues, the entropy is indeed proportional to the logarithm of the inverse eigenvalue, and this is what we see in (37). The area scaling of the entropy is also quite natural, since at small \(L\) the number of degrees of freedom in the reduced density matrix is proportional to the area of the cutout.
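Estimates such as (36) follow from straightforward Monte Carlo integration of the model wave function. The following Python sketch is our own illustration of how \(\int{\rm d}x\,I(x)\) in eq. (34) can be estimated (the sampling choices are arbitrary; overall constants in \(\Psi\) cancel in the ratio of the two integrals, the denominator implementing the normalization (31)):

```python
import numpy as np

rng = np.random.default_rng(1)
m_q, beta = 0.26, 0.55  # GeV

def psi2(x1, x2, r1, r2, r3):
    """|Psi|^2 from eqs. (19)-(20); constants dropped (they cancel in ratios)."""
    x3 = 1.0 - x1 - x2
    r13, r23 = r1 - r3, r2 - r3
    f2 = (x1 * x2 * x3) ** 3 * np.exp(-(m_q / beta) ** 2 * (1 / x1 + 1 / x2 + 1 / x3))
    expo = (-x1 * (1 - x1) * np.sum(r13 * r13, -1)
            - x2 * (1 - x2) * np.sum(r23 * r23, -1)
            + 2 * x1 * x2 * np.sum(r13 * r23, -1)) * beta ** 2
    return f2 * np.exp(expo)

N = 1_000_000
x1, x2 = rng.uniform(size=(2, N))
keep = x1 + x2 < 1.0                    # uniform points on the x-simplex
x1, x2 = x1[keep], x2[keep]
x3 = 1.0 - x1 - x2

sig = 2.0 / beta                        # Gaussian proposal for r1, r2
r1, r2 = rng.normal(0.0, sig, size=(2, x1.size, 2))
pdf2 = np.exp(-(np.sum(r1 * r1, -1) + np.sum(r2 * r2, -1)) / (2 * sig ** 2)) \
       / (2 * np.pi * sig ** 2) ** 2
pdf1 = np.exp(-np.sum(r1 * r1, -1) / (2 * sig ** 2)) / (2 * np.pi * sig ** 2)

# normalization integral (31): r3 fixed by the COM delta, Jacobian 1/x3^2
r3 = -(x1[:, None] * r1 + x2[:, None] * r2) / x3[:, None]
w_norm = psi2(x1, x2, r1, r2, r3) / (8 * x1 * x2 * x3 * x3 ** 2 * pdf2)

# integrand of I(x), eq. (35): quark 3 at the origin, COM fixes r2, Jacobian 1/x2^2
r2c = -(x1 / x2)[:, None] * r1
w_num = psi2(x1, x2, r1, r2c, np.zeros_like(r1)) / (8 * x1 * x2 * x3 * x2 ** 2 * pdf1)

# (1/beta^2) * int dx I(x); compare with the 0.534 quoted in eq. (36)
print(3.0 * w_num.mean() / w_norm.mean() / beta ** 2)
```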
### What is entangled here?

We would now like to comment on the nature of entanglement that produces the entanglement entropy that we calculated. It is somewhat different from the naive picture of entanglement we are used to in a vacuum state of a Quantum Field Theory. In the QFT setting one divides space into two regions \(A\) and \(\bar{A}\) and considers the wave function of local field degrees of freedom in the two regions \(\Phi(x\in\bar{A})\) and \(\Phi(y\in A)\). The entanglement is then understood in terms of nonfactorizability of the wave function \(\Psi[\Phi(x),\Phi(y)]\neq\Psi_{1}[\Phi(x)]\,\Psi_{2}[\Phi(y)]\), and the entanglement entropy is associated with this nonfactorizability. In our case the nature of entanglement is somewhat different. It is not that some internal degree of freedom of quarks in \(A\), like color or helicity, is entangled with quarks in \(\bar{A}\). In fact, we do not have to consider several quarks with internal degrees of freedom at all to understand our result. Let us imagine having just one quark in the proton area. This quark can be either in \(A\) or in \(\bar{A}\). We can write the total wave function of the quark in terms of the basis states in the Hilbert spaces \(H_{A}\) and \(H_{\bar{A}}\). For simplicity, we will even forget about different transverse coordinates in \(A\) and \(\bar{A}\). The wave function of our quark can then be written as \[\Psi=a|0\rangle_{A}\times|1\rangle_{\bar{A}}+b|1\rangle_{A}\times|0\rangle_{\bar{A}} \tag{41}\] where \(|a|^{2}\) is the probability that the quark is in \(\bar{A}\) and \(|b|^{2}=1-|a|^{2}\) is the probability that it is in \(A\). Tracing over \(A\) removes the relative phase of \(a\) and \(b\) and we generate the reduced density operator \(\hat{\rho}_{\bar{A}}=\left[|a|^{2}|1\rangle\langle 1|+|b|^{2}|0\rangle\langle 0|\right]_{\bar{A}}\). This is a mixed state over \(\bar{A}\) and carries the entanglement entropy. Thus, the entanglement in our calculation is between the quark being (or not being) in \(A\) and the **same** quark being (or not being) in \(\bar{A}\). These states are maximally entangled since the total number of quarks is fixed to be exactly one. This is a "quantum mechanical" rather than "QFT type" entanglement, very similar to the "Schrödinger cat" thought experiment [34; 35], where one should read **one quark in \(A\)** as _the cat is alive_, and **no quark in \(\bar{A}\)** as _radioactive nucleus intact_; **no quark in \(A\)** as _the cat is dead_ and **the quark is in \(\bar{A}\)** as _radioactive nucleus decayed_.
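The two-level example (41) takes three lines to check explicitly. A minimal Python sketch (our own illustration): tracing over the \(A\) factor kills the relative phase of \(a\) and \(b\), and the entropy of the resulting diagonal matrix peaks at \(\log 2\) when \(|a|^{2}=1/2\):

```python
import numpy as np

# Toy check of eq. (41): one quark shared between regions A and Abar.
a = 0.6 + 0.3j
b = np.sqrt(1 - abs(a) ** 2)

# basis |n_A, n_Abar>: psi[i, j] with i = n_A, j = n_Abar
psi = np.zeros((2, 2), dtype=complex)
psi[0, 1] = a   # quark in Abar
psi[1, 0] = b   # quark in A

# reduced density matrix over Abar: trace over the A index
rho_Abar = np.einsum('ij,ik->jk', psi, psi.conj())
lam = np.linalg.eigvalsh(rho_Abar).real
print(rho_Abar)                      # diag(|b|^2, |a|^2): relative phase gone
print(-np.sum(lam * np.log(lam)))    # entanglement entropy <= log 2
```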
## IV Including the \(|qqqg\rangle\) Fock state

We now add the \(|qqqg\rangle\) Fock states into our calculation. In perturbation theory, such states have nonvanishing probability at order \(g^{2}\). We write the proton state schematically in the form \[|P\rangle\sim\Psi_{qqq}\,\epsilon_{i_{1}i_{2}i_{3}}\,|q_{i_{1}}\,q_{i_{2}}\,q_{i_{3}}\rangle+\Psi_{qqqg}\,\left[(t^{a})_{ji_{1}}\epsilon_{i_{1}i_{2}i_{3}}\,|q_{j}\,q_{i_{2}}\,q_{i_{3}}\,g_{a}\rangle-(i_{1}\leftrightarrow i_{2})-(i_{1}\leftrightarrow i_{3})\right]. \tag{42}\] In the leading perturbative order the three quark wave function \(\Psi_{qqq}\) includes the \(O(g^{2})\) virtual corrections, and \(\Psi_{qqqg}\) is the (3 quarks+1 gluon) spatial wave function at order \(O(g)\). In the two components the quarks are in different representations of color-SU(3): they are in the color singlet in the \(|qqq\rangle\) state, and in the color octet in \(|qqqg\rangle\). In the following we will calculate the entanglement entropy for the same geometry as in the previous section. To simplify the calculations, however, we will trace the density matrix over the colors of the quarks. The pure state (42) is described by a density operator which in principle contains off diagonal matrix elements in the particle number basis \[|P\rangle\,\langle P| \sim \Psi_{qqq}\,\epsilon_{i_{1}i_{2}i_{3}}\,\Psi^{*}_{qqq}\,\epsilon_{i_{1}^{\prime}i_{2}^{\prime}i_{3}^{\prime}}\,|q_{i_{1}}\,q_{i_{2}}\,q_{i_{3}}\rangle\,\langle q_{i_{1}^{\prime}}\,q_{i_{2}^{\prime}}\,q_{i_{3}^{\prime}}| \tag{43}\] \[+\Psi_{qqqg}\,(t^{a})_{ji_{1}}\epsilon_{i_{1}i_{2}i_{3}}\,\Psi^{*}_{qqqg}\,(t^{a^{\prime}})_{i_{1}^{\prime}j^{\prime}}\epsilon_{i_{1}^{\prime}i_{2}^{\prime}i_{3}^{\prime}}\,|q_{j}\,q_{i_{2}}\,q_{i_{3}}\,g_{a}\rangle\,\langle q_{j^{\prime}}\,q_{i_{2}^{\prime}}\,q_{i_{3}^{\prime}}\,g_{a^{\prime}}|\] \[+\Psi_{qqq}\,\epsilon_{i_{1}i_{2}i_{3}}\,\Psi^{*}_{qqqg}\,(t^{a^{\prime}})_{i_{1}^{\prime}j^{\prime}}\epsilon_{i_{1}^{\prime}i_{2}^{\prime}i_{3}^{\prime}}\,|q_{i_{1}}\,q_{i_{2}}\,q_{i_{3}}\rangle\,\langle q_{j^{\prime}}\,q_{i_{2}^{\prime}}\,q_{i_{3}^{\prime}}\,g_{a^{\prime}}|\] \[+\Psi_{qqqg}\,(t^{a})_{ji_{1}}\epsilon_{i_{1}i_{2}i_{3}}\,\Psi^{*}_{qqq}\,\epsilon_{i_{1}^{\prime}i_{2}^{\prime}i_{3}^{\prime}}\,|q_{j}\,q_{i_{2}}\,q_{i_{3}}\,g_{a}\rangle\,\langle q_{i_{1}^{\prime}}\,q_{i_{2}^{\prime}}\,q_{i_{3}^{\prime}}|\ .\] However, the off diagonal matrix elements vanish after tracing over quark colors precisely due to the fact that the three quarks are in the color singlet state in \(\Psi_{qqq}\) and in the color octet state in \(\Psi_{qqqg}\). The reduced (over the quark color) density operator is diagonal in particle number and has the form \[\mathrm{tr}_{qqq-\mathrm{colors}}\hat{\rho} \sim 3!\,\Psi_{qqq}\,\Psi^{*}_{qqq}\,|q\,q\,q\rangle\,\langle q\,q\,q| \tag{44}\] \[+\delta^{aa^{\prime}}\,\Psi_{qqqg}\,\Psi^{*}_{qqqg}\,|q\,q\,q\,g\rangle\,\langle q\,q\,q\,g|\.\] ### \(\Psi_{qqqg}\) at order \(g\) and \(\Psi_{qqq}\) at order \(g^{2}\) Our first order of business is to calculate the perturbative wave function. For simplicity, we restrict ourselves to the soft gluon approximation, i.e. we assume that the gluon longitudinal momentum is much smaller than the typical longitudinal momentum of a quark. Let us begin with \(\Psi_{qqqg}\). The emission of a gluon from one of the quarks generates the following \({\cal O}(g)\) correction3 to the momentum space proton state \(|P\rangle\): Footnote 3: We are following the notation and expressions from refs. [22; 29]. \[|P^{+},\vec{P}\rangle_{{\cal O}(g)} = g\int[{\rm d}x_{i}]\int[{\rm d}^{2}k_{i}]\,\Psi_{qqq}(k_{i})\,\frac{1}{\sqrt{6}}\sum_{j_{1}j_{2}j_{3}}\epsilon_{j_{1}j_{2}j_{3}}\int\limits_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\sum_{\sigma ma} \tag{45}\] \[\left[(t^{a})_{mj_{1}}\frac{\Theta(x_{1}-x_{g})}{x_{1}-x_{g}}\,\hat{\psi}_{q\to qg}(p_{1};p_{1}-k_{g},k_{g})|m,p_{1}-k_{g};\,j_{2},p_{2};\,j_{3},p_{3}\rangle\right.\] \[\left.+(t^{a})_{mj_{2}}\frac{\Theta(x_{2}-x_{g})}{x_{2}-x_{g}}\,\hat{\psi}_{q\to qg}(p_{2};p_{2}-k_{g},k_{g})|j_{1},p_{1};\,m,p_{2}-k_{g};\,j_{3},p_{3}\rangle\right.\] \[\left.+(t^{a})_{mj_{3}}\frac{\Theta(x_{3}-x_{g})}{x_{3}-x_{g}}\,\hat{\psi}_{q\to qg}(p_{3};p_{3}-k_{g},k_{g})|j_{1},p_{1};\,j_{2},p_{2};\,m,p_{3}-k_{g}\rangle\right]\otimes|a,k_{g},x_{g},\sigma\rangle\. \] The integration measures here, \([{\rm d}x_{i}]\) and \([{\rm d}^{2}k_{i}]\), pertain to coordinates of the _parent_ quarks. We have cut off the integration over the light-cone momentum fraction of the gluon \(x_{g}\) by \(\Delta x\) to regularize the soft singularity in QCD.
That is, we prohibit gluon emission into the lowest "bin" of \(x_{g}\). The light-cone gauge Fock space amplitude for the \(qg\) state of a quark in the soft gluon approximation in \(D=4\) dimensions is \[\hat{\psi}_{q\to qg}(p;k_{q},k_{g})=2\,\frac{x_{p}}{k_{g}^{2}+\Delta^{2}}\,\vec{k}_{g}\cdot\vec{\epsilon}_{\sigma}^{*} \tag{46}\] where \(x_{p}=p^{+}/P^{+}\), and \(\Delta^{2}\) is a regulator for the collinear singularity. Physically, the regularization is provided by the finite size of the color singlet state which the emitter is a part of. Thus the magnitude of the regulator \(\Delta\) is of order \(\Lambda_{\rm QCD}\), or in our case of the order of the inverse size of the model proton wave function set by the parameter \(\beta\). It is much smaller than the inverse radius of the cutout \(\bar{A}\). Projecting on the Fock space state \(|\alpha\rangle\), where \(\alpha\) denotes a set of four momentum fractions \(x_{i}\), transverse positions \(\vec{r}_{i}\) and colors \(i_{1},i_{2},i_{3},a\), we obtain \[\langle\alpha|P^{+},\vec{R}=0\rangle_{{\cal O}(g)} = 2g\,\frac{|{\cal N}|^{2}}{(2\pi)^{2}}\,(2\pi)^{3}\,\delta\left(1-\sum x_{i}\right)\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\frac{1}{\sqrt{6}}\int[{\rm d}^{2}k_{i}]\,e^{i\sum\vec{k}_{i}\cdot\vec{r}_{i}}\,\frac{\vec{k}_{g}\cdot\vec{\epsilon}_{\sigma}^{*}}{k_{g}^{2}+\Delta^{2}} \tag{47}\] \[\sum_{j}\left[\epsilon_{ji_{2}i_{3}}\,(t^{a})_{i_{1}j}\,\,\Psi_{qqq}(k_{1}+k_{g};k_{2};k_{3})+\epsilon_{i_{1}ji_{3}}\,(t^{a})_{i_{2}j}\,\,\Psi_{qqq}(k_{1};k_{2}+k_{g};k_{3})\right.\] \[\left.+\epsilon_{i_{1}i_{2}j}\,(t^{a})_{i_{3}j}\,\,\Psi_{qqq}(k_{1};k_{2};k_{3}+k_{g})\right]\. \] To properly account for probability conservation we also need to include the \({\cal O}(g^{2})\) virtual corrections to \(\Psi_{qqq}\). There are two types of such corrections. The first one arises due to emission and reabsorption of a gluon by one of the quarks, and amounts to multiplying the momentum space quark state vectors in eq. (11) by the wave function renormalization factor \[\left(Z_{q}(x_{1})\,Z_{q}(x_{2})\,Z_{q}(x_{3})\right)^{1/2}=1-\frac{1}{2}\left(C_{q}(x_{1})+C_{q}(x_{2})+C_{q}(x_{3})\right)\, \tag{48}\] with \[C_{q}(x_{1}) = \frac{g^{2}C_{F}}{4\pi^{2}}\int_{\Delta x/x_{1}}^{1}\frac{{\rm d}z}{z}\,A_{0}(\Delta^{2})\, \tag{49}\] \[A_{0}(\Delta^{2}) = 4\pi\int\frac{{\rm d}^{2}n}{(2\pi)^{2}}\frac{1}{\vec{n}^{2}+\Delta^{2}}\.\] Again, a cutoff \(\Delta x\) on the momentum fraction of the gluon was introduced here. We regulate \(A_{0}(\Delta^{2})\) in the UV by a Pauli-Villars type regulator \[A_{0}^{\rm reg}(\Lambda^{2}/\Delta^{2})=A_{0}(\Delta^{2})-A_{0}(\Lambda^{2})=4\pi\int\frac{{\rm d}^{2}n}{(2\pi)^{2}}\left[\frac{1}{\vec{n}^{2}+\Delta^{2}}-\frac{1}{\vec{n}^{2}+\Lambda^{2}}\right]=\log\frac{\Lambda^{2}}{\Delta^{2}}\, \tag{50}\] where \(\Lambda^{2}\) is a UV cutoff. Then, \[C_{q}^{\rm reg}(x_{1})=\frac{g^{2}C_{F}}{4\pi^{2}}\,\log\frac{x_{1}}{\Delta x}\,\log\frac{\Lambda^{2}}{\Delta^{2}}. \tag{51}\] We were forced to introduce the momentum UV regulator in the present calculation in order to regulate gluon emissions at short transverse distances. Recall that earlier we had to introduce a similar (coordinate space) regulator \(a\) in order to define probabilities and entropy for a continuous system, e.g. in (25).
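The regulated integral (50) is elementary (with \(4\pi\,{\rm d}^{2}n/(2\pi)^{2}={\rm d}(n^{2})\) after the angular integration) and can be cross-checked in one line; a short numerical sketch with scipy (our own illustration, arbitrary regulator values):

```python
import numpy as np
from scipy import integrate

# Check eq. (50): the Pauli-Villars regulated A0 equals log(Lambda^2/Delta^2).
Delta2, Lambda2 = 0.3, 50.0
val, err = integrate.quad(lambda n2: 1.0 / (n2 + Delta2) - 1.0 / (n2 + Lambda2),
                          0.0, np.inf)
print(val, np.log(Lambda2 / Delta2))  # agree to quad accuracy
```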
The two regulators of course should not be considered independent. In the following we take them to be related as \(\Lambda^{2}=1/a^{2}\), in the same way as we took the regulator of the soft divergence to be equal to the "lattice spacing" in the longitudinal momentum space \(\Delta x\). The second virtual correction to \(\Psi_{\rm qqq}\) is due to the exchange of a gluon between any pair of quarks. Let quark 1 emit and quark 2 absorb the gluon in \(|P\rangle\); we then have (again for \(x_{g}\to 0\)): \[|P^{+},\vec{P}\rangle_{{\cal O}(g^{2})}^{12} = \int[{\rm d}x_{i}]\int[{\rm d}^{2}k_{i}]\,\Psi_{\rm qqq}\,(k_{1};k_{2};k_{3})\;\;\frac{1}{\sqrt{6}}\sum_{j_{1}j_{2}j_{3}}\epsilon_{j_{1}j_{2}j_{3}} \tag{52}\] \[g^{2}\sum_{\sigma,a,n,m}(t^{a})_{mj_{1}}(t^{a})_{nj_{2}}\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\frac{1}{x_{1}}\,\hat{\psi}_{q\to qg}(p_{1};p_{1}-k_{g},k_{g})\] \[\frac{1}{x_{2}}\,\hat{\psi}_{qg\to q}(p_{2},k_{g};p_{2}+k_{g})\ |m,p_{1}-k_{g};\,n,p_{2}+k_{g};\,j_{3},p_{3}\rangle\ \.\] Here, the amplitude for the absorption of a gluon by a quark is \[\hat{\psi}_{qg\to q}(k_{q},k_{g};p)=-2x_{p}\,\frac{\vec{k}_{g}\cdot\vec{\epsilon}_{\sigma}}{k_{g}^{2}+\Delta^{2}}. \tag{53}\] We can now sum over gluon polarizations, \(\sum_{\sigma}(\vec{k}_{g}\cdot\vec{\epsilon}_{\sigma}^{*})(\vec{k}_{g}\cdot\vec{\epsilon}_{\sigma})=k_{g}^{2}\). Changing variables, \(\vec{k}_{1}\to\vec{k}_{1}+\vec{k}_{g}\) and \(\vec{k}_{2}\to\vec{k}_{2}-\vec{k}_{g}\), we obtain \[|P^{+},\vec{P}\rangle_{{\cal O}(g^{2})}^{12} = -4g^{2}\int[{\rm d}x_{i}]\int[{\rm d}^{2}k_{i}]\,\frac{1}{\sqrt{6}}\sum_{j_{1}j_{2}j_{3}}\epsilon_{j_{1}j_{2}j_{3}}\sum_{a,n,m}(t^{a})_{mj_{1}}(t^{a})_{nj_{2}} \tag{54}\] \[\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\Psi_{\rm qqq}\,(k_{1}+k_{g};k_{2}-k_{g};k_{3})\ \frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\ |m,p_{1};\,n,p_{2};\,j_{3},p_{3}\rangle\.\] Adding analogous contributions corresponding to gluon exchanges between quarks 1, 3, and 2, 3 we finally have \[|P^{+},\vec{R}=0\rangle_{{\cal O}(g^{2})} = -4g^{2}\int[{\rm d}x_{i}]\int[{\rm d}^{2}r_{i}]\,\frac{1}{\sqrt{6}}\sum_{j_{1}j_{2}j_{3}}\epsilon_{j_{1}j_{2}j_{3}}\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\int[{\rm d}^{2}k_{i}]\,e^{i\sum\vec{k}_{i}\cdot\vec{r}_{i}}\ \sum_{a,n,m} \tag{55}\] \[\left[(t^{a})_{mj_{1}}(t^{a})_{nj_{2}}\Psi_{\rm qqq}\,(k_{1}+k_{g};k_{2}-k_{g};k_{3})\ |m,x_{1},\vec{r}_{1};\,n,x_{2},\vec{r}_{2};\,j_{3},x_{3},\vec{r}_{3}\rangle\right.\] \[\left.+(t^{a})_{mj_{1}}(t^{a})_{nj_{3}}\Psi_{\rm qqq}\,(k_{1}+k_{g};k_{2};k_{3}-k_{g})\ |m,x_{1},\vec{r}_{1};\,j_{2},x_{2},\vec{r}_{2};\,n,x_{3},\vec{r}_{3}\rangle\right.\] \[\left.+(t^{a})_{mj_{2}}(t^{a})_{nj_{3}}\Psi_{\rm qqq}\,(k_{1};k_{2}+k_{g};k_{3}-k_{g})\ \left|j_{1},x_{1},\vec{r}_{1};\,m,x_{2},\vec{r}_{2};\,n,x_{3},\vec{r}_{3}\rangle\right]\.\] Projecting this onto the three quark Fock state \(\langle j_{i},x_{i},\vec{r}_{i}|\) gives \[\langle j_{i},x_{i},\vec{r}_{i}|P^{+},\vec{R}=0\rangle_{{\cal O}(g^{2})} = -4g^{2}\,\frac{|{\cal N}|^{2}}{(2\pi)^{2}}\,\delta\left(1-\sum x_{i}\right)\,(2\pi)^{3}\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\,\int[{\rm d}^{2}q_{i}]\,e^{i\sum\vec{q}_{i}\cdot\vec{r}_{i}}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}} \tag{56}\] \[\sum_{i_{1}i_{2}i_{3}}\left[\frac{1}{\sqrt{6}}\epsilon_{i_{1}i_{2}j_{3}}(t^{a})_{j_{1}i_{1}}(t^{a})_{j_{2}i_{2}}\Psi_{\rm qqq}\left(x_{1},
\vec{q}_{1}+\vec{k}_{g};x_{2},\vec{q}_{2}-\vec{k}_{g};x_{3},\vec{q}_{3}\right)\right.\] \[\left.+\frac{1}{\sqrt{6}}\epsilon_{i_{1}j_{2}i_{3}}(t^{a})_{j_{1}i_{1}}(t^{a})_{j_{3}i_{3}}\Psi_{\rm qqq}\left(x_{1},\vec{q}_{1}+\vec{k}_{g};x_{2},\vec{q}_{2};x_{3},\vec{q}_{3}-\vec{k}_{g}\right)\right.\] \[\left.+\frac{1}{\sqrt{6}}\epsilon_{j_{1}i_{2}i_{3}}(t^{a})_{j_{2}i_{2}}(t^{a})_{j_{3}i_{3}}\Psi_{\rm qqq}\left(x_{1},\vec{q}_{1};x_{2},\vec{q}_{2}+\vec{k}_{g};x_{3},\vec{q}_{3}-\vec{k}_{g}\right)\right]\.\] Alternatively, in terms of the position space wave function, \[\langle j_{i},x_{i},\vec{r}_{i}|P^{+},\vec{R}=0\rangle_{{\cal O}(g^{2})} = -4g^{2}\,\frac{|{\cal N}|^{2}}{(2\pi)^{2}}\,\delta\left(1-\sum x_{i}\right)\,(2\pi)^{3}\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\,\Psi_{\rm qqq}\,(x_{i},\vec{r}_{i})\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\] \[\sum_{i_{1}i_{2}i_{3}}\left[\frac{1}{\sqrt{6}}\epsilon_{i_{1}i_{2}j_{3}}(t^{a})_{j_{1}i_{1}}(t^{a})_{j_{2}i_{2}}\,e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2})}\right.\] \[\left.+\frac{1}{\sqrt{6}}\epsilon_{i_{1}j_{2}i_{3}}(t^{a})_{j_{1}i_{1}}(t^{a})_{j_{3}i_{3}}\,e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3})}\right.\] \[\left.+\frac{1}{\sqrt{6}}\epsilon_{j_{1}i_{2}i_{3}}(t^{a})_{j_{2}i_{2}}(t^{a})_{j_{3}i_{3}}\,e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3})}\,\right]. \tag{57}\] ### First perturbative correction to the density matrix We are now in a position to calculate the perturbative correction to the density matrix. As already mentioned at the beginning of this section, eq. (44), after tracing over the colors of the quarks the density matrix takes the form \[\rho\,=\,\begin{pmatrix}\rho^{qqq}&0\\ 0&\rho^{qqqg}\end{pmatrix}\,. \tag{58}\] Note that since \(\rho^{qqq}\) and \(\rho^{qqqg}\) are probability densities on subspaces with different numbers of particles, they have different dimensions. The trace operations over the two entries are given by \(\mathrm{d}x_{1}/(2x_{1})\,\mathrm{d}x_{2}/(2x_{2})\,\mathrm{d}x_{3}/(2x_{3})\,\delta\,(1-\sum_{i}x_{i})\,\,\mathrm{d}^{2}r_{1}\,\mathrm{d}^{2}r_{2}\,\mathrm{d}^{2}r_{3}\,\delta(\sum x_{i}\vec{r}_{i})\) and \(\,\mathrm{d}x_{1}/(2x_{1})\,\mathrm{d}x_{2}/(2x_{2})\,\mathrm{d}x_{3}/(2x_{3})\,\delta\,(1-\sum_{i}x_{i})\,\,\mathrm{d}x_{g}/(2x_{g})\,\mathrm{d}^{2}r_{1}\,\mathrm{d}^{2}r_{2}\,\mathrm{d}^{2}r_{3}\,2\pi\mathrm{d}^{2}r_{g}\,\delta(\sum x_{i}\vec{r}_{i})\), for \(\rho^{qqq}\) and \(\rho^{qqqg}\), respectively. Hence, if one is interested in probabilities given by \(\rho\) (or its purity, entropy etc.) one must multiply \(\rho^{qqqg}\) by \(2\pi\,a^{2}\Delta x/2x_{g}\), as we did in the previous sections. Let us now compute the matrix (58). We begin with \(\rho^{qqq}\) which gives the probability density on the three-quark state Hilbert space and includes \(\mathcal{O}(g^{2})\) virtual corrections. The first correction is to multiply the \(\mathcal{O}(1)\) non-perturbative density matrix from eq. (14) by six wave function renormalization factors, \(\prod Z^{1/2}(x_{i})=1-\sum C_{q}^{\mathrm{reg}}(x_{i})/2\). Secondly, we add a term similar to eq. (14) where we replace one of the non-perturbative 3-quark states of the proton by the \(\mathcal{O}(g^{2})\) virtual correction due to the exchange of a gluon by two quarks, eq. (56). We also trace over the quark colors.
In all, \[\rho^{qqq}_{\alpha\alpha^{\prime}} =\left[1-\frac{1}{2}\left(C_{q}^{\mathrm{reg}}(x_{1})+C_{q}^{\mathrm{reg}}(x_{2})+C_{q}^{\mathrm{reg}}(x_{3})+C_{q}^{\mathrm{reg}}(x_{1}^{\prime})+C_{q}^{\mathrm{reg}}(x_{2}^{\prime})+C_{q}^{\mathrm{reg}}(x_{3}^{\prime})\right)\right]\,\,\Psi_{\mathrm{qqq}}^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\,\Psi_{\mathrm{qqq}}(x_{i},\vec{r}_{i})\] \[+2g^{2}C_{F}\,\int[\mathrm{d}^{2}q_{i}]\,\int_{\Delta x}\frac{\mathrm{d}x_{g}}{2x_{g}}\frac{\mathrm{d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\] \[\left\{e^{i\sum\vec{q}_{i}\cdot\vec{r}_{i}}\,\Psi_{\mathrm{qqq}}^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\left[\Psi_{\mathrm{qqq}}\left(x_{1},\vec{q}_{1}+\vec{k}_{g};x_{2},\vec{q}_{2}-\vec{k}_{g};x_{3},\vec{q}_{3}\right)+\Psi_{\mathrm{qqq}}\left(x_{1},\vec{q}_{1}+\vec{k}_{g};x_{2},\vec{q}_{2};x_{3},\vec{q}_{3}-\vec{k}_{g}\right)\right.\right.\] \[\left.\left.+\Psi_{\mathrm{qqq}}\left(x_{1},\vec{q}_{1};x_{2},\vec{q}_{2}+\vec{k}_{g};x_{3},\vec{q}_{3}-\vec{k}_{g}\right)\right]\right.\] \[\left.+e^{-i\sum\vec{q}_{i}\cdot\vec{r}_{i}^{\prime}}\,\Psi_{\mathrm{qqq}}(x_{i},\vec{r}_{i})\,\left[\Psi_{\mathrm{qqq}}^{*}\left(x_{1}^{\prime},\vec{q}_{1}+\vec{k}_{g};x_{2}^{\prime},\vec{q}_{2}-\vec{k}_{g};x_{3}^{\prime},\vec{q}_{3}\right)+\Psi_{\mathrm{qqq}}^{*}\left(x_{1}^{\prime},\vec{q}_{1}+\vec{k}_{g};x_{2}^{\prime},\vec{q}_{2};x_{3}^{\prime},\vec{q}_{3}-\vec{k}_{g}\right)\right.\right.\] \[\left.\left.\left.+\Psi_{\mathrm{qqq}}^{*}\left(x_{1}^{\prime},\vec{q}_{1};x_{2}^{\prime},\vec{q}_{2}+\vec{k}_{g};x_{3}^{\prime},\vec{q}_{3}-\vec{k}_{g}\right)\right]\right\}\,\,. \tag{59}\] \[=\left[1-\frac{1}{2}\left(C_{q}^{\mathrm{reg}}(x_{1})+C_{q}^{\mathrm{reg}}(x_{2})+C_{q}^{\mathrm{reg}}(x_{3})+C_{q}^{\mathrm{reg}}(x_{1}^{\prime})+C_{q}^{\mathrm{reg}}(x_{2}^{\prime})+C_{q}^{\mathrm{reg}}(x_{3}^{\prime})\right)\right]\,\,\Psi_{\mathrm{qqq}}^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\,\Psi_{\mathrm{qqq}}(x_{i},\vec{r}_{i})\] \[+2g^{2}C_{F}\,\Psi_{\mathrm{qqq}}^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\Psi_{\mathrm{qqq}}(x_{i},\vec{r}_{i})\,\int_{\Delta x}\frac{\mathrm{d}x_{g}}{2x_{g}}\frac{\mathrm{d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\] \[\left[e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2})}+e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3})}+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3})}+e^{i\vec{k}_{g}\cdot(\vec{r}_{1}^{\prime}-\vec{r}_{2}^{\prime})}\,+e^{i\vec{k}_{g}\cdot(\vec{r}_{1}^{\prime}-\vec{r}_{3}^{\prime})}\,+e^{i\vec{k}_{g}\cdot(\vec{r}_{2}^{\prime}-\vec{r}_{3}^{\prime})}\,\right]. \tag{60}\] Here, as in eq. (16), \(\alpha=\{x_{i},\vec{r}_{i}|\,\sum x_{i}=1,\sum x_{i}\vec{r}_{i}=0\}\) and \(\alpha^{\prime}=\{x_{i}^{\prime},\vec{r}_{i}^{\prime}|\,\sum x_{i}^{\prime}=1,\sum x_{i}^{\prime}\vec{r}_{i}^{\prime}=0\}\) denote two sets of quark LC momentum fractions and transverse positions. Now we proceed to \(\rho^{qqqg}\). We trace it over quark and gluon colors, and, in addition, for simplicity over the gluon polarizations. Using eq.
(47) in the definition (14) we obtain \[\rho^{qqqg}_{\alpha\alpha^{\prime}} = 2g^{2}C_{F}\,\int[\mathrm{d}^{2}k_{i}]\,[\mathrm{d}^{2}k_{i}^{\prime}]\,e^{i\sum\vec{k}_{i}\cdot\vec{r}_{i}-i\sum\vec{k}_{i}^{\prime}\cdot\vec{r}_{i}^{\prime}}\,\frac{\vec{k}_{g}\cdot\vec{k}_{g}^{\prime}}{(k_{g}^{2}+\Delta^{2})\,(k_{g}^{\prime\,2}+\Delta^{2})} \tag{61}\] \[\big{\{}2\big{(}\Psi^{*}_{\rm qqq}(k_{1}^{\prime}+k_{g}^{\prime};k_{2}^{\prime};k_{3}^{\prime})\,\Psi_{\rm qqq}(k_{1}+k_{g};k_{2};k_{3})+\Psi^{*}_{\rm qqq}(k_{1}^{\prime};k_{2}^{\prime}+k_{g}^{\prime};k_{3}^{\prime})\,\Psi_{\rm qqq}(k_{1};k_{2}+k_{g};k_{3})\] \[+\Psi^{*}_{\rm qqq}(k_{1}^{\prime};k_{2}^{\prime};k_{3}^{\prime}+k_{g}^{\prime})\,\Psi_{\rm qqq}(k_{1};k_{2};k_{3}+k_{g})\big{)}\] \[-\Psi_{\rm qqq}(k_{1}+k_{g};k_{2};k_{3})\,\Psi^{*}_{\rm qqq}(k_{1}^{\prime};k_{2}^{\prime}+k_{g}^{\prime};k_{3}^{\prime})-\Psi_{\rm qqq}(k_{1}+k_{g};k_{2};k_{3})\,\Psi^{*}_{\rm qqq}(k_{1}^{\prime};k_{2}^{\prime};k_{3}^{\prime}+k_{g}^{\prime})\] \[-\Psi_{\rm qqq}(k_{1};k_{2}+k_{g};k_{3})\,\Psi^{*}_{\rm qqq}(k_{1}^{\prime}+k_{g}^{\prime};k_{2}^{\prime};k_{3}^{\prime})-\Psi_{\rm qqq}(k_{1};k_{2}+k_{g};k_{3})\,\Psi^{*}_{\rm qqq}(k_{1}^{\prime};k_{2}^{\prime};k_{3}^{\prime}+k_{g}^{\prime})\] \[-\Psi_{\rm qqq}(k_{1};k_{2};k_{3}+k_{g})\,\Psi^{*}_{\rm qqq}(k_{1}^{\prime}+k_{g}^{\prime};k_{2}^{\prime};k_{3}^{\prime})-\Psi_{\rm qqq}(k_{1};k_{2};k_{3}+k_{g})\,\Psi^{*}_{\rm qqq}(k_{1}^{\prime};k_{2}^{\prime}+k_{g}^{\prime};k_{3}^{\prime})\big{\}}\ \.\] Here the sums in the exponentials run over the three quarks and the gluon, and \(\alpha=\{x_{i},\vec{r}_{i}|\,\sum x_{i}=1,\sum x_{i}\vec{r}_{i}=0\}\) and \(\alpha^{\prime}=\{x_{i}^{\prime},\vec{r}_{i}^{\prime}|\,\sum x_{i}^{\prime}=1,\sum x_{i}^{\prime}\vec{r}_{i}^{\prime}=0\}\) denote two sets of quark and gluon momentum fractions and transverse positions. In Appendix C we show that the density matrix is indeed properly normalized. ## V Entanglement entropy of the perturbative density matrix In this section we calculate the entanglement entropy of the density matrix which includes one perturbatively emitted gluon. We will change our strategy somewhat to simplify the calculation. Integrating over all degrees of freedom in \(A\) and calculating the entanglement entropy turns out to be rather awkward as there are many degrees of freedom in \(\bar{A}\). Instead we choose to reduce the density matrix to a partial set of degrees of freedom in the whole proton wave function, and only then we integrate over \(A\). We will follow two different routes. In subsection V.1 we reduce the density matrix calculated above by tracing over the gluon degrees of freedom in the whole space. The resulting quark density matrix is then traced over \(A\) and the associated entanglement entropy is calculated. Note that already after integrating over the gluon degrees of freedom the quark density matrix does not describe a pure state and therefore in all probability carries a nonvanishing entropy (which we do not calculate here). Thus the entropy we calculate is not exactly the entanglement entropy between the two spatial regions \(A\) and \(\overline{A}\), but instead measures entanglement of quarks in \(\overline{A}\) with the rest of the proton wave function4 (quarks in \(A\) and gluon anywhere). Footnote 4: Quantum correlations of regions \(A\) and \(\overline{A}\) could be analyzed using entanglement measures other than the von Neumann entropy, which apply also to mixed states. One such example is entanglement negativity [36; 37] which has been used recently to study two-quark azimuthal correlations in the light-cone wave function of the proton [23]. In subsection V.2 we perform a complementary procedure: we integrate over the quark degrees of freedom in the whole space, and then reduce the resulting gluon density matrix over \(A\) and calculate the entanglement entropy.
Again, this entropy measures entanglement of gluons in \(\overline{A}\) with the rest of the proton wave function. ### Entanglement entropy of quarks Let us construct the three-quark density matrix by tracing out the gluon degrees of freedom in the whole space. Integrating over the gluon leads to the density matrix: \[\rho\,=\,\rho^{qqq}+{\rm tr}_{g}\,\rho^{qqqg}\,. \tag{62}\] The first term, \(\rho^{qqq}\) is given in eq. (60). To trace \(\rho^{qqqg}_{\alpha\alpha^{\prime}}\) over the gluon we set \(\vec{r}_{g}^{\,\prime}=\vec{r}_{g}\) in eq. (61), and integrate with the measure \({\rm d}x_{g}/(2x_{g})\,2\pi{\rm d}^{2}r_{g}\). In principle, the upper limit of \(x_{g}\) in each term of (61) is different. However, in the small-\(x_{g}\) approximation which we are employing here, only the leading \(\log 1/x\) contribution is important and we may replace the upper limits by a typical quark momentum fraction \(\langle x_{q}\rangle\). We then obtain \[{\rm tr}_{g}\,\rho^{qqqg}_{\alpha\alpha^{\prime}} = 2g^{2}C_{F}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{1}{k_{g}^{2}+\Delta^{2}}\,\int[{\rm d}^{2}k_{i}]\,[{\rm d}^{2}k_{i}^{\prime}]\,\,e^{i\sum\vec{k}_{i}\cdot\vec{r}_{i}-i\sum\vec{k}_{i}^{\prime}\cdot\vec{r}_{i}^{\prime}}\,\,\Psi^{*}_{\rm qqq}(k_{i}^{\prime})\,\Psi_{\rm qqq}(k_{i}) \tag{63}\] \[\left\{2\left(e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{2}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{3}^{\prime})}\,\right)\right.\] \[\left.-e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2}^{\prime})}\,-e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3}^{\prime})}\,-e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{1}^{\prime})}\,-e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3}^{\prime})}\,-e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{1}^{\prime})}\,-e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{2}^{\prime})}\,\right\}\;.\] This can be written in terms of the position space wave functions (17), \[{\rm tr}_{g}\,\rho^{qqqg}_{\alpha\alpha^{\prime}} = 2g^{2}C_{F}\,\Psi^{*}_{\rm qqq}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\Psi_{\rm qqq}(x_{i},\vec{r}_{i})\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\left\{\left(\frac{1}{k_{g}^{2}+\Delta^{2}}-\frac{1}{k_{g}^{2}+\Lambda^{2}}\right)\,2\left(e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{2}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{3}^{\prime})}\,\right)\right. \tag{64}\] \[\left.-\frac{1}{k_{g}^{2}+\Delta^{2}}\left[e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{2}^{\prime})}\right]\right\}\,\] where we have reinstated the UV regulator \(\Lambda\)5. Footnote 5: The dependence on the IR cutoffs \(\Delta x\) and \(\Delta^{2}\), and on the UV regulator \(\Lambda^{2}\) cancels when eq. (62) is traced over the quark degrees of freedom, as shown in Appendix C. Let us now discuss the entropy of the density matrix (62). Both terms in (62) are proportional to the LO density matrix \(\rho_{\alpha\alpha^{\prime}}^{\rm LO}=\Psi_{\rm qqq}^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})\,\,\Psi_{\rm qqq}(x_{i},\vec{r}_{i})\) discussed in sec.
II.1: \[\rho_{\alpha\alpha^{\prime}}^{\rm qqq}=F(\alpha,\alpha^{\prime})\,\rho_{\alpha\alpha^{\prime}}^{\rm LO}\ \ \ \ \,\ \ \ \ \ {\rm tr}_{g}\,\rho_{\alpha\alpha^{\prime}}^{\rm qqqg}=G(\alpha,\alpha^{\prime})\,\rho_{\alpha\alpha^{\prime}}^{\rm LO} \tag{65}\] with \[F(\alpha,\alpha^{\prime}) = 1-3C_{q}^{\rm reg}(\langle x_{q}\rangle)+2g^{2}C_{F}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{1}{k_{g}^{2}+\Delta^{2}}\] \[\left\{e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3})}+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3})}+e^{i\vec{k}_{g}\cdot(\vec{r}_{1}^{\prime}-\vec{r}_{2}^{\prime})}\,+e^{i\vec{k}_{g}\cdot(\vec{r}_{1}^{\prime}-\vec{r}_{3}^{\prime})}\,+e^{i\vec{k}_{g}\cdot(\vec{r}_{2}^{\prime}-\vec{r}_{3}^{\prime})}\,\right\}\] \[= 1-3C_{q}^{\rm reg}(\langle x_{q}\rangle)+\frac{2g^{2}C_{F}}{4\pi^{2}}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\left\{K_{0}(|\vec{r}_{1}-\vec{r}_{2}|\,\Delta)+K_{0}(|\vec{r}_{1}-\vec{r}_{3}|\,\Delta)+K_{0}(|\vec{r}_{2}-\vec{r}_{3}|\,\Delta)\right.\] \[\left.\qquad\qquad\qquad+K_{0}(|\vec{r}_{1}^{\prime}-\vec{r}_{2}^{\prime}|\,\Delta)+K_{0}(|\vec{r}_{1}^{\prime}-\vec{r}_{3}^{\prime}|\,\Delta)+K_{0}(|\vec{r}_{2}^{\prime}-\vec{r}_{3}^{\prime}|\,\Delta)\right\}\] \[G(\alpha,\alpha^{\prime}) = 2g^{2}C_{F}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\left\{\left(\frac{1}{k_{g}^{2}+\Delta^{2}}-\frac{1}{k_{g}^{2}+\Lambda^{2}}\,\right)\,2\left(e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{2}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{3}^{\prime})}\,\right)\right.\] \[\left.-\frac{1}{k_{g}^{2}+\Delta^{2}}\left[e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{1}^{\prime})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{3}-\vec{r}_{2}^{\prime})}\right]\right\}\] \[= \frac{2g^{2}C_{F}}{4\pi^{2}}\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\,\left\{2[K_{0}(|\vec{r}_{1}-\vec{r}_{1}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{1}-\vec{r}_{1}^{\prime}|\,\Lambda)]+2[K_{0}(|\vec{r}_{2}-\vec{r}_{2}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{2}-\vec{r}_{2}^{\prime}|\,\Lambda)]\right.\] \[\left.\qquad\qquad+2[K_{0}(|\vec{r}_{3}-\vec{r}_{3}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{3}-\vec{r}_{3}^{\prime}|\,\Lambda)]\right.\] \[\left.\qquad\qquad-K_{0}(|\vec{r}_{1}-\vec{r}_{2}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{1}-\vec{r}_{3}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{2}-\vec{r}_{1}^{\prime}|\,\Delta)\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-K_{0}(|\vec{r}_{2}-\vec{r}_{3}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{3}-\vec{r}_{1}^{\prime}|\,\Delta)-K_{0}(|\vec{r}_{3}-\vec{r}_{2}^{\prime}|\,\Delta)\right\}. \tag{66}\] Interestingly, the diagonal matrix elements are unaffected by the presence of the gluon in the wave function, since \(F(\alpha,\alpha)+G(\alpha,\alpha)=1\) due to real-virtual cancellations. Also note that integration over the gluon reinstates the center of mass constraint for the coordinates of the three quarks.
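The Bessel-function forms in eq. (66) rest on the standard two-dimensional Fourier pair between \(1/(k^{2}+\Delta^{2})\) and \(K_{0}(\Delta r)/(2\pi)\). This can be verified numerically; a short scipy sketch in its exponentially convergent Hankel form (our own illustration, arbitrary parameter values):

```python
import numpy as np
from scipy import integrate, special

# The transform behind eq. (66): int d^2k/(2*pi)^2 e^{i k.r}/(k^2 + D^2)
# equals K0(D*r)/(2*pi); equivalently, in Hankel form,
#   int_0^inf r dr J0(k*r) K0(D*r) = 1 / (k^2 + D^2).
D, k = 0.55, 1.3
val, err = integrate.quad(lambda r: r * special.j0(k * r) * special.k0(D * r),
                          0.0, np.inf)
print(val, 1.0 / (k ** 2 + D ** 2))  # agree to quad accuracy
```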
After tracing over region \(A\) both terms become block-diagonal in the quark number basis, since the integration over the gluon results in a reduced density matrix with fixed number of particles. The sub-blocks correspond to 0, 1, 2, 3 quarks in \(\overline{A}\), like in sec. III: \[\mbox{tr}_{A}\,\rho^{qqq}\,=\,\begin{pmatrix}\rho_{0}^{(F)}&0&0&0\\ 0&\rho_{1}^{(F)}&0&0\\ 0&0&\rho_{2}^{(F)}&0\\ 0&0&0&\rho_{3}^{(F)}\end{pmatrix}\ \ \ \,\ \ \ \ \mbox{tr}_{A}\,\mbox{tr}_{g}\,\rho^{qqqg}\,=\,\begin{pmatrix}\rho_{0}^{(G)}&0&0&0\\ 0&\rho_{1}^{(G)}&0&0\\ 0&0&\rho_{2}^{(G)}&0\\ 0&0&0&\rho_{3}^{(G)}\end{pmatrix}. \tag{67}\] Recall from sec. III that the \(\rho_{2}\) and \(\rho_{3}\) matrices are not diagonal in coordinate space (and their off-diagonal elements do get modified at \({\cal O}(g^{2})\)) but that \(\rho_{1}\) is, due to the COM constraint. In the limit of small \(L\), \(\rho_{0}\) and \(\rho_{1}\) give the leading contribution \(\sim L^{2}\) to the entropy. For \(\rho_{0}\) all quarks are in the region \(A\) that we trace over, so \(\vec{r}_{i}=\vec{r}_{i}^{\prime}\) and only the diagonal matrix elements of the density matrix (62) contribute. Since \(F(\alpha,\alpha)+G(\alpha,\alpha)=1\), the constant \(\rho_{0}\) remains equal to its value at LO, for any \(L\). Hence, the derivative of \(S^{(0)}\) for \(L\to 0\) remains \(\partial S^{(0)}/\partial L\sim L\) with the same numerical coefficient as in eq. (36). Now we consider \(\rho_{1}\). Since this block is diagonal in the quark indices, we only need to consider \[(\rho_{1})_{\alpha\alpha}=3\Delta x\,a^{2}\int\frac{{\rm d}x_{1}{\rm d}x_{2}}{8x_{1}x_{2}x_{3}}\,\delta\left(1-\sum x_{i}\right)\int{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\delta\left(\sum x_{i}\vec{r}_{i}\right)\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\left[F(\vec{r}_{i})+G(\vec{r}_{i})\right]\,|\Psi(x_{i},\vec{r}_{i})|^{2}\ . \tag{68}\] Here \(\alpha=\{x_{3},\vec{r}_{3}\in\overline{A}\}\). The perturbative correction again cancels as the sum \(F(\vec{r}_{i})+G(\vec{r}_{i})=1\), and we return to the expression from sec. III. The trace \([\int({\rm d}x_{3}/\Delta x)\,({\rm d}^{2}r_{3}/a^{2})\,\Theta_{\overline{A}}(\vec{r}_{3})]\) vanishes at \(L=0\) so there is no contribution to \(S(L=0)\). For \(L>0\), \[S^{(1)}=-3\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\Theta_{\overline{A}}(\vec{r}_{3})\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,\left[F(\vec{r}_{i})+G(\vec{r}_{i})\right]\,\left|\Psi(x_{i},\vec{r}_{i})\right|^{2}\] \[\log\left(3\Delta x\,a^{2}\int[{\rm d}y_{i}]\,[{\rm d}^{2}s_{i}]\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\delta(x_{3}-y_{3})\,\Theta_{A}(\vec{s}_{1})\,\Theta_{A}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right). \tag{69}\] The derivative of \(S^{(1)}\) w.r.t. \(L\) for \(L\to 0\) is proportional to \(L\) with the coefficient given in eq. (38). To summarize, we find that due to real-virtual cancellations in gluon emission, the leading (at small \(L\)) term in the entanglement entropy of quarks is identical to that for the initial non-perturbative three-quark wave function. Let us now take a look at \(\rho_{2}\) (two quarks inside the circle separated by a typical distance of order \(L\)). It is given by the LO expression, eq. (26), times \(F(\vec{r}_{1},\vec{r}_{2},\vec{r}_{3};\vec{r}_{1}^{\prime},\vec{r}_{2}^{\prime},\vec{r}_{3})+G(\vec{r}_{1},\vec{r}_{2},\vec{r}_{3};\vec{r}_{1}^{\prime},\vec{r}_{2}^{\prime},\vec{r}_{3})\). Due to the COM constraint \(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2}=x_{1}^{\prime}\vec{r}_{1}^{\prime}+x_{2}^{\prime}\vec{r}_{2}^{\prime}=-x_{3}\vec{r}_{3}\), and \(x_{1}+x_{2}=x_{1}^{\prime}+x_{2}^{\prime}=1-x_{3}\), so that there is in fact no integral over the coordinates of the quark in \(A\) (\(\vec{r}_{3}\) or \(x_{3}\)) as those are completely determined by the coordinates and momentum fractions of the two quarks in \(\overline{A}\).
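In the next step we will use the small-argument behaviour of the Bessel functions, \(2K_{0}(r\Delta)-2K_{0}(r\Lambda)\to\log(\Lambda^{2}/\Delta^{2})\) for \(r\to 0\), which follows from \(K_{0}(z)\simeq-\log(z/2)-\gamma_{E}\). A quick numerical check (our own illustration, arbitrary regulator values):

```python
import numpy as np
from scipy import special

# Small-argument limit used below: 2*K0(r*Delta) - 2*K0(r*Lambda)
# approaches log(Lambda^2/Delta^2) as r -> 0.
Delta, Lam = 0.3, 30.0
for r in (1e-2, 1e-4, 1e-6):
    lhs = 2.0 * special.k0(r * Delta) - 2.0 * special.k0(r * Lam)
    print(r, lhs, np.log(Lam ** 2 / Delta ** 2))
```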
We need \(F+G\) for \(\vec{r}_{3}=\vec{r}_{3}^{\prime}\), and we shall use the position space (Bessel function) form of these functions from eq. (66). In the limit \(\vec{r}_{3}^{\prime}\to\vec{r}_{3}\) we have that \(2K_{0}(|\vec{r}_{3}-\vec{r}_{3}^{\prime}|\,\Delta)-2K_{0}(|\vec{r}_{3}-\vec{r}_{3}^{\prime}|\,\Lambda)\to\log\frac{\Lambda^{2}}{\Delta^{2}}\). Furthermore, we consider \(K_{0}(|\vec{r}_{1}-\vec{r}_{1}^{\prime}|\,\Lambda)\) and \(K_{0}(|\vec{r}_{2}-\vec{r}_{2}^{\prime}|\,\Lambda)\) to be exponentially small since generically \(|\vec{r}_{1}-\vec{r}_{1}^{\prime}|,|\vec{r}_{2}-\vec{r}_{2}^{\prime}|\sim L\) and \(L\Lambda\gg 1\). With that, and noting that \(L\Delta\ll 1\), we can write \[F+G \simeq 1-\frac{2g^{2}C_{F}}{4\pi^{2}}\,\log\frac{x_{q}}{\Delta x}\,\log\frac{\Lambda^{2}}{\Delta^{2}}+\frac{g^{2}C_{F}}{8\pi^{2}}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{x_{g}}\left[\log\frac{1}{(\vec{r}_{1}-\vec{r}_{2})^{2}\,\Delta^{2}}+\log\frac{1}{(\vec{r}_{1}^{\prime}-\vec{r}_{2}^{\prime})^{2}\,\Delta^{2}}+2\log\frac{1}{(\vec{r}_{1}-\vec{r}_{1}^{\prime})^{2}\,\Delta^{2}}\right. \tag{70}\] \[\left.+2\log\frac{1}{(\vec{r}_{2}-\vec{r}_{2}^{\prime})^{2}\,\Delta^{2}}-\log\frac{1}{(\vec{r}_{1}-\vec{r}_{2}^{\prime})^{2}\,\Delta^{2}}-\log\frac{1}{(\vec{r}_{2}-\vec{r}_{1}^{\prime})^{2}\,\Delta^{2}}\right]\] \[\simeq 1-\frac{2g^{2}C_{F}}{4\pi^{2}}\,\log\frac{x_{q}}{\Delta x}\,\log L^{2}\Lambda^{2}+\frac{g^{2}C_{F}}{8\pi^{2}}\,\log\frac{x_{q}}{\Delta x}\,\log\frac{(\vec{r}_{1}-\vec{r}_{2}^{\prime})^{2}\,(\vec{r}_{2}-\vec{r}_{1}^{\prime})^{2}}{(\vec{r}_{1}-\vec{r}_{2})^{2}\,(\vec{r}_{1}^{\prime}-\vec{r}_{2}^{\prime})^{2}}\] (71) \[\simeq 1-\frac{2g^{2}C_{F}}{4\pi^{2}}\,\log\frac{x_{q}}{\Delta x}\,\log L^{2}\Lambda^{2}. \tag{72}\] The equality here is valid with leading logarithmic accuracy, since in the second step, \(\log\frac{1}{(\vec{r}_{1}-\vec{r}_{1}^{\prime})^{2}\,\Delta^{2}}\) and \(\log\frac{1}{(\vec{r}_{2}-\vec{r}_{2}^{\prime})^{2}\,\Delta^{2}}\) were replaced by \(\log\frac{1}{L^{2}\Delta^{2}}\); and in the last step an \(O(1)\) (non-logarithmic) term was dropped. With these simplifications \(F+G\) is just a number, i.e. it does not modify the matrix structure of \(\rho_{2}\) relative to the leading order, but only multiplies the entire matrix by a numerical prefactor. Its contribution to the entropy \(S^{(2)}\) is given by the last term in eq. (33) with the substitution \(|\Psi(x_{i},\vec{r}_{i})|^{2}\to(F+G)\,|\Psi(x_{i},\vec{r}_{i})|^{2}\): \[S^{(2)}=-3\left(1-\frac{2g^{2}C_{F}}{4\pi^{2}}\log\frac{x_{q}}{\Delta x}\,\log L^{2}\Lambda^{2}\right)\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\Theta_{A}(\vec{r}_{3})\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,|\Psi(x_{i},\vec{r}_{i})|^{2}\] \[\log\left(3\Delta x\,a^{2}\int[{\rm d}y_{i}]\,[{\rm d}^{2}s_{i}]\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\delta(x_{3}-y_{3})\,\Theta_{\overline{A}}(\vec{s}_{1})\,\Theta_{\overline{A}}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right). \tag{73}\] Figure 1: Left (virtual correction): the transverse position of the gluon emitting and gluon absorbing quark is the same; hence here the gluon transverse momentum is integrated up to \(\Lambda\), and we obtain a contribution \(\sim\log\frac{\Lambda^{2}}{\Delta^{2}}\). Here, with leading logarithmic accuracy we have omitted the factor \(F+G\) under the logarithm. Note that, as opposed to the entropy associated with \(\rho_{1}\), this contribution to the entropy does receive corrections from gluon emission.
This arises because the two quarks may be at different positions in the amplitude (1, 2) and the conjugate amplitude (1', 2'). The mismatch between these positions is generically of order \(L\), cf. Fig. 1. For such configurations the contribution of the "real" diagrams, i.e. the diagrams where a quark exchanges a gluon with itself across the cut, is proportional to \(\log\,\frac{1}{L^{2}\Delta^{2}}\) rather than \(\log\frac{\Lambda^{2}}{\Delta^{2}}\) as is the case for the "virtual" diagrams (where the gluon is exchanged in the amplitude or the complex conjugate amplitude). Thus the real-virtual cancellation is incomplete and leads to the logarithmic correction in (73). The real-virtual cancellation essentially occurs only for gluons emitted outside \(\bar{A}\), but not for those emitted inside \(\bar{A}\). Also note that the perturbative correction to \(S^{(2)}\) is negative, suggesting stronger correlations between quarks when perturbative gluon emission is included. We note again, that \(S^{(2)}\) is subleading for small \(L^{2}\), and thus only provides a small correction to the quark entanglement entropy. ### Entanglement entropy of the gluon We now integrate over the quark degrees of freedom in the whole space. The resulting density matrix for the gluon has the general structure \[\rho\,=\,\begin{pmatrix}{\rm tr}_{\rm qqq}\,\rho^{qqq}&0\\ 0&\rho^{g}\end{pmatrix}\,. \tag{74}\] The first block is just a number which is equal to the probability that no gluons are present in the wave function. It is given by the integral of the diagonal of eq. (60) over \([{\rm d}x_{i}]\) and \([{\rm d}^{2}r_{i}]\): \[{\rm tr}_{\rm qqq}\,\rho^{qqq}=1-3C_{q}^{\rm reg}(\langle x_{q}\rangle)+4g^{2}C_{F}\,\int[{\rm d}x_{i}]\int[{\rm d}^{2}r_{i}]\,\,\,|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2}\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{1}{k_{g}^{2}+\Delta^{2}}\] \[\left[\cos\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2})+\cos\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3})+\cos\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3})\right]. \tag{75}\] For the second block we return to eq. (61) and integrate the diagonal in the three-quark space (\(x_{i}^{\prime}=x_{i}\) and \(\vec{r}_{i}^{\prime}=\vec{r}_{i}\) for the three quarks) over \([{\rm d}x_{i}]\) and \([{\rm d}^{2}r_{i}]\)6. Note that since we have traced out the quarks in the whole space, the COM constraint forces \(\vec{r}_{g}=\vec{r}_{g}^{\prime}\), and the gluon density matrix is diagonal: Footnote 6: In principle these are integrations over the LC momentum fractions and transverse coordinates of the three quarks with COM constraints which include the gluon, since the density matrix in eqs. (61) was defined over the domain \(\sum_{i=1}^{4}x_{i}=1\), \(\sum_{i=1}^{4}x_{i}\vec{r}_{i}=0\). However, when \(x_{g}\) is very small, the presence of the gluon will not significantly restrict the integrations over quark \(x_{i},\vec{r}_{i}\), and we can approximate \(x_{g}\approx 0\) in the \(\delta\)-functions for the COM constraints.
\[\rho_{\alpha\alpha}^{g} = 2g^{2}C_{F}\,\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\int\frac{{\rm d}^{2}k_{g}^{\prime}}{(2\pi)^{3}}\frac{\vec{k}_{g}\cdot\vec{k}_{g}^{\prime}}{(k_{g}^{2}+\Delta^{2})\,(k_{g}^{\prime 2}+\Delta^{2})}\,e^{i\vec{k}_{g}\cdot\vec{r}_{g}-i\vec{k}_{g}^{\prime}\cdot\vec{r}_{g}}\ \int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\,|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2} \tag{76}\] \[\left\{2\left(e^{i(\vec{k}_{g}^{\prime}-\vec{k}_{g})\cdot\vec{r}_{1}}+e^{i(\vec{k}_{g}^{\prime}-\vec{k}_{g})\cdot\vec{r}_{2}}+e^{i(\vec{k}_{g}^{\prime}-\vec{k}_{g})\cdot\vec{r}_{3}}\right)\right.\] \[\left.-e^{i\vec{k}_{g}^{\prime}\cdot\vec{r}_{2}-i\vec{k}_{g}\cdot\vec{r}_{1}}-e^{i\vec{k}_{g}^{\prime}\cdot\vec{r}_{3}-i\vec{k}_{g}\cdot\vec{r}_{1}}-e^{i\vec{k}_{g}^{\prime}\cdot\vec{r}_{1}-i\vec{k}_{g}\cdot\vec{r}_{2}}-e^{i\vec{k}_{g}^{\prime}\cdot\vec{r}_{3}-i\vec{k}_{g}\cdot\vec{r}_{2}}-e^{i\vec{k}_{g}^{\prime}\cdot\vec{r}_{1}-i\vec{k}_{g}\cdot\vec{r}_{3}}-e^{i\vec{k}_{g}^{\prime}\cdot\vec{r}_{2}-i\vec{k}_{g}\cdot\vec{r}_{3}}\right\}\.\] Here, \(\alpha=\{x_{g},\vec{r}_{g}\}\) and \(\alpha^{\prime}=\{x_{g}^{\prime},\vec{r}_{g}^{\prime}\}\). As before, the gluon "propagators" have to be regularized in the UV by the Pauli-Villars regulator. This entails substituting \(\frac{1}{k_{g}^{2}+\Delta^{2}}\to\frac{1}{k_{g}^{2}+\Delta^{2}}-\frac{1}{k_{g}^{2}+\Lambda^{2}}\), and the same for \(k_{g}^{\prime}\). We can simplify the resulting expression somewhat, noting that the UV divergence only resides in the first term in the curly brackets in (76), since in the second term both integrations, over \(\vec{k}_{g}\) and \(\vec{k}_{g}^{\prime}\), are already regulated by the phase factors, while in the first term the phase factors only regulate the integration over \(k_{g}-k_{g}^{\prime}\). Also, it is sufficient to regulate only one of the propagators in the product to eliminate the UV divergence, but this regularization has to be done symmetrically between \(\vec{k}_{g}\) and \(\vec{k}_{g}^{\prime}\). Thus, we substitute \[\frac{1}{(k_{g}^{2}+\Delta^{2})(k_{g}^{\prime 2}+\Delta^{2})}\to\frac{1}{(k_{g}^{2}+\Delta^{2})(k_{g}^{\prime 2}+\Delta^{2})}-\frac{1}{2}\left[\frac{1}{(k_{g}^{2}+\Delta^{2})(k_{g}^{\prime 2}+\Lambda^{2})}+\frac{1}{(k_{g}^{2}+\Lambda^{2})(k_{g}^{\prime 2}+\Delta^{2})}\right] \tag{77}\] _in the first term_ in the curly brackets of (76). We also use the symmetry of the quark wave function and the integration measure to rename coordinates in some terms.
The resulting UV regular density matrix then is \[\rho^{g}_{\alpha\alpha} = 12g^{2}C_{F}\,\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\int\frac{{\rm d}^{2}k^{\prime}_{g}}{(2\pi)^{3}}\frac{\vec{k}_{g}\cdot\vec{k}^{\prime}_{g}}{(k_{g}^{2}+\Delta^{2})\,(k^{\prime 2}_{g}+\Delta^{2})}\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\,|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2} \tag{78}\] \[\qquad\left\{e^{i(\vec{k}^{\prime}_{g}-\vec{k}_{g})\cdot(\vec{r}_{1}-\vec{r}_{g})}-e^{i(\vec{k}_{g}-\vec{k}^{\prime}_{g})\cdot\vec{r}_{g}+i\vec{k}^{\prime}_{g}\cdot\vec{r}_{2}-i\vec{k}_{g}\cdot\vec{r}_{1}}\right\}\] \[-6g^{2}C_{F}\,\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\int\frac{\mathrm{d}^{2}k^{\prime}_{g}}{(2\pi)^{3}}\left[\frac{\vec{k}_{g}\cdot\vec{k}^{\prime}_{g}}{(k_{g}^{2}+\Delta^{2})(k^{\prime 2}_{g}+\Lambda^{2})}+\frac{\vec{k}_{g}\cdot\vec{k}^{\prime}_{g}}{(k_{g}^{2}+\Lambda^{2})(k^{\prime 2}_{g}+\Delta^{2})}\right]\] \[\qquad\qquad\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\,|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2}e^{i(\vec{k}^{\prime}_{g}-\vec{k}_{g})\cdot(\vec{r}_{1}-\vec{r}_{g})}\,\,.\] Note that when calculating the trace of \(\rho\), the regulator simply adds the term \[-12g^{2}C_{F}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{1}{\vec{k}_{g}^{2}+\Lambda^{2}}\,\,, \tag{79}\] which (up to powers of \(\Delta^{2}/\Lambda^{2}\)) cancels the similar term that arises from the regulator in \(C_{q}^{\rm reg}\) in (75). Thus, our Pauli-Villars regularization preserves the trace of the density matrix. On the other hand, \({\rm tr}\,\rho^{g}\) by itself has the meaning of the probability to find one gluon in the proton wave function. The trace (\(2\pi\int{\rm d}^{2}r_{g}\,\int{\rm d}x_{g}/(2x_{g})\)) is given by \[{\rm tr}\,\rho^{g}=\frac{6g^{2}C_{F}}{8\pi^{2}}\,\log\frac{\langle x_{q}\rangle}{\Delta x}\left[\log\frac{\Lambda^{2}}{\Delta^{2}}-2\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\,|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2}\,K_{0}(|\vec{r}_{2}-\vec{r}_{1}|\,\Delta)\right] \tag{80}\] where \(\Delta x\) as before is the IR cutoff on possible longitudinal momenta and the integral over \(x_{g}\) is cut off at \(\langle x_{q}\rangle\) consistently with the soft gluon approximation. In the second term the integral is dominated by \(|\vec{r}_{2}-\vec{r}_{1}|\) of the order of the collinear regulator \(\Delta^{-1}\), so the second term is negligible. Eq. (80) can be related to the gluon PDF of the proton. To leading order in perturbation theory (see, for example, eq. (65a) in Kovchegov and Mueller [38]), \[\frac{\alpha_{s}C_{F}}{\pi}\,\frac{1}{\ell^{2}}=\frac{\partial}{\partial\ell^{2}}\,xG_{q}(x,\ell^{2})\,\,, \tag{81}\] where \(xG_{q}(x,\ell^{2})\) is the gluon PDF of a quark. Thus, for a proton consisting of three quarks we identify the gluon PDF as \[\frac{3g^{2}C_{F}}{4\pi^{2}}\int{\rm d}k^{2}\,\left[\frac{1}{k^{2}+\Delta^{2}}-\frac{1}{k^{2}+\Lambda^{2}}\right]=\frac{3g^{2}C_{F}}{4\pi^{2}}\log\frac{\Lambda^{2}}{\Delta^{2}}\,\,\to\,\,xG(x,\Lambda^{2})\,\,. \tag{82}\] Hence, we have \[{\rm tr}\,\rho^{g}=\int\limits_{\Delta x}\frac{{\rm d}x_{g}}{x_{g}}\,\,x_{g}G(x_{g},\Lambda^{2})=\int\limits_{\Delta x}{\rm d}x_{g}\,\,G(x_{g},\Lambda^{2})\,\,. \tag{83}\] Indeed this is just the total number of gluons at the resolution scale of the UV cutoff \(\Lambda\). The fact that the UV cutoff appears in this quantity is not surprising, since here we are dealing with the density matrix of the entire proton wave function rather than the part of it probed by a DIS probe.
If we were to calculate the density matrix of only those degrees of freedom that participate in a DIS process, we expect that the UV cutoff would be substituted by the external resolution scale \(\Lambda^{2}\to Q^{2}\) provided by the virtual photon. Let us now construct the reduced density matrix after tracing over \(A\). It is of the form \[\rho\,=\,\begin{pmatrix}I+\rho_{0}^{g}&0\\ 0&\rho_{1}^{g}\end{pmatrix}\,\,, \tag{84}\] where \(I\equiv{\rm tr}_{\rm qqq}\,\rho^{qqq}\) for brevity. The first entry is the probability that there are no gluons in \(\bar{A}\), and is of course a pure number. \[\rho_{0}^{g}=\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\,\,2\pi\int{\rm d}^{2}r_{g}\,\Theta_{A}(\vec{r}_{g})\,\rho_{\alpha\alpha}^{g}\,\,. \tag{85}\] The lower block is a diagonal (in coordinate space) matrix \[\rho_{1}^{g}(\vec{r})=\rho^{g}(\vec{r})\Theta_{\bar{A}}(\vec{r})\, \tag{86}\] with \(\rho^{g}\) from eq. (78). Like for quarks, we need to scale \(\rho_{1}^{g}\) with the transverse-longitudinal lattice spacing, and with the factor \(2\pi/2x_{g}\) that accompanies the integration measure \({\rm d}x_{g}\,{\rm d}^{2}r_{g}\). We will not do it explicitly here, but instead restore these factors directly in the expression for the entropy. For \(L=0\), as already mentioned, \(I+\rho_{0}^{g}=1\). We are interested in the non-trivial small-\(L\) regime, \(\Delta^{-1}\gg L\gg\Lambda^{-1}\) or \(L\Delta\ll 1\ll\Lambda L\). In this regime we expect \(I+\rho_{0}^{g}\sim 1-{\cal O}(L^{2})\). The contribution to the entropy associated with this single eigenvalue of the density matrix should be \(S^{(0)}\sim{\cal O}(L^{2})\). The matrix \(\rho_{1}^{g}\), on the other hand, has small eigenvalues, for the same reason discussed previously. All the eigenvalues should be of order \((\Delta x)\,a^{2}\,\Delta^{2}\), due to the dimensionality of \(\rho_{1}^{g}\). Thus, we expect the contribution from \(\rho_{1}^{g}\) to the entropy to contain an additional enhancement by a logarithm of \((\Delta x)\,a^{2}\): \[S^{(1)}=-\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\,2\pi\int{\rm d}^{2}r_{g}\ \Theta_{\overline{A}}(\vec{r}_{g})\ (\rho^{g})_{\alpha\alpha}\,\log\left(2\pi a^{2}\frac{\Delta x}{2x_{g}}\,(\rho^{g})_{\alpha\alpha}\right). \tag{87}\] This therefore is the leading contribution to the entropy and we will calculate it first. Let us examine \((\rho^{g})_{\alpha\alpha}\) for \(|\vec{r}_{g}|<L\ll\Delta^{-1}\). The first term in (78) is \[12g^{2}C_{F}\ \int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\int\frac{{\rm d}^{2}q}{(2\pi)^{3}}\left[\frac{1}{(\vec{k}_{g}+\vec{q})^{2}+\Delta^{2}}+\frac{\vec{k}_{g}\cdot\vec{q}}{(\vec{k}_{g}^{2}+\Delta^{2})\,((\vec{k}_{g}+\vec{q})^{2}+\Delta^{2})}\right]\,e^{-i\vec{q}\cdot\vec{r}_{g}}\] \[\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\ |\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2}\left\{e^{i\vec{q}\cdot\vec{r}_{1}}-e^{i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{1})+i\vec{q}\cdot\vec{r}_{2}}\right\}\,. \tag{88}\] The integrals over the quark positions basically result in a "smeared \(\delta\)-function" in \(\vec{q}\) with width \(\Delta\), so \(|\vec{q}|\sim\Delta\). That means that the phase \(e^{-i\vec{q}\cdot\vec{r}_{g}}\sim 1\) since \(L\Delta\ll 1\). Furthermore, the denominator of the second rational factor is essentially constant (independent of the direction of \(\vec{q}\)) both for small and large \(\vec{k}_{g}\); hence, it gives zero after integration over the directions of \(\vec{q}\).
Lastly, the second phase factor in the curly braces would restrict \(|\vec{k}_{g}|\sim\Delta\), which results in a subleading contribution. In all, we simplify the above to \[12g^{2}C_{F}\ \int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\frac{1}{\vec{k}_{g}^{2}+\Delta^{2}}\int\frac{{\rm d}^{2}q}{(2\pi)^{3}}\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\ |\Psi_{\rm qqq}(x_{i},\vec{r}_{i})|^{2}\,e^{i\vec{q}\cdot\vec{r}_{1}}\] \[\simeq 12g^{2}C_{F}\ \frac{\Delta^{2}}{\pi}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{4}}\frac{1}{\vec{k}_{g}^{2}+\Delta^{2}}. \tag{89}\] We have assumed in these estimates that the nonperturbative scale entering the gluon propagator (\(\Delta^{2}\)) and the nonperturbative scale appearing in the quark wave function (the average quark transverse momentum squared \(\sim\beta^{2}\)) are of the same order. The second term in (78) gives a similar expression but with \(\Delta\) replaced by \(\Lambda\), and with the opposite sign. In all, for \(|\vec{r}_{g}|<L\), \[(\rho^{g})_{\alpha\alpha}\simeq\frac{12\,g^{2}C_{F}}{(2\pi)^{4}}\,\Delta^{2}\,\log\frac{\Lambda^{2}}{\Delta^{2}}. \tag{90}\] This has a simple interpretation. We are calculating the probability density of a gluon to be emitted at point \(\vec{r}\) inside \(\bar{A}\). Since the region inside \(\bar{A}\) is small compared to the proton (\(L\ll 1/\Delta\)) the emission probability does not depend on \(\vec{r}\). It is given (with the appropriate prefactor) by the integral of the intensity of the Weizsäcker-Williams field of a quark, integrated over the coordinate of the quark weighted with the square of the quark wave function. With logarithmic accuracy this is simply \(\int_{\Lambda^{-2}<r^{2}<\Delta^{-2}}{\rm d}^{2}r\frac{1}{r^{2}}\), which is precisely the logarithm in (90). Using this in eq. (87) we obtain \[S^{(1)} = -L^{2}\Delta^{2}\,\int_{\Delta x}{\rm d}x_{g}\,G(x_{g},\Lambda^{2}/\Delta^{2})\,\log\left(\frac{a^{2}\Delta^{2}}{\pi}(\Delta x)\,G(x_{g},\Lambda^{2}/\Delta^{2})\right) \tag{91}\] \[= -L^{2}\Delta^{2}\,\int_{\Delta x}{\rm d}x_{g}\,G(x_{g},\Lambda^{2}/\Delta^{2})\,\log\left(\frac{\Delta^{2}}{\pi\Lambda^{2}}(\Delta x)\,G(x_{g},\Lambda^{2}/\Delta^{2})\right)\,.\] Once again, \((\Delta^{2}/\pi)\,G(x_{g},\Lambda^{2}/\Delta^{2})\) is the density of gluons per unit transverse area. Since the density matrix (84) is normalized, we infer from (90) \[I+\rho_{0}^{g}=1-\frac{3g^{2}C_{F}}{4\pi^{2}}\,L^{2}\Delta^{2}\,\int_{\Delta x}\,\frac{{\rm d}x_{g}}{x_{g}}\,\log(\Lambda^{2}/\Delta^{2})\;, \tag{92}\] and the associated entropy is \[S^{(0)}=\frac{3g^{2}C_{F}}{4\pi^{2}}\,L^{2}\Delta^{2}\,\int_{\Delta x}\frac{{\rm d}x_{g}}{x_{g}}\,\log(\Lambda^{2}/\Delta^{2})=L^{2}\Delta^{2}\,\int_{\Delta x}{\rm d}x_{g}\,G(x_{g},\Lambda^{2}/\Delta^{2})\;. \tag{93}\] This is the gluon density per unit transverse area multiplied by the area of the cutout. As expected, this is a subleading correction to (91) and can be neglected. Thus our final result for the gluon entanglement entropy in the limit of small area of the cutout is given in eq. (91). ## VI Discussion To summarize, we calculated the entanglement entropy of subsets (in several variations) of partonic modes in the model proton wave function inside a small disk of radius \(L\) by integrating out all the other modes in the rest of the wave function. The area was taken small relative to the total area of the proton (a soft, nonperturbative scale) \(L^{2}\ll\pi/\Delta^{2}\), but greater than the inverse UV cutoff \(L^{2}\gg 1/\Lambda^{2}\). We now want to comment on these results.
Let us consider the two expressions (40) and (91). Eq. (40) gives the entanglement entropy of quarks at leading order in the model wave function, while eq. (91) is the entanglement entropy of gluons at NLO. They have almost identical structure and are reminiscent of the form of Boltzmann entropy of a system of noninteracting particles. The PDFs that enter (40) and (91) (\(F\) in the former and \(G\) in the latter) are the total numbers of quarks and gluons in the proton. Defining the number of partons (at a given \(x\)) inside an area \(S\), in the longitudinal momentum interval \(\Delta x\), as \(N_{S}(x)=\frac{S}{A_{p}}(\Delta x)F(x)\) for quarks and \(N_{S}(x)=(S\Delta^{2}/\pi)\,(\Delta x)\,G(x)\) for gluons, both equations can be written as \[S_{E}=-\int\frac{dx}{\Delta x}N_{L^{2}}(x)\log[N_{a^{2}}(x)]\,. \tag{94}\] This expression is quite natural. For small \(a^{2}\) and \(\Delta x\), one can only have either one or no partons inside the elementary cell \(a^{2}\Delta x\). The average number of partons \(N_{a^{2}}(x)\) is then just the probability that the cell contains a single parton. Indeed, for a cell occupied with probability \(p=N_{a^{2}}(x)\ll 1\), the Shannon entropy of the cell is \(-p\log p-(1-p)\log(1-p)=-p\log p+p+{\cal O}(p^{2})\), and only the first term survives at leading logarithmic accuracy. Eq. (94) then is just (the leading term of) the Shannon entropy of this distribution multiplied by the total area (or rather \(L^{2}/a^{2}\) - the number of independent elementary cells in the area of the cutout), and integrated over \(x\) with the appropriate measure. The fact that the entropy is proportional to the area \(L^{2}\) is a trademark property of an extensive quantity. Of course the entanglement entropy is not strictly speaking extensive - the proportionality to area only holds when the area of the cutout is small. Were we to take the area of the cutout to be equal to the area of the proton, we would have to obtain vanishing entropy, as we would not be integrating out any degrees of freedom. So, the dependence of entropy on area should follow some sort of a Page curve, which could be obtained numerically from eq. (33), for example. The one significant difference between eqs. (40) and (91) is that in the latter the number of particles is defined with the resolution scale \(\Lambda^{2}\), as is appropriate in the QCD improved parton model, while in the former there is no need to specify a resolution scale. Does the entropy calculated here have direct physical meaning? One should remember, of course, that the calculation presented here does not refer to any particular physical process, but rather to the properties of the proton wave function _per se_. As such it is not observable directly. We can try, however, to interpret this result from the point of view of a DIS or jet production process. In this type of process there is a physical resolution scale, the momentum transfer \(Q^{2}\) to the electron or the transverse momentum of a produced jet. A naive physical picture is then that this scale should determine the size of the area of the proton measured by the probe, as well as the resolution with which one measures the parton number. Taking \(L^{2}\sim a^{2}\sim 1/Q^{2}\), \(\Delta^{2}=\pi\Lambda_{\rm QCD}^{2}\), and fixing the value of \(x\) as appropriate for DIS, we then may hope to define a more physical quantity. For gluons that would be \[S_{E}(Q^{2},x)=-N_{Q^{-2}}(x)\log[N_{Q^{-2}}(x)]=-\frac{\Lambda_{\rm QCD}^{2}}{Q^{2}}\,(\Delta x)G(x,Q^{2})\log\left(\frac{\Lambda_{\rm QCD}^{2}}{Q^{2}}(\Delta x)G(x,Q^{2})\right)\,. \tag{95}\] It is not entirely clear to us what should be taken as the "longitudinal resolution scale" \(\Delta x\).
The inclusive DIS cross section does not provide for a scale of this sort. However, if one measures the spectrum of produced particles, perhaps \(\Delta x\) should be related to the width of the rapidity bin in which the particles are measured. Finally, it would be interesting to compare our results with those of ref. [8]. This may not be entirely straightforward for the following reason. Our expressions apply to the "dilute regime" when the entropy is dominated by states with one parton within the cutout area, \(\frac{\Lambda_{\rm QCD}^{2}}{Q^{2}}(\Delta x)G(x,Q^{2})\ll 1\). On the other hand ref. [8] focused on the saturation regime where the number of particles in the cutout is assumed to be \({\cal O}(1/\alpha_{s})\). Still, the actual derivation of ref. [8] only requires that the rapidity is large enough so that the exponential growth of the gluon density in rapidity has taken hold. This in itself does not imply saturation, but rather the pre-saturation BFKL-like regime, so that the gluon density is still small but low-\(x\) evolution already has to be resummed. At any rate, one expects the same elements to appear in the expression for entropy both in ref. [8] and in our calculation. Indeed, the parton density is the basic physical quantity that appears, and in this respect the two results are similar. However there are some significant differences between the two. In particular, according to ref. [8] the entropy is given by the logarithm of \(xG(x)\). This is somewhat perplexing since \(xG(x)\) has the meaning of the longitudinal momentum carried by the partons, and not the parton number. Eq. (95) on the other hand contains \(\frac{\Lambda_{\rm QCD}^{2}}{Q^{2}}(\Delta x)G(x,Q^{2})\), which is precisely the number of partons in the area of the cutout (and in the rapidity interval \(\Delta x\)), and which appears to be the natural basic element for quantifying the entropy. Whether the number of partons at high energy is somehow supplanted by the longitudinal momentum fraction carried by the partons is an interesting question which should be answered by an explicit calculation. ## Acknowledgements A.D. acknowledges support by the DOE Office of Nuclear Physics through Grant DE-SC0002307. A.K. is supported by the NSF Nuclear Theory grant 2208387. V.S. is supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through the Contract No. DE-SC002008. We also acknowledge support by the Saturated Glue (SURGE) Topical Collaboration. We thank M. Lublinsky (Ben-Gurion University of the Negev) and the organizers of the workshop "Physics of Saturation - precision and quasicollectivity" for their support and hospitality in May 2022; we are also grateful to the participants of the workshop for stimulating discussions which initiated this work. ## Appendix A Shannon entropy of a probability density function for a continuous degree of freedom In this appendix we review the definition of the entropy associated with a classical probability density function over a continuous degree of freedom. We discuss the extension to a quantum mechanical density matrix at the end. First, recall the expression for the classical Shannon entropy for a set of _discrete_ outcomes of a random draw, with probabilities \(P_{i}\): \[H(\{P_{i}\})=-\sum_{i}P_{i}\log P_{i}. \tag{A1}\] If the set of possible outcomes is _continuous_, e.g. \(x\in\mathbb{R}^{+}\), their distribution is given by a normalized, integrable (including possibly the \(\delta\)-function measure) probability density function \(p(x)>0\), \[\int\mathrm{d}x\,p(x)=1. \tag{A2}\] Note that if \(x\) is dimensional then \(\dim(p)=[\dim(x)]^{-1}\). Note, also, that the integration measure does not involve _any \(x\)-dependent Jacobians_, all of which must be absorbed into \(p(x)\) for it to be a valid probability density with respect to the integration measure \(\mathrm{d}x\). To apply Shannon's formula here, we first discretize the continuous set of outcomes by introducing (equal size) bins \(\Delta x>0\). An outcome \(x\) falls into bin \(i=\lfloor x/\Delta x\rfloor\). The probability \(P_{i}\) for an event in bin \(i\) is \[P_{i}=\int\limits_{i\Delta x}^{(i+1)\Delta x}\mathrm{d}x\,p(x)\equiv p_{i}\,\Delta x\qquad(i\in\mathbb{N}_{0}). \tag{A3}\] In the last step we defined the binned density \(p_{i}\) as the average of the probability density \(p(x)\) over bin \(i\). We now have \[H[p]=-\sum_{i}\,\int\limits_{i\Delta x}^{(i+1)\Delta x}\mathrm{d}x\,p(x)\,\log(p_{i}\,\Delta x). \tag{A4}\] If \(p(x)\) is a continuous function then \[H[p]=-\int\mathrm{d}x\,p(x)\,\log(p(x)\,\Delta x)\,, \tag{A5}\] and the entropy does not have a finite \(\Delta x\to 0\) limit. (Even so, the relative entropy for two such probability densities does converge.) If, on the other hand, \(p(x)\) is given by a sum of Dirac \(\delta\) functions then eq. (A4) does converge since this basically recovers the case of discrete outcomes.
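As a quick numerical illustration of this \(\Delta x\) dependence (a minimal sketch we add for concreteness; the Gaussian density and the bin widths are arbitrary choices), one can bin a smooth density and watch the binned entropy of eq. (A4) grow by \(\log 2\) with every halving of the bin:

```python
import numpy as np

def binned_entropy(pdf, lo, hi, dx):
    # Discretize the density into bins of width dx, cf. eqs. (A3)-(A4);
    # P_i is approximated by pdf(bin midpoint) * dx.
    edges = np.arange(lo, hi, dx)
    P = pdf(edges + dx / 2) * dx
    P = P[P > 1e-300]                # drop numerically empty bins
    return -np.sum(P * np.log(P))

gauss = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
for dx in (0.1, 0.05, 0.025):
    print(f"dx = {dx:6.3f}   H = {binned_entropy(gauss, -8.0, 8.0, dx):.4f}")
# H grows by ~log 2 ~ 0.693 per halving of dx: there is no finite continuum
# limit, only the -log(dx) divergence of eq. (A5) on top of the differential
# entropy of the density.
```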
Along similar lines, let \(\rho_{xx^{\prime}}\) denote a density matrix describing a continuous degree of freedom. By postulate, its trace is normalized, \[\operatorname{tr}\rho=\int\mathrm{d}x\,\rho_{xx}=\int\frac{\mathrm{d}x}{\Delta x}\,\left(\Delta x\right)\rho_{xx}=1. \tag{A6}\] Once again we stress that the trace measure must be \(\mathrm{d}x\), with any \(x\)-dependent Jacobians absorbed into \(\rho\). The von Neumann entropy \(S\) is given by the Shannon entropy of the vector of dimensionless eigenvalues of \(\left(\Delta x\right)\rho\). Upon binning the eigenvalue distribution, it is given by \[S=-\sum_{i}\,\int\limits_{i\Delta\lambda}^{(i+1)\Delta\lambda}\mathrm{d}\lambda\,p(\lambda)\,\log(p_{i}\,\Delta\lambda). \tag{A7}\] ## Appendix B Calculating traces of powers of \(\rho\) Gearing up for the calculation of the von Neumann entropy, we write expressions for the trace of powers of the density matrix, \(\operatorname{tr}\left(\rho_{\overline{A}}\right)^{N}\) (which, if desired, can be used to determine the Rényi entropies). Since our density matrix is block diagonal in the particle number basis, the different blocks don't talk to each other and can be considered separately. In the zero particle subspace \(\rho_{0}\) is a number and thus \[\operatorname{tr}\rho_{0}^{N}=\rho_{0}^{N}. \tag{B1}\] It is easy to see that in the three particle subspace we simply have \[\operatorname{tr}\rho_{3}^{N}=(\operatorname{tr}\rho_{3})^{N}=\left[\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,\Theta_{\overline{A}}(\vec{r}_{3})\,\left|\Psi(x_{i},\vec{r}_{i})\right|^{2}\right]^{N}. \tag{B2}\] Note that in both (B1) and (B2) the lattice spacing \(a\) does not appear.
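For reference, the traces computed in this appendix determine the entropies through the standard relations \[S_{N}=\frac{1}{1-N}\,\log\,{\rm tr}\left(\rho_{\overline{A}}\right)^{N}\,,\qquad S=-{\rm tr}\,\rho_{\overline{A}}\log\rho_{\overline{A}}=-\lim_{N\to 1}\frac{\partial}{\partial N}\,{\rm tr}\left(\rho_{\overline{A}}\right)^{N}\,,\] where the last equality (the replica limit) uses \({\rm tr}\,\rho_{\overline{A}}=1\).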
Consider now the single particle subspace: \[(\rho_{1}^{2})_{\alpha\alpha} =(\rho_{1})_{\alpha\alpha}\,(\rho_{1})_{\alpha\alpha} \tag{B3}\] \[=3\Delta x\,a^{2}\int\frac{\mathrm{d}x_{1}\mathrm{d}x_{2}}{8x_{1}x_{2}x_{3}}\,\delta(1-x_{1}-x_{2}-x_{3})\,\int\mathrm{d}^{2}r_{1}\,\mathrm{d}^{2}r_{2}\,\delta(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2}+x_{3}\vec{r}_{3})\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,|\Psi(x_{1},\vec{r}_{1};x_{2},\vec{r}_{2};x_{3},\vec{r}_{3})|^{2}\] \[\quad\times 3\Delta x\,a^{2}\int\frac{\mathrm{d}y_{1}\mathrm{d}y_{2}}{8y_{1}y_{2}x_{3}}\,\delta(1-y_{1}-y_{2}-x_{3})\,\int\mathrm{d}^{2}s_{1}\,\mathrm{d}^{2}s_{2}\,\delta(y_{1}\vec{s}_{1}+y_{2}\vec{s}_{2}+x_{3}\vec{r}_{3})\,\Theta_{A}(\vec{s}_{1})\,\Theta_{A}(\vec{s}_{2})\,|\Psi(y_{1},\vec{s}_{1};y_{2},\vec{s}_{2};x_{3},\vec{r}_{3})|^{2}\,.\] Taking the trace, \[\operatorname{tr}\rho_{1}^{2}=\int\frac{\mathrm{d}x_{3}}{\Delta x}\int\frac{\mathrm{d}^{2}r_{3}}{a^{2}}\,\Theta_{\overline{A}}(\vec{r}_{3})\] \[3\Delta x\,a^{2}\int\frac{{\rm d}x_{1}{\rm d}x_{2}}{8x_{1}x_{2}x_{3}}\,\delta(1-x_{1}-x_{2}-x_{3})\,\int{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\delta(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2}+x_{3}\vec{r}_{3})\,\Theta_{A}(\vec{r}_{1})\,\Theta_{A}(\vec{r}_{2})\,|\Psi(x_{1},\vec{r}_{1};x_{2},\vec{r}_{2};x_{3},\vec{r}_{3})|^{2}\] \[3\Delta x\,a^{2}\int\frac{{\rm d}y_{1}{\rm d}y_{2}}{8y_{1}y_{2}x_{3}}\,\delta(1-y_{1}-y_{2}-x_{3})\,\int{\rm d}^{2}s_{1}\,{\rm d}^{2}s_{2}\,\delta(y_{1}\vec{s}_{1}+y_{2}\vec{s}_{2}+x_{3}\vec{r}_{3})\,\Theta_{A}(\vec{s}_{1})\,\Theta_{A}(\vec{s}_{2})\,|\Psi(y_{1},\vec{s}_{1};y_{2},\vec{s}_{2};x_{3},\vec{r}_{3})|^{2}\] \[=\int\frac{{\rm d}x_{3}}{\Delta x}\int\frac{{\rm d}^{2}r_{3}}{a^{2}}\,\Theta_{\overline{A}}(\vec{r}_{3})\,\left[3\Delta x\,a^{2}\,\int[{\rm d}y_{i}]\,\delta(y_{3}-x_{3})\int[{\rm d}^{2}s_{i}]\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\Theta_{A}(\vec{s}_{1})\,\Theta_{A}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right]^{2}. \tag{B4}\] For an arbitrary \(N\) we obtain \[{\rm tr}\,\rho_{1}^{\,N}=\int\frac{{\rm d}x_{3}}{\Delta x}\int\frac{{\rm d}^{2}r_{3}}{a^{2}}\,\Theta_{\overline{A}}(\vec{r}_{3})\,\left[3\Delta x\,a^{2}\int[{\rm d}y_{i}]\,\delta(y_{3}-x_{3})\int[{\rm d}^{2}s_{i}]\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\Theta_{A}(\vec{s}_{1})\,\Theta_{A}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right]^{N}\!. \tag{B5}\] As noted in the main text of this paper, the lattice spacing does not cancel in this expression, and formally this expression vanishes, for \(N>1\), in the "continuum limit" \(\Delta x,a\to 0\). Now consider traces of powers of \(\rho_{2}\). We have \[(\rho_{2}^{\,2})_{\alpha\alpha^{\prime}}=9\left(\Delta x\,a^{2}\right)^{3}\,\int[{\rm d}y_{i}]\,[{\rm d}^{2}s_{i}]\,\Theta_{\overline{A}}(\vec{s}_{1})\,\Theta_{\overline{A}}(\vec{s}_{2})\,\delta(x_{3}-y_{3})\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\,\left|\Psi(y_{i},\vec{s}_{i})\right|^{2}\] \[\times\frac{\Psi(x_{i},\vec{r}_{i})}{x_{3}\sqrt{x_{1}x_{2}x_{3}}}\,\,\frac{\Psi^{*}(x_{i}^{\prime},\vec{r}_{i}^{\prime})}{x_{3}\sqrt{x_{1}^{\prime}x_{2}^{\prime}x_{3}}}. \tag{B6}\] In this expression \(\vec{r}_{3}=-(x_{1}\vec{r}_{1}+x_{2}\vec{r}_{2})/x_{3}=\vec{r}_{3}^{\prime}=-(x_{1}^{\prime}\vec{r}_{1}^{\prime}+x_{2}^{\prime}\vec{r}_{2}^{\prime})/x_{3}^{\prime}\), while \(x_{3}=1-x_{1}-x_{2}=x_{3}^{\prime}=1-x_{1}^{\prime}-x_{2}^{\prime}\).
Recall, also, that here the indices \(\alpha=\{x_{1},\vec{r}_{1},x_{2},\vec{r}_{2}\}\), \(\alpha^{\prime}=\{x_{1}^{\prime},\vec{r}_{1}^{\prime},x_{2}^{\prime},\vec{r}_{2}^{\prime}\}\), are defined over the domain \(\vec{r}_{1},\vec{r}_{2},\vec{r}_{1}^{\prime},\vec{r}_{2}^{\prime}\in\overline{A}\), \(\vec{r}_{3}\in A\). The trace, defined with the measure \({\rm d}x_{1}\,{\rm d}x_{2}\,{\rm d}^{2}r_{1}\,{\rm d}^{2}r_{2}\,\Theta(x_{3})\,\Theta_{A}(\vec{r}_{3})\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,/(a^{2}\Delta x)^{2}\), is \[{\rm tr}\,\rho_{2}^{\,2} = 3\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\Theta_{A}(\vec{r}_{3})\,\Theta_{\overline{A}}(\vec{r}_{1})\,\Theta_{\overline{A}}(\vec{r}_{2})\,|\Psi(x_{i},\vec{r}_{i})|^{2} \tag{B7}\] \[\times 3\Delta x\,a^{2}\int[{\rm d}y_{i}]\,[{\rm d}^{2}s_{i}]\,\Theta_{\overline{A}}(\vec{s}_{1})\,\Theta_{\overline{A}}(\vec{s}_{2})\,\delta(x_{3}-y_{3})\,\delta(\vec{s}_{3}-\vec{r}_{3})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\,.\] Note that this is actually identical to eq. (B4) for \({\rm tr}\,\rho_{1}^{\,2}\) with \(A\leftrightarrow\overline{A}\), as it should be. For general power \(N\), \[{\rm tr}\,\rho_{2}^{\,N}=\int\frac{{\rm d}x_{3}}{\Delta x}\int\frac{{\rm d}^{2}r_{3}}{a^{2}}\,\Theta_{A}(\vec{r}_{3})\,\left[3\Delta x\,a^{2}\int[{\rm d}y_{i}]\,[{\rm d}^{2}s_{i}]\,\delta(x_{3}-y_{3})\,\delta(\vec{s}_{3}-\vec{r}_{3})\,\Theta_{\overline{A}}(\vec{s}_{1})\,\Theta_{\overline{A}}(\vec{s}_{2})\,|\Psi(y_{i},\vec{s}_{i})|^{2}\right]^{N}. \tag{B8}\] Again we see that the lattice spacing does not disappear in this expression and formally leads to its vanishing for \(a\to 0\) and \(N>1\). ## Appendix C Checking traces Here we check the normalization of the density matrix (58). First let us take the trace of \(\rho^{qqq}\), which amounts to setting \(x_{i}^{\prime}=x_{i}\), \(\vec{r}_{i}^{\,\prime}=\vec{r}_{i}\), and integrating over \([{\rm d}x_{i}]\) and \([{\rm d}^{2}r_{i}]\). The first line gives \[\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\left(1-3C_{q}^{\rm reg}(x_{1})\right)\,\,\left|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})\right|^{2}=1-3\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\,C_{q}^{\rm reg}(x_{1})\,\,\left|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})\right|^{2}\,, \tag{C1}\] where we used the normalization condition (31) for the 3-quark wave function. From the rest of eq. (59) we get \[2g^{2}C_{F}\,\int[{\rm d}x_{i}]\,[{\rm d}^{2}r_{i}]\,\int_{\Delta x}\frac{{\rm d}x_{g}}{2x_{g}}\int\frac{{\rm d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\,\,\left|\Psi_{\rm qqq}(x_{i},\vec{r}_{i})\right|^{2}\] \[\times\left[e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3})}\,+{\rm c.c.}\right]. \tag{C2}\] Now let us take the trace of the matrix (61). To do this set \(x^{\prime}_{i}=x_{i}\), \(\vec{r}^{\prime}_{i}=\vec{r}_{i}\), and integrate over all degrees of freedom, including the momentum fraction of the gluon with the measure \(\mathrm{d}x_{g}/2x_{g}\), and its transverse position with the measure \(2\pi\,\mathrm{d}^{2}r_{g}\).
This is done by performing the following steps: i) extract the integrations over \(\mathrm{d}x_{g}/2x_{g}\), \(\mathrm{d}^{2}k_{g}/(2\pi)^{3}\), and \(\mathrm{d}^{2}k^{\prime}_{g}/(2\pi)^{3}\) from \([\mathrm{d}x_{i}]\), \([\mathrm{d}^{2}k_{i}]\), and \([\mathrm{d}^{2}k^{\prime}_{i}]\); ii) perform the integration over \(\mathrm{d}^{2}r_{g}\), which produces a \((2\pi)^{2}\,\delta(\vec{k}_{g}-\vec{k}^{\prime}_{g})\); iii) shift the quark momenta (as needed) by \(-\vec{k}_{g}\) so that the arguments of the \(\Psi_{\mathrm{qqq}}\) functions no longer involve \(\vec{k}_{g}\); note that this also changes \(\delta(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3}+\vec{k}_{g})\to\delta(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3})\), and similar for the primed momenta. The first three terms of eq. (61) then give (taking \(\Delta^{2}\to 0\) where possible) \[3\cdot 4g^{2}C_{F}\,\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\,\left|\Psi_{\mathrm{qqq}}(x_{i},\vec{r}_{i})\right|^{2}\,\int_{\Delta x}^{x_{1}}\frac{\mathrm{d}x_{g}}{2x_{g}}\int\frac{\mathrm{d}^{2}k_{g}}{(2\pi)^{3}}\frac{1}{k_{g}^{2}+\Delta^{2}}. \tag{C3}\] This cancels against the \(\mathcal{O}(g^{2})\) correction in eq. (C1), after regularization of the UV divergence, \((k_{g}^{2}+\Delta^{2})^{-1}\to(k_{g}^{2}+\Delta^{2})^{-1}-(k_{g}^{2}+\Lambda^{2})^{-1}\). The remaining terms of (61) give \[-2g^{2}C_{F}\,\int[\mathrm{d}x_{i}]\,[\mathrm{d}^{2}r_{i}]\,\,\left|\Psi_{\mathrm{qqq}}\left(x_{i},\vec{r}_{i}\right)\right|^{2}\,\int_{\Delta x}\frac{\mathrm{d}x_{g}}{2x_{g}}\int\frac{\mathrm{d}^{2}k_{g}}{(2\pi)^{3}}\,\frac{k_{g}^{2}}{(k_{g}^{2}+\Delta^{2})^{2}}\] \[\times\left[e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{2})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{1}-\vec{r}_{3})}\,+e^{-i\vec{k}_{g}\cdot(\vec{r}_{2}-\vec{r}_{3})}\,+\mathrm{c.c.}\right]. \tag{C4}\] This cancels against (C2). The cancellations of the perturbative corrections in the trace ensure that it remains equal to 1, independent of the coupling \(g^{2}\) and of the IR (collinear and soft) and UV cutoffs.
2306.06330
Autonomous Drifting with 3 Minutes of Data via Learned Tire Models
Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modelling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that a scarce amount of driving data -- less than three minutes -- is sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45mph. Comparisons with the benchmark model show a $4 \times$ improvement in tracking performance, smoother control inputs, and faster and more consistent computation time.
Franck Djeumou, Jonathan Y. M. Goh, Ufuk Topcu, Avinash Balachandran
2023-06-10T01:59:38Z
http://arxiv.org/abs/2306.06330v2
# Autonomous Drifting with 3 Minutes of Data via Learned Tire Models ###### Abstract Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modelling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that a scarce amount of driving data - less than three minutes - is sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45mph. Comparisons with the benchmark model show a \(4\times\) improvement in tracking performance, smoother control inputs, and faster and more consistent computation time. ## I Introduction Maximizing tire force usage is critical to safely negotiating highly dynamic situations, e.g., emergency obstacle avoidance. Yet, accurately predicting the effective force generated by the four tires on a car is a difficult challenge. Firstly, the tire in isolation exhibits many complex nonlinear phenomena, including force saturation, camber thrust, and nonlinear load dependence. Indeed, significant effort has gone into developing analytical and empirical models for a _single_ tire [1, 2, 3, 4, 5, 6, 7], including the Magic Formula [1] which is frequently used in industry. Despite its popularity, fitting the many parameters of the Magic Formula is difficult and often requires specialized testing and facilities [2, 3]. When attached to a vehicle, the complexity compounds, as every input to these models is coupled into suspension dynamics, weight transfer, and other effects. Many control approaches in the literature thus resort to using a single-track assumption [8, 9, 10, 11, 12, 13], where these effects are 'lumped' into a single tire model at the front and rear axles, and empirically fit to measured vehicle data. This includes the Fiala brush model [14], which has been experimentally demonstrated in autonomous vehicle control scenarios at the limits of handling, including emergency obstacle avoidance, drifting, and racing [15, 16, 17, 18, 19, 20]. Although the simplicity aids control development, this single tire lumping often fails to accurately capture the intricate coupling created by higher-order effects. Neural networks, which have universal approximation properties in the limit, could offer a solution. Black-box and Magic Formula-based neural network models [21, 22, 23, 4, 5, 6, 7] have been explored in the literature. However, they do not retain physics-based guarantees, and none has been tested on a full-size car operating near or at the limits of handling. In general, their complexity has to be balanced against overfitting and computational efficiency, especially when reliable, physically insightful extrapolation is required for real-time control. Our first contribution is to combine the physical insights of tire models with the modelling power of neural networks.
We propose a novel family of tire force models based on neural ordinary differential equations (NODE) [24, 25] and neural-ExpTanh, a novel parameterization which uses curves generated by the \(\exp(\cdot)\tanh(\cdot)\) function. These are designed to have high fitting fidelity while also incorporating fundamental tire modelling insights [1, 2, 3], including the friction ellipse constraint and 'S-shaped' saturation trend. The NODE model defines a differential equation whose family of solutions includes established models such as the Magic Formula [1] and Fiala brush model [14]. Through optimization-based techniques [24, 26], the model is trained to fit vehicle state measurements. To address the computational complexity of training and evaluating NODE models, which requires integrating a differential equation, we also introduce neural-ExpTanh, a subset of the NODE model's solutions. Neural-ExpTanh can be trained efficiently and targets real-time control precisely due to its cheap function and gradient computation time. Our second contribution provides an extensive experimental evaluation of these NODE and ExpTanh models on a full-size, heavily-modified Toyota Supra. We first compare our models to the Magic Formula and Fiala models on a dataset from the vehicle. The results show that NODE and ExpTanh satisfy the tire fundamentals while being up to \(2\times\) denser than the baselines around zero-mean prediction error. Fig. 1: A photocomposite showing stills from an overhead drone video of a fully autonomous experiment superimposed at 1s intervals. The videos of the experiments are available at [https://tinyurl.com/supra-neural-drift](https://tinyurl.com/supra-neural-drift). We then use these learned models as drop-in replacements for an analytical brush model in an existing nonlinear model predictive control framework [16, 27]. These are compared in autonomous drifting experiments on two different trajectories which consistently excite the nonlinear regime. Compared to the baseline, the results show improved tracking, fewer steering oscillations, and lower computation time. The last set of experiments demonstrates data efficiency and generalization of our models. We switch to a different set of tires, collect 3 minutes of manual driving data, train an ExpTanh model in a few seconds, and then perform figure-8 autonomous drifting experiments, shown in Figure 1. The learned model shows similarly good closed-loop performance, while the performance of the baseline model drops. ## II Fundamentals **Vehicle Dynamics.** The dynamics are described using a planar single-track model [10, 11, 12, 8, 13], shown in Figure 2. **Measurements of Tire Forces.** In this work, we learn tire models from the state measurements and estimates of the effective lumped axle forces. While there are different strategies for estimating these forces, for simplicity in this paper, we consider conditions where we can assume \(F_{xf}=0\), e.g., no torque on the front wheels. Then, we compute \(\dot{r},\dot{V},\dot{\beta}\) from measured states and invert through the matrix \(M\) to obtain estimated forces, indicated by \(\bar{F}\)[15]. ## III Physics-Informed Learned Tire Forces In this section, we describe our physics-based, neural ordinary differential equation (NODE) model and the derived ExpTanh parameterization. In keeping with tire modelling convention, we divide the discussion into pure slip and combined slip regimes. 
In pure slip, the tire is only creating force along one axis (\(\sigma=0\) or \(\alpha=0\)); in combined slip, the tire is creating both longitudinal and lateral force (\(\sigma\neq 0\) and \(\alpha\neq 0\)). From tire fundamentals [1, 2, 3], we expect the following generalized behavior, summarized in Figure 3. **Characteristic 'S-shape' curve.** As the absolute value of the input slip increases, the tire force magnitude also increases until a peak force is attained and the tire contact patch starts to slide. Beyond this point, the force decreases, following an 'S-shape' curve. In the pure slip regime, the input slip is \(\sigma\) or \(\alpha\). In the combined slip regime, the input slip is some combination of \(\sigma\) and \(\alpha\), and the output is the total force magnitude \(F_{\rm tot}=\sqrt{F_{x}^{2}+F_{y}^{2}}\). **Combined slip regime.** For combined slip, the components of \(F_{\rm tot}\) are distributed according to some ratio of the slip angle and longitudinal slip vs. the combined slip. An example is schematically shown in Figure 3, for fixed \(\sigma\) and \(|\alpha|\) increasing from 0: The proportion of longitudinal force decreases while the lateral force increases until saturation. **Friction limits.** In both regimes, the peak force is constrained by the maximum available tire/road adhesion capability, \(\mu F_{z}\), with \(\mu\) the friction coefficient and \(F_{z}\) the normal load on the tires. \(\mu F_{z}\) is difficult to know precisely as it depends on the surface, tire orientation, and the normal load - which in turn vary with the vehicle's state due to weight transfer/suspension dynamics. Yet, this notion of a maximum force greatly eases analysis for control and safety. Throughout this paper, we assume a given set of measurements \(\mathcal{D}=\{(\alpha_{f,r},\sigma_{f,r},r,V,\beta,\omega_{f,r},\widetilde{F}_{x},\widetilde{F}_{y},\mu\widetilde{F}_{z})_{i}\}_{i=1}^{N}\), where \(\mu\widetilde{F}_{z}=\bar{\mu}mg\) is a rough estimate of the nominal load and \(\bar{\mu}\) encodes any available approximate knowledge on \(\mu\). Fig. 3: The left figure shows the inflection points \(\alpha_{-1},\alpha_{0},\alpha_{1}\) and the changes in the convexity/concavity of \(F_{y}\) in the pure slip regime. The right figure shows the inflection point \(\kappa_{1}\) of \(F_{\rm tot}\) in the combined slip regime. Fig. 2: Single-track model of a vehicle on a reference path. ### _Physics-Informed NODE for Tire Force Modeling_ We seek a generalizable model that satisfies these physical insights. First, for the characteristic 'S-shape', instead of positing an intuitive curve that satisfies the 'S-shape', e.g., the Magic Formula, we characterize the family of physically-feasible curves using notions of convexity, concavity, and inflection points. Then, we optimize for the function in this family that best fits the data via stochastic gradient descent. Specifically, at critical inflection points (Figure 3), the curve changes convexity or concavity. In the pure slip regime, \(F_{y}\) contains _three_ inflection points \(\alpha_{-1},\alpha_{0},\alpha_{1}\). We seek a family of curves such that \(F_{y}(\alpha)\) is convex for all \(\alpha\leq\alpha_{-1}\) and \(\alpha\in[\alpha_{0},\alpha_{1}]\), and \(F_{y}(\alpha)\) is concave otherwise, for some \(\alpha_{-1},\alpha_{0},\alpha_{1}\). We have the same properties for \(-F_{x}(\sigma)\), for some \(\sigma_{-1},\sigma_{0},\sigma_{1}\).
In the combined slip regime, the family of curves for \(F_{\mathrm{tot}}\) contains an inflection point \(\kappa_{1}\) such that \(\forall\kappa\leq\kappa_{1}\), \(F_{\mathrm{tot}}(\kappa)\) is concave, and convex otherwise. Convexity and concavity correspond to nonnegative and nonpositive second-order derivatives, respectively. Thus, the main idea is to learn the inflection points and the second derivative of the tire forces with respect to the corresponding slips while enforcing the desired convexity/concavity properties; the forces are then obtained by integration. Further, we enforce soft constraints on the peak force as required by the friction limits. **Pure Slip NODE Model.** The lateral force \(F_{y}\) is a solution of the second-order differential equation given by \[\dot{F}^{\theta}_{y} =G^{\theta}_{y},\ z=[\alpha,F^{\theta}_{y},G^{\theta}_{y}, \alpha^{\theta}_{-1},\alpha^{\theta}_{0},\alpha^{\theta}_{1}] \tag{2}\] \[\dot{G}^{\theta}_{y} =\begin{cases}\exp\{\mathrm{NN}^{\theta}_{1}(z,\mathrm{feat})\} \text{ if }\alpha\leq\alpha^{\theta}_{-1}\text{ or }\alpha\in[\alpha^{\theta}_{0},\alpha^{ \theta}_{1}]\\ -\exp\{\mathrm{NN}^{\theta}_{2}(z,\mathrm{feat})\}\text{ otherwise}\end{cases}\] where the derivative here is taken with respect to \(\alpha\). The set of features used for learning are \(\mathrm{feat}=[r,V,\beta,\mu F_{z}]\) for the front axle and \(\mathrm{feat}=[r,V,\mu\bar{F}_{z}]\) for the rear. We select the feature set \(\mathrm{feat}\) such that for fixed \(\mathrm{feat}\), \(\alpha\) given in (1) is not uniquely defined. \(\mathrm{NN}^{\theta}_{x}\) denotes a neural network, where \(\theta\) is the set of all parameters for the model. The inflection points are parameterized as \([\alpha^{\theta}_{-1},\alpha^{\theta}_{0},\alpha^{\theta}_{1},F^{\theta}_{0}, G^{\theta}_{0}]=\mathrm{NN}^{\theta}_{3}(\mathrm{feat})\) with \(F^{\theta}_{0}\), \(G^{\theta}_{0}\) being the initial states to use when integrating the differential equation. Note that choosing the \(\exp\) function in \(\dot{G}^{\theta}_{y}\) enforces the nonnegative and nonpositive second-order derivatives constraints. We then compute the parameters \(\theta\) by solving the following optimization problem \[\min_{\theta}\frac{1}{N}\sum_{\begin{subarray}{c}\alpha,r,V, \beta,\\ F_{y},\mu F_{z}\in\mathcal{D}\end{subarray}}\Big{(}\underbrace{\mathrm{ode} \big{(}(2),[\alpha^{\theta}_{0},\alpha],[F^{\theta}_{0},G^{\theta}_{0}]\big{)} }_{[F^{\theta}_{y},G^{\theta}_{y}]}-\bar{F}_{y}\Big{)}^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\lambda(\min\{\mu\bar{ F}_{z}-|F^{\theta}_{y}|,0\})^{2} \tag{3}\] where \(\mathrm{ode}\), an integration scheme, solves (2) from \(\alpha^{\theta}_{0}\) to the measured \(\alpha\) with the initial condition given by \([F^{\theta}_{0},G^{\theta}_{0}]\). The term \(\lambda(\min\{\mu\bar{F}_{z}-|F^{\theta}_{y}|,0\})^{2}\) enforces the friction limits knowledge as a soft constraint by penalizing values that exceed the estimated nominal load \(\mu\bar{F}_{z}\). The hyperparameter \(\lambda\) specifies the confidence in \(\mu\bar{F}_{z}\): Low values enable the peak force to be adjusted according to the data while high values constrain the peak force to be less than \(\mu\bar{F}_{z}\). 
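To make the pure slip construction concrete, the following is a minimal numpy sketch of the forward pass of (2) (our own illustration, not the code used in the experiments): the layer sizes follow those reported in Section IV, while the fixed-step Euler integrator, the random initialization, and the example feature values are illustrative assumptions.

```python
import numpy as np

def init_mlp(sizes, rng):
    """Random weights for a tanh MLP; sizes = [n_in, hidden..., n_out]."""
    return [(0.1 * rng.standard_normal((o, i)), np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.tanh(W @ x + b)
    W, b = params[-1]
    return W @ x + b

def fy_node(alpha, feat, theta, n_steps=200):
    """Pure slip NODE forward pass, cf. eq. (2): integrate the learned second
    derivative of F_y from alpha_0 to the queried alpha. NN1/NN2 output
    log|F_y''|, so the convex/concave pattern dictated by the learned
    inflection points holds by construction."""
    nn1, nn2, nn3 = theta
    a_m1, a_0, a_1, F, G = mlp(nn3, feat)   # inflection points + initial state
    a, h = a_0, (alpha - a_0) / n_steps     # signed Euler step
    for _ in range(n_steps):
        z = np.concatenate(([a, F, G, a_m1, a_0, a_1], feat))
        convex = (a <= a_m1) or (a_0 <= a <= a_1)
        G2 = np.exp(mlp(nn1, z)[0]) if convex else -np.exp(mlp(nn2, z)[0])
        F, G, a = F + h * G, G + h * G2, a + h
    return F

rng = np.random.default_rng(0)
feat = np.array([0.7, 20.0, 0.1, 7000.0])   # front axle: [r, V, beta, mu*Fz]
theta = (init_mlp([10, 16, 16, 1], rng),    # NN1: [z, feat] -> log F''  (convex part)
         init_mlp([10, 16, 16, 1], rng),    # NN2: [z, feat] -> log(-F'') (concave part)
         init_mlp([4, 4, 4, 5], rng))       # NN3: feat -> [a_-1, a_0, a_1, F_0, G_0]
print(fy_node(0.1, feat, theta))            # untrained weights, so the value is arbitrary
```

In a real training loop, an autodiff framework would differentiate the loss (3) through this integration, and the distillation step of Remark 1 would then replace the loop with a single cheap network fitted to its outputs.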
**Combined Slip NODE Model.** The total force \(F_{\mathrm{tot}}\) is a solution of the second-order differential equation given by \[\dot{F}^{\theta}_{\mathrm{tot}} =G^{\theta}_{\mathrm{tot}},\ z=[\kappa,F^{\theta}_{\mathrm{tot}},G^{\theta}_{\mathrm{tot}},\kappa^{\theta}_{1}] \tag{4}\] \[\dot{G}^{\theta}_{\mathrm{tot}} =\begin{cases}-\exp\{\mathrm{NN}^{\theta}_{1}(z,\mathrm{feat})\}\text{ if }\kappa\leq\kappa^{\theta}_{1}\\ \exp\{\mathrm{NN}^{\theta}_{2}(z,\mathrm{feat})\}\text{ otherwise}\end{cases}\] where the derivative is with respect to the combined slip \(\kappa\), \([\kappa^{\theta}_{1},F^{\theta}_{0},G^{\theta}_{0}]=\mathrm{NN}^{\theta}_{3}(\mathrm{feat})\), and the features \(\mathrm{feat}\) are again picked such that \(\alpha\) and \(\sigma\) are not uniquely defined. Then, to learn the component distribution of this total force, we define \([s^{\theta}_{1},s^{\theta}_{2}]=\mathrm{NN}^{\theta}_{4}(\alpha,\sigma)\) and estimate \(F^{\theta}_{y}\) and \(F^{\theta}_{x}\) by scaling \(F^{\theta}_{\mathrm{tot}}\) as follows \[F^{\theta}_{y}=\frac{s^{\theta}_{1}F^{\theta}_{\mathrm{tot}}}{\sqrt{(s^{\theta}_{1})^{2}+(s^{\theta}_{2})^{2}}},\ F^{\theta}_{x}=\frac{s^{\theta}_{2}F^{\theta}_{\mathrm{tot}}}{\sqrt{(s^{\theta}_{1})^{2}+(s^{\theta}_{2})^{2}}} \tag{5}\] Thus, the parameters \(\theta\) are obtained by solving the problem \[\min_{\theta}\frac{1}{N}\sum_{\begin{subarray}{c}\alpha,r,V,\beta,F_{y},\\ F_{x},\mu F_{z}\in\mathcal{D}\end{subarray}}\Big{(}\underbrace{\mathrm{ode}\big{(}(4),[\kappa^{\theta}_{0},\kappa],[F^{\theta}_{0},G^{\theta}_{0}]\big{)}}_{[F^{\theta}_{\mathrm{tot}},G^{\theta}_{\mathrm{tot}}]}-\bar{F}_{\mathrm{tot}}\Big{)}^{2}\] \[+(F^{\theta}_{y}-\bar{F}_{y})^{2}+(F^{\theta}_{x}-\bar{F}_{x})^{2}+\lambda(\min\{\mu\bar{F}_{z}-F^{\theta}_{\mathrm{tot}},0\})^{2}\] where the measured total force is \(\bar{F}_{\mathrm{tot}}=\sqrt{\bar{F}_{x}^{2}+\bar{F}_{y}^{2}}\). **Remark 1**: _In the pure slip regime, learning \(-F^{\theta}_{x}\) follows exactly the description of \(F^{\theta}_{y}\) with \(\alpha\) replaced by \(\sigma\). Despite the rich class of functions encoded by the NODE formulation, solving (2) and (4) to estimate the forces slows down training and hinders the direct application of the formulation for control. In practice, we address this issue by first learning the parameters \(\theta\), then training a new neural network to mimic the solutions of (2) and (4) via overfitting. Thus, evaluating the obtained neural network and its Jacobian becomes computationally cheap for real-time control._ ### _ExpTanh: A New Family of Tire Models_ We restrict the NODE model's set of solutions to a family of functions, namely ExpTanh, satisfying the second-order derivative condition without the need to integrate a differential equation. ExpTanh curves are given by \[\texttt{ExpTanh}^{\theta}(z)=a^{\theta}_{0}+\big{(}a^{\theta}_{1}+a^{\theta}_{2}e^{-a^{\theta}_{3}|z|}\big{)}\tanh\big{(}a^{\theta}_{4}(z-a^{\theta}_{5})\big{)}\] where \(a^{\theta}_{k}\) are constants or neural network functions such that \(a^{\theta}_{1},a^{\theta}_{2},a^{\theta}_{3}\geq 0\).
Importantly, the maximum/minimum values \(z^{\theta}_{+},z^{\theta}_{-}\) can be found analytically: \[z^{\theta}_{\pm}=a^{\theta}_{5}\pm\mathrm{atanh}\big{(}\frac{\sqrt{(a^{\theta}_{2}a^{\theta}_{3})^{2}+4(a^{\theta}_{4})^{2}}-a^{\theta}_{2}a^{\theta}_{3}}{2a^{\theta}_{4}}\big{)} \tag{6}\] **ExpTanh Pure Slip.** We model \(F_{y}\) as \(F^{\theta}_{y}(\alpha,\mathrm{feat})=\texttt{ExpTanh}^{\theta}(\alpha)\), where \((a^{\theta}_{i})^{5}_{i=0}=\mathrm{NN}^{\theta}(\mathrm{feat})\), \(\mathrm{feat}\) are the same features as in the NODE version, and \(\theta\) is the set of all parameters. In practice, we pass \(a^{\theta}_{1},a^{\theta}_{2},a^{\theta}_{3}\) through an exponential function to enforce nonnegative values. The optimum parameters \(\theta\) are given by \[\min_{\theta}\frac{1}{N}\sum_{\begin{subarray}{c}\alpha,r,V,\beta,\\ F_{y},\mu F_{z}\in\mathcal{D}\end{subarray}}\big{(}F^{\theta}_{y}(\alpha,\mathrm{feat})-\bar{F}_{y}\big{)}^{2}+\lambda\big{(}\mu\bar{F}_{z}-F^{\theta}_{y}(z^{\theta}_{+},\mathrm{feat})\big{)}^{2}\] where the second term is a similar soft penalty on exceeding the estimated maximum friction force. **ExpTanh Combined Slip.** We model the total force as \(F_{\mathrm{tot}}^{\theta}(\kappa,\mathrm{feat})=\texttt{ExpTanh}^{\theta}(\kappa)\), where \((a_{i}^{\theta})_{i=0}^{5}=\mathrm{NN}^{\theta}(\mathrm{feat})\) and \(\mathrm{feat}\) represents the same set of features as in the combined slip NODE model. The forces \(F_{y}^{\theta}\) and \(F_{x}^{\theta}\) depend on \(F_{\mathrm{tot}}^{\theta}\) as given by (5), where the functions \(s_{1}^{\theta}\) and \(s_{2}^{\theta}\) are to be learned. Specifically, we compute \(\theta\) by solving \[\min_{\theta} \frac{1}{N}\sum_{\begin{subarray}{c}\alpha,\sigma,r,V,\beta,F_{y},\\ F_{x},\mu F_{z}\in\mathcal{D}\end{subarray}}\left(F_{\mathrm{tot}}^{\theta}(\kappa,\mathrm{feat})-\bar{F}_{\mathrm{tot}}\right)^{2}+(F_{y}^{\theta}-\bar{F}_{y})^{2}\] \[+(F_{x}^{\theta}-\bar{F}_{x})^{2}+\lambda(\mu\bar{F}_{z}-F_{\mathrm{tot}}^{\theta}(\kappa_{+}^{\theta},\mathrm{feat}))^{2}\] **Remark 2**: _Firstly, by incorporating selected subsets of the measured states, feat, in addition to the slip values, the proposed models are able to capture the intricate coupling between the effective lumped tire force curves and vehicle motion. While we made one choice for feat, other selections are likely suitable, depending on the vehicle. Secondly, for fixed parameters \(\theta\), ExpTanh requires only two evaluations of the function \(\exp\), which is computationally cheap compared to the Magic Formula requiring three evaluations of \(\mathrm{arctan}\); the gradient is also easier to compute._
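A companion sketch for the ExpTanh variant (same caveats: an illustration with hypothetical weights, not the experimental code; the slip-angle grid used to locate the peak is an assumption we make in place of the analytic expression (6)):

```python
import numpy as np

def init_mlp(sizes, rng):
    return [(0.1 * rng.standard_normal((o, i)), np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.tanh(W @ x + b)
    W, b = params[-1]
    return W @ x + b

def fy_exptanh(alpha, feat, nn):
    """Pure slip ExpTanh: coefficients (a_0..a_5) = NN(feat); the exp()
    reparameterization enforces a_1, a_2, a_3 >= 0 as described above."""
    a = mlp(nn, feat)
    a0, a1, a2, a3, a4, a5 = a[0], *np.exp(a[1:4]), a[4], a[5]
    return a0 + (a1 + a2 * np.exp(-a3 * abs(alpha))) * np.tanh(a4 * (alpha - a5))

def pure_slip_loss(batch, nn, lam=0.01):
    """Mean-squared fit plus the soft friction-limit penalty; the peak force
    is approximated on an illustrative slip-angle grid."""
    grid = np.linspace(-0.6, 0.6, 121)
    total = 0.0
    for alpha, feat, fy_meas, muFz in batch:
        total += (fy_exptanh(alpha, feat, nn) - fy_meas) ** 2
        peak = max(abs(fy_exptanh(g, feat, nn)) for g in grid)
        total += lam * (muFz - peak) ** 2
    return total / len(batch)

rng = np.random.default_rng(1)
nn = init_mlp([4, 3, 3, 6], rng)            # 2 hidden layers with 3 nodes each
batch = [(0.1, np.array([0.7, 20.0, 0.1, 7000.0]), 5000.0, 7000.0)]
print(pure_slip_loss(batch, nn))
```

Since the whole model is a closed-form curve, there is no integration to differentiate through, which is what makes both training (a few minutes, per Section IV) and the value/gradient queries inside the controller cheap.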
## IV Experiments We demonstrate the data efficiency, prediction accuracy, and computational efficiency of our tire models through several experiments: Comparisons to the Magic Formula and Fiala brush on the testbed vehicle dataset, and autonomous drifting on slalom and figure-8 trajectories. The experiments in this section were performed on the Toyota Supra described in [16, 27] and heavily modified for high-performance autonomous driving. Vehicle state measurements are obtained from a commercial RTK GPS-INS unit at a rate of 250 Hz. As a rear-wheel drive vehicle, the front tires operate in the pure slip regime with \(F_{xf}=0\), and the rear tires operate in the combined slip regime. We assume standard units for all quantities when they are not specified. ### _Evaluation of the Learned Tire Models_ To compare the tire models, we used a dataset \(\mathcal{D}\) (Figure 4) of manual and autonomous driving/drifting. The dataset contains \(306887\) state measurements at 100 Hz, totaling \(\sim 1\) hour accumulated over the span of three months on the same surfaces under similar summer weather conditions. For the NODE model, \(\mathrm{NN}^{\theta}_{1}\), \(\mathrm{NN}^{\theta}_{2}\), and \(\mathrm{NN}^{\theta}_{4}\) have \(2\) hidden layers with \(16\) nodes per layer, while \(\mathrm{NN}^{\theta}_{3}\) has \(4\) nodes per layer. For the ExpTanh model, \(\mathrm{NN}^{\theta}\) and \(\mathrm{NN}^{\theta}_{4}\) have \(2\) hidden layers with \(3\) nodes per layer. All neurons used \(\mathrm{tanh}\) as the activation function. We used \(\lambda=0.01\) to express low confidence in the estimated \(\mu\bar{F}_{z}=7000\). We trained the models via the Adam optimizer [28], where the learning rate is set to decay exponentially with a rate of \(0.01\) and an initial value of \(0.001\). On a laptop with a GeForce RTX 2060, training both the pure slip \(F_{yf}\) and the combined slip \(F_{xr}\) and \(F_{yr}\) models took \(\sim 27\) minutes for NODE, and only \(\sim 4\) minutes for ExpTanh. We compare our models with the Magic Formula and Fiala model. The parameters of the Magic Formula (Chapter 4, Section \(4.3.2\) of [1]) were obtained by optimizing a mean-square-error loss over the dataset. The Fiala model parameterization was empirically tuned by usage in the existing autonomous drifting NMPC framework [15, 16]. Figure 4 summarizes our findings: Our tire models significantly improve prediction accuracy over the Magic Formula and Fiala while satisfying the tire fundamentals. The NODE model provides the best prediction accuracy while taking significantly more time to train and evaluate. In contrast, ExpTanh achieves slightly lower prediction accuracy compared to the NODE model while being easy to train and evaluate. Fig. 4: Comparison of the different tire models trained and tested on a real-world driving dataset. The first row shows the density distribution of the prediction error, and the second row shows the forces as a function of the slip values for a fixed state \(r,V,\beta=0.7,20,0.1\), where \(F_{yr}(\alpha_{r})\) is obtained for fixed \(\sigma_{r}=0.4\) and \(F_{xr}(\sigma_{r})\) is obtained for fixed \(\alpha_{r}=0.02\). In the density plot, NODE and ExpTanh are at least \(1.5\times\) denser around zero-mean error than expert-designed Fiala and Magic Formula (MF). The second row validates that the learned NODE and ExpTanh enforce the tire fundamentals. Figure 5 shows how the learned ExpTanh model efficiently captures the coupling with the vehicle states \(r,V,\beta\). This suggests that our models not only fit the tires but can also incorporate complex chassis interactions (e.g., weight transfer and suspension dynamics). In the pure slip model, \(r\) shifts the center of the curve. This could be due to significant static and dynamic camber from the test vehicle's aggressive drift-specific front suspension setup. Low speed values tend to flatten the curve, while the slip angle corresponding to the peak force decreases with increasing sideslip angle \(\beta\). For the combined slip model, the dependency on \(r,V\) is most significant in the nonlinear transitional region at low longitudinal slip. As expected from the tire fundamentals, the magnitude of \(F_{xr}\) decreases as the slip angle increases for fixed \(\sigma_{r}\).
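The state-coupling study behind Figure 5 amounts to sweeping one state at a time and re-evaluating the learned curve. Continuing directly from the previous sketch (untrained weights, so the printed numbers are meaningless; only the protocol is the point):

```python
import numpy as np

# Sweep V at fixed r and beta, as in the blue curves of Figure 5; each sweep
# only changes the feat vector fed to the coefficient network.
alphas = np.linspace(-0.5, 0.5, 101)
for V in (5.0, 12.0, 20.0):
    feat = np.array([1.8, V, 0.9, 7000.0])   # [r, V, beta, mu*Fz]
    curve = np.array([fy_exptanh(a, feat, nn) for a in alphas])
    print(f"V = {V:5.1f}   peak |Fy| = {np.abs(curve).max():8.3f}")
```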
### _Autonomous Drifting with Learned Tire Models_ To evaluate their practical closed-loop performance, we use our learned models from Figure 4 as direct drop-in replacements for a Fiala model in an existing closed-loop NMPC framework for autonomous drifting [16, 27]. The reference trajectories were pre-computed via nonlinear optimization with the benchmark Fiala model. The NMPC cost function primarily penalizes the lateral error \(e\), the deviation from the reference sideslip angle \(e_{\beta}=\beta-\beta_{ref}\), and the relative deviation \(\Delta\phi\). The sideslip \(\beta_{ref}\) enforces the drifting profile. #### IV-B1 Drifting on Slalom Trajectory For the first experiment, we compare the closed-loop performance of the benchmark Fiala, NODE, and ExpTanh models on a transient slalom trajectory (Figure 6). The integrated NODE formulation was approximated with a neural network trained on its output. The slalom trajectory has corners with reference sideslip angle of up to \(43^{\circ}\) and velocity between \(31\) mph and \(45\) mph. Figure 6 demonstrates improved tracking performance: In terms of root mean squared error, ExpTanh tracks the path (\(e\) and \(\Delta\phi\)) up to \(3.5\times\) better than the Fiala model while achieving up to \(1.5\times\) better sideslip tracking performance. The NODE model achieves slightly lower performance than ExpTanh, possibly due to the loss of accuracy from the approximation procedure. Importantly, we also note fewer steering oscillations \(\delta\) when ExpTanh and NODE are used, as compared to the baseline Fiala model. #### IV-B2 Autonomous Drifting with 3 Minutes of Data This set of experiments investigates the generalizability of the ExpTanh model. First, we perform experiments on a figure-8 trajectory with both the benchmark Fiala model and the ExpTanh model. We then changed the rear tires from Toyo Proxes Sport 275/35R18, which we used for all previous tests, to Bridgestone Potenza Sport 275/35R18. A safety driver then manually drove the car on the skidpad, with unstructured grip and drift maneuvers, for \(\sim 3\) minutes. This data was then used to train an ExpTanh model; this took \(<15s\) on a laptop with a GeForce RTX 2060. This freshly fitted model was then again compared to the benchmark model on the same figure-8 trajectory. Figure 7 summarizes our findings. For the original _Toyo_ tires, performance was significantly better with the ExpTanh model than the baseline Fiala model. Importantly, the closed-loop behavior with the ExpTanh model was similar after switching to the new _Bridgestone_ tires and retraining the network. Fig. 5: Impact of the states \(r,V,\beta\) on the learned ExpTanh model. For the front tire, the blue curve corresponds to fixed \(r=1.8,\beta=0.9\), and \(V\) ranging from \(5\) to \(20\) with lower values represented by lighter colors. The green curve follows the blue curve but with \(r=0\). The orange curve uses fixed \(r=-1.8,V=12\), and \(\beta\) varying from \(-0.9\) to \(0.9\). For the rear tire, \(V\) ranges from \(5\) to \(20\) with fixed \(r=-1.8\) for the blue curve, \(r\) ranges from \(-1.8\) to \(1.8\) with fixed \(V=12\) for the green curve, and both \(V\) and \(r\) vary for the orange curve. Fig. 6: Drifting on a slalom figure. Our approaches show better accuracy at trajectory tracking and fewer steering oscillations than the Fiala model.
the unchanged Fiala model significantly degraded, showing that there was indeed a notable difference in the behavior of the tires that the ExpTanh model successfully adapted to with sparse data. This is also reflected in the root mean squared \(e\) and \(e_{\beta}\) values: compared to the baseline, we see a \(>4\times\) improvement in lateral error, and \(>2\times\) improvement in sideslip tracking for both tires.

| RMS | Fiala (Toyo) | ExpTanh (Toyo) | Fiala (Bridgestone) | ExpTanh (Bridgestone) |
| --- | --- | --- | --- | --- |
| \(e\) (m) | \(1.77\) | \(0.40\) | \(2.02\) | \(0.27\) |
| \(e_{\beta}\) (rad) | \(0.013\) | \(0.006\) | \(0.011\) | \(0.004\) |

With ExpTanh, the controller tracks the sideslip \(\beta\) reference with less overshoot and less steering oscillation compared to the Fiala model. This difference is particularly pronounced at the end of each transition, as shown in the zoomed section on the sideslip evolution, where we expect to see more complex interactions between the vehicle states due to transient load transfer and high yaw rates. This suggests not only that the ExpTanh model is able to capture these effects but also that the controller benefits by exploiting this in closed loop. In contrast, the controller with the Fiala model tends to overshoot severely during these transitions. Another region with complex coupling is the slow transition from drifting to grip driving at the end of the experiment (e.g. \(s\in[600,660]\)). Here, the baseline _Toyo +_ Fiala model combination exhibits steering and sideslip oscillations. In contrast, with both tires, ExpTanh smoothly tracks sideslip, and has better lateral error performance. Figure 8 compares the observed optimal control problem computation time during the _Toyo_ experiments. Due to its simplicity, forward evaluation of the Fiala model is likely faster than that of ExpTanh. However, in NMPC, the fidelity of the tire model and the smoothness of its Jacobian are important. Figure 8 shows that NMPC with Fiala often needs more gradient iterations than ExpTanh to converge to a solution. This is exacerbated in the transitional regime, \(s\geq 600\), where the number of iterations triples with the Fiala model, but remains similar with ExpTanh. This suggests that NMPC with ExpTanh can be both faster and more consistent. Fig. 8: Compute time and number of gradient iterations from the controller. ## V Conclusion We propose a family of tire force models based on neural ordinary differential equations (NODE) and ExpTanh. These models combine physics-based tire modelling fundamentals with the ability to directly learn, using onboard sensor measurements, higher-order effects from the interaction between the tires, suspension, and road. Autonomous drifting experiments, which subject the model to extreme conditions, demonstrate improved tracking performance, optimized computation time, and unprecedented data efficiency: Learning with only \(3\) minutes of driving. Finally, our rapid training time (usually a few seconds) suggests that future work could explore using these models in a life-long learning setting, where the tire curves are updated online during driving. **Acknowledgements.** The authors would like to thank our colleagues at Toyota and UT Austin's Autonomous Systems Group for their generous support. We thank ONR, AFOSR, and NSF for their support on our prior works on NODE.
Fig. 7: Drifting on a figure-8 trajectory with \(3\) minutes of data. ExpTanh shows better tracking performance with both tires, especially in transitional regions. The red line indicates where the _Bridgestone +_ Fiala test was ended due to the safety driver feeling uncomfortable with the tracking error.
2303.13685
Attention-based Speech Enhancement Using Human Quality Perception Modelling
Perceptually-inspired objective functions such as the perceptual evaluation of speech quality (PESQ), signal-to-distortion ratio (SDR), and short-time objective intelligibility (STOI), have recently been used to optimize performance of deep-learning-based speech enhancement algorithms. These objective functions, however, do not always strongly correlate with a listener's assessment of perceptual quality, so optimizing with these measures often results in poorer performance in real-world scenarios. In this work, we propose an attention-based enhancement approach that uses learned speech embedding vectors from a mean-opinion score (MOS) prediction model and a speech enhancement module to jointly enhance noisy speech. The MOS prediction model estimates the perceptual MOS of speech quality, as assessed by human listeners, directly from the audio signal. The enhancement module also employs a quantized language model that enforces spectral constraints for better speech realism and performance. We train the model using real-world noisy speech data that has been captured in everyday environments and test it using unseen corpora. The results show that our proposed approach significantly outperforms other approaches that are optimized with objective measures, where the predicted quality scores strongly correlate with human judgments.
Khandokar Md. Nayem, Donald S. Williamson
2023-03-23T21:32:53Z
http://arxiv.org/abs/2303.13685v1
# Attention-based Speech Enhancement Using Human Quality Perception Modelling ###### Abstract Perceptually-inspired objective functions such as the perceptual evaluation of speech quality (PESQ), signal-to-distortion ratio (SDR), and short-time objective intelligibility (STOI), have recently been used to optimize performance of deep-learning-based speech enhancement algorithms. These objective functions, however, do not always strongly correlate with a listener's assessment of perceptual quality, so optimizing with these measures often results in poorer performance in real-world scenarios. In this work, we propose an attention-based enhancement approach that uses learned speech embedding vectors from a mean-opinion score (MOS) prediction model and a speech enhancement module to jointly enhance noisy speech. The MOS prediction model estimates the perceptual MOS of speech quality, as assessed by human listeners, directly from the audio signal. The enhancement module also employs a quantized language model that enforces spectral constraints for better speech realism and performance. We train the model using real-world noisy speech data that has been captured in everyday environments and test it using unseen corpora. The results show that our proposed approach significantly outperforms other approaches that are optimized with objective measures, where the predicted quality scores strongly correlate with human judgments. speech enhancement, speech quantization, speech assessment, attention model, deep learning, speech quality. ## I Introduction Monaural speech enhancement aims to remove unwanted noise from an audio signal that contains speech using only a single microphone channel. Enhancing the quality of noisy speech is crucial for applications such as speech recognition, speaker verification, hearing aids, and hands-free communication. Speech enhancement approaches are generally divided into two categories: mask-based or signal-based approximation. A time-frequency (T-F) mask is estimated in mask-based approaches, where the mask filters unwanted noise from noisy speech mixtures. Early mask-based approaches estimate the ideal binary mask (IBM) [1] or the ideal ratio mask (IRM) [2], while recent approaches estimate the phase-sensitive mask (PSM) [3] or complex ideal ratio mask (cIRM) [4, 5] to enhance both the magnitude and phase. The ideal quantized mask (IQM) has recently been proposed [6], where each T-F unit of the IRM is assigned to a quantization level according to its signal-to-noise ratio. It has been shown to be a reasonable representation of the IRM as assessed by human listeners; however, estimation of the IQM and its subsequent noise removal has not been thoroughly evaluated. Signal approximation can be done in either the time [7, 8] or the T-F [9] domain, where the approach directly estimates the time or T-F domain signal from a noisy speech representation. Traditionally, T-F masks produce better objective quality and intelligibility compared to direct signal approximation, mainly because masks are normalized and bounded with limited speaker variations, which makes them easier to learn. Also, masks directly modulate the mixture signal in the T-F domain. In recent years, signal approximation models have outperformed mask estimation approaches in speech intelligibility [9, 10] when applied with appropriate normalization. Regardless of the approach, recent developments in deep learning have resulted in state-of-the-art performance.
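As background for the mask definitions above, a minimal NumPy sketch of the IRM and an IQM-style quantization follows. This is the textbook formulation with a small stabilizer added, not necessarily the exact parameterization of [2] or [6]; in particular, the original IQM quantizes by per-unit SNR, whereas for brevity this sketch rounds the IRM itself.

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag, beta=0.5):
    # IRM: per T-F unit, the fraction of energy belonging to speech,
    # raised to a compression exponent beta (commonly 0.5).
    snr = clean_mag**2 / (clean_mag**2 + noise_mag**2 + 1e-12)
    return snr**beta

def quantized_mask(irm, levels=8):
    # IQM-style mask: snap each T-F unit to one of `levels` discrete
    # attenuation values (here by rounding the IRM itself).
    return np.round(irm * (levels - 1)) / (levels - 1)

# Masking-based enhancement: elementwise product with the mixture magnitude.
# est_mag = quantized_mask(ideal_ratio_mask(S_mag, N_mag)) * M_mag
```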
A wide range of deep learning architectures have been proposed, including deep neural networks (DNNs) [11, 12], autoencoders [13, 14, 15], long short-term memory (LSTM) networks [3, 16], convolutional neural networks (CNNs) [17, 18, 19, 20], and generative adversarial networks (GANs) [21, 22, 23]. Deep recurrent networks have proven to be effective, especially compared to fully-connected DNNs, as they capture temporal correlations. CNNs are good at feature extraction, and they have been combined with recurrent networks to capture the short and long-term temporal and spectral correlations. Recently, attention-based deep architectures have been proposed with the motivation that a training target only greatly influences a few regions of the input, where the focal regions change over time. [24, 25] use an attention mechanism with a U-Net [26] architecture for both time and spectral domain speech enhancement. [27, 28] successfully use self-attention to estimate a speech spectrum and T-F mask, respectively. This approach is more intuitive for speech enhancement, because humans are able to focus on the target speech with high attention while paying less attention to the noise. Deep-learning-based speech enhancement approaches traditionally use the mean square error (MSE) between the short-time spectral-amplitudes (STSA) of the estimated and clean speech signals to optimize performance. This is done due to the computational efficiency of the MSE loss function. However, the MSE tends to produce overly-smoothed speech and it is not always a strong indicator of performance [29, 30]. Thus, many studies have begun to optimize algorithms using perceptually-inspired objective measures. Multiple studies have used short-time objective intelligibility (STOI) [31] to optimize enhancement algorithms and to improve speech intelligibility [32, 33, 34]. This is done to minimize the inconsistency between the model optimization criterion and the evaluation criterion for the enhanced speech. The reported results in [33] show that jointly optimizing with STOI and MSE improves speech intelligibility according to both objective and subjective measures. In addition, word accuracy according to automatic speech recognition (ASR) is improved. Perceptual evaluation of speech quality (PESQ) [35] scores, however, have not increased when optimizing with STOI, as reported in [33]. The signal-to-distortion ratio (SDR) [36] has also been used as an objective cost function [37]. The proposed network is pre-trained with an SDR loss to achieve network stability and later optimized with a PESQ loss in a black-box manner. Their results show that optimizing with SDR leads to overall objective quality improvements. Unlike SDR and STOI, PESQ cannot directly be used as an objective function since it is non-differentiable. Reinforcement learning (RL) techniques such as deep Q-network and policy gradient have thus been employed to solve the non-differentiable problem [34, 38]. In these works, PESQ and the perceptual evaluation methods for audio source separation (PEASS) [39, 40] serve as rewards that are used to optimize model parameters. Meanwhile, a new PESQ-inspired objective function that considers symmetrical and asymmetrical disturbances of speech signals has been developed in [41]. Quality-Net [42], which is a DNN approach that estimates PESQ scores given a noisy utterance, has also been used as a maximization criterion [43] and as a model selection parameter [44] to enhance speech.
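For reference, an SDR-style objective of the kind cited above takes only a few lines in a differentiable framework. The sketch below is our illustration, not the exact code of [37]; it uses the clipped form \(\mathcal{K}_{\theta}(a)=\theta\tanh(a/\theta)\) that reappears in Eq. (10) of Sec. IV, and the negation turns the maximization target into a loss to minimize.

```python
import torch

def clipped_sdr_loss(s, s_hat, theta=20.0):
    # Negative clipped SDR: K_theta(a) = theta * tanh(a / theta) bounds
    # each utterance's contribution; minimizing this maximizes SDR.
    sdr = 10.0 * torch.log10(s.pow(2).sum(-1) /
                             ((s - s_hat).pow(2).sum(-1) + 1e-8))
    return -(theta * torch.tanh(sdr / theta)).mean()
```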
It is worth noting that optimizing with perceptually-inspired objective measures has been disputed in [45, 46], where these latter results show that an MSE objective function is sufficient. This may occur because objective measures of success do not always strongly correlate with subjective measures [39, 47, 48, 49]. Hence, it is inconclusive as to whether perceptually-inspired objective measures are generally useful at optimizing speech enhancement performance, so alternative strategies for incorporating perceptual feedback may be needed. Subjective evaluations from human listeners remain the gold-standard approach since they result in ratings from potential end-users. These evaluations often ask listeners to either give relative preference scores [50] or assign a numerical rating [51]. Multiple ratings are provided for each signal, where they are averaged to generate a mean-opinion score (MOS). Recently, deep-learning approaches have effectively estimated human-assessed MOS [52, 53, 54, 55]. These approaches are promising since they can provide strongly correlated quality scores for new signals. According to [56], a non-intrusive loss function can lead to improved noise suppression. Conversely, [57] proposes using embedding vectors from a multi-objective speech assessment model for speech enhancement, but they only use intrusive metrics such as PESQ, STOI, and a speech distortion index (SDI) to train the speech assessment model. As a result, it remains unclear whether a speech assessment model that predicts MOS can incorporate human perceptual information into a speech enhancement model. Joint learning has been successfully applied in speech enhancement to optimize between estimating speech and other training targets, such as phoneme classification [58], speaker identification [59], and speech recognition [22]. Our preliminary work has recently combined a speech quality estimation task with speech enhancement [60] and it shows promising results. In this work, we propose an attention-based speech enhancement model that uses the embedding vector from a MOS prediction model to produce speech with improved perceptual quality. The MOS estimator generates encoded embedding vectors that contain perceptually useful information that is important for human-based assessment. Our speech enhancement attention model is conditioned on that embedding vector and enhances the noisy speech using a separate encoder-decoder framework, which should help produce better quality speech according to human evaluation. In the enhancement stage, we incorporate a quantized spectral language model that captures the transition probabilities across the T-F spectrum. The LM helps ensure that the resulting speech spectra exhibit the realistic spectral and temporal fine structure that occurs within real speech signals, since it identifies the most likely spectrum in each time frame. This is accomplished by first quantizing the speech magnitude spectra into distinct classes. Our proposed signal approximation approach jointly updates both the MOS-prediction and speech-enhancement models during training, using speech enhancement and MOS prediction loss terms. The rest of the paper is organized as follows. In section II, we introduce the quality assessment model, the proposed enhancement model, and the quantized spectral language model. We describe our dataset and experimental setup in section III. In section IV, we evaluate our proposed approach and compare it with other state-of-the-art models.
We discuss the implication and significance of our work in section V. Finally, we conclude our work in section VI. ## II Proposed Approach A depiction of our approach is shown in Figure 1. The model consists of a MOS prediction model (shown left) and a speech enhancement model (shown right). Our MOS prediction model is tailored to provide estimates for subjective-MOS (as rated by humans), and going forward, we will use MOS to refer to subjective-MOS unless explicitly stated otherwise, for ease of understanding. We next will provide notation and then describe each of these sub-modules. ### _Notation_ We define a clean speech signal as \(s_{t}\) and background noise as \(n_{t}\) at time \(t\). The mixture of clean speech and noise is denoted as \(m_{t}=s_{t}+n_{t}\). We aim to extract the speech from the mixture by removing the unwanted noise. The short-time Fourier transform (STFT) converts the time-domain mixture into a T-F representation, \(M_{t,f}\), that is defined at time \(t\) and frequency \(f\). The complex-valued STFT matrix, \(\mathbf{M}\), can be written as \(\mathbf{M}=|\mathbf{M}|e^{i\mathbf{\theta}^{M}}\) with magnitude \(|\mathbf{M}|\in\Re_{+}^{T\times F}\) and phase \(\mathbf{\theta}^{M}\in\Re^{T\times F}\), where \(T\) is the length of speech in time and \(F\) is the total number of frequency channels. Enhancing the magnitude response of noisy speech results in an estimate of the clean speech magnitude response, \(|\hat{\mathbf{S}}|\), using an enhancement function \(\mathcal{F}_{\delta}\) such that \(|\hat{\mathbf{S}}|=\mathcal{F}_{\delta}(|\mathbf{M}|)\). The enhancement function is modeled with a deep neural network which is trained to maximize the conditional log-likelihood of the training dataset, \[\max\frac{1}{N}\sum^{N}\log P\Big{(}|\mathbf{S}|\,\Big{|}\,|\mathbf{M}|\Big{)}\] \[\Rightarrow \max_{\delta}\frac{1}{N}\sum^{N}\log P\Big{(}\mathcal{F}_{\delta}( |\mathbf{M}|)\,\Big{|}\,|\mathbf{M}|\Big{)}\] where \(\delta\) denotes the set of tunable parameters and \(N\) is the number of training examples. The estimated magnitude response \(|\mathbf{\hat{S}}|\) is then combined with the noisy phase, \(\mathbf{\theta}^{M}\), where the inverse STFT produces an enhanced speech signal in the time domain, \(\hat{s}_{t}\). ### _Speech quality assessment model_ A MOS prediction model proposed by [61] is adapted to estimate the MOS from noisy speech. This model has been developed with real-world captured data and it has been shown to outperform comparison approaches [42, 52, 62], according to multiple metrics. The MOS prediction model consists of an attention-based encoder-decoder structure that uses stacked pyramid bi-directional long-short term memory (pBLSTM) [63] networks in the encoder. We denote this model as Pyramid-MOS (PMOS). A pBLSTM architecture gives the advantages of processing sequences at multiple time resolutions, which effectively captures short- and long-term dependencies. Speech has spectral and temporal dependencies over short and long durations, and a multi-resolution framework is effective in learning these complex relations. A single T-F frame of the noisy-speech mixture, \(|\mathbf{M}_{t}|\), is the input to the PMOS encoder. In a pyramid structure, the lower layer outputs from \(\Upsilon\) consecutive time frames are concatenated and used as inputs to the next pBLSTM layer, along with the recurrent hidden states from the previous time step. 
The output of a pBLSTM node is an embedding vector, \(h_{t}^{l}\), defined as: \[h_{t}^{l}=pBLSTM\Big{(}h_{t-1}^{l},\big{[}h_{\Upsilon\times t-\Upsilon+1}^{l-1},h_{\Upsilon\times t}^{l-1}\big{]}\Big{)} \tag{1}\] where \(\Upsilon\) is the reduction factor (e.g., number of concatenated frames) between successive pBLSTM layers and \(l\) is the layer number. A pBLSTM reduces the time resolution from the input speech to the final latent representation \(\mathbf{H}\). Figure 2 shows the internal structure of the pBLSTM module. This compressed vector accumulates the features for measuring perceptual speech quality that reside across a range of time frames and ignores the least important features. The encoder outputs a concatenated version of the hidden states of the last pBLSTM layer as vector \(\mathbf{H}=\{\mathbf{h}_{1},\cdots,\mathbf{h}_{\tau},\cdots,\mathbf{h}_{\wp}\}\), where \(\wp\) is the total number of final embedding vectors with time index \(\tau\). The output of the PMOS encoder becomes the input to the PMOS decoder unit. This decoder is implemented as an attention layer followed by a fully-connected (FC) layer, and it outputs an estimated MOS of the input speech utterance. Attention models learn key attributes of a latent sequence, since adjacent time frames can provide important information, which is particularly crucial for our task. The attention mechanism [64] uses the pyramid encoder output at the \(i\)-th and \(k\)-th time steps to compute the attention weights, \(\alpha_{i,k}^{PMOS}\). Attention weights are used to compute a context vector, \(c_{i}^{PMOS}\), using the following equations: \[\alpha_{i,k}^{PMOS} =\frac{\exp{(\mathbf{h}_{i}^{\top}\mathbf{Q}\mathbf{h}_{k})}}{\sum_{\phi=1}^ {\wp}\exp{(\mathbf{h}_{i}^{\top}\mathbf{Q}\mathbf{h}_{\phi})}} \tag{2}\] \[c_{i}^{PMOS} =\sum_{k=1}^{\wp}\alpha_{i,k}^{PMOS}\cdot\mathbf{h}_{k} \tag{3}\] \(\mathbf{Q}^{\wp\times\wp}\) is the trainable PMOS attention weight matrix. We learn \(\mathbf{Q}\) using a feed-forward neural network that attempts to capture the alignment between the embeddings \(\mathbf{h}_{i}\) and \(\mathbf{h}_{k}\). The context vector is provided to a fully-connected layer to estimate the MOS. Note that the pyramid structure of the encoder results in a shorter sequence of latent representations than the original input sequence, and it leads to fewer encoding states for attention calculation at the decoding stage. Therefore, strictly \(\wp<T\), and in our case \(\wp=\lceil T/\Upsilon^{L}\rceil\), where \(L\) is the number of pBLSTM layers. We train the PMOS model separately with the parameters defined in [65]. After training, this model is held frozen during inference. ### _Proposed speech enhancement model_ Our proposed speech-enhancement (SE) model follows an encoder-decoder structure, and it is shown in Figure 1 (right). The SE encoder takes a single T-F frame of a noisy-speech mixture, \(|\mathbf{M}_{t}|\), as input, and multiple BLSTM layers are stacked together to create a hidden representation of the frame, \(\mathbf{g}_{t}\). In our SE encoder, we utilize BLSTM layers instead of pBLSTM layers since we aim to estimate an embedding frame for each time frame and pBLSTM layers reduce the number of output frames. Fig. 1: A depiction of our speech-enhancement model that consists of a MOS-prediction model denoted as PMOS (left side), and a speech-enhancement (SE) model (right side). An attention mechanism connects the two models. Fig. 2: Illustration of the pBLSTM structure with reduction factor \(\Upsilon=2\) and number of layers \(L=2\).
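To make the time reduction in Eq. (1) concrete, here is a minimal PyTorch sketch of one pBLSTM layer (our illustration, not the authors' implementation): \(\Upsilon\) consecutive frames are concatenated before the BLSTM, so each layer divides the time resolution by \(\Upsilon\).

```python
import torch
import torch.nn as nn

class PBLSTMLayer(nn.Module):
    """One pyramid BLSTM layer (cf. Eq. (1)): concatenate Upsilon
    consecutive frames, then run a BLSTM, so each layer divides the
    time resolution by Upsilon."""
    def __init__(self, in_dim, hidden, upsilon=2):
        super().__init__()
        self.upsilon = upsilon
        self.blstm = nn.LSTM(in_dim * upsilon, hidden,
                             batch_first=True, bidirectional=True)

    def forward(self, x):                       # x: (batch, T, in_dim)
        b, t, d = x.shape
        t = (t // self.upsilon) * self.upsilon  # drop any ragged tail frames
        x = x[:, :t].reshape(b, t // self.upsilon, d * self.upsilon)
        out, _ = self.blstm(x)                  # (batch, T/upsilon, 2*hidden)
        return out
```

Stacking \(L=3\) such layers with \(\Upsilon=2\) turns \(T\) input frames into roughly \(T/8\) encoder states, consistent with \(\wp=\lceil T/\Upsilon^{L}\rceil\) above.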
An attention mechanism is applied using the mixture encoding from the SE model, \(\mathbf{G}=\{\mathbf{g}_{1},\mathbf{g}_{2},\cdots,\mathbf{g}_{T}\}\), and the PMOS encoding, \(\mathbf{H}\), from the MOS prediction model. This allows the SE model to exploit the MOS estimator's encoding and utilize the important perceptual feature embedding that correlates with human assessment. Considering that the pBLSTM structure of the PMOS encoder condenses the final encoding vector \(\mathbf{H}\) along time, PMOS yields a smaller time resolution than the encoding from the SE encoder, so we compute a score for each embedding vector \(\mathbf{h}_{\tau}\) using an alignment weight matrix, \(\mathbf{W}^{T\times\wp}\). Then the attention weights for the SE model, \(\alpha_{t,\tau}\), are obtained using a softmax operation over the scores of all \(\mathbf{h}_{\tau}\). Now, the PMOS encoding is summarized in a context vector \(\mathbf{c}_{t}\) for each mixture frame \(\mathbf{g}_{t}\). Prior to computing \(\mathbf{c}_{t}\), \(\mathbf{h}_{\tau}\) passes through a linear layer \(\ell\), so that we learn a different representation for the SE task. The computations are below: \[\alpha_{t,\tau} =\frac{\exp{(\mathbf{g}_{t}^{\top}\mathbf{W}\mathbf{h}_{\tau})}}{\sum_{\phi=1 }^{\wp}\exp{(\mathbf{g}_{t}^{\top}\mathbf{W}\mathbf{h}_{\phi})}} \tag{4}\] \[\mathbf{c}_{t} =\sum_{\tau=1}^{\wp}\alpha_{t,\tau}\cdot\ell(\mathbf{h}_{\tau}) \tag{5}\] Then, the context vector and SE-model embedding vector are concatenated (i.e., \([\mathbf{c}_{t},\mathbf{g}_{t}]\)) and passed to the decoder module. The SE-decoder module follows the network structure from [58]. It consists of a linear layer with a \(tanh(\cdot)\) activation function, two BLSTM layers, and a linear layer with ReLU activation. It outputs the estimated enhanced speech \(|\hat{\mathbf{S}}|\). This estimated speech magnitude with the noisy phase produces the estimated clean speech, i.e. \(\hat{\mathbf{S}}=|\hat{\mathbf{S}}|e^{i\theta^{M}}\). Since we are estimating two targets, MOS and enhanced speech, simultaneously, the unified model will learn different representations for these tasks. Thus both PMOS and SE models will learn their corresponding targets with perceptual feature sharing. We freeze the PMOS model while training this SE model. ### _Joint-learning of PMOS and SE model_ We also develop an approach that allows the PMOS and SE models to be jointly trained. Our joint-learning objective function uses a weighted average of a time-domain signal-approximation loss \(\mathcal{L}_{sa}\) (from the SE model), the MSE of the magnitude spectrum \(\mathcal{L}_{mse}\) (from the SE model) and the MSE of the MOS estimation \(\mathcal{L}_{mos}\) (from the PMOS model). We compute the signal-approximation loss from the time-domain signal difference between the reference speech \(s\) and enhanced speech \(\hat{s}\). The overall loss function of our network is defined as below, using hyper-parameters \(\lambda_{1}\) and \(\lambda_{2}\) that control the impact of individual loss terms: \[\mathcal{L}=\lambda_{1}\left[\lambda_{2}\mathcal{L}_{mse}+(1-\lambda_{2}) \mathcal{L}_{sa}\right]+(1-\lambda_{1})\mathcal{L}_{mos} \tag{6}\] The model training order is as follows. First, we train the PMOS model using \(\mathcal{L}_{mos}\) (i.e., \(\lambda_{1}=0\)). Then we train the SE model using \(\lambda_{1}=1\), while running the PMOS model in inference mode (i.e., it is held fixed).
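Before discussing the rationale for this ordering, note that the combined objective in Eq. (6) is a direct weighted sum and is straightforward to implement; a minimal sketch follows, assuming (for illustration only) that \(\mathcal{L}_{sa}\) is a time-domain MSE and using placeholder \(\lambda\) values rather than the paper's tuned settings.

```python
import torch.nn.functional as F

def joint_loss(s_hat, s, mag_hat, mag, mos_hat, mos, lam1=0.8, lam2=0.5):
    # Eq. (6): weighted sum of the magnitude-spectrum MSE, a time-domain
    # signal-approximation term, and the MOS-prediction MSE.
    l_mse = F.mse_loss(mag_hat, mag)  # spectral magnitude MSE
    l_sa = F.mse_loss(s_hat, s)       # time-domain signal approximation
    l_mos = F.mse_loss(mos_hat, mos)  # MOS estimation error
    return lam1 * (lam2 * l_mse + (1 - lam2) * l_sa) + (1 - lam1) * l_mos
```

Setting \(\lambda_{1}=0\) or \(\lambda_{1}=1\) recovers the two pre-training stages described above.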
This staged training is done to ensure that the trained PMOS model effectively encodes the key features in the embedding vector that are important to perceptual speech quality. Finally, we train both models jointly (i.e., \(0<\lambda_{1}<1\)) using \(\mathcal{L}\) to further reduce any remaining differences between the true and estimated MOS in the PMOS model, and to increase the perceptual quality of the enhanced speech. ### _Quantized Spectral Model_ From written and spoken language, we can determine the sequences of words that are most likely to occur. This knowledge is captured by the language model (LM) of an automatic speech recognition system, which we can express as \[\hat{words} =\operatorname*{arg\,max}_{words\in Language}\overbrace{P(input| words)}^{acoustic\ model}\ \overbrace{P(words)}^{language\ model} \tag{7}\] The LM is useful in eliminating rare and grammatically incorrect word sequences, and it enhances the performance of ASR systems. In the case of speech enhancement, models learn spectral information within frames over time, but they often neglect the temporal correlations. Our approach, as proposed in [66], suggests incorporating a "LM" to fuse temporal correlations and overcome this limitation. Therefore, we construct a bi-gram Quantized Spectral Model (QSM), which functions in a similar way to a language model (LM), in order to produce more realistic spectra. The QSM estimates the probability of spectral magnitudes along time for each frequency channel conditioned on its previous T-F spectral magnitude. On a reference speech corpus, we apply a normalization scaling function, \(\mathcal{N}_{[0,r]}(\cdot)\), that normalizes the magnitude spectrogram and re-scales the range to \([0,r]\). Then a quantization function, \(\mathcal{Q}_{\chi}(\cdot)\), converts the range-constrained magnitude spectrogram into \(\mathcal{D}\) bins that are \(\chi\) steps apart. This produces quantized speech, i.e. \(|S|^{q}=\mathcal{Q}_{\chi}\big{(}\mathcal{N}_{[0,r]}(|S|)\big{)}\). Fig. 3 shows an example of the original clean and quantized clean magnitude spectra, where \(\chi=2\) for display purposes. Fig. 3: Quantization of a clean magnitude spectrum. Our proposed QSM has \(\mathcal{D}\) spectral levels. We construct the QSM using quantized speech magnitudes from the clean speech corpora. The QSM is less likely to suffer from the out-of-vocabulary problem when the model parameters, \(\chi\) and \(r\), are adequately defined. We compute per-frequency-channel QSMs along the time axis, where each entry, \(d\), refers to a quantization attenuation level. We then compute the transition probability between two time-consecutive T-F units for each frequency channel, \(QSM_{f}=P(d_{t+1,f}|d_{t,f})\). The probabilities are calculated by counting the level transitions, and then normalizing by the appropriate scalar. These probabilities are stored in the per-frequency-channel QSM, resulting in an \(F\times\mathcal{D}\times\mathcal{D}\) probability matrix. We re-evaluate the transition probabilities using Good-Turing smoothing [67] to overcome the zero-probability problem in N-grams.
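A compact NumPy sketch of this construction is given below; it is our illustration, with add-one smoothing standing in for the Good-Turing estimate. With \(r=100\) and \(\chi=0.0625\) as in Sec. III, this yields \(r/\chi=1600\) quantization steps (1601 integer levels counting the zero bin, depending on the endpoint convention).

```python
import numpy as np

def build_qsm(mag, r=100.0, chi=0.0625):
    """Per-frequency bigram transition matrix P(d_{t+1,f} | d_{t,f}).
    mag: (T, F) magnitude spectrogram of clean reference speech."""
    # Normalize to [0, r], then quantize into levels chi apart.
    q = np.round((mag / (mag.max() + 1e-12)) * r / chi).astype(int)
    D = int(round(r / chi)) + 1
    T, F = q.shape
    counts = np.ones((F, D, D))  # add-one smoothing in place of Good-Turing
    for f in range(F):
        for t in range(T - 1):
            counts[f, q[t, f], q[t + 1, f]] += 1
    return counts / counts.sum(axis=2, keepdims=True)
```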
Shallow fusion [68] is a simple method to incorporate an external LM into an encoder-decoder model, and it produces better results compared to others. Hence, we use shallow fusion to combine our QSM and SE model based on log-linear interpolations at inference time. This is shown in the equations below: \[P_{f}^{QSM}(|\hat{\mathcal{S}}_{\cdot,f}|)=\prod_{i=1}^{T}P(d_{i,f}|d_{i-1,f}) \tag{8}\] \[|\hat{\mathcal{S}}_{\cdot,f}|^{*}=\operatorname*{arg\,max}_{| \hat{\mathcal{S}}_{\cdot,f}|}\,\log P\big{(}|\hat{\mathcal{S}}_{\cdot,f}| \big{|}|\mathcal{M}|\big{)}+\mu\log P_{f}^{QSM}\big{(}|\hat{\mathcal{S}}_{ \cdot,f}|\big{)} \tag{9}\] Here \(P_{f}^{QSM}\) denotes the transitional probability of the QSM at frequency \(f\), \(P(|\hat{\mathcal{S}}_{\cdot,f}|\big{|}|\mathcal{M}|\big{)}\) represents the estimated magnitude output of the LSTM layers of the SE decoder, and \(\mu\) is a hyper-parameter that is tuned to maximize the performance on a development set. Note that we train our QSM in advance on a clean speech corpus and use it in inference mode during enhancement. The tunable parameter \(\mu\) of (9) is set to zero when we do not have a trained QSM. ## III Experiments ### _Dataset_ We use the COnversational Speech In Noisy Environments (COSINE) [69] and the Voices Obscured in Complex Environmental Settings (VOiCES) [70] corpora. COSINE captures multi-party conversations on open-ended topics for spontaneous and natural dialogue. These conversations are recorded in real world environments in a variety of background settings. The audio recordings are captured using 7-channel wearable microphones that consist of a close-talking mic (e.g., near the mouth, clean reference), far-field mic (on the shoulder), throat mic, and an array of four mics (spaced 3 cm apart) positioned in front of the speaker's chest. In total, 133 English speakers record 150 hours of audio with the approximated signal-to-noise ratios (SNR) ranging from -10.1 to 11.4 dB. VOiCES contains audio recorded using 12 microphones placed throughout real rooms of different size and acoustic properties. Various background noises like TV, music, or babble are simultaneously played with foreground clean speech, so the recordings contain noise and reverberation. A foreground loudspeaker moves through the rooms during recording to imitate human conversation. This foreground speech is used as the reference clean signal, and the audio captured from the microphones is used as the reverberant-noisy speech. The approximate speech-to-reverberation ratios (SRRs) of the VOiCES signals range from -4.9 to 4.3 dB. The MOS data was collected from a listening study in [61]. Listeners assessed the speech quality of audio signals using a 100-point scale. In total, 45 hours of speech and 180k subjective human ratings are summarized into the MOS quality ratings for 18000 COSINE signals and 18000 VOiCES signals. The collected responses are processed further to mitigate rating biases [71], remove responses that were unanswered or randomly scored [72], and to deal with outliers [73, 74]. Z-score pruning [75] followed by min-max normalization is performed, resulting in a MOS rating scale of 0 to 10. The scaled ratings for each audio signal are finally averaged. We additionally evaluate using the 4th CHiME Speech Separation and Recognition Challenge (CHiME-4) [76] and the 5th CHiME Speech Separation and Recognition Challenge (CHiME-5) [77] corpora. We use these to investigate the generalization capacity of our proposed approach. ### _System Setup_ All signals are downsampled to \(16\) kHz. Noisy or reverberant stimuli of each dataset are divided into training (70%), validation (10%), and testing (20%) sets, and trained separately. For MOS prediction, the input signals are segmented into 40 ms length frames with 25% overlap.
A 512-point FFT and a Hanning window are used to compute the spectrogram. Mean and variance normalization are applied to the input feature vector. The PMOS encoder consists of \(256\) nodes followed by 3 pBLSTM layers (\(L=3\)) with 128, 64 and 32 nodes in each direction, respectively. Like [61, 63], the reduction factor \(\Upsilon=2\) is adopted here. As a result, the final latent representation \(\mathbf{h}_{\tau}\) is reduced in the time resolution by a factor of \(\Upsilon^{3}=8\). The outputs of two successive BLSTM nodes are fed as input to a BLSTM node in the upper layer. In the PMOS decoder, the context vector is passed to a fully connected (FC) layer with 32 units. The model is optimized using Adam optimization [78] with convergence determined by a validation set. Early stopping with an initial learning rate of \(0.001\) is applied in the training phase. The proposed SE model uses a 640-point DFT with a Hann window of 40 ms and a 20 ms frame shift to generate the spectrogram for the encoder input. The SE encoder consists of 2 BLSTM recurrent layers. The SE decoder has a linear layer with \(tanh\) activation, followed by 2 layers of BLSTM and a linear layer with ReLU activation [79, 58]. Each BLSTM layer contains 200 nodes and each linear layer has 321 nodes. The same optimization technique with early stopping by validation set is applied. For our proposed QSM language model, we choose a quantization step of \(\chi=0.0625\), which was validated by a listening study conducted in [66]. With parameter \(r=100\), the total number of quantization levels, \(\mathcal{D}\), is \(1600\). The QSM tunable parameter, \(\mu\), is set to \(0.01\). ## IV Results ### _MOS prediction results_ We first evaluate our MOS-prediction performance in comparison with other approaches. In particular, we compare against NISQA [62], which we modified to estimate human-assessed MOS. Originally, they estimate perceptual objective listening quality assessment (POLQA) [80] scores using a CNN and BLSTM architecture. We also compare against the PMOS model proposed in [61], which is identical in structure to our PMOS model. Finally, we include our proposed SE+PMOS approach [60] (no joint training), where our PMOS model is held fixed while the SE model is trained using the embeddings from the PMOS encoder. We use four metrics to evaluate MOS-estimation performance: mean absolute error (MAE), epsilon insensitive root mean squared error (RMSE) [81], Pearson's correlation coefficient \(\gamma\) (PCC), and Spearman's rank correlation coefficient \(\rho\) (SRCC). Table I shows the results, where our proposed approach and SE+PMOS clearly outperform the other MOS prediction models according to all metrics. MAE is reduced by \(0.6\) compared to the original PMOS [61] approach. There is also a \(0.05\) reduction in RMSE. This justifies our proposed approach that combines MOS estimation and speech enhancement tasks. Note, however, that similar results are obtained for our proposed approach and the SE+PMOS approach, which suggests that joint training (e.g., fine tuning) may help speech enhancement more than MOS prediction. ### _Speech enhancement model_ For speech enhancement, we compare against a baseline approach without an attention mechanism [82]. We denote this baseline approach as SE. Five separate loss functions are applied to optimize this approach, and they are MSE, MSE plus signal approximation, MOS, signal approximation with MOS, and SDR.
To compute the MOS loss function, we utilize the SE loss function from [43] which leverages objective-MOS (oMOS) ratings learned from a speech assessment model [42]. SDR [37] loss functions have been proposed previously in the literature with different enhancement architectures. For the SDR loss function, the SE model is optimized using the following cost function: \[\mathcal{L}_{SDR}=\sum_{n=1}^{N}\mathcal{K}_{\theta}\Big{(}10\log\frac{\|s^{ n}\|^{2}}{\|s^{n}-\hat{s}^{n}\|^{2}}\Big{)} \tag{10}\] where \(\mathcal{K}_{\theta}(a)=\theta\cdot\tanh(\frac{a}{\theta})\), \(\theta\) is a clipping parameter, \(N\) is the mini-batch size, and \(s^{n}\) and \(\hat{s}^{n}\) are the n\({}^{\text{th}}\) sample of the clean and estimated speech signal in time. We use \(\theta=20\) in our training. We also compare against a generative adversarial network (GAN) approach that individually optimizes with PESQ and STOI [23]. We denote this model as MetricGAN. They estimate the IRM for a speech mixture conditioned on a GAN discriminator that outputs evaluation scores in continuous space (i.e. scores between 0 and 1) based on either normalized PESQ or STOI target metrics. We compare our model with the ensemble-based Specialized Speech Enhancement Model Selection (SSEMS) approach [44] that uses Quality-Net [42] as its objective function in a black-box manner. Quality-Net is an oMOS approach that estimates the PESQ score. The SSEMS approach uses an ensemble of enhancement models, each trained on audio at specific SNRs and speaker genders. During inference, it selects the output with the highest PESQ score. SSEMS uses an SNR threshold of \(20\) dB, while we use a threshold of \(0\) dB for balanced training and better performance. Additionally, we conduct a comparison with our initial approach that integrates MOS embeddings in speech enhancement, as presented in [60]. This model is referred to as SE+PMOS, and it does not involve joint training or the QSM language model. We evaluate SE+PMOS with varying combinations of loss functions. All models are trained using the experimental setup that is previously mentioned. We modify the comparison models using the code provided by the original authors. We assess speech enhancement performance using PESQ [35], scale-invariant SDR (SI-SDR) [83], and extended STOI (ESTOI) [84]. In the absence of actual human quality ratings, we measure the predicted MOS score of the enhanced speech, using our proposed PMOS model, since we aim to improve human-assessed speech quality. We denote this metric as MOS listener quality objective (MOS-LQO). Table II shows the average results of the different enhancement models, according to each of the performance metrics on the COSINE and VOiCES datasets. As the scores of the unprocessed mixtures show, the VOiCES corpus is more challenging than the COSINE corpus. With the baseline SE model, we experiment with 5 different combinations of loss functions. Using the MSE loss only in SE:mse, we see improvements in objective scores, except with MOS-LQO for the COSINE data. Then we apply a MOS loss \(\mathcal{L}_{mos}\) as the sole objective criterion, as proposed in [43]. Our experimental results show that this approach results in an overall improvement of \(1.4\) in MOS-LQO compared to SE:mse. Then we separately combine the signal approximation loss with the mse loss and MOS loss (e.g., mse+sa and mos+sa).
In PESQ, we gain an average of \(\geq 0.05\) and \(\geq 0.07\) compared to the models that use only the MSE loss and only the MOS loss, respectively. Furthermore, the model trained with the mos+sa loss function achieves the highest MOS-LQO scores of \(4.4\) and \(5.7\) among all five loss functions tested with the SE model on the COSINE and VOiCES datasets, respectively. This result is on average \(1.15\) MOS-LQO higher than that obtained with the mse+sa loss function. These scores suggest that \(\mathcal{L}_{mse}\) and \(\mathcal{L}_{sa}\) maximize the overall speech intelligibility, whereas \(\mathcal{L}_{mos}\) guides the model towards perceptual speech quality. Note that in all these \(\mathcal{L}_{mos}\) calculations, we use a separately trained PMOS model's output without joint learning. Lastly, we apply the SDR loss function as proposed in [37], which is used as the pre-training stage for model training. We observe an average gain of \(0.9\) in SI-SDR; however, it yields a poor score according to other metrics, especially a \(0.7\) loss in MOS-LQO compared to SE with mse and sa loss terms. SE+PMOS is separately investigated with 3 combinations of loss functions, i.e. mse, mse+sa, and mse+sa+mos. Compared with SE models, SE+PMOS with mse loss achieves \(0.9\) SI-SDR and \(1.75\) MOS-LQO improvements on average, which shows the benefit of incorporating the PMOS model. The SE+PMOS:mse+sa model improves the performance further with an average of \(0.14\) ESTOI gain over the SE:mse+sa model. The inclusion of the mos loss gives the best MOS-LQO scores of \(5.1\) and \(6.5\) over all the comparison models in noisy and reverberant conditions, respectively. MetricGAN optimizes PESQ or STOI; therefore, it outperforms other comparison models in terms of PESQ and ESTOI, although the scores for the SE+PMOS approaches are higher according to the other evaluation metrics even though these metrics are not leveraged during training. SSEMS yields the lowest scores across all metrics compared with SE+PMOS and MetricGAN approaches, though we do parameter tuning for this model. Chi++t\({}_{\text{OSML8}}\) estimates quantized speech, and the results show that it affects the traditional objective functions. This performs poorly compared with the SE+PMOS and MetricGAN approaches; however, on average, it outperforms SSEMS in all criteria, and the SE models in terms of PESQ. With the MOS-LQO criterion, it fails to produce good scores. This points out the importance of incorporating perceptual features during enhancement, which Chi++t\({}_{\text{OSML8}}\) clearly lacks. We calculate the performance of our proposed model using two combinations of loss functions. Using only mse and sa loss terms, we achieve the highest ESTOI scores for both corpora, though these results are nearly identical to the model trained with all three loss terms. Using \(\mathcal{L}\) (Eq. (6)) in our proposed model, we obtain the highest SI-SDR scores while maintaining similar PESQ and ESTOI performance as compared to the best-performing model. Specifically, our proposed model achieves the highest ESTOI score and an average PESQ score that is only \(0.03\) less than that of the best performing MetricGAN:pesq model.
Contrasting with the Chi++t\({}_{\text{OSML8}}\) model, which uses a spectral language model to estimate quantized speech, our proposed approach outperforms the quantized model according to all metrics, which proves the significance of joint learning. When comparing MOS-LQO scores, our proposed:mse+sa+mos model achieves better scores than the other models except the SE+PMOS:mse+sa+mos model, with an average decline of only \(0.05\). Thus, the inclusion of a spectral language model helps the proposed model (mse+sa+mos) estimate better-quality speech according to the overall evaluation criteria. It is important to note that our proposed approach performs best according to SI-SDR in both noisy and reverberant environments, where this metric is not used by any of the approaches during optimization. We further examine our approaches using completely unseen corpora. We test models with the CHiME-5 and CHiME-4 corpora, where the models are trained on the COSINE dataset according to the system setup mentioned in section III-B. Table III shows the performance evaluated according to PESQ, SI-SDR, ESTOI, MOS-LQO, and word error rate (WER). To calculate WER, we use the conventional ASR baseline that is provided with the CHiME-5 and CHiME-4 datasets. We investigate WER with both GMM based ASR and end-to-end ASR; however, we find that the end-to-end approach results in a higher error compared to the GMM baseline. This might happen due to larger data requirements of the end-to-end ASR system as mentioned in [77]. Therefore, we use the GMM ASR approach to compare the WER performance of the enhancement models. From the scores of mixtures, we find that CHiME-5 is more challenging than CHiME-4 with a \(118.8\%\) higher WER and a \(0.46\) lower SI-SDR. Our proposed approach yields the best MOS-LQO scores of \(4.9\) on CHiME-5 and \(6\) on CHiME-4. The proposed mse+sa model results in the lowest WER of \(78.3\) and \(18.1\) using CHiME-5 and CHiME-4, respectively. Note that the WER of the GMM baseline ASR for the CHiME-5 challenge is \(72.8\) in binaural and \(91.7\) in single array conditions. Here our approaches enhance monaural speech, a more challenging condition. Our proposed approach outperforms other comparison models in terms of SI-SDR with a \(5.29\) average improvement compared to others. According to PESQ and ESTOI metrics, MetricGAN variants give the best performance; however, our proposed model's performance is only \(0.02\) and \(0.015\) lower according to PESQ and ESTOI, respectively, than that of the best-performing MetricGAN models. Hence, our proposed approach is effective in an out-of-vocabulary scenario when trained on a comparable dataset. ### _Perceptual quality evaluation_ We finally evaluate our model using the P.835 metric [85] to measure perceptual quality. We calculate the DNSMOS score on a scale of \([1-5]\) (1 = worst, \(5\) = best) for the mixture, PMOS+SE, MetricGAN, and our proposed models using the CHiME-4 [76] and CHiME-5 [77] datasets (simulated and real-recording). Figure 4 shows the scores. With CHiME-4, the original mixture scores range from \(1.45\) to \(2.5\) with a median of \(1.74\). Our proposed model achieves a median MOS of \(2.46\), which is higher than the others. For CHiME-5, the original mixture scores range from \(1.0\) to \(4.18\). Our proposed model outperforms the others with a median of \(2.25\). Our proposed model and PMOS+SE have smaller standard deviations compared to MetricGAN.
Fig. 4: MOS ratings of the speech enhancement models on CHiME-4 and CHiME-5 datasets using DNSMOS P.835. Overall, our proposed model improves noisy speech in both the acoustic and perceptual aspects. ## V Discussion Our proposed model outperforms all comparison models on SI-SDR metrics for both seen and unseen datasets, without optimization of any of the models (Table II, III). This means that our approach improves speech quality by minimizing the distortion ratio when separated from the noise component. Additionally, our models yield the best MOS-LQO ratings on real-world captured audios (CHiME datasets, Table III). These results are consistent with the findings of [57, 60] that incorporating embeddings from a speech assessment model improves SE performance, and the results of [56] that using MOS loss during model optimization leads to higher MOS-LQO scores. Our proposed approach achieves PESQ and ESTOI scores that are only slightly lower than those of the best-performing model, with a difference of only \(0.03\) and \(0.01\), respectively. This indicates that speech quality and intelligibility metrics are closely related to the subjective speech quality metric (MOS-LQO), and that these metrics can be improved without explicit optimization. Furthermore, our proposed model achieves the best average DNSMOS scores with low standard deviations on CHiME datasets (Figure 4), indicating that it is effective in a wide range of real-world noise levels. This is a desirable quality for an SE model: it should be effective not only at high SNRs and in limited noisy environments, but also across large SNR ranges and in real-world conditions such as those offered by the CHiME dataset. When comparing our proposed model that uses the mse+sa+mos loss to the PMOS+SE model (as shown in Table III), we can observe significant improvements in all performance metrics. As both models use the same loss function, the improvements are attributed to the incorporation of the LM and the joint learning method. Moreover, we found that these two models exhibit similar performance on the MOS prediction (Table I), indicating that the benefits of joint learning mostly impact the enhancement part of the model. An intriguing finding is that our proposed model shows a decline in WER% when MOS loss is incorporated, especially for larger real-world recordings such as CHiME-5, with degradation up to 1.1. Although our study is not primarily concerned with ASR performance, this suggests a potential trade-off between ASR accuracy and subjective speech quality scores. Further investigation is needed to comprehend this relationship. Our proposed method demonstrates that training a speech enhancement (SE) model and a MOS-based speech assessment model jointly can lead to better speech quality measured by objective metrics such as perceptual quality, intelligibility, and MOS ratings. However, we acknowledge that our study's use of subjective MOS (sMOS) estimation instead of actual human listeners may introduce discrepancies between MOS-LQO and human-rated MOS, which could impact our findings. To address this limitation, we plan to conduct sMOS evaluation by human listeners in future work. Although we used the same MOS prediction model for all comparison models, we believe that incorporating human-rated sMOS evaluations will provide more robust insights into our proposed method's effectiveness. For computing loss terms, we opt for the MSE loss function along with a bi-gram language model that considers only along-time transitions.
Our aim is to keep the model simple and focus on the effectiveness of our approach. However, we acknowledge that using different loss functions for different loss components and employing a more complex language model that considers both temporal and spectral transition levels can be beneficial. We plan to explore these possibilities in our future work. ## VI Conclusion Our proposed speech enhancement model utilizes a speech quality MOS assessment metric in a joint learning manner and incorporates a quantized ASR-style language model for better performance. The results show that it outperforms other models in both noisy and reverberant environments, as well as in unseen real-world noisy conditions. It shows that perceptually-relevant embeddings are useful for speech enhancement. However, we evaluate our model's subjective score using a MOS-estimation model. Additionally, our assessment model provides utterance-level feedback, which may be sub-optimal since the model's embeddings are calculated at the frame level. In our proposed LM, we consider only bi-gram spectral models, which are generated by considering only along-time transitions. In the future, we will explore higher-order N-gram models that consider both temporal and spectral transitions to enhance both magnitude and phase responses. We will address per-frame or window-level perceptual score generation in future work.
2308.07582
Characterization of fast magnetosonic waves driven by compact toroid plasma injection along a magnetic field
Magnetosonic waves are low-frequency, linearly polarized magnetohydrodynamic (MHD) waves commonly found in space, responsible for many well-known features, such as heating of the solar corona. In this work, we report observations of interesting wave signatures driven by injecting compact toroid (CT) plasmas into a static Helmholtz magnetic field at the Big Red Ball (BRB) Facility at Wisconsin Plasma Physics Laboratory (WiPPL). By comparing the experimental results with the MHD theory, we identify that these waves are the fast magnetosonic modes propagating perpendicular to the background magnetic field. Additionally, we further investigate how the background field, preapplied poloidal magnetic flux in the CT injector, and the coarse grid placed in the chamber affect the characteristics of the waves. Since this experiment is part of an ongoing effort of creating a target plasma with tangled magnetic fields as a novel fusion fuel for magneto-inertial fusion (MIF), our current results could shed light on future possible paths of forming such a target for MIF.
F. Chu, S. J. Langendorf, J. Olson, T. Byvank, D. A. Endrizzi, A. L. LaJoie, K. J. McCollam, C. B. Forest
2023-08-15T06:00:21Z
http://arxiv.org/abs/2308.07582v2
Characterization of fast magnetosonic waves driven by interaction between magnetic fields and compact toroids ###### Abstract Magnetosonic waves are low-frequency, linearly polarized magnetohydrodynamic (MHD) waves that can be excited in any electrically conducting fluid permeated by a magnetic field. They are commonly found in space, responsible for many well-known features, such as heating of the solar corona and acceleration of energetic electrons in Earth's inner magnetosphere. In this work, we present observations of magnetosonic waves driven by injecting compact toroid (CT) plasmas into a static Helmholtz magnetic field at the Big Red Ball (BRB) Facility at Wisconsin Plasma Physics Laboratory (WiPPL). We first identify the wave modes by comparing the experimental results with the MHD theory, and then study how factors such as the background magnetic field affect the wave properties. Since this experiment is part of an ongoing effort of forming a target plasma with tangled magnetic fields as a novel fusion fuel for magneto-inertial fusion (MIF, aka magnetized target fusion), we also discuss a future possible path of forming the target plasma based on our current results. ## I Introduction Magnetosonic waves are a type of linearly polarized magnetohydrodynamic (MHD) waves commonly observed in solar coronal loops [1; 2], within the magnetosheath and ionosphere at Mars [3], near the Earth's magnetic equator [4; 5], and inside the magnetosphere of a neutron star [6]. Two magnetosonic modes can be derived from the MHD theory - the fast and slow waves with both longitudinal (i.e., sound wave) and transverse (i.e., electromagnetic wave) components [7; 8]. The magnetosonic waves were first observed in the Earth's magnetosphere as equatorial noise in the 1960s [9; 10], and have gathered significant attention in recent years as they play important roles in a variety of astrophysical phenomena. For example, the fast magnetosonic waves are capable of heating radiation belt electrons through Landau damping [11] and accelerating protons through high-order harmonic resonances [12] in Earth's inner magnetosphere. Moreover, these waves are likely to cause heating in the solar corona [13]. Compared to laboratory experiments, the _in situ_ spacecraft measurements at only a single or a few points have generally suffered from limitations of insufficient spatial resolution and uncontrolled conditions [14]. The laboratory simulation of space plasmas, on the other hand, enables detailed investigations of underlying plasma physics processes [15; 16; 17] with many-point measurements, reproducible environments, and a suite of state-of-the-art diagnostics [18; 19; 20; 21; 22]. The Big Red Ball (BRB) Facility at the Wisconsin Plasma Physics Laboratory (WiPPL) [23; 24] is designed to study a range of fundamental astrophysical questions as well as geometries that mimic astrophysical systems, providing a unique capability to investigate the fast magnetosonic waves in a laboratory. In this paper, we report an experiment of driving the fast magnetosonic waves through interaction between magnetic fields and compact toroid (CT) plasmas in BRB. A CT contains both poloidal and toroidal magnetic fields and does not require magnet coils linking the hole in the plasma torus [25]. The CTs can be categorized into two major types - spheromaks [26] and field reversed configurations (FRCs) [27]. 
In the experiment, we first identify the wave modes resulting from the field-CT interaction by comparing the data with the MHD theory, and then study how the background field, preapplied poloidal magnetic flux in the CT injector, and the coarse conducting grid placed in the chamber affect the characteristics of these waves. The experiment presented here is part of an ongoing exploration of forming a target plasma with tangled magnetic fields in a laboratory setting as a novel fusion fuel for magneto-inertial fusion (MIF), also known as magnetized target fusion (MTF) [28; 29; 30]. In this approach, the target plasma is quasi-adiabatically compressed and heated by a heavy imploding shell or "liner", with the goal of briefly attaining thermonuclear burn conditions. As the electron heat conduction goes predominantly along the field lines, the randomized magnetic fields in the target can provide very long connection lengths between the core and the liner surface, and therefore effectively reduce heat loss from the fuel plasma to the colder liner [31]. In this paper, we will explore potential methods of forming such a target with tangled magnetic fields for MIF by colliding CTs with a coarse conducting grid and discuss our path forward based on our current results. The paper is organized as follows: Sec. II presents a background on fast magnetosonic waves, Sec. III gives a description of the experimental setup and the coaxial gun used to create CT plasmas, Sec. IV presents the experimental results, Sec. V discusses wave driving mechanisms, target plasma formation for MIF, and a possible path forward, and Sec. VI provides a summary. ## II Background on fast magnetosonic waves To understand the basic properties of the magnetosonic waves, in this section, we consider a simple case of small-amplitude waves in a homogeneous ideal (infinitely conducting) MHD fluid. The closed set of ideal MHD equations, including the continuity equation, momentum equation, induction equation, and adiabatic equation of state, are [32; 8] \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0, \tag{1}\] \[\rho\frac{\partial\mathbf{u}}{\partial t}+\rho(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla\left(P+\frac{B^{2}}{2\mu_{0}}\right)+\frac{(\mathbf{B}\cdot\nabla)\mathbf{B}}{\mu_{0}}, \tag{2}\] \[\frac{\partial\mathbf{B}}{\partial t}=\nabla\times(\mathbf{u}\times\mathbf{B}), \tag{3}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{P}{\rho^{\gamma}}\right)=0, \tag{4}\] where \(\rho\) is the mass density, \(\mathbf{u}\) is the fluid velocity, \(\mathbf{B}\) is the magnetic field, \(P\) is the pressure (isotropic pressure approximation), \(\mu_{0}\) is the vacuum permeability, and \(\gamma\) is the adiabatic index (ratio of specific heats). To obtain wave equations, the above system of equations is linearized by assuming that \(\rho\), \(\mathbf{u}\), \(\mathbf{B}\), and \(P\) are the sum of a spatially uniform time-independent equilibrium quantity (denoted with the subscript "0") plus a small first-order perturbation (denoted with the subscript "1"), i.e., \(\rho=\rho_{0}+\rho_{1}\), \(\mathbf{u}=\mathbf{u}_{1}\), \(\mathbf{B}=\mathbf{B}_{0}+\mathbf{B}_{1}\), and \(P=P_{0}+P_{1}\). Without loss of generality, as shown in Fig. 1(a), we can assume that the equilibrium magnetic field \(\mathbf{B}_{0}=(0,0,B_{0})\) is directed along the \(z\)-axis and the wave vector \(\mathbf{k}=(k\sin\theta,0,k\cos\theta)\) lies in the \(x\)-\(z\) plane, where \(\theta\) is the angle between \(\mathbf{B}_{0}\) and \(\mathbf{k}\).
By Fourier-analyzing Eqs. (1)-(4) and making the operator substitutions (\(\nabla\rightarrow\mathrm{i}\mathbf{k}\) and \(\partial/\partial t\rightarrow-\mathrm{i}\omega\)), we can obtain the eigenvalue equation \[\begin{pmatrix}v_{p}^{2}-V_{S}^{2}\sin^{2}\theta-V_{A}^{2}&0&-V_{S}^{2}\sin \theta\cos\theta\\ 0&v_{p}^{2}-V_{A}^{2}\cos^{2}\theta&0\\ -V_{S}^{2}\sin\theta\cos\theta&0&v_{p}^{2}-V_{S}^{2}\cos^{2}\theta\end{pmatrix}\] \[\cdot\begin{pmatrix}u_{x1}\\ u_{y1}\\ u_{z1}\end{pmatrix}=0, \tag{5}\] where \(v_{p}=\omega/k\) is the phase velocity, \(V_{S}=\sqrt{\gamma P_{0}/\rho_{0}}\) is the sound speed, and \(V_{A}=\sqrt{B_{0}^{2}/\mu_{0}\rho_{0}}\) is the Alfven speed. The above eigenvalue equation has non-trivial solutions for \(\mathbf{u}_{1}\) if and only if the determinant of the matrix on the left side of Eq. (5) is zero, which yields the dispersion relation \[D(\omega,k)= \big{(}v_{p}^{2}-V_{A}^{2}\cos^{2}\theta\big{)}\Big{(}v_{p}^{4}-v _{p}^{2}\big{(}V_{A}^{2}+V_{S}^{2}\big{)}\] \[+V_{A}^{2}V_{S}^{2}\cos^{2}\theta\Big{)}=0. \tag{6}\] It can be shown that the dispersion relation has three independent roots, corresponding to the three different wave modes that can propagate through an MHD plasma \[v_{p}^{2}=V_{A}^{2}\cos^{2}\theta, \tag{7}\] \[v_{p\pm}^{2}=\frac{1}{2}\bigg{(}V_{A}^{2}+V_{S}^{2}\pm\sqrt{ \big{(}V_{A}^{2}-V_{S}^{2}\big{)}^{2}+4V_{A}^{2}V_{S}^{2}\sin^{2}\theta}\bigg{)}. \tag{8}\] The root in Eq. (7) is the shear Alfven mode, while the two roots \(v_{p\pm}\) in Eq. (8) are the fast and slow magnetosonic modes, respectively. Note that \(v_{p+}\geq v_{p-}\). The fast magnetosonic mode in Eq. (8) can also be considered as the compressional Alfven mode (\(V_{S}\to 0\)) modified by a non-zero plasma pressure [33]. In a special case where \(\theta=\pi/2\), the only non-trivial solution of the dispersion relation in Eq. (6) is the fast magnetosonic mode with the wave vector \(\mathbf{k}=(k_{x},0,0)\) and phase velocity \(v_{p}^{2}=V_{A}^{2}+V_{S}^{2}\). The eigenvectors for these waves are \(\mathbf{u}_{1}=(u_{x1},0,0)\), \(\mathbf{B}_{1}=(0,0,B_{z1})\), and \(\mathbf{E}_{1}=(0,E_{y1},0)\), as shown in Fig. 1(b). Since \(\rho_{1}=(\rho_{0}/\omega)\mathbf{k}\cdot\mathbf{u}_{1}=\rho_{0}(u_{x1}/v_{p})\) [8], these eigenvectors suggest that this wave mode is associated with non-zero perturbations in the plasma density and pressure (compressible), sharing features of both a sound wave and an electromagnetic wave. In addition, as seen in Fig. 1(b), these waves involve magnetic field perturbation parallel with the background magnetic field \(\mathbf{B}_{0}\) (\(\mathbf{B}_{1}\parallel\mathbf{B}_{0}\)) and plasma motion perpendicular to \(\mathbf{B}_{0}\) (\(\mathbf{u}_{1}\perp\mathbf{B}_{0}\)). We will show later in Sec. IV that the waves observed in the experiment resulting from interaction between the magnetic fields and CT plasmas can be identified as the fast magnetosonic mode with \(\theta=\pi/2\), as described in Fig. 1(b). Figure 1: (a) Setup of the coordinate system for MHD wave analysis. The background magnetic field \(\mathbf{B}_{0}\) is along the \(z\)-axis and the wave vector \(\mathbf{k}\) lies in the \(x\)-\(z\) plane. The angle subtended between \(\mathbf{k}\) and \(\mathbf{B}_{0}\) is denoted by \(\theta\). (b) The eigenvectors for the fast magnetosonic mode in a special case where the direction of the wave propagation is perpendicular to the background magnetic field (\(\theta=\pi/2\)). Perturbations in the magnetic field, electric field, and plasma motion are denoted by \(\mathbf{B}_{1}\), \(\mathbf{E}_{1}\), and \(\mathbf{u}_{1}\), respectively.
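Since the relative ordering of the three branches in Eqs. (7) and (8) as \(\theta\) varies is easy to misread, a short numerical sketch may help. The values of \(V_{A}\) and \(V_{S}\) below are illustrative assumptions, not measured quantities:

```python
import numpy as np

def phase_velocities(V_A, V_S, theta):
    """Shear Alfven, fast, and slow phase speeds from Eqs. (7)-(8)."""
    shear = V_A * np.abs(np.cos(theta))
    s = V_A**2 + V_S**2
    d = np.sqrt((V_A**2 - V_S**2)**2 + 4 * V_A**2 * V_S**2 * np.sin(theta)**2)
    return shear, np.sqrt(0.5 * (s + d)), np.sqrt(0.5 * (s - d))

# Assumed example speeds (m/s); at theta = pi/2 only the fast branch
# survives, with v_p^2 = V_A^2 + V_S^2, as stated in the text.
print(phase_velocities(4.1e4, 8.5e4, np.pi / 2))
```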
## III Experimental setup The experiment of forming the fast magnetosonic waves is performed in BRB. The experimental setup is depicted in Fig. 2. The BRB is a 3-meter-diameter spherical chamber with multicusp magnetic plasma confinement. The cusp field is generated by an array of permanent magnets with alternating polarity and highly concentrated at the chamber wall, leaving the plasma in the core unmagnetized. The BRB is also equipped with an external 3-meter-diameter Helmholtz coil pair that provides a near-uniform axial magnetic field of up to 275 G throughout the plasma volume, pointing from the southern magnetic pole to the north (positive direction). This background field helps prevent the CT plasmas from expanding too much as they travel inside the chamber [34], which can be seen later in Fig. 4. In the rest of the paper, the coordinates in the chamber are described in a cylindrical coordinate system \(R\)-\(\phi\)-\(Z\) as illustrated in Fig. 2(b), with the origin located in the center of the chamber. The hydrogen CT plasmas used to interact with the Helmholtz field are created by a coaxial plasma gun [35] mounted on the southern pole of the chamber. The diagram of the CT injector is illustrated in Fig. 3. To produce CTs, a bias magnetic flux \(\Psi_{\rm gun}\) linking the two electrodes is required for magnetizing the plasma. The magnetic field lines can then reconnect and detach from the electrodes as the plasma propagates out of the injector. In the experiment, an iron core surrounded by a copper winding of 4.1 cm diameter is inserted into the inner electrode and a poloidal bias flux of up to 0.4 mWb [34] can therefore be established by supplying a DC current \(I_{\rm bias}\) through the winding, as shown in Fig. 3. In addition, the polarity of the bias magnetic flux \(\Psi_{\rm gun}\) can be altered by switching the direction of the current in the winding. At the time when the CT injector is fired, a hydrogen gas puff is first preionized between the two coaxial electrodes and then accelerated axially (in parallel with the Helmholtz field) into the vacuum chamber at a velocity of \(v_{\rm CT}\approx 70\) km/s. Figure 3: Schematic of the CT injector used in the experiment. When the CT injector is fired, a hydrogen gas puff is first preionized between the anode and cathode, and then accelerated into the vacuum chamber along the \(Z\) direction. The iron core and solenoid inserted into the inner electrode generate a poloidal bias flux \(\Psi_{\rm gun}\), essential for producing CT plasmas. The magnitude and polarity of \(\Psi_{\rm gun}\) are controlled by the DC voltage \(V_{\rm bias}\) across the solenoid (or \(I_{\rm bias}\) in the solenoid). Figure 2: Image of the BRB (a) and poloidal cross-section of the experimental setup (b). The hydrogen CT plasma is injected into the vacuum chamber by a coaxial plasma gun mounted on the southern magnetic pole of the vessel. The chamber coordinates in this paper are described in a cylindrical coordinate system \(R\)-\(\phi\)-\(Z\), with the origin located in the center of the chamber. A removable coarse conducting grid used to interact with the CT plasmas is placed at \(Z=-34\) cm. The topology of the magnetic field is measured using two linear arrays of three-axis magnetic probes (\(\dot{B}\) probes). The magnetic probe array (MP1) in front of the grid is located between \(Z=-81\) and \(-44\) cm and the one (MP2) behind the grid covers positions from \(-24\) to \(53\) cm in \(Z\). The probe arrays can also be rotated \(90^{\circ}\) around their primary shafts to measure the magnetic field topology in the \(R\) direction, as illustrated by the orange arrows.
The CT plasmas are estimated to have a radius \(r\approx 4\) cm, length \(l\approx 10\) cm, electron and ion temperature \(T_{e}\approx T_{i}\approx 30\) eV, density \(n_{e}\approx n_{i}\approx 5\times 10^{15}\) cm\({}^{-3}\), and poloidal magnetic field \(B\approx 2000\) G near the gun nozzle [34]. A removable coarse conducting grid of mesh size 10 \(\times\) 10 cm, as shown in Fig. 2, is placed at \(Z=-34\) cm to study how the grid structure perturbs the CT plasmas, which in turn affects the magnetosonic wave formation. The topology of the magnetic field in the pregrid and postgrid region is measured using two linear arrays of three-axis magnetic probes (\(\dot{B}\) probes), where the signal from the probe pickup coils is proportional to \(\partial B/\partial t\). The probe array in front of the grid consists of 8 equally spaced probes located between \(Z=-81\) and \(-44\) cm, while the array behind the grid contains 15 equally spaced probes covering positions from \(-24\) to 53 cm in \(Z\). A 2D structure of the magnetic field can therefore be obtained by scanning the two probe arrays in the radial direction. In addition, the probe arrays can be rotated \(90^{\circ}\) around their primary shafts to measure the magnetic field topology in the \(\phi\) direction, as shown in Fig. 2(b). Electron densities and temperatures in the experiment are measured using a Langmuir probe with 16 closely spaced, individually biased tips, allowing the current-voltage (\(I\)-\(V\)) traces to be obtained with a 2 MHz resolution at the location of the probe [34; 36]. ## IV Experimental results We observe in the experiment that the CT plasmas have a radius \(r\approx 30\) cm, length \(l\approx 80\) cm, electron temperature \(T_{e}\approx 15\) eV, ion temperature \(T_{i}\approx 30\) eV, and density \(n_{e}\approx n_{i}\approx 1\times 10^{13}\) cm\({}^{-3}\) when they just reach the center of the chamber. The ion temperature is estimated from a previous measurement using ion Doppler spectroscopy [37; 38], as described in Ref. [34]. Based on the above plasma parameters, assuming a typical Helmholtz field \(B_{0}=60\) G, we can estimate some important characteristic scales in the CT plasmas: ion cyclotron frequency \(f_{ci}\approx 92\) kHz, electron cyclotron frequency \(f_{ce}\approx 168\) MHz, plasma frequency \(f_{p}\approx 28\) GHz, ion gyroradius \(\rho_{ci}\approx 9\) cm, electron gyroradius \(\rho_{ce}\approx 0.15\) cm, and Debye length \(\lambda_{D}\ll 0.01\) cm.
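The characteristic scales just quoted follow from textbook formulas; the sketch below, with thermal speeds taken as \(\sqrt{k_{B}T/m}\) (our assumption), reproduces the quoted values and is useful for the mode identification that follows:

```python
import numpy as np

# Hedged cross-check of the characteristic scales quoted above for a
# hydrogen CT with B0 = 60 G, n ~ 1e13 cm^-3, T_e ~ 15 eV, T_i ~ 30 eV.
e, m_e, m_i, eps0 = 1.602e-19, 9.109e-31, 1.673e-27, 8.854e-12
B0 = 60e-4                                  # 60 G in tesla
n = 1e13 * 1e6                              # cm^-3 -> m^-3
T_e, T_i = 15 * e, 30 * e                   # eV -> joule

f_ci = e * B0 / (2 * np.pi * m_i)                        # ~92 kHz
f_ce = e * B0 / (2 * np.pi * m_e)                        # ~168 MHz
f_p = np.sqrt(n * e**2 / (eps0 * m_e)) / (2 * np.pi)     # ~28 GHz
rho_ci = np.sqrt(T_i / m_i) / (2 * np.pi * f_ci)         # ~9 cm
rho_ce = np.sqrt(T_e / m_e) / (2 * np.pi * f_ce)         # ~0.15 cm
lam_D = np.sqrt(eps0 * T_e / (n * e**2))                 # << 0.01 cm

print(f"f_ci = {f_ci:.3g} Hz, f_ce = {f_ce:.3g} Hz, f_p = {f_p:.3g} Hz")
print(f"rho_ci = {100 * rho_ci:.2g} cm, rho_ce = {100 * rho_ce:.2g} cm, "
      f"lambda_D = {100 * lam_D:.1g} cm")
```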
### Identification of the Wave Modes Observed in the Experiment We measure the topology of the magnetic field \(\partial B/\partial t\) along the \(Z\)-axis near \(R=0\) cm in the postgrid region (with the grid in the chamber) after the CT plasmas are injected into the chamber and the results are presented in Fig. 4. The time is referenced to the moment when the shot is executed in the control system. Figures 4(a)-4(c) show the shot fired when the background Helmholtz field \(B_{0}=60\) G, while Figs. 4(d)-4(f) show the case \(B_{0}=0\) G. In both cases, the bias current generating the poloidal bias flux is kept at \(I_{\rm bias}=3\) A. Two prominent features can be immediately noticed in Figs. 4(a)-4(c). The first is the CT plasma propagating along the probe array from \(\sim\)18.05 to \(\sim\)18.07 ms. After the CT travels through and expands in the chamber, a wave pattern with frequency \(f_{\rm wave}=67.8\) kHz \(\approx 0.74f_{ci}\) is detected by the probe array along the \(Z\)-axis between \(\sim\)18.07 and \(\sim\)18.13 ms, as shown in the red box in Fig. 4(c). When this shot is repeated with \(B_{0}=0\) G, however, no wave patterns are observed in the data except the initial CT plasma passing by the probe array, suggesting that the background magnetic field \(B_{0}\) is required to support these waves. The signal in this shot is much weaker compared to \(B_{0}=60\) G due to plasma expansion in the absence of the confinement from the Helmholtz field. Figure 4: Topology of the magnetic field \(\partial B_{R}/\partial t\), \(\partial B_{\phi}/\partial t\), \(\partial B_{Z}/\partial t\) along the \(Z\)-axis near \(R=0\) cm detected by probe array MP2. The data with the background Helmholtz field \(B_{0}=60\) G are shown in (a)–(c) and \(B_{0}=0\) G in (d)–(f). Both measurements are taken with the grid in the chamber. The time is referenced to the moment when the shot is executed in the control system. To confirm whether the waves are generated by the induced current in the grid, we repeat the shot without the grid and the data are shown in (g)–(i), where the background field \(B_{0}=50\) G. The bias current in all cases is \(I_{\rm bias}=3\) A. Waves with frequency 67.8 and 48.2 kHz are observed in (c) and (i), respectively, as shown in the red box. No waves are observed in (f). We also fire additional shots with similar experimental settings without the grid to confirm whether the waves are generated by the induced current in the grid and the results are presented in Figs. 4(g)-4(i). The results show that the wave pattern can still be observed without the grid, which suggests that these waves are indeed driven by the interaction between the CT and the background magnetic field. Since the wave frequency \(f_{\rm wave}\) is smaller than \(f_{ci},f_{ce},f_{p}\) and the CT plasma size is greater than \(\rho_{ci},\rho_{ce},\lambda_{D}\), we can conclude that the wave modes observed in Fig. 4(c) are MHD waves. As we can see later in Sec. IV.2, the wavelength also satisfies the condition \(\lambda_{\rm wave}>\rho_{ci},\rho_{ce},\lambda_{D}\). In addition, the wave magnetic field \(\mathbf{B}_{1}\) is mostly parallel with \(\mathbf{B}_{0}\) along the \(Z\)-axis and the wave vector \(\mathbf{k}\) is always perpendicular to \(\mathbf{B}_{1}\) [33; 8], suggesting that \(\mathbf{k}\perp\mathbf{B}_{0}\). Thus, according to the theory presented in Sec. II, we can identify the waves observed in our experiment as the fast magnetosonic mode propagating perpendicular to the background magnetic field \(\mathbf{B}_{0}\). ### Effects of the Background Magnetic Field \(\mathbf{B}_{0}\) Since the wave vector \(\mathbf{k}\) points in the radial direction, the probe arrays need to be rotated \(90^{\circ}\) from their original positions shown in Fig. 2(b) to spatially resolve wave propagation along the \(R\)-axis. The results of such measurements obtained by probe array MP2 are shown in Fig. 5, where panels 5(a)-5(c) are for the case \(B_{0}=20\) G, 5(d)-5(f) for \(B_{0}=40\) G, and 5(g)-5(i) for \(B_{0}=60\) G. The bias current \(I_{\rm bias}\) in these measurements is kept at 3 A.
More specifically in each case, the top panel presents the raw \(\partial B_{Z}/\partial t\) signal at \(R=0\) cm to illustrate the waveform, while the middle panel shows \(B_{Z}\) (\(\partial B_{Z}/\partial t\) integrated over time) at multiple probe locations, where waves can be seen propagating along the \(R\)-axis. The bottom panel shows the time when the peak of the waveform arrives at each probe location, used to calculate the wave phase velocity. The measured wave properties (\(f_{\rm wave}\), \(\lambda_{\rm wave}\), and \(v_{p\rm-exp}\)) and the comparison to the theoretical phase velocity \(v_{p\rm-theory}\) at various \(B_{0}\) are summarized in Table 1. It can be seen in Table 1 that the measured phase velocities \(v_{p\rm-exp}\) are in general agreement with the theoretical predictions \(v_{p\rm-theory}\), given the error in evaluating the electron temperature and density (\(T_{e}\) and \(n_{e}\)) in the experiment and the simplified assumptions used in the derivation of \(v_{p\rm-theory}\), such as the homogeneous plasma condition. These results again support that the waves observed in the experiment are the fast magnetosonic modes. Another noticeable feature is that the wave frequency \(f_{\rm wave}\) seems to increase with \(B_{0}\), but the wavelength remains roughly constant at \(\lambda_{\rm wave}\approx 1\) m, comparable to the radius of the chamber (\(\sim 1.5\) m), suggesting that the wavelength could be bounded by the size of the plasma in BRB. In addition, we find that the waveforms shown in Figs. 5(a), 5(d), and 5(g) reverse their polarity when we flip the direction of the Helmholtz field (\(\mathbf{B}_{0}\rightarrow-\mathbf{B}_{0}\)), which is consistent with the theoretical prediction from the induction equation in Eq. (3). ### Effects of the Poloidal Bias Flux \(\Psi_{\rm gun}\) To study how the poloidal bias flux \(\Psi_{\rm gun}\) in the CT injector affects the wave formation, we repeat the measurement in Figs. 5(g)-5(i) with \(I_{\rm bias}=0\) A and the results are shown in Figs. 5(j)-5(l). The explanation of the data in each panel can be found in Sec. IV.2. Note that the plasma accelerated out of the CT injector is unmagnetized when \(I_{\rm bias}=0\) A. We can see in Table 1 that the properties of the waves driven by the unmagnetized plasma are comparable to those where \(I_{\rm bias}=3\) A, indicating that it is the plasma itself, rather than the CT-embedded magnetic field, that interacts with the background \(\mathbf{B}_{0}\) and excites the fast magnetosonic modes in our experiment.
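The theoretical value \(v_{p\rm-theory}=\sqrt{V_{A}^{2}+V_{S}^{2}}\) used in the comparison above can be cross-checked from the plasma parameters quoted in Sec. IV; the sketch below assumes an adiabatic index \(\gamma=5/3\) and a total pressure \(P_{0}=n(T_{e}+T_{i})\), which may differ from the authors' exact evaluation:

```python
import numpy as np

e, m_i, mu0 = 1.602e-19, 1.673e-27, 4e-7 * np.pi
n = 1e13 * 1e6                      # density, m^-3
rho0 = n * m_i                      # mass density, kg/m^3
P0 = n * (15 + 30) * e              # total pressure with T_e + T_i, Pa
V_S = np.sqrt((5.0 / 3.0) * P0 / rho0)   # sound speed

for B0_G in (20, 40, 60):
    V_A = (B0_G * 1e-4) / np.sqrt(mu0 * rho0)   # Alfven speed
    print(f"B0 = {B0_G} G: v_p-theory ~ {np.hypot(V_A, V_S) / 1e3:.0f} km/s")
```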
## V Discussion ### Wave Driving Mechanisms Previous studies have shown that conducting spheres moving across magnetic field lines through a magnetized plasma experience an MHD drag force, causing the spheres to slow down and emit Alfven and slow magnetosonic waves due to MHD Cherenkov radiation [39; 40; 41]. This is a topic of importance with many applications to laboratory and space plasma physics, such as CTs injected into a tokamak plasma as a viable refueling scheme, or the interaction of the Galilean satellites with the Jovian magnetosphere [42]. In our experiment, however, no MHD drag force is exerted on the CT since it is injected into the chamber along the magnetic field lines. One possible driver of the waves though is plasma expansion. We have shown in Sec. II and Fig. 1(b) that the fast magnetosonic mode observed in BRB is associated with non-zero perturbations in the flow velocity (\(\mathbf{u}_{1}\)) along the radial direction. Since the CT primarily has a cylindrical shape in the center of the chamber, as shown in Sec. IV, the CT expansion leads to perturbations in the plasma flow along the \(R\)-axis, hence driving the fast magnetosonic waves in the experiment. After close examination of the magnetic topology measured by probe array MP2, we find that there are also small-amplitude fluctuations existing in the \(R\)- and \(\phi\)-direction, barely visible in Figs. 4(a)-4(b) and Figs. 4(g)-4(h). One possible explanation is that the wave vector of the fast magnetosonic mode \(\mathbf{k}\) is not perfectly along the \(R\)-axis (i.e., the wave magnetic field \(\mathbf{B}_{1}\) not perfectly parallel with the background field \(\mathbf{B}_{0}\)), causing perturbations in the magnetic field in the \(R\)- and \(\phi\)-direction. Since the CT in our experiment has a non-uniform density profile, multiple MHD modes are expected to coexist [43]. Therefore, the small-amplitude fluctuations mentioned above can also be other MHD waves, such as shear-Alfven waves or slow magnetosonic waves. Further experiments are required to identify the nature of these fluctuations. Figure 5: Topology of the magnetic field \(\partial B_{Z}/\partial t\) and \(B_{Z}\) (\(\partial B_{Z}/\partial t\) integrated over time) along the \(R\)-axis based on the measurements from probe array MP2. The data with the bias current \(I_{\rm bias}=3\) A and background Helmholtz field \(B_{0}=20\) G are shown in (a)–(c), \(B_{0}=40\) G in (d)–(f), and \(B_{0}=60\) G in (g)–(i). The data with \(I_{\rm bias}=0\) A and \(B_{0}=60\) G are shown in (j)–(l). In each \(B_{0}\), the top panel shows \(\partial B_{Z}/\partial t\) at \(R=0\) cm to illustrate the waveform, while the middle panel shows \(B_{Z}\) at multiple probe locations to demonstrate wave propagation along the \(R\)-axis. The bottom panel shows the time when the peak of the waveform arrives at each probe location. The peak of each waveform is marked by a star. Note that (b), (e), (h), and (k) are zoomed in on time with the window size of \(\Delta t=30\)\(\mu\)s. ### Target Formation with Tangled Magnetic Fields for MIF Some potential methods have been proposed in the past to create the plasma target with tangled magnetic fields for MIF, such as injection of multiple gun-formed magnetized plasmas into a limited volume [30; 31]. In this experiment, we intended to randomize the magnetic field lines through turbulence resulting from colliding the CT plasma with the coarse grid in BRB. However, the magnetic field topology in the postgrid region presented in Figs. 4(a)-4(c) suggests otherwise. The magnetic field embedded in the CT is not perturbed by the grid; the CT mostly passes through the grid and excites the fast magnetosonic modes with oscillating magnetic fields along the \(Z\)-axis. In laser-driven turbulence experiments, it is shown that the Reynolds number (ratio of flow inertia to viscosity) is usually large enough that the effect of viscosity is negligible, resulting in a turbulent flow [44; 45]. Inspired by these results, one can attempt to increase the flow Reynolds number so that the transition to turbulence is more likely to occur when the CT plasma collides with the grid. Since the plasma kinematic viscosity \(\eta\propto T^{5/2}\rho^{-1}\), where \(T\) is the temperature and \(\rho\) is the plasma mass density [46], a higher Reynolds number can be achieved by decreasing the CT temperature and increasing the density and speed of the CT plasma.
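To make the proposed scaling concrete, a minimal sketch of the Reynolds-number dependence implied by \(\eta\propto T^{5/2}\rho^{-1}\) is given below; the prefactor and the unit values are arbitrary placeholders, not calibrated plasma quantities:

```python
# Illustrative scaling only: eta ~ C * T**2.5 / rho,
# so Re = v * L / eta = v * L * rho / (C * T**2.5).
def reynolds(v, L, T, rho, C=1.0):
    return v * L * rho / (C * T**2.5)

base = reynolds(v=1.0, L=1.0, T=1.0, rho=1.0)
# Halving T while doubling rho and v raises Re by 2 * 2 * 2**2.5 ~ 22.6x.
print(reynolds(v=2.0, L=1.0, T=0.5, rho=2.0) / base)
```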
## VI Summary In this paper, we present a detailed experimental study of the fast magnetosonic waves driven by the interaction between the background magnetic field and the CT plasma. By comparing the magnetic field topology obtained from a \(\dot{B}\) probe array with the MHD theory, we demonstrate that the waves observed in BRB are the fast magnetosonic modes propagating perpendicular to the background magnetic field. Furthermore, we find that the wave frequency increases with the background field strength, but the wavelength (\(\sim\)1 m) remains close to the radius of the chamber, suggesting that the wavelength is likely limited by the plasma size in BRB. By removing the preapplied poloidal magnetic flux in the CT injector, we show that it is the plasma itself, not the field carried by the CT, that interacts with the background magnetic field and excites the fast magnetosonic modes. Lastly, we find that the coarse conducting grid in the chamber has no effect on wave formation. Since part of this investigation is to form a target plasma with tangled magnetic fields for MIF, in future experiments, we propose to increase the Reynolds number in the CT so that the transition to turbulence is more likely to occur during the collision between the CT plasma and the grid. ###### Acknowledgements. Research presented in this paper was supported by the U.S. Department of Energy (DOE) Office of Fusion Energy Sciences through the Los Alamos National Laboratory under Contract No. 89233218CNA000001. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy. The WiPPL User Facility is supported by the DOE Office of Science, Fusion Energy Sciences under Contract No. DE-SC0018266.
2307.00468
On the primitive subspace of Lando framed graph bialgebra
Lando framed graph bialgebra is generated by framed graphs modulo 4-term relations. We provide an explicit set of generators of its primitive subspace and a description of the set of relations between the generators. We also define an operation of leaf addition that endows the primitive subspace of Lando algebra with a structure of a module over the ring of polynomials in one variable and construct a 4-invariant that satisfies a simple identity with respect to the vertex-multiplication.
Maksim Karev
2023-07-02T04:25:57Z
http://arxiv.org/abs/2307.00468v3
# On the primitive subspace of Lando framed graph bialgebra ###### Abstract. We provide an explicit set of generators of the primitive subspace of Lando framed graph bialgebra, describe the set of relations between them, and define the operation of leaf addition on it. We also construct a \(4\)-invariant that satisfies a simple identity with respect to the vertex-multiplication. ## Introduction The theory of finite-type knot invariants was introduced in the pioneering work of V. Vassiliev [18]. He proposed a certain filtration of the space of knot invariants with finite-dimensional components. Arnold in [1] proposed a similar flavor filtration on invariants of plane curves. In both cases, the corresponding associated graded spaces can be realized as subspaces of the duals of finite-dimensional spaces of _chord diagrams_ [2, 9], provided with the additional structure of framing in the plane curves case. These associated graded spaces are known as the _(framed) weight systems_. The study of the space of finite-type invariants, both in the case of knots and in the case of plane curves, is far from complete. For instance, even the dimensions of the spaces of weight systems are known only for the first few terms ([13, 6]). But the existence of a structure of a commutative cocommutative connected bialgebra of finite type, rich combinatorics, and unexpected relations to other mathematical concepts (e.g., Lie algebras) make these spaces an extremely interesting object of study. One such relation is the existence of a map from a dual of a quotient of the algebra generated by graphs on \(n\) vertices to the degree \(n\) grading component of the space of weight systems. The existence of this map was first noted in [10] and then extended to the framed case in [9]. We refer to the above-mentioned quotient of the algebra of graphs as the _Lando framed graph bialgebra_. The dimensions of these subspaces are also unknown in general, see [14, 15]. The functionals on Lando framed graph bialgebra are known as _framed 4-invariants_. The state of the art of the study of \(4\)-invariants can be found in [7]. The theory of weight systems is much more developed than the theory of \(4\)-invariants. For instance, we know \(3\) different realizations of the space of weight systems (the algebras \(\mathcal{A},\mathcal{B}\) and \(\mathcal{C}\) of [3]), each with its advantages and disadvantages. For instance, for two of these realizations, the description of the corresponding primitive subspace is direct in the sense that we know the generators and relations. For the Lando framed graph bialgebra the current state-of-the-art description of the primitive subspace is not direct: we can only say that it is generated by the images of the generators of the graph space under the projection operator (see [10]). In this note, we try to fix this incompleteness by attempting to introduce a graph-theoretic analogue of the algebra \(\mathcal{C}\). Namely, the construction we propose gives a more direct description of its primitive subspace in terms of generators and relations between them. We also introduce a graph-theoretic counterpart of the well-known operation of bubble insertion and propose a new 4-invariant. I dedicate this note to the memory of S.V. Duzhin, who was the first person to introduce the theory of finite-type knot invariants to me. I am grateful to B. Bychkov for valuable discussions and to D. Fomichev, who implemented computer code for verification of the constructions of this note. Below, \(\mathbb{K}\) is a field of characteristic 0. 
## 1. Bialgebra structures on the spaces generated by isomorphism classes of framed finite simple graphs **Definition 1.1**.: _The framing on a finite graph is a function \(f\) from the set of its vertices to \(\mathbb{F}_{2}\)._ Framed graphs have a naturally defined notion of framing-preserving isomorphism. To make the exposition shorter, we will refer to the equivalence classes of framed finite simple graphs as just framed graphs. S.A. Joni and G.C. Rota [5] proposed to endow the vector space spanned by the framed graphs with a structure of a commutative cocommutative connected bialgebra of finite type over \(\mathbb{K}\): **Definition 1.2**.: _The graded bialgebra \(G_{\mathcal{J}\mathcal{R}}\) over \(\mathbb{K}\) is spanned by all possible framed graphs. The grading is given by the number of vertices of the graph. The product on \(G_{\mathcal{J}\mathcal{R}}\) is the extension by linearity of the disjoint union of graphs. The unit element is the empty graph; the counit maps the empty graph to 1 and all other graphs to 0. The coproduct is given by_ \[\Delta_{\mathcal{J}\mathcal{R}}(\Gamma)=\sum\Gamma_{p}\otimes\Gamma_{q},\] _where the sum is taken over all possible ways to split the set of vertices of the graph \(\Gamma\) into two disjoint subsets \(p\) and \(q\). We denote by \(\Gamma_{p}\) (\(\Gamma_{q}\), respectively) the full subgraph of \(\Gamma\) generated by the set of vertices \(p\) (\(q\), respectively)._ We define the following algebra closely related to \(G_{\mathcal{J}\mathcal{R}}\). **Definition 1.3**.: _Let \(\Gamma\) be a framed graph. A coloring on the edges of \(\Gamma\) is a function \(C\colon E(\Gamma)\to\{b,r\}\)._ Below we will refer to the edges with coloring \(b\) (coloring \(r\), respectively) as edges colored black (red, respectively). We define the following structure of a commutative cocommutative connected bialgebra of finite type on the vector space spanned by the colored framed graphs: **Definition 1.4**.: _The graded bialgebra \(G_{\mathcal{C}\mathcal{C}}\) over \(\mathbb{K}\) is spanned by all possible colored framed graphs. The grading is given by the number of vertices of the graph. The product on \(G_{\mathcal{C}\mathcal{C}}\) is the extension by linearity of the disjoint union of graphs. The unit element is the empty graph; the counit maps the empty graph to 1 and all other graphs to 0. The coproduct is given by_ \[\Delta_{\mathcal{C}\mathcal{C}}(\Gamma)=\sum\Gamma_{p}\otimes\Gamma_{q},\] _where the sum is taken over all possible ways to split the set of vertices of the graph \(\Gamma\) into two disjoint subsets \(p\) and \(q\), such that no vertex from \(p\) is connected to a vertex from \(q\) by a red edge. We denote by \(\Gamma_{p}\) (\(\Gamma_{q}\), respectively) the full subgraph of \(\Gamma\) generated by the set of vertices \(p\) (\(q\), respectively), respecting the coloring of the edges._ The proof that the introduced operations define a bialgebra structure is a routine check that we omit. The algebra \(G_{\mathcal{J}\mathcal{R}}\) admits an injective graded bialgebra map \(\iota\) to the algebra \(G_{\mathcal{CC}}\). Namely, every framed graph is mapped to itself with all the edges colored black. In this note, we use the following convention for visualizing the elements of \(G_{\mathcal{CC}}\). The coloring of an edge of a drawn graph corresponds to the value of the coloring function on it. 
The framing of the vertices is indicated by capital Latin letters, and small Latin letters written at an edge endpoint indicate that, besides the explicitly shown edges, the corresponding vertex is also connected by edges of the corresponding color to the vertices forming the subset denoted by the corresponding letter. The subsets denoted by different letters may in principle have a non-empty intersection, meaning that some of the unshown vertices are connected to several vertices shown on the picture. The parts of the graph that do not fit in the pictures are assumed to be the same. We define \(I_{\mathcal{CC}}\) to be the ideal of \(G_{\mathcal{CC}}\) spanned by all possible elements of the form: **Theorem 1.1**.: _The inclusion \(\iota\colon G_{\mathcal{J}\mathcal{R}}\to G_{\mathcal{CC}}\) gives rise to a bialgebra isomorphism \(\phi\colon G_{\mathcal{J}\mathcal{R}}\to G_{\mathcal{CC}}/I_{\mathcal{CC}},\) where the coalgebra structure on \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) is induced from \(G_{\mathcal{CC}}\)._ Proof.: We define the map \(\psi\) from \(G_{\mathcal{CC}}\) to \(G_{\mathcal{J}\mathcal{R}}\) on the generators as follows. Let \(k\) be the set of the red edges of a graph \(\Gamma\). To every red edge of the graph we assign one of two states: state \(b\) corresponds to changing the color of the corresponding edge to black, and state \(r\) corresponds to removing the edge. For a collection of states \(p\in\{b;r\}^{k}\), denote by \(\Gamma_{p}\) the corresponding graph with black edges only. Interpret this graph as an element of \(G_{\mathcal{J}\mathcal{R}}\). Define \[\psi(\Gamma)=\sum_{p\in\{b;r\}^{k}}(-1)^{p}\Gamma_{p},\] where \((-1)^{p}\) is shorthand for \((-1)\) raised to the number of edges with state \(r\) in \(p\). Clearly, the map \(\psi\) evaluated on any element of \(I_{\mathcal{CC}}\) is \(0\). Moreover, its restriction to the image of \(\iota\) is mutually inverse to \(\iota\). As any element of \(G_{\mathcal{CC}}\) modulo \(I_{\mathcal{CC}}\) is clearly equivalent to a linear combination of graphs with all the edges colored black, the isomorphism on the level of algebras follows. The ideal \(I_{\mathcal{CC}}\) satisfies \[\Delta_{\mathcal{CC}}I_{\mathcal{CC}}\subset I_{\mathcal{CC}}\otimes G_{\mathcal{CC}}+G_{\mathcal{CC}}\otimes I_{\mathcal{CC}},\] which implies that the bialgebra structure on the quotient \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) is well-defined. The fact that the map \(\psi\) is a coalgebra morphism is a routine check we omit. According to the Milnor-Moore theorem [8], any commutative cocommutative graded connected bialgebra \(A\) of finite type over \(\mathbb{K}\) with coproduct operation \(\Delta\) is isomorphic to the symmetric algebra of its primitive subspace, i.e., the graded subspace of \(A\) formed by the elements \(p\in A\) such that \[\Delta(p)=p\otimes 1+1\otimes p.\] Given a bialgebra \(A\), we will denote its primitive subspace by \(PA\). In particular, it follows that the restrictions of the maps \(\phi\) and \(\psi\) to the corresponding primitive subspaces are isomorphisms as well. It turns out that the primitive subspace of \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) admits a simple description. **Proposition 1.2**.: _The primitive subspace of \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) is generated by connected framed graphs with all the edges colored red._ Proof.: Using the relations, we can represent any element of \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) as a linear combination of classes of the graphs with all the edges colored red. 
The class of any connected framed graph with all the edges colored red is a primitive element of the factor. On the other hand, using the relations of the ideal \(I_{\mathcal{CC}}\), any element of the factor can be represented in terms of disjoint unions of connected framed graphs colored red. The assertion follows. The isomorphism between \(G_{\mathcal{JR}}\) and \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) implies the following formula for the projection operator from \(G_{\mathcal{JR}}\) to the subspace of its primitive elements. **Proposition 1.3**.: _Let \(\pi_{\mathcal{JR}}\) be the linear endomorphism of \(G_{\mathcal{JR}}\) defined on the generators as_ \[\pi_{\mathcal{JR}}(\Gamma)=\sum_{\Gamma^{\prime\prime}\subset\Gamma^{\prime} \subset\Gamma}(-1)^{e(\Gamma^{\prime})-e(\Gamma^{\prime\prime})}\Gamma^{ \prime\prime}\] _where the summation goes along all possible pairs \(\Gamma^{\prime\prime}\subset\Gamma^{\prime}\) of subgraphs of \(\Gamma\), such that \(\Gamma^{\prime}\) is connected and \(\Gamma^{\prime\prime}\) is a spanning subgraph of \(\Gamma^{\prime}\)._ _The map \(\pi_{\mathcal{JR}}\) is a projection onto the primitive subspace along the subspace of decomposable elements._ Proof.: The map \(\pi_{\mathcal{JR}}\) is a composition of the following operations: the isomorphism \(\phi\), the realization of the resulting element as a linear combination of the framed graphs with all the edges colored red, the projection \(\pi_{\mathcal{CC}}\), which is defined on the graphs with all the edges colored red as \[\pi_{\mathcal{CC}}(\Gamma)=\begin{cases}\Gamma,&\text{$\Gamma$ is connected}\\ 0,&\text{otherwise}\end{cases}\] and the isomorphism \(\psi\). This formula is an alternative version of the famous projection formula [10, 12]. The reader is invited to compare this statement with the Remark to Section 2.2 of [7]. We would like to remark that the algebra \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) admits one more projection to the subspace of primitives: a graph with all the edges colored red is mapped to the join of its connected components. It would be interesting to get an explicit description of its kernel. Recall that the _Lando framed graph bialgebra_ \(\mathcal{L}\) ([10, 9]) is defined as a quotient of \(G_{\mathcal{J}\mathcal{R}}\) by the biideal \(\mathcal{F}_{\mathcal{J}\mathcal{R}}\) generated by _4-elements_, usually written in the form \[\Gamma-\Gamma^{\prime}_{uv}-(-1)^{f(v)}(\tilde{\Gamma}_{uv}+\tilde{\Gamma}^{ \prime}_{uv}).\] This formula has the following meaning. Let \(\Gamma\) be a graph, and let \(u,v\) be two of its vertices joined by an edge. Then \(\Gamma^{\prime}_{uv}\) denotes the graph obtained from \(\Gamma\) by erasing the edge between \(u\) and \(v\). The graph \(\tilde{\Gamma}_{uv}\) is obtained from \(\Gamma\) by the following operation: for every vertex \(w\) different from \(u,v\) and connected by an edge to \(v\), the vertices \(u\) and \(w\) are joined by an edge in \(\tilde{\Gamma}_{uv}\) if and only if the vertices \(u\) and \(w\) are not joined in \(\Gamma\). The adjacencies of all other possible pairs of vertices in \(\Gamma\) and \(\tilde{\Gamma}_{uv}\) are the same. The graph \(\tilde{\Gamma}^{\prime}_{uv}\) is obtained from \(\tilde{\Gamma}_{uv}\) by erasing the edge between \(u\) and \(v\). Finally, the framing of the vertex \(u\) in both \(\tilde{\Gamma}_{uv}\) and \(\tilde{\Gamma}^{\prime}_{uv}\) is set to \(f(u)+f(v)\). 
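For readers who prefer an algorithmic restatement, here is a hedged sketch (our code and data representation, not the paper's) of the graphs entering a 4-element: a framed graph is modelled as a set of 2-element frozensets together with a framing dictionary with values in \(\{0,1\}\):

```python
def four_term_graphs(edges, framing, u, v):
    """Return Gamma'_{uv}, Gamma~_{uv}, Gamma~'_{uv} for the edge {u, v}."""
    edges = set(map(frozenset, edges))
    assert frozenset((u, v)) in edges, "u and v must be joined by an edge"
    # Gamma'_{uv}: erase the edge between u and v.
    g_prime = edges - {frozenset((u, v))}
    # Gamma~_{uv}: toggle adjacency of u with every vertex w != u, v that
    # is connected to v; the framing of u becomes f(u) + f(v) in F_2.
    g_tilde = set(edges)
    for w in {x for e in edges if v in e for x in e} - {u, v}:
        g_tilde ^= {frozenset((u, w))}
    f_tilde = dict(framing)
    f_tilde[u] = (framing[u] + framing[v]) % 2
    # Gamma~'_{uv}: same as Gamma~_{uv} with the edge {u, v} erased.
    g_tilde_prime = g_tilde - {frozenset((u, v))}
    # The 4-element is Gamma - Gamma'_{uv} - (-1)^f(v) (Gamma~ + Gamma~').
    return (g_prime, dict(framing)), (g_tilde, f_tilde), (g_tilde_prime, f_tilde)
```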
Using the isomorphism between \(G_{\mathcal{J}\mathcal{R}}\) and \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\), any 4-element can be written as a class of the following element of \(G_{\mathcal{CC}}\): (1) Here, the vertex \(u\) has framing \(A\), the vertex \(v\) has framing \(B\), and \(x\vartriangle y\) denotes the symmetric difference of the corresponding sets of vertices. In the following, any depicted element of \(G_{\mathcal{CC}}\) means its class modulo \(I_{\mathcal{CC}}\). The following proposition describes the image of \(\mathcal{F}_{\mathcal{J}\mathcal{R}}\) under the isomorphism \(\phi\). **Proposition 1.4**.: \(\mathcal{F}_{\mathcal{CC}}=\phi(\mathcal{F}_{\mathcal{J}\mathcal{R}})\) _is a graded biideal of \(G_{\mathcal{CC}}/I_{\mathcal{CC}}\) generated by the elements:_ (2) _where the subsets \(a,b,c\) of vertices of the unshown part of the graph are pairwise disjoint._ Proof.: Take a 4-element of the form shown in equation (1). Denote \(a=x-y\), \(b=x\cap y\), \(c=y-x\). Modulo \(I_{\mathcal{CC}}\), every graph with a black edge can be presented as a sum of the graph with the corresponding edge colored red and the graph with the corresponding edge missing. The inclusion-exclusion principle implies that the element from the hypothesis is the following combination of the 4-elements \[\sum_{\begin{subarray}{c}b^{\prime}\subset b\\ c^{\prime}\subset c\end{subarray}}(-1)^{|b-b^{\prime}|+|c-c^{\prime}|}\,(\text{the corresponding pictured graph}).\] ## 2. Leaf attachment Let's discuss a simple corollary of the relations in \(\mathcal{N}\). **Theorem 2.1**.: _The following identity holds:_ Proof.: The relations applied to the edge connecting the vertex of framing \(C\) to the vertex of framing \(B\) read as a pictured relation, which in turn implies a pictured identity; for \(C=0\) the right-hand part of the latter vanishes identically. Recall that the _forest algebra_ \(\mathcal{T}\) [4] is defined as a subalgebra of \(\mathcal{L}\) generated by trees with all the vertices having framing \(0\). Clearly, its image under the isomorphism \(\phi\) is a subalgebra of \(\mathcal{N}\) also generated by trees with all the vertices having framing \(0\) and all the edges colored red. The proven proposition trivially implies the structural result of [4]: **Theorem 2.2**.: _The forest algebra \(\mathcal{T}\) is a subbialgebra of \(\mathcal{N}\) with a one-dimensional primitive subspace in every grading._ Indeed, it tells us that any tree can be obtained by a sequence of leaf attachments starting from the graph on a single vertex. The resulting tree does not depend on the choice of the vertices we attach the leaves to at every step of the construction. Also, we deduce the following: **Theorem 2.3**.: _There is a well-defined action of \(\mathcal{T}\) on \(P\mathcal{N}\) defined on the generators as follows. For a tree \(T\) with the framing of all the vertices identically equal to 0 and a connected graph \(\Gamma\), choose a vertex \(v\) of \(T\) and a vertex \(w\) of \(\Gamma\), and join the chosen vertices \(v\) and \(w\) by an edge._ In particular, we see that the subspace \(W\mathcal{L}\) has at least one non-trivial generator in every grading component: it can be obtained by the action of a tree on \(n\) vertices on the single-vertex graph whose vertex has framing 1. For every natural \(n\) the obtained element is non-zero, as the framed chromatic polynomial [6] takes a non-zero value on it. The leaf attachment operation is an intersection-graph counterpart of the operation of bubble insertion [3]. P. 
Vogel in [17] provides a construction of a non-trivial element of the kernel of the operation of bubble insertion. It would be interesting to know whether the described action of \(\mathcal{T}\) on \(P\mathcal{N}\) is free. ## 3. A 4-invariant related to the number of 3-colorings of a graph. As usually happens, a new realization of a space under study gives rise to new invariants. Here we describe a new 4-invariant. For a generating element \(\Gamma\) of \(\mathcal{N}\) represented by a graph with _red edges only_, define \(\mathcal{W}(\Gamma)\) to be the number of proper 3-colorings of \(\Gamma\) multiplied by \((-2)^{-\chi(\Gamma)}(-1)^{f}\), where \(\chi(\Gamma)\) is the Euler characteristic of \(\Gamma\), and \(f\) is the sum of the framings of all the vertices of \(\Gamma\). **Theorem 3.1**.: _The function \(\mathcal{W}\) extends by linearity to a 4-invariant._ Proof.: We have to check that the linear extension of \(\mathcal{W}\) vanishes identically on the ideal \(\mathcal{F}_{\mathcal{CC}}\). Clearly, the function \(\mathcal{W}\) is invariant under the change of the framing of the vertices of the graph, so, for simplicity, we assume from now on that all the vertices have framing 0. The elements (2) can be generated as follows. The initial data is the tuple \((\Gamma,v,a,b)\), where \(\Gamma\) is a graph, \(v\) is a vertex of \(\Gamma\), and \(a,b\) are two disjoint subsets of vertices of \(\Gamma\) such that no vertex of \(a\cup b\) is adjacent to \(v\). Attach a leaf \(u\) to \(v\). Now, every vertex in \(a\cup b\) can have 3 possible states, which we call state \(U\), state \(V\) and state \(UV\). For a collection of states \(S\colon a\cup b\to\{U,V,UV\}\) form a new graph \(\Gamma_{S}\) as follows: connect by edges all the vertices of the state \(U\) with the vertex \(u\), all the vertices of the state \(V\) to the vertex \(v\), and all the vertices of the state \(UV\) to both the vertices \(u\) and \(v\). Now the elements (2) are the linear combinations \[\sum_{\{S\colon a\cup b\to\{U,V,UV\}\,|\,S(b)=\{U\}\}}\Gamma_{S}-\sum_{\{S^{ \prime}\colon a\cup b\to\{U,V,UV\}\,|\,S^{\prime}(a)=\{U\}\}}\Gamma_{S^{\prime}}. \tag{3}\] Now consider proper colorings of the vertices of the graphs \(\Gamma_{S}\) and \(\Gamma_{S^{\prime}}\) in three colors, which we denote \(E,F,H\). Without loss of generality suppose that the vertex \(u\) is colored \(E\), and the vertex \(v\) is colored \(F\). For any vertex \(w\in a\) we have * if \(w\) is in the state \(UV\), then it can only be colored \(H\), * if \(w\) is in the state \(U\), it can be either colored \(F\) or \(H\) * if \(w\) is in the state \(V\), it can be either colored \(E\) or \(H\). Notice that the number of edges of a graph with a vertex \(w\) in the state \(UV\) is one more than those of graphs with the corresponding vertex in the states \(U\) or \(V\). Due to the factor \((-2)^{-\chi(\Gamma)}(-1)^{f}\) in the definition of \(\mathcal{W}\), with all the states of the elements of \(a\cup b-\{w\}\) fixed, the colorings of \(w\) in the state \(UV\) cancel out the colorings with \(w\) colored \(H\) being in the state \(U\) and \(w\) colored \(H\) being in the state \(V\). 
It means that, having fixed the colorings of \(u\) and \(v\), the sum (3) reduces to the verification of the identity \[\sum_{\{S\colon a\cup b\to\{U,V\}\,|\,S(b)=\{U\}\}}\mathcal{W}^{\prime}_{a}( \Gamma_{S})=\sum_{\{S^{\prime}\colon a\cup b\to\{U,V\}\,|\,S^{\prime}(a)=\{U \}\}}\mathcal{W}^{\prime}_{b}(\Gamma_{S^{\prime}}),\] where \(\mathcal{W}^{\prime}_{a}(\Gamma_{S})\) for \(S\colon a\cup b\to\{U,V\}\) with \(S(b)=\{U\}\) is the number of proper colorings of the graph \(\Gamma_{S}\) with the vertex \(u\) colored \(E\), the vertex \(v\) colored \(F\), and the vertex \(w\in a\) colored \(F\) if it is in the state \(U\), and colored \(E\) if it is in the state \(V\). Notice that, due to the structure of \(\Gamma_{S}\), the vertices of \(b\) can only be colored \(F\) or \(H\). The definition of \(\mathcal{W}^{\prime}_{b}\) is essentially the same, but with the roles of \(a\) and \(b\) interchanged. The identity to verify indeed holds true: the set of colorings that contribute to the left-hand side is in bijection with the set of colorings that contribute to the right-hand side. Namely, keep the coloring of \(u\), \(v\), and all the vertices colored \(E\), but interchange the colors \(F\) and \(H\). We mention one evident property of the 4-invariant \(\mathcal{W}\). Namely, given two graphs \(\Gamma\) and \(\Gamma^{\prime}\), a vertex \(u\) of \(\Gamma\) and a vertex \(v\) of \(\Gamma^{\prime}\), denote by \(\Gamma\nabla_{uv}\Gamma^{\prime}\) the result of the identification of the vertices \(u\) and \(v\). Clearly, \[\mathcal{W}(\Gamma\nabla_{uv}\Gamma^{\prime})=-\frac{2}{3}\mathcal{W}(\Gamma) \mathcal{W}(\Gamma^{\prime}).\] In particular, the operation of leaf attachment multiplies the value of \(\mathcal{W}\) by 2. It is known [16] that the \(\mathfrak{sl}_{2}\)-weight system has a similar multiplicativity property with respect to the vertex-multiplication. As a conclusion, we notice that for \(n=4\) the grading component \(P\mathcal{N}_{4}\) has at least 4 linearly independent elements represented by graphs with all the edges colored red, with supports given by a chain \(P_{4}\) and a cycle \(C_{4}\) and various framings of their vertices. They differ by the values of the framed chromatic polynomial and the invariant \(\mathcal{W}\) on them. The existence of the action of the tree algebra on \(P\mathcal{N}\) implies that for any \(n\geq 4\) the dimension \(\dim P\mathcal{N}_{n}\) is greater than or equal to 4.
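As a closing illustration, the invariant \(\mathcal{W}\) is straightforward to evaluate by brute force on small graphs such as \(P_{4}\) and \(C_{4}\); the sketch below (our code, not the author's) enumerates all \(3^{|V|}\) colorings directly:

```python
from itertools import product

def W(vertices, edges, framing):
    """Number of proper 3-colorings weighted by (-2)^(-chi) * (-1)^f."""
    colorings = sum(
        all(c[a] != c[b] for a, b in edges)
        for c in (dict(zip(vertices, cs))
                  for cs in product(range(3), repeat=len(vertices)))
    )
    chi = len(vertices) - len(edges)   # Euler characteristic |V| - |E|
    f = sum(framing.values())          # total framing
    return colorings * (-2) ** (-chi) * (-1) ** f

V4 = [0, 1, 2, 3]
print(W(V4, [(0, 1), (1, 2), (2, 3)], {v: 0 for v in V4}))          # chain P_4
print(W(V4, [(0, 1), (1, 2), (2, 3), (3, 0)], {v: 0 for v in V4}))  # cycle C_4
```

One can check on such examples that attaching a leaf doubles the value (colorings double while \(\chi\) is unchanged), consistent with the multiplicativity property stated above.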
2304.06549
Non-asymptotic convergence bounds for Sinkhorn iterates and their gradients: a coupling approach
Computational optimal transport (OT) has recently emerged as a powerful framework with applications in various fields. In this paper we focus on a relaxation of the original OT problem, the entropic OT problem, which allows to implement efficient and practical algorithmic solutions, even in high dimensional settings. This formulation, also known as the Schr\"odinger Bridge problem, notably connects with Stochastic Optimal Control (SOC) and can be solved with the popular Sinkhorn algorithm. In the case of discrete-state spaces, this algorithm is known to have exponential convergence; however, achieving a similar rate of convergence in a more general setting is still an active area of research. In this work, we analyze the convergence of the Sinkhorn algorithm for probability measures defined on the $d$-dimensional torus $\mathbb{T}_L^d$, that admit densities with respect to the Haar measure of $\mathbb{T}_L^d$. In particular, we prove pointwise exponential convergence of Sinkhorn iterates and their gradient. Our proof relies on the connection between these iterates and the evolution along the Hamilton-Jacobi-Bellman equations of value functions obtained from SOC-problems. Our approach is novel in that it is purely probabilistic and relies on coupling by reflection techniques for controlled diffusions on the torus.
Giacomo Greco, Maxence Noble, Giovanni Conforti, Alain Durmus
2023-04-13T13:58:25Z
http://arxiv.org/abs/2304.06549v2
# Non-asymptotic convergence bounds for Sinkhorn iterates and their gradients: a coupling approach. ###### Abstract Computational optimal transport (OT) has recently emerged as a powerful framework with applications in various fields. In this paper we focus on a relaxation of the original OT problem, the entropic OT problem, which allows one to implement efficient and practical algorithmic solutions, even in high dimensional settings. This formulation, also known as the Schrodinger Bridge problem, notably connects with Stochastic Optimal Control (SOC) and can be solved with the popular Sinkhorn algorithm. In the case of discrete-state spaces, this algorithm is known to have exponential convergence; however, achieving a similar rate of convergence in a more general setting is still an active area of research. In this work, we analyze the convergence of the Sinkhorn algorithm for probability measures defined on the \(d\)-dimensional torus \(\mathbb{T}_{L}^{d}\) that admit densities with respect to the Haar measure of \(\mathbb{T}_{L}^{d}\). In particular, we prove pointwise exponential convergence of Sinkhorn iterates and their gradient. Our proof relies on the connection between these iterates and the evolution along the Hamilton-Jacobi-Bellman equations of value functions obtained from SOC-problems. Our approach is novel in that it is purely probabilistic and relies on coupling by reflection techniques for controlled diffusions on the torus. ## 1 Introduction Computational optimal transport (OT) has known great progress over these past few years (Peyre and Cuturi, 2019), and has thus become a popular tool in a wide range of fields such as machine learning (Adler et al., 2017; Arjovsky et al., 2017), computer vision (Dominitz and Tannenbaum, 2009; Solomon et al., 2015), or signal processing (Kolouri et al., 2017). Let \(\mu\) and \(\nu\) be two probability measures defined on a measurable state space \((\mathsf{X},\mathcal{X})\). The primal OT problem (Villani, 2008) between \(\mu\) and \(\nu\), corresponding to a measurable cost function \(\mathsf{c}\,:\mathsf{X}^{2}\to[0,+\infty)\), can be formulated as solving the optimization problem \[\inf_{\pi\in\Pi(\mu,\nu)}\int\mathsf{c}(x,y)\mathrm{d}\pi(x,y)\,, \tag{1}\] where \(\Pi(\mu,\nu)\) is defined as the set of couplings between \(\mu\) and \(\nu\), _i.e._, \(\pi\in\Pi(\mu,\nu)\) if \(\pi(\mathsf{A}\times\mathsf{X})=\mu(\mathsf{A})\) and \(\pi(\mathsf{X}\times\mathsf{A})=\nu(\mathsf{A})\) for any \(\mathsf{A}\in\mathcal{X}\). This problem admits the following dual formulation \[\sup_{(\varphi^{\star},\psi^{\star})\in\mathcal{R}(\mathsf{c})}\int\{\varphi^ {\star}(x)+\psi^{\star}(y)\}\mathrm{d}(\mu\otimes\nu)(x,y)\,, \tag{2}\] where \[\mathcal{R}(\mathsf{c})=\{(\varphi^{\star},\psi^{\star})\in\mathcal{C}(\mathsf{X}) ^{2}:\text{ for any }(x,y)\in\mathsf{X}^{2},\varphi^{\star}(x)+\psi^{\star}(y)\leq\mathsf{c}(x,y)\}\] is the set of "Kantorovitch potentials" (Kellerer, 1984). In many applications of OT, \(\mathsf{X}\subset\mathbb{R}^{d}\) and one chooses the Euclidean quadratic cost \(\mathsf{c}(x,y)=\|x-y\|^{2}/2\). Under this setting, Monge-Kantorovich's theorem states that (1) admits a unique minimizer \(\pi^{\star}\). 
In addition, in the case where \(\mu\) admits a density with respect to the Lebesgue measure, Brenier's theorem (Brenier, 1991) established that this minimizer is also a solution to the Monge problem, _i.e._, there exists a convex function \(\Psi:\mathbb{R}^{d}\to\mathbb{R}\cup\{\infty\}\) such that \(\pi^{\star}\) is the pushforward of \(\mu\) by the application \(x\mapsto(x,\mathrm{T}(x))\) with \(\mathrm{T}(x)=\nabla\Psi(x)\) if \(\Psi(x)<\infty\) and \(\mathrm{T}(x)=0\) otherwise (referred to as the "Monge" map). Moreover, \(\Psi\) is related to a Kantorovitch potential \(\varphi^{\star}\) solving (2) as \(\Psi(x)=\|x\|^{2}/2-\varphi^{\star}(x)\). Unfortunately, OT problems (1) and (2) suffer from the curse of dimensionality (Papadakis et al., 2014; Niles-Weed and Rigollet, 2022), which makes it impossible to compute \(\pi^{\star}\) or the map \(\mathrm{T}\) in high-dimensional settings. Although recent works have addressed this problem assuming regularity conditions on the domain \(\mathsf{X}\) or on some densities of \(\mu\) and \(\nu\), if they exist, efficiently solving (1) and (2) remains an open problem (Benamou et al., 2014; Niles-Weed and Rigollet, 2022; Forrow et al., 2019). To circumvent these computational limits, an approach consists in computing a regularized version of the OT problem (1), which penalizes the entropy of the joint coupling \(\pi\): \[\inf_{\pi\in\Pi(\mu,\nu)}\left\{\int\mathsf{c}(x,y)\mathrm{d}\pi(x,y)+\varepsilon \mathrm{KL}(\pi\mid\mu\otimes\nu)\right\}\,, \tag{3}\] where \(\mathrm{KL}\) denotes the Kullback-Leibler divergence and \(\varepsilon>0\) is a regularization parameter. The entropic regularization notably defines a convex minimization problem (in contrast to (1) in general settings), which admits a unique solution \(\pi^{\star}_{\varepsilon}\). In addition, under appropriate conditions, \(\{\pi^{\star}_{\varepsilon}\}_{\varepsilon>0}\) converges to a solution of (1); see e.g., Leonard (2012). The entropic OT problem (3) can be traced back to Schrodinger (Schrodinger, 1931) and may be cast as a "static Schrodinger problem" (Leonard, 2014; Conforti, 2019; Carlier et al., 2017) given by \[\inf_{\pi\in\Pi(\mu,\nu)}\mathrm{KL}(\pi\mid\rho_{\varepsilon})\,, \tag{4}\] where \(\rho_{\varepsilon}\in\mathcal{P}(\mathsf{X}\times\mathsf{X})\) is the _reference_ measure defined by \(\mathrm{d}\rho_{\varepsilon}(x,y)/\mathrm{d}\{\mu\otimes\nu\}\propto\exp(- \mathsf{c}(x,y)/\varepsilon)\). Under some conditions on \(\mu\) and \(\nu\), \(\pi^{\star}_{\varepsilon}\) admits as density \[\frac{\mathrm{d}\pi^{\star}_{\varepsilon}}{\mathrm{d}\rho_{\varepsilon}}(x,y) =\exp[-(\varphi_{\varepsilon}(x)+\psi_{\varepsilon}(y))]\,, \tag{5}\] where \(\varphi_{\varepsilon}\in\mathrm{L}^{1}(\mu)\) and \(\psi_{\varepsilon}\in\mathrm{L}^{1}(\nu)\) are called the "Schrodinger potentials". These potentials are unique up to a trivial additive constant and can be considered as a regularized version of the Kantorovich potentials \(\varphi^{\star}\) and \(\psi^{\star}\). Indeed, under similar assumptions as Brenier's theorem, it holds that the (rescaled) Schrodinger potentials and their gradients respectively converge to the Kantorovich potentials and their gradients as \(\varepsilon\) goes to 0 (Chiarini et al., 2023), hence recovering the Monge map \(\mathrm{T}\). In contrast to the exact OT problems (1) and (2), the entropic problem (3) can be solved quickly using the Sinkhorn algorithm (Sinkhorn, 1964; Cuturi, 2013), and has thus become a popular alternative to the standard OT formulation. 
The Sinkhorn algorithm consists in defining sequences \((\varphi_{\varepsilon,n})_{n\in\mathbb{N}}\) and \((\psi_{\varepsilon,n})_{n\in\mathbb{N}}\) respectively approximating \(\varphi_{\varepsilon}\) and \(\psi_{\varepsilon}\), relying on the fact that these two functions are fixed points of a particular functional. In this paper, we are interested in the convergence of these two sequences to the "Schrodinger potentials" \(\varphi_{\varepsilon}\) and \(\psi_{\varepsilon}\). More precisely, our contributions are as follows. Contributions.We provide a new approach to study the convergence of the Sinkhorn algorithm for the case where the state space \(\mathsf{X}\) is chosen as the \(d\)-dimensional torus \(\mathbb{T}_{L}^{d}=\mathbb{R}^{d}/(L\mathbb{Z}^{d})\), for \(L>0\), endowed with its canonical Riemannian metric. In particular, our analysis exploits the relationship between the Schrodinger bridge problem and Stochastic Optimal Control (SOC). As shown by Leonard (2014) for the case \(\mathsf{X}=\mathbb{R}^{d}\), \(\pi_{\varepsilon}^{*}\) is the distribution of the pair of random variables \((X_{0},X_{1})\), where \((X_{t})_{t\in[0,1]}\) evolves along the stochastic differential equation \(\mathrm{d}X_{t}=u_{\varepsilon}^{*}(t,X_{t})\mathrm{d}t+\sqrt{\varepsilon} \,\mathrm{d}B_{t}\) and \(u_{\varepsilon}^{*}\) is the _control_ function solving \[\inf_{u:[0,1]\times\mathbb{R}^{d}\to\mathbb{R}^{d}}\frac{1}{2}\int_{0}^{1} \mathbb{E}\|u(t,X_{t})\|^{2}\mathrm{d}t\quad\text{such that}\quad\begin{cases} \mathrm{d}X_{t}=u(t,X_{t})\mathrm{d}t+\sqrt{\varepsilon}\,\mathrm{d}B_{t}\,,\\ X_{0}\sim\mu\,,\;X_{1}\sim\nu\,,\end{cases} \tag{6}\] where \((B_{t})_{t\geq 0}\) is a standard Brownian motion over \(\mathbb{R}^{d}\). By establishing new convergence bounds for inhomogeneous controlled processes on \(\mathbb{T}_{L}^{d}\) related to (6) using coupling techniques, we show the pointwise exponential convergence of the sequence of Sinkhorn iterates. While this result is not new, our approach is new in that it is essentially probabilistic in nature. More importantly, this approach allows us to prove our second contribution, namely the convergence of the gradients of the Sinkhorn iterates \((\nabla\varphi_{\varepsilon,n})_{n\in\mathbb{N}}\) and \((\nabla\psi_{\varepsilon,n})_{n\in\mathbb{N}}\) to the gradients of the Schrodinger potentials, \(\nabla\varphi_{\varepsilon}\) resp. \(\nabla\psi_{\varepsilon}\), which are used to estimate the Brenier map. To the best of our knowledge, our analysis is the first to derive convergence of gradients independently from iterates' convergence 1. We highlight that this approach is of primary interest since it can be directly generalized to the unbounded setting. Footnote 1: During the discussion period before acceptance of the present paper, one of the reviewers pointed out that it could be possible to adapt (del Barrio et al., 2022, Lemma 4.8) to obtain convergence of the sequence of the gradients from the convergence of Sinkhorn iterates. However, the constants that would appear in the resulting convergence bounds are not explicit. Outline of the work.The paper is organized as follows. In Section 2, we introduce the theoretical setting of our analysis of the Sinkhorn algorithm, detail our assumptions and present our main result. In Section 3, we discuss the dependence of the convergence rate in the parameters of the problem. We review related work and precisely detail our contributions in Section 4, and present the main steps of our proof in Section 5. 
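To fix ideas before the formal setting, here is a hedged numerical sketch of the Sinkhorn fixed-point iteration on a uniform discretization of the one-dimensional torus with zero potential, where the semigroup reduces to convolution with the periodic heat kernel; the grid size, time horizon, truncation of the image sum, and the marginals are all our illustrative choices, not quantities from the paper:

```python
import numpy as np

# Discrete analogue of Sinkhorn on the 1-torus: a ~ exp(-psi^n),
# b ~ exp(-phi^{n+1}) on the grid, with a row-stochastic proxy for P_T.
L, N, T = 1.0, 64, 0.1
x = np.linspace(0.0, L, N, endpoint=False)
d = x[:, None] - x[None, :]
K = sum(np.exp(-((d + k * L) ** 2) / (2 * T)) for k in range(-5, 6))
K /= K.sum(axis=1, keepdims=True)

mu = np.exp(np.cos(2 * np.pi * x / L)); mu /= mu.sum()
nu = np.exp(np.sin(4 * np.pi * x / L)); nu /= nu.sum()

a = np.ones(N)
for _ in range(200):
    b = mu / (K @ a)       # enforce the first marginal
    a = nu / (K.T @ b)     # enforce the second marginal

pi = b[:, None] * K * a[None, :]   # approximate optimal coupling
print(np.abs(pi.sum(axis=1) - mu).max(), np.abs(pi.sum(axis=0) - nu).max())
```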
Notation. For any measurable space \((\mathsf{X},\mathcal{X})\), we denote by \(\mathcal{P}(\mathsf{X})\) the space of probability measures defined on \((\mathsf{X},\mathcal{X})\). For \(\mu\in\mathcal{P}(\mathsf{X})\), we denote by \(\mathrm{L}^{1}(\mu)\) the set of functions integrable with respect to \(\mu\). For any two distributions \(\mu,\nu\in\mathcal{P}(\mathsf{X})\), we define the Kullback-Leibler divergence between \(\mu\) and \(\nu\) as \(\mathrm{KL}(\mu\mid\nu)=\int_{\mathsf{X}}\mathrm{d}\mu\log(\mathrm{d}\mu/ \mathrm{d}\nu)\) if \(\mu\ll\nu\) and \(\mathrm{KL}(\mu\mid\nu)=+\infty\) otherwise. In the case \(\mathsf{X}=\mathbb{R}^{d}\), we denote by \(\mathrm{Leb}\) the Lebesgue measure and define \(\mathrm{Ent}(\mu)=\int\mathrm{d}\mu\log(\mathrm{d}\mu/\mathrm{d}\mathrm{Leb})\) if \(\mu\ll\mathrm{Leb}\) and \(+\infty\) otherwise. ## 2 Theoretical framework and main results Setting and Sinkhorn iterates. Throughout this paper, we consider two probability measures \(\mu\) and \(\nu\) defined on the torus \(\mathbb{T}_{L}^{d}\coloneqq\mathbb{R}^{d}/L\mathbb{Z}^{d}\) of length \(L>0\). Since \(\mathbb{T}_{L}^{d}\), endowed with addition, is a compact Lie group (Bump, 2013, Chapter 15), we denote by \(\mathtt{H}\) the left Haar measure, which corresponds to its Riemannian volume form (Folland, 2013, Chapter 11.4). Furthermore, we consider in our paper the problem (4) for a particular class of reference measures \(\rho_{\varepsilon}\): for a fixed time horizon \(T>0\), we aim at solving the static Schrodinger problem defined by \[\inf_{\pi\in\Pi(\mu,\nu)}\mathrm{KL}(\pi\mid\mathrm{R}_{0,T})\,, \tag{7}\] where \(\mathrm{R}_{0,T}\) is a distribution on \(\mathbb{T}_{L}^{2d}\) related to the Langevin stochastic differential equation (SDE) \[\mathrm{d}X_{t}=-\nabla V(X_{t})\mathrm{d}t+\mathrm{d}B_{t}\,, \tag{8}\] for a twice continuously differentiable potential function \(V:\mathbb{T}_{L}^{d}\to\mathbb{R}\). Note that for \(V\equiv 0\) the above stochastic dynamics corresponds to a Brownian motion (_i.e._, the dynamics associated with the Laplacian operator). If we further consider as state space \(\mathsf{X}=\mathbb{R}^{d}\), then \(\mathbf{m}\) (the stationary measure of (8), defined below) equals the Lebesgue measure \(\mathrm{Leb}\) and (7) is an equivalent formulation of the entropic transport problem \[\inf_{\pi\in\Pi(\mu,\nu)}\left\{\frac{1}{2}\int\|x-y\|^{2}\,\mathrm{d}\pi(x,y )+T\,\mathrm{KL}(\pi\mid\mu\otimes\nu)\right\}\,.\] For the general case \(V\not\equiv 0\), we refer to Garcia-Portugues et al. (2019) for an introduction to Langevin diffusions on \(\mathbb{T}_{L}^{d}\). Since \(V\) is twice continuously differentiable and \(\mathbb{T}_{L}^{d}\) is compact, by (Kent, 1978, Theorem 10.1), (8) admits a unique solution and defines a Markov semigroup \((\mathrm{P}_{t})_{t\geq 0}\) with bi-continuous transition density \((p_{t})_{t>0}\) with respect to the stationary distribution \(\mathbf{m}(\mathrm{d}x)=\mathrm{e}^{-2V(x)}\mathtt{H}(\mathrm{d}x)\) of (8), which is symmetric, _i.e._, for any \(x,y\), \(p_{t}(x,y)=p_{t}(y,x)\). As a result, for any \(T>0\), \((x,y)\mapsto\mathrm{e}^{-2V(x)}p_{T}(x,y)\) defines a joint density on \((\mathbb{T}_{L}^{d})^{2}\) and \(\mathrm{R}_{0,T}\) is the corresponding probability measure. We now state our main assumption. In particular, we will suppose that the two distributions \(\mu\) and \(\nu\) are equivalent to \(\mathbf{m}\). 
**Assumption 1**: _The potential \(V\) is twice continuously differentiable and there exist two continuously differentiable functions from \(\mathbb{T}_{L}^{d}\) to \(\mathbb{R}\), \(U_{\mu}\) and \(U_{\nu}\), such that_ \[\mu(\mathrm{d}x)=\exp(-U_{\mu}(x))\mathbf{m}(\mathrm{d}x)\,,\quad\nu( \mathrm{d}x)=\exp(-U_{\nu}(x))\mathbf{m}(\mathrm{d}x). \tag{9}\] Under Assumption 1, \(\mathrm{KL}(\mu\mid\mathbf{m})\) and \(\mathrm{KL}(\nu\mid\mathbf{m})\) are finite and (Leonard, 2014, Theorem 2.6) shows that Problem (7) admits a unique minimizer \(\pi^{\star}\in\mathcal{P}(\mathbb{T}_{L}^{d}\times\mathbb{T}_{L}^{d})\) dominated by \(\mathrm{R}_{0,T}\), which can be expressed via Schrodinger potentials \(\varphi^{\star},\psi^{\star}:\mathbb{T}_{L}^{d}\to\mathbb{R}\cup\{\infty\}\) such that \[\frac{\mathrm{d}\pi^{\star}}{\mathrm{d}\mathrm{R}_{0,T}}(x,y)=\exp(-\varphi^{ \star}(x)-\psi^{\star}(y))\,. \tag{10}\] Since \(p_{T}\) is continuously differentiable with respect to both of its variables (Kent, 1978), (Nutz, 2021, Lemma 4.11) implies that \(\varphi^{\star}\) and \(\psi^{\star}\) are also continuous and even Lipschitz. In fact, we will recover this result as a corollary of our results. Here, we assume that the potentials \(\varphi^{\star},\psi^{\star}\) satisfy the symmetric normalization \(\int\varphi^{\star}\mathrm{d}\mu+\mathrm{KL}(\mu\mid\mathbf{m})=\int\psi^{ \star}\mathrm{d}\nu+\mathrm{KL}(\nu\mid\mathbf{m})\). Then, the Sinkhorn algorithm (Sinkhorn, 1964; Di Marino and Gerolin, 2020) consists in defining the sequences of potentials \((\varphi^{n})_{n\in\mathbb{N}}\) and \((\psi^{n})_{n\in\mathbb{N}}\), starting from \(\psi^{0}=0\)2, by the recursion: for \(n\in\mathbb{N}\) Footnote 2: Let us point out the fact that our results hold true for any smooth choice of \(\psi^{0}\). Here, we set \(\psi^{0}=0\) for convenience. \[\varphi^{n+1}\coloneqq U_{\mu}+\log\mathrm{P}_{T}\mathrm{e}^{-\psi^{n}}\,,\qquad \psi^{n+1}\coloneqq U_{\nu}+\log\mathrm{P}_{T}\mathrm{e}^{-\varphi^{n+1}}\,, \tag{11}\] where \((\mathrm{P}_{t})_{t\geq 0}\) is the semigroup associated to the SDE (8). From (9) and (10), it is immediate to deduce that the couple \((\varphi^{\star},\psi^{\star})\) is a fixed point of the above iteration. Moreover, the algorithm can be interpreted as fixing one of the prescribed marginals at each step. More precisely, when \(\psi^{n}\) is given and we compute the next iterate \(\varphi^{n+1}\), we are implicitly prescribing the couple \((\varphi^{n+1},\psi^{n})\) to fit the first marginal constraint, _i.e._, we are imposing that the first marginal of the probability measure \(\mathrm{d}\pi^{n+1,n}/\mathrm{dR}_{0,T}\propto\exp(-\varphi^{n+1}(x)-\psi^{n} (y))\) is exactly \(\mu\). At the next iteration, when we compute \(\psi^{n+1}\) we forget about the first marginal and impose the constraint on the second one, which amounts to imposing that the second marginal of \(\mathrm{d}\pi^{n+1,n+1}/\mathrm{dR}_{0,T}\propto\exp(-\varphi^{n+1}(x)-\psi^ {n+1}(y))\) equals \(\nu\). On the primal side this is also equivalent to minimizing at each step the \(\mathrm{KL}\)-divergence from the previous plan subject to a one-sided marginal constraint, _i.e._, \[\pi^{n+1,n}\coloneqq\arg\min_{\Pi(\mu,\star)}\mathrm{KL}(\cdot|\pi^{n,n})\,,\qquad\pi^{n+1,n+1}\coloneqq\arg\min_{\Pi(\star,\nu)}\mathrm{KL}(\cdot|\pi^{n +1,n})\,, \tag{12}\] where \(\Pi(\mu,\star)\) (resp. \(\Pi(\star,\nu)\)) is the set of probability measures on \((\mathbb{T}^{d}_{L})^{2}\) such that the first marginal is \(\mu\) (resp. 
the second marginal is \(\nu\)). Let us also point out that the choice of an optimal-enough regularization parameter \(T\), which guarantees both fast convergence of the Sinkhorn algorithm and accurate approximation of OT, is still a very active field of research. For instance, in a discrete setting (with \(n\)-atomic supports), Altschuler et al. (2017) suggest that choosing \(T=\log(n)/\tau\) is enough in order to get a \(\tau\)-accuracy with just \(O(\log(n)/\tau^{3})\) iterations. We refer to Peyre and Cuturi (2019) for a further discussion on this trade-off. In order to be consistent with the normalization imposed on the Schrodinger potentials \(\varphi^{\star}\) and \(\psi^{\star}\), we might have to normalize at each step the obtained iterates by considering for any \(n\in\mathbb{N}\) \[\varphi^{\circ n}=\varphi^{n}-\left(\int\varphi^{n}\mathrm{d}\mu-\int\varphi^ {\star}\mathrm{d}\mu\right)\,,\qquad\psi^{\circ n}=\psi^{n}-\left(\int\psi^{n} \mathrm{d}\nu-\int\psi^{\star}\mathrm{d}\nu\right),\] so that \[\int\varphi^{\circ n}\mathrm{d}\mu=\int\varphi^{\star}\mathrm{d}\mu\ \ \text{and}\ \ \int\psi^{\circ n}\mathrm{d}\nu=\int\psi^{\star}\mathrm{d}\nu. \tag{13}\] One may also consider other normalization options, such as the pointwise condition \(\varphi^{\star}(0)=\psi^{\star}(0)=0\), or the zero-mean normalization (Di Marino and Gerolin, 2020; Carlier and Laborde, 2020; Deligiannidis et al., 2021; Carlier, 2022). We consider on the torus \(\mathbb{T}^{d}_{L}\) a _sine-distance_ which best suits our periodic setting. More precisely, for any pair \((x,\,y)\in\mathbb{T}^{d}_{L}\times\mathbb{T}^{d}_{L}\), we define \[\delta(x,y)=L\,\sqrt{\sum_{i=1}^{d}\sin^{2}\!\left(\frac{\pi}{L}(x^{i}-y^{i}) \right)}\in[0,L\,d^{1/2}]\,, \tag{14}\] where \(x=(x^{i})_{i\in[d]}\), \(y=(y^{i})_{i\in[d]}\) and the difference \((\pi/L)(x^{i}-y^{i})\) has to be thought of as an element of the one-dimensional unit torus \(\mathbb{T}^{1}=\mathbb{S}^{1}\) identified with the unit circle. Note that the above sine-distance is indeed a distance (the triangle inequality follows from the properties of \(\sin\)) and is equivalent to the flat distance \(\mathsf{d}\) induced by the Euclidean distance function: \[(\pi L)^{1/2}\,\mathsf{d}(x,y)\leq\delta(x,y)\leq\,\pi\,\mathsf{d}(x,y)\,.\] Let us remark here that our motivation behind adopting such an equivalent metric comes from the coupling techniques considered in Appendix A, where we need to consider a smooth metric on \(\mathbb{T}^{d}_{L}\). Finally, we define the Lipschitz norm of a function \(h:\mathbb{T}_{L}^{d}\to\mathbb{R}\) as \[\|h\|_{\rm Lip}:=\sup_{x\neq y\in\mathbb{T}_{L}^{d}}\frac{|h(x)-h(y)|}{\mathsf{d} (x,y)}\,.\] We are now ready to state our main result. **Theorem 2**: _Assume Assumption 1. 
Then, there exist a rate \(\gamma\in(0,1)\) and a positive constant \(c_{\rm S}>0\) such that_ \[\begin{split}\sup_{x\in\mathbb{T}_{L}^{d}}|\varphi^{\circ n}(x)- \varphi^{\star}(x)|\leq&\,Ld^{1/2}c_{\rm S}\,\gamma^{2\,n-1}\,\| \psi^{0}-\psi^{\star}\|_{\rm Lip}\\ \sup_{x\in\mathbb{T}_{L}^{d}}|\psi^{\circ n}(x)-\psi^{\star}(x)| \leq&\,Ld^{1/2}c_{\rm S}\,\gamma^{2\,n}\,\|\psi^{0}- \psi^{\star}\|_{\rm Lip}\,.\end{split} \tag{15}\] _Similarly, we get the uniform exponential convergence for the gradients_ \[\begin{split}\sup_{x\in\mathbb{T}_{L}^{d}}|\nabla\varphi^{\circ n }(x)-\nabla\varphi^{\star}(x)|\leq&\pi c_{\rm S}\,\gamma^{2\,n- 1}\,\|\psi^{0}-\psi^{\star}\|_{\rm Lip}\\ \sup_{x\in\mathbb{T}_{L}^{d}}|\nabla\psi^{\circ n}(x)-\nabla\psi^ {\star}(x)|\leq&\pi c_{\rm S}\,\gamma^{2\,n}\,\|\psi^{0}-\psi^{ \star}\|_{\rm Lip}\,.\end{split} \tag{16}\] _Moreover, \(\gamma\) and \(c_{\rm S}\) have explicit expressions that can be computed, depending on the choice of the potential \(V\); see (33)._ As detailed in the proof of Theorem 2 given in Section 5, for any potential \(V\) satisfying Assumption 1, there exists an explicit rate \(\bar{\lambda}_{V}>0\) such that the rate \(\gamma\), given by Theorem 2, can be written as \(\gamma={\rm e}^{-\bar{\lambda}_{V}\,\pi^{2}\,T}\). In fact, \(\bar{\lambda}_{V}\) corresponds to the ergodicity rate of the controlled diffusion when considering as underlying reference system the diffusion driven by \(b_{s}(x)=-\nabla V(x)+\nabla\log{\rm P}_{T-s}{\rm e}^{-\psi^{\star}}(x)\), _i.e._, the Schrodinger bridge SDE (Follmer and Gantert, 1997). ## 3 Explicit convergence rates and discussion In this section, we provide explicit _estimates_ of \(\gamma\) and \(c_{\rm S}\), defined in Theorem 2, for a potential \(V\) which is assumed to be \(\alpha\)-semiconvex for some \(\alpha\leq 0\)3, _i.e._, \(V\) satisfies for any \(x,y\in\mathbb{T}_{L}^{d}\), Footnote 3: Let us point out that we cannot expect \(\alpha>0\) since we work on a compact Riemannian manifold. \[\sin\biggl(\frac{\pi}{L}(x-y)\biggr)^{\mathsf{T}}(\nabla V(x)-\nabla V(y) )\geq\frac{\pi\,\alpha}{2L}\,\delta(x,y)^{2}\,, \tag{17}\] where the \(\sin\) function applied to any vector of \(\mathbb{T}_{L}^{d}\) has to be understood as a component-wise map applied to a representative in \([-\pi/2,+\pi/2)\). An example of such a potential is provided in Appendix C.1. For notational convenience, let us denote by \(D=L\,d^{1/2}\) the diameter of the torus \(\mathbb{T}_{L}^{d}\) and let \(\eta_{D}=\exp(D^{2}\,|\alpha|/8)\). Then the estimates of \(\gamma\) and \(c_{\rm S}\) are given by \[\log\gamma\leq\,-\pi^{2}T\frac{|\alpha|/4}{\eta_{D}-1}\,\exp\left(-D\frac{\|U _{\mu}\|_{f_{V}}\vee\|U_{\nu}\|_{f_{V}}}{1-\exp\left(-\frac{|\alpha|/4}{\eta_ {D}-1}\,\pi^{2}\,T\right)}\right) \tag{18}\] and \[\mathrm{c_{S}}\leq 2\frac{\eta_{D}}{\sqrt{L\pi}}\,\exp\left(D\frac{\|U_{\mu}\|_{f_{V }}\vee\|U_{\nu}\|_{f_{V}}}{1-\exp\left(-\frac{|\alpha|/4}{\eta_{D}-1}\,\pi^{2} \,T\right)}\right)\,, \tag{19}\] where \(f_{V}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is a concave and continuous function, employed in our proofs in order to get exponentially contractive Lipschitz estimates, whereas the \(f_{V}\)-Lipschitz norm \(\|\cdot\|_{f_{V}}\) is defined for any function \(\phi:\mathbb{T}_{L}^{d}\to\mathbb{R}\) as \[\|\phi\|_{f_{V}}\coloneqq\sup_{x\neq y\in\mathbb{T}_{L}^{d}}\frac{|\phi(x)- \phi(y)|}{f_{V}(\delta(x,y))}\,.\] The proof of these bounds is postponed to Appendix C. 
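To get a feel for these estimates, the following sketch simply evaluates the right-hand sides of (18) and (19) numerically; it assumes \(\alpha<0\) and takes the quantity \(\|U_{\mu}\|_{f_{V}}\vee\|U_{\nu}\|_{f_{V}}\) as a given input (the helper name is ours).

```python
import numpy as np

def rate_bounds(alpha, L, d, T, U_norm):
    """Evaluate the upper bounds (18) on log(gamma) and (19) on c_S,
    for alpha < 0, with U_norm = max(||U_mu||_{f_V}, ||U_nu||_{f_V})."""
    D = L * np.sqrt(d)                      # diameter of the torus T_L^d
    eta_D = np.exp(D ** 2 * abs(alpha) / 8)
    lam = (abs(alpha) / 4) / (eta_D - 1)
    denom = 1.0 - np.exp(-lam * np.pi ** 2 * T)
    log_gamma = -np.pi ** 2 * T * lam * np.exp(-D * U_norm / denom)
    c_S = 2 * eta_D / np.sqrt(L * np.pi) * np.exp(D * U_norm / denom)
    return log_gamma, c_S

# e.g. alpha = -1 on T_1^2, horizon T = 1, unit potential norms
print(rate_bounds(alpha=-1.0, L=1.0, d=2, T=1.0, U_norm=1.0))
```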
Moreover, for \(V=0\) we recover the classical setting of Brownian motions, which corresponds to quadratically regularized OT, and the computations in Appendix C show that the rate of convergence \(\gamma_{0}\) in the asymptotic regime \(T\to 0\) behaves as \[\log\gamma_{0}\sim-\pi^{2}\,D_{\mu,\nu}^{2}\,D^{4}\,T^{-1}\,\exp(-D_{\mu,\nu} \,D^{3}\,T^{-1})\quad\text{ and }\quad\mathrm{c_{S}}\sim\exp(D_{\mu,\nu}\,D^{3}\,T^{-1})\,, \tag{20}\] where \(D_{\mu,\nu}\coloneqq\frac{1}{2\pi^{2}}\,\|U_{\mu}\|_{f_{0}}\vee\|U_{\nu}\|_{f _{0}}\). The general bounds (18) and (19) (for \(\alpha<0\)), in the asymptotic regime \(T\to 0\) and \(D\to+\infty\), may be reduced to \[|\log\gamma| =\mathcal{O}\left(T\,\eta_{D}^{-1}\exp(-\eta_{D}\,D\,T^{-1}) \right)\,,\] \[\mathrm{c_{S}} =\mathcal{O}(\eta_{D}\exp(\eta_{D}\,D\,T^{-1}))\,,\] where we omitted the constants that do not significantly affect this regime. As expected, \(\gamma\to 1\) and \(\mathrm{c_{S}}\to\infty\), _i.e._, this asymptotic regime drastically slows down the convergence of the Sinkhorn algorithm, especially when \(d\) is large. ## 4 Comparison with existing literature and original contributions The Sinkhorn algorithm is very well-known and its study has intensified particularly after Cuturi (2013). Nonetheless, its introduction dates back to Yule (1912), and it is often referred to as the Iterative Proportional Fitting Procedure (IPFP). We refer to Peyre and Cuturi (2019) for an extensive overview of entropic optimal transport, of the Sinkhorn algorithm, of its generalizations and of their applications. On discrete state spaces, convergence of the Sinkhorn algorithm has been proven for the first time by Sinkhorn (1964) and Sinkhorn and Knopp (1967). In this setting, Franklin and Lorenz (1989) show that the Sinkhorn algorithm is equivalent to a sequence of iterations of a contraction in the Hilbert projective metric and prove its geometric (_i.e._, exponential) convergence by relying on Birkhoff's theorem. We refer also to Borwein et al. (1994), who focus on fixed-point problems in settings more general than the matrix one. In particular, they consider Sinkhorn-type algorithms which turn out to be once again equivalent to iterations of a contraction in the Hilbert projective metric. The continuous counterpart of the Hilbert metric has already been investigated in Chen et al. (2016) and in Deligiannidis et al. (2021). In the latter, the authors provide quantitative stability estimates of IPFP on compact metric spaces, from which its convergence can be deduced. Even though these original approaches also provide quantitative rates of convergence, they scale badly when applied to a multimarginal optimal transport setting. Recently, new ideas from convex theory have been introduced in order to tackle the convergence of the Sinkhorn algorithm in the multimarginal setting too, for bounded costs (or equivalently compact spaces). Along this line of research, it is worth mentioning Carlier and Laborde (2020), where the authors prove well-posedness of Sinkhorn iterates and their smooth dependence on the marginals. In addition, Di Marino and Gerolin (2020) have proven an \(\mathrm{L}^{p}\) (qualitative) convergence of Sinkhorn iterates, and Carlier (2022) improves these results by showing an exponential convergence (with a rate that scales linearly with the number of marginals). 
Regarding the primal formulation and the convergence of \((\pi^{n,n})_{n\in\mathbb{N}}\) defined in (12) to the optimal coupling, Nutz and Wiesel (2022) establish qualitative convergence in total variation. Following this work, Eckstein and Nutz (2022) show quantitative (polynomial) convergence in Wasserstein distance. Lastly, Ghosal and Nutz (2022) derive polynomial convergence (_i.e._, of order \(O(1/n)\)) with respect to a symmetric relative entropy. **Original Contribution.** In this paper we provide a new approach to the study of the Sinkhorn algorithm on the \(d\)-dimensional torus \(\mathbb{T}_{L}^{d}\) and its main novelties can be summarized as follows. * Our proofs rely on probabilistic arguments and coupling methods, by exploiting the connection between the Schrodinger potentials and value functions of stochastic optimal control problems. To the best of our knowledge, this is the first paper addressing the problem relying on a (non-trivial) stochastic interpretation, while the existing literature usually relies on convex analysis and/or on the Hilbert metric. Moreover, this probabilistic approach via stochastic optimal control could in principle be carried over to the unbounded case (e.g. in the Euclidean space \(\mathbb{R}^{d}\)). Here we state our results on the torus since it allows us to work on a compact state space while benefiting from its underlying Euclidean structure. However, our approach could be extended to smooth compact manifolds without boundary, at the expense of technicalities, in particular the definition of an appropriate coupling by reflection. Dealing with the torus allows us to reduce these complications to the bare minimum and to avoid technical details related to general compact state spaces, for which topological conditions generally have to be imposed. * We prove the convergence of Sinkhorn iterates as a corollary of the convergence of their gradients (or equivalently in Lipschitz norms). To the best of our knowledge, our result is the first one addressing the problem directly at the level of the gradients. Moreover, our probabilistic approach provides Lipschitz estimates along solutions of Hamilton-Jacobi-Bellman equations (_i.e._, for any time \(s\in[0,T]\); see (44)). Our results should be compared to Deligiannidis et al. (2021), where Lipschitz estimates close to ours are given, but for iterates \((f_{n},g_{n})_{n\in\mathbb{N}}\) corresponding to \(f_{n}=\mathrm{P}_{T}\,\mathrm{e}^{-\psi^{n}}\) and \(g_{n}=\mathrm{P}_{T}\,\mathrm{e}^{-\varphi^{n}}\). To show their result, Deligiannidis et al. (2021) rely on Birkhoff's theorem for the Hilbert metric since the iterations they consider are then just linear updates. Note that convergence of the gradients of \((\varphi^{n},\psi^{n})_{n\in\mathbb{N}}\) cannot be deduced from their result since \(\varphi^{n}\) and \(\psi^{n}\) are non-linear transformations of \(f_{n}\) and \(g_{n}\) respectively. * We get an exponential rate of convergence \(\gamma=\mathrm{e}^{-\bar{\lambda}_{V}\,\pi^{2}\,T}\) which converges to \(1\) as \(T\downarrow 0\). This exponential dependence on \(T\) is not surprising. Indeed, it is well known that convergence of the Sinkhorn algorithm implies quantitative stability (continuous) bounds for the Schrodinger problem (and entropically regularized optimal transport, see Eckstein and Nutz (2022)), while on the contrary the optimal transport map is only \(1/2\)-Holder continuous by Gigli (2011). 
## 5 Sketch of the proof We now introduce the main components of our method to analyse the convergence of the Sinkhorn iterates given by (11). We first introduce the family of functions \(\{\mathcal{U}_{t}^{T,h}\}_{t\in[0,T]}\), defined for any measurable and bounded function \(h:\mathbb{T}_{L}^{d}\to\mathbb{R}\) by \[\mathcal{U}_{t}^{T,h}=-\log\mathrm{P}_{T-t}\mathrm{e}^{-h}\,,\] which corresponds to the solution of the Hamilton-Jacobi-Bellman (hereafter HJB) equation defined by \[\begin{cases}\partial_{t}u_{t}+\frac{1}{2}\Delta u_{t}-\nabla V\cdot\nabla u_ {t}-\frac{1}{2}|\nabla u_{t}|^{2}=0\\ u_{T}=h\,.\end{cases} \tag{21}\] Indeed, setting \(v_{t}=\mathrm{P}_{T-t}\mathrm{e}^{-h}\), the backward equation \(\partial_{t}v_{t}=-(\frac{1}{2}\Delta-\nabla V\cdot\nabla)v_{t}\), combined with the identities \(\nabla u_{t}=-\nabla v_{t}/v_{t}\) and \(\Delta u_{t}=-\Delta v_{t}/v_{t}+|\nabla v_{t}|^{2}/v_{t}^{2}\) for \(u_{t}=-\log v_{t}\), shows by a direct computation that \(u_{t}=\mathcal{U}_{t}^{T,h}\) solves (21). With these notations, Sinkhorn iterates can be written as \[\varphi^{n+1}=U_{\mu}-\mathcal{U}_{0}^{T,\psi^{n}}\,,\qquad\psi^{n+1}=U_{\nu}- \mathcal{U}_{0}^{T,\varphi^{n+1}}. \tag{22}\] To get some bounds on the Lipschitz constant of \(\varphi^{n+1}\) and \(\psi^{n+1}\), we then show that if \(h:\mathbb{T}_{L}^{d}\to\mathbb{R}\) is Lipschitz, \(\mathcal{U}_{0}^{T,h}\) is also Lipschitz with an explicit bound for its Lipschitz constant. To do so, we use that \(\mathcal{U}_{t}^{T,h}\) can also be represented as the value function of the SOC problem \[\begin{split}\mathcal{U}_{t}^{T,h}(x)&=\inf_{q\in \mathcal{A}_{[t,T]}}\mathbb{E}\bigg[\frac{1}{2}\int_{t}^{T}|q_{s}|^{2}\mathrm{ d}s+h(X_{T}^{q})\bigg]\\ \text{where }&\begin{cases}\mathrm{d}X_{s}^{q}=(- \nabla V(X_{s}^{q})+q_{s})\mathrm{d}s+\mathrm{d}B_{s}\\ X_{t}^{q}=x\,,\end{cases}\end{split} \tag{23}\] where \((B_{s})_{s\geq 0}\) is a \((\mathcal{F}_{s})_{s\geq 0}\)-Brownian motion on \(\mathbb{T}_{L}^{d}\) defined on the filtered probability space \((\Omega,\mathbb{P},\mathcal{F},(\mathcal{F}_{s})_{s\geq 0})\) satisfying the usual conditions, and \(\mathcal{A}_{[t,T]}\) denotes the set of admissible controls, _i.e._, \((\mathcal{F}_{s})_{s\geq 0}\)-progressively measurable processes. We provide a precise statement of this result in Proposition 9 in Appendix A, where we show the optimal control process to be the feedback process \(q_{s}=-\nabla\mathcal{U}_{s}^{T,h}(X_{s}^{q})\). Moreover, Proposition 9 in Appendix A provides a non-trivial control of \(\|\mathcal{U}_{t}^{T,h}\|_{\mathrm{Lip}}\) for any function \(h\in\mathcal{C}^{3}(\mathbb{T}_{L}^{d})\). We give here the main ideas of the proof of this result. We first show that for any pair of stochastic processes \((X_{s},Y_{s})_{s\in[t,T]}\), starting from \(X_{t}=x\) and \(Y_{t}=y\) respectively, solutions of \[\mathrm{d}X_{s} =-\nabla V(X_{s})\mathrm{d}s-\nabla\mathcal{U}_{s}^{T,h}(X_{s}) \mathrm{d}s+\mathrm{d}B_{s}\,, \tag{24}\] \[\mathrm{d}Y_{s} =-\nabla V(Y_{s})\mathrm{d}s-\nabla\mathcal{U}_{s}^{T,h}(X_{s}) \mathrm{d}s+\mathrm{d}\tilde{B}_{s}\,,\] where \((\tilde{B}_{s})_{s\geq 0}\) is a \((\mathcal{F}_{s})_{s\geq 0}\)-Brownian motion on \(\mathbb{T}_{L}^{d}\) defined on \((\Omega,\mathcal{F},\mathbb{P})\) (note that both drifts use the same control \(-\nabla\mathcal{U}_{s}^{T,h}(X_{s})\), which is optimal for \(X\) but merely admissible for \(Y\), so that the running costs cancel in the comparison), it holds by (23), \[\mathcal{U}_{t}^{T,h}(y)-\mathcal{U}_{t}^{T,h}(x)\leq\mathbb{E}\bigg[h(Y_{T} )-h(X_{T})\bigg]\,.\] Then, if \(h\) is Lipschitz, we consider a particular coupling, adapted from the usual coupling by reflection for homogeneous diffusions (Wang, 1994; Eberle, 2016), to bound \(\mathcal{U}_{t}^{T,h}(y)-\mathcal{U}_{t}^{T,h}(x)\). 
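As an aside, for \(V\equiv 0\) and \(d=1\) everything above is explicitly computable on a grid: \(\mathrm{P}_{s}\) is then the heat semigroup on \(\mathbb{T}_{L}^{1}\) and acts in Fourier as a multiplier. The following minimal sketch (with our own naming, and not claimed to be any implementation behind the paper) computes \(\mathcal{U}_{t}^{T,h}\) and performs one Sinkhorn update (22).

```python
import numpy as np

def U_value(h_vals, t, T, L):
    """U_t^{T,h} = -log P_{T-t} e^{-h} on a uniform grid of T_L^1, for V = 0:
    P_s acts in Fourier as the multiplier exp(-s k^2 / 2), k = 2*pi*m/L."""
    k = 2 * np.pi * np.fft.fftfreq(len(h_vals), d=L / len(h_vals))
    v = np.fft.ifft(np.fft.fft(np.exp(-h_vals)) * np.exp(-(T - t) * k ** 2 / 2))
    return -np.log(v.real)

def sinkhorn_update(psi, U_mu, U_nu, T, L):
    """One full Sinkhorn step (22): phi^{n+1} = U_mu - U_0^{T,psi^n},
    then psi^{n+1} = U_nu - U_0^{T,phi^{n+1}} (all functions on one grid)."""
    phi = U_mu - U_value(psi, 0.0, T, L)
    return phi, U_nu - U_value(phi, 0.0, T, L)
```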
Returning to the coupling argument, the novelty of our approach lies in employing coupling by reflection techniques for controlled diffusion processes on the torus, endowed with the distance \(\delta\) given in (14), which defines a smooth distance on \(\mathbb{T}_{L}^{d}\) equivalent to the Riemannian distance \(\mathsf{d}\). An adaptation of the coupling by reflection techniques under this sine-distance is given in Appendix A. Owing to the construction given there, we obtain for any \(t\in[0,T]\) and any Lipschitz \(h:\mathbb{T}_{L}^{d}\to\mathbb{R}\), \[\|\mathcal{U}_{t}^{T,h}\|_{f_{V}}\leq\operatorname{e}^{-\lambda_{V}\,\pi^{2} \,(T-t)}\|h\|_{f_{V}}\,, \tag{25}\] where the rate \(\lambda_{V}>0\) and the function \(f_{V}:\mathbb{R}_{+}\to\mathbb{R}_{+}\), which is concave and continuous, are defined in (35) and (36). Moreover, we prove in Proposition 8 in Appendix A that \(f_{V}\) is equivalent to the identity. Therefore, \(\|\cdot\|_{f_{V}}\) is equivalent to the usual Lipschitz norm \(\|\cdot\|_{\mathrm{Lip}}\) (_i.e._, with \(f_{V}\) being the identity and considering the flat distance) since \(\delta(\cdot,\cdot)\) is equivalent to \(\mathsf{d}(\cdot,\cdot)\). In particular, we have for any function \(\phi:\mathbb{T}_{L}^{d}\to\mathbb{R}\) \[\frac{1}{\pi}\,\|\phi\|_{\mathrm{Lip}}\leq\|\phi\|_{f_{V}}\leq\frac{C_{V}^{-1} }{\sqrt{L\,\pi}}\,\|\phi\|_{\mathrm{Lip}}\,, \tag{26}\] where \(C_{V}\) is defined in (36). By combining (25) with (26), we are then able to bound \(\|\mathcal{U}_{t}^{T,h}\|_{\mathrm{Lip}}\) as \[\|\mathcal{U}_{t}^{T,h}\|_{\mathrm{Lip}}\leq C_{V}^{-1}\sqrt{\frac{\pi}{L}} \operatorname{e}^{-\lambda_{V}\,\pi^{2}\,(T-t)}\|h\|_{\mathrm{Lip}}\,.\] It is then possible to study how Lipschitz regularity propagates along the Sinkhorn iterates using the following result. **Lemma 3**: _Assume Assumption 1. For all \(n\geq 0\) we have_ \[\begin{split}\|\varphi^{n+1}\|_{f_{V}}&\leq\|U_{\mu }\|_{f_{V}}+\operatorname{e}^{-\lambda_{V}\,\pi^{2}\,T}\|\psi^{n}\|_{f_{V}}\\ \|\psi^{n+1}\|_{f_{V}}&\leq\|U_{\nu}\|_{f_{V}}+ \operatorname{e}^{-\lambda_{V}\,\pi^{2}\,T}\|\varphi^{n+1}\|_{f_{V}}\end{split} \tag{27}\] _Moreover, for all \(n\geq 1\) we have_ \[\begin{split}\|\psi^{n}\|_{f_{V}}&\leq\frac{\|U_{ \nu}\|_{f_{V}}+\exp(-\lambda_{V}\,\pi^{2}\,T)\|U_{\mu}\|_{f_{V}}}{1-\exp(-2 \lambda_{V}\,\pi^{2}\,T)}\\ \|\varphi^{n}\|_{f_{V}}&\leq\frac{\|U_{\mu}\|_{f_{V} }+\exp(-\lambda_{V}\,\pi^{2}\,T)\|U_{\nu}\|_{f_{V}}}{1-\exp(-2\lambda_{V}\,\pi ^{2}\,T)}\,.\end{split} \tag{28}\] **Proof** As shown in Proposition 9 in Appendix A (see also (25)), Lipschitz regularity propagates backward along solutions of HJB equations. In particular, it holds \[\|\mathcal{U}_{0}^{T,\psi^{n}}\|_{f_{V}}\leq\|\psi^{n}\|_{f_{V}}\operatorname{ e}^{-\lambda_{V}\,\pi^{2}\,T},\] which, combined with (22) and an application of the triangle inequality, gives the first claim in (27). The second claim follows by symmetry. Concatenating these two bounds, we obtain \[\|\psi^{n+1}\|_{f_{V}}\leq\|U_{\nu}\|_{f_{V}}+\operatorname{e}^{-\lambda_{V} \,\pi^{2}\,T}\|U_{\mu}\|_{f_{V}}+\operatorname{e}^{-2\lambda_{V}\,\pi^{2}\,T}\| \psi^{n}\|_{f_{V}},\] from which the first relation in (28) follows by induction. The second relation follows by symmetry. **Remark 4**: _Let us also point out that the Lipschitz estimates obtained in Lemma 3, as well as the ones proven in Lemma 6 below, hold true also for the normalized Sinkhorn iterates \(\varphi^{\circ n},\psi^{\circ n}\) (and for any other trivial additive perturbation of them). 
Indeed, any additive normalization would cancel out when considering Lipschitz norms._ From the pointwise convergence of the normalized Sinkhorn iterates \(\varphi^{\circ n},\psi^{\circ n}\) towards the Schrodinger potentials (which in our compact and smooth setting is guaranteed by the geometric \(\mathrm{L}^{p}\) convergence in Di Marino and Gerolin (2020)), the previous regularity result propagates to the potentials and, using (26), we obtain the following corollary. **Corollary 5**: _Assume Assumption 1. Then it holds_ \[\|\psi^{\star}\|_{f_{V}} \leq\frac{\|U_{\nu}\|_{f_{V}}+\exp(-\lambda_{V}\,\pi^{2}\,T)\|U_{ \mu}\|_{f_{V}}}{1-\exp(-2\lambda_{V}\,\pi^{2}\,T)} \tag{29}\] \[\|\varphi^{\star}\|_{f_{V}} \leq\frac{\|U_{\mu}\|_{f_{V}}+\exp(-\lambda_{V}\,\pi^{2}\,T)\|U_{ \nu}\|_{f_{V}}}{1-\exp(-2\lambda_{V}\,\pi^{2}\,T)}\] _and therefore the Lipschitz norm of the Schrodinger potentials can be bounded as_ \[\|\psi^{\star}\|_{\mathrm{Lip}} \leq\pi\,\frac{\|U_{\nu}\|_{f_{V}}+\exp(-\lambda_{V}\,\pi^{2}\,T )\|U_{\mu}\|_{f_{V}}}{1-\exp(-2\lambda_{V}\,\pi^{2}\,T)}\] \[\|\varphi^{\star}\|_{\mathrm{Lip}} \leq\pi\,\frac{\|U_{\mu}\|_{f_{V}}+\exp(-\lambda_{V}\,\pi^{2}\,T )\|U_{\nu}\|_{f_{V}}}{1-\exp(-2\lambda_{V}\,\pi^{2}\,T)}\,.\] We are now ready to prove the key contraction estimates, from which our main result follows. Once again, the main idea behind our proof is to rely on a stochastic control problem where the Schrodinger potential contributes to the final cost while its gradient drives the controlled SDE. This allows us to back-propagate along an HJB equation the Lipschitz regularity of the difference between the Sinkhorn iterates and the target Schrodinger potential. Indeed, denoting by \(\mathcal{D}_{t}^{n}\coloneqq\mathcal{U}_{t}^{T,\psi^{n}}-\mathcal{U}_{t}^{T, \psi^{\star}}\) the difference between the evolutions along HJB of \(\psi^{n}\) and \(\psi^{\star}\) respectively, from (21) we deduce that it solves \[\begin{cases}\partial_{t}u_{t}+\frac{1}{2}\Delta u_{t}+(-\nabla V-\nabla \mathcal{U}_{t}^{T,\psi^{\star}})\cdot\nabla u_{t}-\frac{1}{2}|\nabla u_{t}|^{ 2}=0\\ u_{T}=\psi^{n}-\psi^{\star}\,,\end{cases}\] which can be represented (see Proposition 9 in Appendix A) as the value function of the stochastic control problem \[\mathcal{D}_{t}^{n}(x) =\inf_{q}\mathbb{E}\bigg[\frac{1}{2}\int_{t}^{T}|q_{s}|^{2} \mathrm{d}s+\psi^{n}(X_{T}^{q})-\psi^{\star}(X_{T}^{q})\bigg] \tag{30}\] \[\text{where}\quad\begin{cases}\mathrm{d}X_{s}^{q}=(-\nabla V(X_{s }^{q})-\nabla\mathcal{U}_{s}^{T,\psi^{\star}}(X_{s}^{q})+q_{s})\mathrm{d}s+ \mathrm{d}B_{s}\\ X_{t}^{q}=x\,.\end{cases}\] Once the connection with the stochastic optimal control formulation is established, the proof boils down once again to studying how Lipschitz regularity propagates backward along solutions of HJB equations. **Lemma 6**: _Assume Assumption 1. 
There exist \(\bar{\lambda}_{V}>0\), given by (46) in Appendix B, and a continuous concave function \(\bar{f}_{V}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that_ \[\begin{split}\|\psi^{n+1}-\psi^{\star}\|_{\bar{f}_{V}}& \leq\exp(-\bar{\lambda}_{V}\,\pi^{2}\,T)\|\varphi^{n+1}-\varphi^{ \star}\|_{\bar{f}_{V}}\\ \|\varphi^{n+1}-\varphi^{\star}\|_{\bar{f}_{V}}&\leq \exp(-\bar{\lambda}_{V}\,\pi^{2}\,T)\|\psi^{n}-\psi^{\star}\|_{\bar{f}_{V}} \,.\end{split} \tag{31}\] _As a result_ \[\begin{split}\|\psi^{n+1}-\psi^{\star}\|_{\bar{f}_{V}}& \leq\exp(-2\bar{\lambda}_{V}\,\pi^{2}\,T)\|\psi^{n}-\psi^{\star} \|_{\bar{f}_{V}}\\ \|\varphi^{n+1}-\varphi^{\star}\|_{\bar{f}_{V}}&\leq \exp(-2\bar{\lambda}_{V}\,\pi^{2}\,T)\|\varphi^{n}-\varphi^{\star}\|_{\bar{f}_ {V}}\,.\end{split} \tag{32}\] **Proof** The proof is postponed to Appendix B. We can now complete the proof of Theorem 2. **Proof** [Proof of Theorem 2.] Since \(\int\varphi^{\circ n}\mathrm{d}\mu=\int\varphi^{\star}\mathrm{d}\mu\) (see (13)), it holds, uniformly in \(x\in\mathbb{T}^{d}_{L}\), \[\begin{split}|\varphi^{\circ n}(x)-\varphi^{\star}(x)|& =\left|\varphi^{\circ n}(x)-\int_{\mathbb{T}^{d}_{L}}\varphi^{ \circ n}\,\mathrm{d}\mu-\varphi^{\star}(x)+\int_{\mathbb{T}^{d}_{L}}\varphi^{ \star}\,\mathrm{d}\mu\right|\\ &=\left|\int_{\mathbb{T}^{d}_{L}}\!\left[(\varphi^{n}-\varphi^{ \star})(x)-(\varphi^{n}-\varphi^{\star})(y)\right]\!\mathrm{d}\mu(y)\right|\\ &\leq\int_{\mathbb{T}^{d}_{L}}\!\left|(\varphi^{n}-\varphi^{ \star})(x)-(\varphi^{n}-\varphi^{\star})(y)\right|\!\mathrm{d}\mu(y)\\ &\leq\|\varphi^{n}-\varphi^{\star}\|_{\bar{f}_{V}}\int_{\mathbb{ T}^{d}_{L}}\bar{f}_{V}(\delta(x,y))\,\mathrm{d}\mu(y)\\ &\leq L\;d^{1/2}\,\|\varphi^{n}-\varphi^{\star}\|_{\bar{f}_{V}}\,. \end{split}\] Therefore, by concatenating (32) along \(n\) iterates, we end up with \[\begin{split}\sup_{x\in\mathbb{T}^{d}_{L}}|\varphi^{\circ\,n+1} (x)-\varphi^{\star}(x)|&\leq L\,d^{1/2}\,\exp(-2\,n\,\bar{ \lambda}_{V}\,\pi^{2}\,T)\|\varphi^{1}-\varphi^{\star}\|_{\bar{f}_{V}}\\ &\leq L\,d^{1/2}\,\exp(-(2\,n+1)\,\bar{\lambda}_{V}\,\pi^{2}\,T) \|\psi^{0}-\psi^{\star}\|_{\bar{f}_{V}}\,.\end{split}\] By reasoning in the same fashion, since \(\int\psi^{\circ n}\mathrm{d}\nu=\int\psi^{\star}\mathrm{d}\nu\), or simply by relying on (28), we conclude that \[\sup_{x\in\mathbb{T}^{d}_{L}}|\psi^{\circ n}(x)-\psi^{\star}(x)|\leq L\,d^{1/2 }\,\exp(-2\,n\,\bar{\lambda}_{V}\,\pi^{2}\,T)\|\psi^{0}-\psi^{\star}\|_ {\bar{f}_{V}}\,.\] Using (47), we may conclude the proof of (15) by setting \[\gamma=\exp(-\bar{\lambda}_{V}\,\pi^{2}\,T)\text{ and }c_{\mathrm{S}}=\bar{C}_{V} ^{-1}/\sqrt{L\,\pi}\,, \tag{33}\] where \(\bar{\lambda}_{V}\) and \(\bar{C}_{V}\) are respectively defined at (46) and (47). The proof of the convergence of the gradients can be obtained in a similar fashion since \[\sup_{x\in\mathbb{T}^{d}_{L}}|\nabla\varphi^{\circ n}-\nabla\varphi^{\star}|(x )=\sup_{x\in\mathbb{T}^{d}_{L}}|\nabla\varphi^{n}-\nabla\varphi^{\star}|(x) \leq\|\varphi^{n}-\varphi^{\star}\|_{\mathrm{Lip}}\leq\pi\|\varphi^{n}- \varphi^{\star}\|_{\bar{f}_{V}}\] and similarly for \(\psi^{\circ n}-\psi^{\star}\), from which (16) follows by concatenating the contraction in (32). ## 6 Conclusion In this paper, we have introduced a new probabilistic approach to the study of the Sinkhorn algorithm. 
We have shown that each iteration is equivalent to solving a Hamilton-Jacobi-Bellman equation, _i.e._, computing the value function of a stochastic control problem, and showed that the Lipschitz regularity of the previous Sinkhorn iterate propagates to the next one, with a constant dissipative rate. From these contraction estimates we have deduced the exponential convergence of the Sinkhorn iterates and of their gradients. All the dissipative Lipschitz estimates for the value functions of the stochastic control problems considered have been deduced via an application of coupling by reflection techniques for controlled diffusions on the torus. This approach is a complete novelty and could in principle be extended to the non-compact Euclidean case, a problem that we address in the follow-up work (Conforti et al., 2023). GG thanks Ecole Polytechnique for its hospitality, where this research has been carried out, and NDNS+ for funding his visit there (NDNS/2023.004 PhD Travel Grant). GG is supported by the NWO Research Project 613.009.111 "Analysis meets Stochastics: Scaling limits in complex systems". GC acknowledges funding from the grant SPOT (ANR-20-CE40-0014). AD acknowledges support from the Lagrange Mathematics and Computing Research Center. AD would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme _The mathematical and statistical foundation of future data-driven engineering_ when work on this paper was undertaken. This work was supported by: EPSRC grant number EP/R014604/1.
2308.06844
On enumerative problems for maps and quasimaps: freckles and scars
We address the question of counting maps between projective spaces such that images of cycles on the source intersect cycles on the target. In this paper we do it by embedding maps into quasimaps that form a projective space of their own. When a quasimap is not a map, it contains freckles (studied earlier) and/or scars, appearing when the complex dimension of the source is greater than one. We consider a lot of examples showing that freckle/scar calculus (using excess intersection theory) works. We also propose the "smooth conjecture" that may lead to computation of the number of maps by an integral over the space of quasimaps.
Olga Chekeres, Santosh Kandel, Andrey Losev, Pavel Mnev, Konstantin Wernli, Donald R. Youmans
2023-08-13T20:41:47Z
http://arxiv.org/abs/2308.06844v2
# On enumerative problems for maps and quasimaps: freckles and scars ###### Abstract. We address the question of counting maps between projective spaces such that images of cycles on the source intersect cycles on the target. In this paper we do it by embedding maps into quasimaps that form a projective space of their own. When a quasimap is not a map, it contains freckles (studied earlier) and/or scars, appearing when the complex dimension of the source is greater than one. We consider a lot of examples showing that freckle/scar calculus (using excess intersection theory) works. We also propose the "smooth conjecture" that may lead to computation of the number of maps by an integral over the space of quasimaps. The work of A. S. Losev is partially supported by Laboratory of Mirror Symmetry NRU HSE, RF Government grant, ag. N\({}^{\underline{\rm o}}\) 14.641.31.0001. The work of D. R. Youmans was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster). The work of K. Wernli was supported by the ERC SyG project, Recursive and Exact New Quantum Theory (ReNewQuantum) which received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 810573. ###### Contents * 1 Introduction ## 1. Introduction There are two ways to extract extra contributions from the quasimap count: 1. Study the proper quasimap configurations themselves. We present here a lot of examples. The proper treatment of these configurations requires Fulton's excess intersection theory [6]. The main formula is \[\operatorname{QM}=\operatorname{KM}+\operatorname{PQM}\,, \tag{1}\] where \(\operatorname{KM}\) is the number of holomorphic maps (that we call Kontsevich-Manin number1) we are interested in. \(\operatorname{QM}\) is the (easily computable by a Bezout-like formula) total number of quasimaps and \(\operatorname{PQM}\) is the count of proper quasimap configurations. Proper quasimaps include * "freckle" configurations (where the evaluation is not defined at a collection of isolated points - "freckles" - in the source) and * "scar" configurations (evaluation fails on a cycle of positive dimension on the source) appearing for \(\dim_{\mathbb{C}}(\operatorname{source})>1\). 
We define scars in Section 2.4 (see also Example 2.15) and study examples with them in Sections 6.5.2 and 6.6. Footnote 1: For source of complex dimension one, these numbers are known as Gromov-Witten invariants that were effectively computed by Kontsevich-Manin [14]. PQM numbers range in complexity. It is easy to treat isolated freckle configurations. A bit more work is required to treat what we call _quasistable_ examples. However, we found non-quasistable examples where the full machinery of excess intersection theory (in particular, Segre classes) is needed. We plan to return to this issue in a subsequent publication. 2. Study integrals of differential forms over the space of quasimaps that are smooth on the locus of actual maps. Here we experimentally observe two things: 1. such integrals are convergent, 2. they give correct answers in the simplest cases. Therefore, we propose the Smooth Conjecture, which states that, given enough computational power, these numbers could be computed numerically. As a byproduct, we were looking for a higher analog of quantum multiplication - a generating function for \(\operatorname{KM}\) numbers with only \(0\)-dimensional cycles on the source. Surprisingly enough, for source dimension greater than one, there is a quantum ring - a Frobenius algebra with free commutative product but nontrivial counit - which does not descend to a deformation of the cohomology ring of the target. This paper is intended as a self-contained mathematical text motivated by the problem of gauged holomorphic models [9, 16] for complex dimension of the source equal to \(1\) and \(2\). The relation of the numbers that we find here to physics will be explained elsewhere; here we focus on the mathematical side of the problem. **Acknowledgments.** D.Y. would like to thank Felipe Espreafico for interesting discussions. P.M. and K.W. would like to thank the Galileo Galilei Institute, where part of the work was completed, for hospitality. 
## Notations \begin{tabular}{l l} **Notation** & **Description** \\ \(X\) & source manifold, usually \(\mathbb{P}^{k}\) \\ \(Y\) & target manifold, usually \(\mathbb{P}^{n}\) \\ \(\mathrm{QMap}_{d}(X,Y)\) & space of degree \(d\) quasimaps \(X\not\to Y\) \\ \(\mathrm{Map}_{d}(X,Y)\) & space of degree \(d\) holomorphic maps \(X\to Y\) \\ \(\mathrm{PQL}(f)\) & proper quasimap locus of a quasimap \(f\) \\ \(\mathcal{D}\) & enumerative data for maps/quasimaps: the collection of source/target cycles \(c_{1}^{X},\ldots,c_{l}^{X},c_{1}^{Y},\ldots,c_{l}^{Y}\) and the topological type of the map \\ \(\mathrm{KM}(X,Y,\mathcal{D})\) & number of holomorphic maps \(X\to Y\) subject to conditions \(\mathcal{D}\) \\ \(\mathrm{QM}(X,Y,\mathcal{D})\) & number of quasimaps \(X\not\to Y\) subject to conditions \(\mathcal{D}\) \\ \(\mathrm{PQM}(X,Y,\mathcal{D})\) & number of proper quasimaps \(X\not\to Y\) subject to conditions \(\mathcal{D}\) \\ \(\mathrm{Var}\) & "space of variables" - the product of the space of quasimaps and the source cycles, see (46) \\ \(h=c_{1}(O(1)_{\mathbb{P}^{k}})\) & generator of \(H^{2}(\mathbb{P}^{k})\) \\ \(H=c_{1}(O(1)_{\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})})\) & generator of \(H^{2}(\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n}))\); note that \(\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})=\mathbb{P}^{\mathfrak{n}_{k,n,d}}\) \\ \(\zeta=c_{1}(O(1)_{Z})\) & generator of \(H^{2}(Z)\) for a component \(Z\subset\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\) of the zero locus of the section \(\sigma\) of the equation bundle \(E\), assuming \(Z\) is a projective space \\ \(\tilde{e}(E)\) & Euler class of a vector bundle \(E\) \\ \(c(E)\) & total Chern class (or a de Rham representative of it) of a vector bundle \(E\to B\) \\ \end{tabular} ## 2. Projective space of quasimaps between projective spaces ### Definition of a quasimap Let \(X\) be a compact complex manifold and let \[Y=\mathbb{C}^{N}\,/\!\!/\,\,G=(\mathbb{C}^{N}\backslash\mathbf{\Gamma})/G\] be the GIT quotient of \(\mathbb{C}^{N}\) by an action of a reductive algebraic group \(G\) by linear transformations.2 Here \(\mathbf{\Gamma}\subset\mathbb{C}^{N}\) is the unstable locus of the \(G\)-action. Footnote 2: We are thinking here of the projective GIT quotient \(\operatorname{Proj}\left(\Gamma(\mathbb{C}^{N},\oplus_{i\geq 0}\mathcal{L}^{ \otimes i})^{G}\right)\), with \(\mathcal{L}\) the trivial line bundle over \(\mathbb{C}^{N}\) equipped with “linearization” – an extension of the \(G\)-action on the base. Recall that a point \(x\in\mathbb{C}^{N}\) is semistable if the closure of the \(G\)-orbit of \((x,l)\in\mathcal{L}^{\vee}\) (with \(\mathcal{L}^{\vee}\) the dual line bundle) is disjoint from the zero-section for any nonzero element \(l\in\mathcal{L}^{\vee}_{x}\). A non-semistable point of \(\mathbb{C}^{N}\) is “unstable.” We refer the reader e.g. to [22] and [24] for details on GIT quotients. **Definition 2.1**.: A _quasimap_ \(f\) from \(X\) to \(Y\) (we will write \(f\colon X\not\to Y\)) is a pair \((\mathcal{P},\underline{f})\), consisting of a holomorphic \(G\)-bundle \(\mathcal{P}\) over \(X\) and a holomorphic section \(\underline{f}\) of the associated vector bundle \(\mathcal{P}\times_{G}\mathbb{C}^{N}\) over \(X\).3 Footnote 3: Such pairs are closely related to Brill-Noether pairs, see e.g. [11]. The notion of a quasimap is due to V. Drinfeld [5]. Also: the definition of a quasimap we give here is the same as a map to the stack quotient \([\mathbb{C}^{N}/G]\), see e.g. [3, Section 2.3]. 
If the section \(\underline{f}\) satisfies \[\underline{f}(x)\not\in\mathbf{\Gamma}\quad\text{ for all }\;x\in X, \tag{2}\] then it defines a holomorphic map \(X\to Y\) (by abuse of notation we will also denote this map \(f\)). If (2) fails, we call \(f\) a "proper" quasimap. Then we call the set \[\operatorname{PQL}(f):=\{x\in X\mid\underline{f}(x)\in\mathbf{\Gamma}\}\quad \subset X\] the "proper quasimap locus" of \(f\). If \(\operatorname{PQL}(f)\) is a collection of isolated points, we call these points "freckles."4 Footnote 4: The term “freckle” in this context was introduced in [17]. We will denote the space of quasimaps by \(\operatorname{QMap}(X,Y)\), while the space of holomorphic maps \(X\to Y\) will be denoted by \(\operatorname{Map}(X,Y)\). By the discussion above, maps are quasimaps with \(\operatorname{PQL}(f)=\varnothing\): \(\operatorname{Map}(X,Y)\hookrightarrow\operatorname{QMap}(X,Y)\). _Remark 2.2_.: We will mostly discuss the case \(G=\mathbb{C}^{*}\) or \((\mathbb{C}^{*})^{l}\) in this paper, with the quotient \(Y\) a toric manifold. However, another very interesting example is \(G=GL(n,\mathbb{C})\) (and \(Y\) can be e.g. the Grassmannian), which has a connection to gauge theory with nonabelian gauge group and Nekrasov theory. We will discuss this connection in some detail in a separate paper. _Remark 2.3_.: If \(\mathbb{C}^{N}=(\mathbb{C}^{n})^{\times p}\), with the \(G\)-action being the diagonal extension of a \(G\)-action on \(\mathbb{C}^{n}\), then a quasimap \(X\not\to Y\) is the same as a choice of a \(G\)-bundle \(\mathcal{P}\) over \(X\), plus a \(p\)-tuple of sections of the bundle \(\mathcal{P}\times_{G}\mathbb{C}^{n}\) considered modulo \(G\) (acting diagonally on the \(p\)-tuple). E.g., if \(G=GL(n,\mathbb{C})\), this data is equivalent to a choice of a rank \(n\) vector bundle \(V\) over \(X\) and a \(p\)-tuple of its sections, considered modulo \(G\). _Remark 2.4_.: Instead of using the language of GIT quotient for the target \(Y\), one can alternatively use the language of symplectic (Marsden-Weinstein) reduction of \(\mathbb{C}^{N}\) by a compact subgroup \(G_{\operatorname{cpt}}\) of \(G\). ### Evaluation map A quasimap \(f\colon X\not\to Y\) can be evaluated at a point \(x\in X\setminus\operatorname{PQL}(f)\), i.e., one has an evaluation map \[\operatorname{ev}\colon(\operatorname{QMap}(X,Y)\times X)\backslash\{(f,x)\mid x \in\operatorname{PQL}(f)\}\ \longrightarrow\ Y. \tag{3}\] In particular, evaluation of a quasimap at a point of \(\operatorname{PQL}(f)\) (e.g. a freckle) is not defined. Restricted to holomorphic maps (quasimaps with \(\operatorname{PQL}(f)=\varnothing\)), this is the usual evaluation map \[\operatorname{ev}\colon\operatorname{Map}(X,Y)\times X\to Y. \tag{4}\] _Remark 2.5_.: The evaluation maps (3) and (4) are invariant under the group \(\operatorname{Aut}(X)\) of holomorphic automorphisms of \(X\) acting by \(g\cdot(f,x)=(f\circ g^{-1},g(x))\). In particular, for \(X=\mathbb{P}^{k}\), the group of automorphisms is the group of "higher Möbius transformations", \(\operatorname{Aut}(X)=\operatorname{PSL}(k+1,\mathbb{C})\). Below we give a concrete example (Example 2.11) of how the evaluation at a freckle fails. ### The main example: quasimaps \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\) **Explicit formula for \(\operatorname{QMap}(\mathbb{P}^{k},\mathbb{P}^{n})\).** The main example of Definition 2.1 relevant for this paper will be the following: * \(Y=\mathbb{P}^{n}=\mathbb{C}^{n+1}\,/\!\!/\,\mathbb{C}^{*}\) is a complex projective space, with the unstable locus \(\mathbf{\Gamma}=\{0\}\) - the origin. We refer to [22, Example 1.5] for details. 
* \(X=\mathbb{P}^{k}\) is also a complex projective space (possibly of different dimension). * \(\mathcal{P}\) corresponds to the line bundle \(O(d)\) for some \(d\geq 0\)5 - the "degree." In this case we will be speaking of a quasimap of degree \(d\) and denote the space of such quasimaps \(\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\). Footnote 5: More precisely: \(\mathcal{P}\) is the principal \(\mathbb{C}^{*}\)-bundle obtained by cutting out the zero-section from the line bundle \(O(d)\). Using homogeneous coordinates \((x^{0}:\dots:x^{k})\) on the source \(\mathbb{P}^{k}\), a general degree \(d\) quasimap \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\) is a collection \[P^{a}(x^{0},\dots,x^{k}),\quad a=0,\dots,n\] of \(n+1\) homogeneous polynomials of degree \(d\) in \(k+1\) variables, considered up to multiplying all polynomials simultaneously by \(\lambda\in\mathbb{C}^{*}\). We will require that the polynomials \(P^{a}\) are not all identically zero. (Thus, we exclude the most degenerate quasimap from consideration.) As implied by the description above, degree \(d\) quasimaps \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\) are parametrized by the collection of coefficients of the polynomials \(P^{a}\) modulo multiplying all coefficients by \(\lambda\in\mathbb{C}^{*}\). Thus we obtain the following. **Proposition 2.6**.: (5) \[\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})=\mathbb{P}^{\mathfrak{n}_{k,n,d}}\] _- the projective space of dimension_ \[\mathfrak{n}_{k,n,d}=(n+1)\left(\begin{array}{c}d+k\\ k\end{array}\right)-1. \tag{6}\] Here \(-1\) corresponds to the quotient by \(\mathbb{C}^{*}\) and the binomial coefficient \(\left(\begin{array}{c}d+k\\ k\end{array}\right)\) is the number of coefficients in a single homogeneous polynomial \(P^{a}\). _Remark 2.7_.: In the spirit of Remark 2.3, one can identify degree \(d\) quasimaps \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\) with \((n+1)\)-tuples of sections of \(O(d)\) (not all identically zero) modulo \(\mathbb{C}^{*}\) acting diagonally. _Remark 2.8_.: One has the following generalization of the result (5). Quasimaps \(\mathbb{P}^{k_{1}}\times\mathbb{P}^{k_{2}}\not\to\mathbb{P}^{n}\) are characterized by a bi-degree \((d_{1},d_{2})\) (i.e., the corresponding line bundle \(\mathcal{P}\) over \(\mathbb{P}^{k_{1}}\times\mathbb{P}^{k_{2}}\) is \(O(d_{1})\boxtimes O(d_{2})\)). One can interpret a quasimap \(\mathbb{P}^{k_{1}}\times\mathbb{P}^{k_{2}}\not\to\mathbb{P}^{n}\) as a degree \(d_{1}\) quasimap from \(\mathbb{P}^{k_{1}}\) to \(\operatorname{QMap}_{d_{2}}(\mathbb{P}^{k_{2}},\mathbb{P}^{n})=\mathbb{P}^{\mathfrak{n}_{k_{2},n,d_{2}}}\). 
Thus, one has \[\operatorname{QMap}_{d_{1},d_{2}}(\mathbb{P}^{k_{1}}\times \mathbb{P}^{k_{2}},\mathbb{P}^{n}) =\Gamma(\mathbb{P}^{k_{1}}\times\mathbb{P}^{k_{2}},(O(d_{1}) \boxtimes O(d_{2}))\otimes\mathbb{C}^{n+1})\,\big{/}\!\!/\,\mathbb{C}^{*}\] \[=\Gamma(\mathbb{P}^{k_{1}},O(d_{1})\otimes\Gamma(\mathbb{P}^{k_{2 }},O(d_{2})\otimes\mathbb{C}^{n+1}))\,\big{/}\!\!/\,\mathbb{C}^{*}\] \[=\operatorname{QMap}_{d_{1}}(\mathbb{P}^{k_{1}},\underbrace{ \operatorname{QMap}_{d_{2}}(\mathbb{P}^{k_{2}},\mathbb{P}^{n})}_{\mathbb{P}^{ \mathfrak{n}_{k_{2},n,d_{2}}}})\] \[=\mathbb{P}^{\mathfrak{n}_{k_{1},\mathfrak{n}_{k_{2},n,d_{2}},d_{1}}}\] \[=\mathbb{P}^{\mathfrak{n}_{(k_{1},k_{2}),n,(d_{1},d_{2})}} \tag{7}\] - the projective space of dimension \[\mathfrak{n}_{(k_{1},k_{2}),n,(d_{1},d_{2})}=(n+1)\left(\begin{array}{c}d_{ 1}+k_{1}\\ k_{1}\end{array}\right)\left(\begin{array}{c}d_{2}+k_{2}\\ k_{2}\end{array}\right)-1.\] Likewise, the space of quasimaps from any product of projective spaces to \(\mathbb{P}^{n}\) of given multi-degree is itself a projective space: \[\operatorname{QMap}_{d_{1},\ldots,d_{m}}(\mathbb{P}^{k_{1}}\times\cdots \times\mathbb{P}^{k_{m}},\mathbb{P}^{n})=\mathbb{P}^{\mathfrak{n}_{(k_{1}, \ldots,k_{m}),n,(d_{1},\ldots,d_{m})}}, \tag{8}\] where \[\mathfrak{n}_{(k_{1},\ldots,k_{m}),n,(d_{1},\ldots,d_{m})}=(n+1)\prod_{i=1}^{ m}\left(\begin{array}{c}d_{i}+k_{i}\\ k_{i}\end{array}\right)-1. \tag{9}\] ### Stratification of \(\operatorname{QMap}\) (by the type of \(\operatorname{PQL}\)) The space of quasimaps has a natural stratification by the type of the proper quasimap locus (the class \(\alpha\) of \(\operatorname{PQL}(f)\) in the homology of \(X\)6): Footnote 6: Or, more appropriately, the class of the cycle \(\operatorname{PQL}(f)\subset\mathbb{P}^{k}\) in the Chow ring, \(\alpha\in A_{*}(X)\). \[\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})=\bigsqcup_{\alpha} \operatorname{QMap}_{d}^{\alpha}(\mathbb{P}^{k},\mathbb{P}^{n}). \tag{10}\] In particular, holomorphic maps \(\mathbb{P}^{k}\to\mathbb{P}^{n}\) are embedded into QMap as the stratum with \(\alpha=\varnothing\). The union of all the other strata of QMap corresponds to "proper" quasimaps; we will denote it \(\mathrm{QMap}^{\mathrm{pr}}\). We will denote by \(\mathrm{QMap}^{m}\) the stratum corresponding to quasimaps with \(m=1,2,\ldots\) freckles (counted with multiplicities). Thus, stratification (10) has the form \[\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})=\\ =\underbrace{\mathrm{QMap}_{d}^{\varnothing}(\mathbb{P}^{k}, \mathbb{P}^{n})}_{\mathrm{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})}\sqcup \underbrace{\bigsqcup_{m\geq 1}\mathrm{QMap}_{d}^{m}(\mathbb{P}^{k}, \mathbb{P}^{n})\sqcup\bigsqcup_{\alpha,\dim\alpha\geq 1}\mathrm{QMap}_{d}^{ \alpha}(\mathbb{P}^{k},\mathbb{P}^{n})}_{\mathrm{QMap}_{d}^{\mathrm{pr}}(\mathbb{P}^{ k},\mathbb{P}^{n})}. \tag{11}\] The last term here corresponds to quasimaps \(f\) for which \(\mathrm{PQL}(f)\) is not just a collection of isolated points in \(X\), but a cycle of positive dimension. We will call such positive-dimensional PQL loci "scars." 
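The dimension formulas (6) and (9) are straightforward to tabulate. The following minimal Python sketch (our own illustration; the helper names `qmap_dim` and `qmap_dim_multi` are not from the text) reproduces the dimensions of the quasimap spaces appearing in the examples below.

```python
from math import comb

def qmap_dim(k, n, d):
    # Dimension n_{k,n,d} of QMap_d(P^k, P^n) = P^{n_{k,n,d}}, formula (6)
    return (n + 1) * comb(d + k, k) - 1

def qmap_dim_multi(ks, n, ds):
    # Multi-degree version (9) for a source P^{k_1} x ... x P^{k_m}
    dim = n + 1
    for k, d in zip(ks, ds):
        dim *= comb(d + k, k)
    return dim - 1

# Dimensions quoted later in the text:
assert qmap_dim(1, 2, 1) == 5     # QMap_1(P^1, P^2) = P^5
assert qmap_dim(1, 2, 2) == 8     # QMap_2(P^1, P^2) = P^8
assert qmap_dim(2, 2, 1) == 8     # QMap_1(P^2, P^2) = P^8
assert qmap_dim(2, 2, 2) == 17    # QMap_2(P^2, P^2) = P^17
assert qmap_dim_multi([1, 1], 2, [1, 1]) == 11  # bi-degree (1,1) quasimaps P^1 x P^1 -/-> P^2
```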
### Examples **Example 2.9**.: Quasimaps of degree \(d=0\) are constant maps: \[\mathrm{QMap}_{0}(\mathbb{P}^{k},\mathbb{P}^{n})=\mathrm{Map}_{0}(\mathbb{P}^ {k},\mathbb{P}^{n})=\mathbb{P}^{n}.\] **Example 2.10**.: Quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{1}\) of degree \(d\) are given by a pair of homogeneous polynomials \[\begin{split} y^{0}=P^{0}(x^{0},x^{1})&=\sum_{i=0}^{ d}A^{0i}(x^{0})^{i}(x^{1})^{d-i},\\ y^{1}=P^{1}(x^{0},x^{1})&=\sum_{i=0}^{d}A^{1i}(x^ {0})^{i}(x^{1})^{d-i},\end{split} \tag{12}\] up to multiplying both of them by \(\lambda\in\mathbb{C}^{*}\). In terms of nonhomogeneous coordinates \(z=x^{1}/x^{0}\) and \(y=y^{1}/y^{0}\) on the source and target, one has7 Footnote 7: For simplicity here we assume \(A^{0,0},A^{1,0}\neq 0\). \[y=\frac{P^{1}(1,z)}{P^{0}(1,z)}=C\,\frac{\prod_{i=1}^{d}(z-z_{i}^{0})}{\prod_{ j=1}^{d}(z-z_{j}^{\infty})}. \tag{13}\] The constant \(C\) and positions of zeros/poles \(z_{i}^{0},z_{i}^{\infty}\) are the parameters of a quasimap. This is an actual map of degree \(d\) if \(z_{i}^{0}\neq z_{j}^{\infty}\) for all \(i,j\). If \(z_{i}^{0}=z_{j}^{\infty}=w\) (i.e. polynomials (12) have a common linear factor), then (12), (13) is a proper quasimap with a freckle at \(w\). Evaluation of the quasimap at points \(z\neq w\) then corresponds to a holomorphic map one degree lower. If \(m\) pairs of \(z_{i}^{0}\)'s and \(z_{j}^{\infty}\)'s coincide, we have an \(m\)-freckle quasimap; on the complement of freckles its evaluation corresponds to a holomorphic map of degree \(d-m\). **Example 2.11**.: As an illustration of how the evaluation map (3) can fail to exist at a freckle, consider the following sub-example of (12). Consider a \(1\)-parametric family of degree \(1\) quasimaps \(f_{a}\colon\mathbb{P}^{1}\not\to\mathbb{P}^{1}\) given by \[y^{0}=x^{1},y^{1}=ax^{0},\] with \(a\) a parameter. In nonhomogeneous coordinates, the family is \[y=\frac{a}{z}.\] The family consists of maps for \(a\neq 0\) and a proper quasimap for \(a=0\), with freckle at \(z=0\). Let \(\omega=\frac{i}{2\pi}\frac{dyd\bar{y}}{(1+|y|^{2})^{2}}\) be the Fubini-Study form on the target. Then the limit of the pullback of \(\omega\) by the evaluation map, while simultaneously taking \(a\) to zero and the evaluation point to the freckle \[\lim_{a,z\to 0}(\mathrm{ev}|_{f_{a}})^{*}(\omega)=\lim_{a,z\to 0}\frac{i}{2\pi} \frac{(zda-adz)(\bar{z}d\bar{a}-\bar{a}d\bar{z})}{(|z|^{2}+|a|^{2})^{2}} \tag{14}\] fails to exist. Let us introduce the notation \[\mathrm{Conf}_{m}(X):=X^{\times m}/\mathrm{Sym}_{m} \tag{15}\] for the configuration space of \(m\) unordered points on \(X\) (where the points are allowed to collide). **Example 2.12**.: A quasimap \(\mathbb{P}^{1}\not\to\mathbb{P}^{n}\) of degree \(d\) is determined by the (nonzero) matrix of coefficients \((A^{ai})_{0\leq a\leq n,0\leq i\leq d}\) of polynomials \[y^{a}=P^{a}(x^{0},x^{1})=\sum_{i=0}^{d}A^{ai}(x^{0})^{i}(x^{1})^{d-i},\quad a= 0,\ldots,n, \tag{16}\] considered modulo scaling \(A^{ai}\to\lambda A^{ai}\) for \(\lambda\in\mathbb{C}^{*}\). A quasimap has a freckle if all \(P^{a}\)'s have a linear polynomial as a common divisor (the root of this polynomial is the freckle position). Likewise, if all \(P^{a}\)'s have a common divisor \(Q\) of degree \(m\), the quasimap has \(m\) freckles located at the roots of \(Q\). This discussion implies that the stratification (11) has the form \[\mathrm{QMap}_{d}(\mathbb{P}^{1},\mathbb{P}^{n})=\bigsqcup_{m=0}^{d}\mathrm{ QMap}_{d}^{m}(\mathbb{P}^{1},\mathbb{P}^{n}). 
\tag{17}\] The closure of the \(m\)-freckle stratum can be identified with \[\overline{\mathrm{QMap}_{d}^{m}(\mathbb{P}^{1},\mathbb{P}^{n})}=\mathrm{QMap }_{d-m}(\mathbb{P}^{1},\mathbb{P}^{n})\times\mathbb{P}^{m}. \tag{18}\] Here the first factor corresponds to the quasimap defined by the polynomials \(P^{a}/Q\) and the second factor parametrizes the possible polynomials \(Q\) modulo \(\mathbb{C}^{*}\).8 Footnote 8: One can also write the last factor in (18) as \(\mathrm{Conf}_{m}(\mathbb{P}^{1})\), parametrizing the polynomial \(Q\) by its \(m\) roots (as an unordered set) – the positions of freckles. The denominator \(\mathrm{Sym}_{m}\) in (15) is the Galois group. **Proposition 2.13**.: 1. _If_ \(n\geq k\)_, then a quasimap_ \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\) _in general position is a map and, for_ \(d\geq 1\)_, the_ \(1\)_-freckle stratum_ \(\mathrm{QMap}_{d}^{1}(\mathbb{P}^{k},\mathbb{P}^{n})\) _has complex codimension_ \(n+1-k\) _in_ \(\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\)_._ _Moreover, for_ \(m\geq 1\) _and_ \(d\) _large enough, the_ \(m\)_-freckle stratum_ \(\mathrm{QMap}_{d}^{m}(\mathbb{P}^{k},\mathbb{P}^{n})\) _has codimension_ \(m(n+1-k)\) _in_ \(\mathrm{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\)_._ 2. _If_ \(n<k\)_, then a quasimap_ \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\) _of degree_ \(d\geq 1\) _is always proper. For a quasimap_ \(f\) _in general position,_ \(\operatorname{PQL}(f)\) _is a cycle of complex dimension_ \(k-n-1\) _in_ \(\mathbb{P}^{k}\)_._ Proof.: (i): The condition that a quasimap has a freckle at a given point \(x\in\mathbb{P}^{k}\) is a system of \(n+1\) homogeneous linear equations \(P^{a}(x)=0,\ a=0,\ldots,n\) on the coefficients of the polynomials \(P^{a}\). Its solution locus is an intersection of \(n+1\) transversal hyperplanes in QMap, i.e., a projective space \(\mathbb{P}_{x}\subset\operatorname{QMap}\) of codimension \(n+1\). Taking a union over possible positions of the freckle, we obtain a cycle \(\alpha=\cup_{x\in\mathbb{P}^{k}}\mathbb{P}_{x}\) of codimension \(n+1-k\) in \(\operatorname{QMap}\). For \(n\geq k\) this codimension is positive, so a generic quasimap is a map. A subtlety is that a quasimap might have more than one freckle, so the stratum of quasimaps with exactly one freckle is \(\alpha\) with some higher codimension strata removed. The \(m\)-freckle case is similar. (ii): Given a quasimap \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\), its \(\operatorname{PQL}\) locus is the intersection of the zero-loci of polynomials \(P^{a}\), i.e., an intersection of \(n+1\) hypersurfaces in \(\mathbb{P}^{k}\) of degree \(d\). The intersection is, generically (when the hypersurfaces intersect transversally), a cycle of codimension \(n+1\).9 Footnote 9: Another reason why one cannot have a map \(f\colon\mathbb{P}^{k}\to\mathbb{P}^{n}\) of degree \(d>0\) if \(n<k\): consider the generator of the second cohomology of the target \([\omega_{Y}]\in H^{2}(\mathbb{P}^{n})\) and \([\omega_{X}]\) the generator of \(H^{2}(\mathbb{P}^{k})\). One has the relation \([\omega_{Y}]^{n+1}=0\). Since \(f\) has degree \(d\), one also has \(f^{*}[\omega_{Y}]=d[\omega_{X}]\), which implies \(d^{n+1}[\omega_{X}]^{n+1}=0\), which is false in the cohomology of the source. A related remark: a quasimap \(f\) is a section of the bundle \(\underbrace{O(d)\oplus\cdots\oplus O(d)}_{n+1}\) over \(\mathbb{P}^{k}\). 
Its Euler class is \((d[\omega_{X}])^{n+1}\in H^{2(n+1)}(\mathbb{P}^{k})\) and it is the Poincaré dual of the homology class of \(\operatorname{PQL}(f)\) for a generic quasimap \(f\). Note that the proof above also shows that the closure of the \(m\)-freckle stratum is (for \(d\) large enough) birationally equivalent10 to a bundle over \(\operatorname{Conf}_{m}(\mathbb{P}^{k})\) with fiber \(\mathbb{P}^{\mathfrak{n}_{k,n,d}-m(n+1)}\).11 Footnote 10: I.e., isomorphic outside higher-codimension strata. Footnote 11: For \(d\) too small, the \(m\)-freckle stratum might vanish. E.g., for \(m\geq 2\), \(m\)-freckle strata in \(\operatorname{QMap}(\mathbb{P}^{2},\mathbb{P}^{2})\) vanish for \(d=1\); for \(m\geq 5\), \(m\)-freckle strata vanish for \(d=1,2\), cf. Example 2.15. **Example 2.14**.: For \(k\geq 1\), all quasimaps from \(\mathbb{P}^{k}\) to \(\mathbb{P}^{0}\) of degree \(d\geq 1\) are proper. Note that \(\mathbb{P}^{0}=\mathbb{C}\,/\!\!/\,\mathbb{C}^{*}\) is a point with a particular presentation as a GIT quotient. A map to a point is necessarily constant, while the space of quasimaps (5) is nontrivial. **Example 2.15**.: A degree \(d\) quasimap \(\mathbb{P}^{2}\not\to\mathbb{P}^{n}\) has a one-dimensional proper quasimap locus ("scar") if all polynomials \(P^{a}\) are divisible by a nonzero polynomial \(Q\) of some degree \(1\leq\Delta\leq d\). Then the scar is the zero-locus of \(Q\) - a degree \(\Delta\) curve in \(\mathbb{P}^{2}\). It is easy to see that the closure of the corresponding stratum in \(\operatorname{QMap}\) is \[\overline{\operatorname{QMap}_{d}^{\Delta-\operatorname{scar}}(\mathbb{P}^{2}, \mathbb{P}^{n})}=\mathbb{P}^{\mathfrak{n}_{2,n,d-\Delta}}\times\mathbb{P}^{\mathfrak{n}_{2,0,\Delta}}. \tag{19}\] Here the first factor parametrizes the polynomials \(P^{a}/Q\) modulo \(\mathbb{C}^{*}\) and the second factor parametrizes the polynomial \(Q\) itself modulo \(\mathbb{C}^{*}\). * For instance, degree 1 quasimaps \(\mathbb{P}^{2}\not\to\mathbb{P}^{2}\) form \(\mathbb{P}^{8}\), with the following strata: \begin{tabular}{c|c|c} stratum \(\sigma\) & codim(\(\sigma\)) & shape of \(\overline{\sigma}\) \\ \hline maps & 0 & \(\mathbb{P}^{8}\) \\ 1-freckle & 1 & \(\mathbb{P}^{5}\)-bundle over \(\mathbb{P}^{2}\) \\ \(\Delta=1\) scar & 4 & \(\mathbb{P}^{2}\times\mathbb{P}^{2}\) \\ \end{tabular} Here we note that if, for a degree 1 quasimap, PQL contains two points, then it also contains the line through those points. This shows that there are no \(m>1\) freckle cases in the table above (they are subsumed by the scar case). The "shape" of each stratum is given up to birational equivalence. * For degree 1 quasimaps \(\mathbb{P}^{2}\not\to\mathbb{P}^{1}\), the analogous table is: \begin{tabular}{c|c|c} stratum \(\sigma\) & codim(\(\sigma\)) & shape of \(\overline{\sigma}\) \\ \hline 1-freckle & 0 & \(\mathbb{P}^{5}\) \\ \(\Delta=1\) scar & 2 & \(\mathbb{P}^{1}\times\mathbb{P}^{2}\) \\ \end{tabular} * Degree 1 quasimaps \(\mathbb{P}^{2}\not\to\mathbb{P}^{0}\) all belong to the \(\Delta=1\) scar stratum. * Another example: degree 2 quasimaps \(\mathbb{P}^{2}\not\to\mathbb{P}^{2}\). 
The space QMap\({}_{2}(\mathbb{P}^{2},\mathbb{P}^{2})=\mathbb{P}^{17}\) has the following strata: \begin{tabular}{c|c|c} stratum \(\sigma\) & codim(\(\sigma\)) & shape of \(\overline{\sigma}\) \\ \hline maps & 0 & \(\mathbb{P}^{17}\) \\ 1-freckle & 1 & \(\mathbb{P}^{14}\)-bundle over Conf\({}_{1}(\mathbb{P}^{2})\) \\ 2-freckle & 2 & \(\mathbb{P}^{11}\)-bundle over Conf\({}_{2}(\mathbb{P}^{2})\) \\ 3-freckle & 3 & \(\mathbb{P}^{8}\)-bundle over Conf\({}_{3}(\mathbb{P}^{2})\) \\ 4-freckle & 4 & \(\mathbb{P}^{5}\)-bundle over Conf\({}_{4}(\mathbb{P}^{2})\) \\ \(\Delta=1\) scar & 7 & \(\mathbb{P}^{8}\times\mathbb{P}^{2}\) \\ \(\Delta=2\) scar & 10 & \(\mathbb{P}^{2}\times\mathbb{P}^{5}\) \\ \end{tabular} In the last two lines, the second factor describes the position of the scar. Note that if one has more than 5 points in PQL (common zeros of the polynomials \(P^{a}\)) in general position, then PQL is a scar, since a conic in \(\mathbb{P}^{2}\) is uniquely determined by 5 points and so the zero-loci of polynomials \(P^{a}\) coincide. ## 3. Counting holomorphic maps ("KM numbers") ### Formulation of the problem **Enumerative Problem A**.: _Let \(X\) and \(Y\) be two compact complex manifolds (source and target), of dimensions \(k\) and \(n\) respectively. Fix a collection of cycles12 \(c_{1}^{X},\ldots,c_{l}^{X}\) in \(X\) and a collection of cycles \(c_{1}^{Y},\ldots,c_{l}^{Y}\) in \(Y\) (\(l\) is the same in both). Also, fix an element \(\delta\) in the set \([X,Y]\) of homotopy classes of maps \(X\to Y\)._ Footnote 12: By default, a “cycle” in this paper stands for a holomorphic (or, equivalently, algebraic) cycle. _We are interested in the number13_ Footnote 13: The notation stands for “Kontsevich-Manin number” and refers to [14]. \[\operatorname{KM}(X,Y;\{c_{i}^{X},c_{i}^{Y}\}_{i=1,\ldots,l}|\delta) \tag{20}\] of holomorphic maps \(f\colon X\to Y\) in homotopy class \(\delta\) such that the image of \(c_{i}^{X}\) in \(Y\) intersects \(c_{i}^{Y}\) for each \(i=1,\dots,l\)._ By convention, if holomorphic maps subject to the condition above have continuous moduli, we set \(\operatorname{KM}=0\). We are thinking of the problem above in terms of an \(l\)-tuple \(x_{1},\dots,x_{l}\) of points in \(X\) which we want to be mapped to the target cycles. Some of the points \(x_{i}\) may be "fixed" (i.e. the respective source cycle \(c_{i}^{X}\) is a point), some may be "moving freely" (the respective source cycle \(c_{i}^{X}\) is the entire \(X\)); in general the points \(x_{i}\) are constrained to their respective source cycles \(c_{i}^{X}\). While it is possible to study this enumerative problem in full generality, in this paper we will restrict ourselves to the case of maps between projective spaces \(\mathbb{P}^{k}\to\mathbb{P}^{n}\), since most of the phenomena show up already in this case. _Remark 3.1_.: The usual setup of genus zero Gromov-Witten invariants (Gromov-Witten classes integrated over the full moduli space of curves with \(l\) marked points) is a special case of the Enumerative Problem A where \(X=\mathbb{P}^{1}\), with the following choice of source cycles: three cycles among \(c_{i}^{X}\) are fixed points (e.g. \(c_{1}^{X}=\{0\},c_{2}^{X}=\{1\},c_{3}^{X}=\{\infty\}\) in \(X=\mathbb{P}^{1}\) - needed in order to fix the group of automorphisms of \(\mathbb{P}^{1}\)) and the rest are copies of the fundamental cycle \(X\). (I.e., we have \(3\) fixed points and \(l-3\) moving points.) 
Instead of a homotopy class of a map \(\delta\), one usually specifies an element \(\beta\in H_{2}(Y,\mathbb{Z})\), requiring that the holomorphic maps are such that the homology class of the image of \(X\) in \(Y\) is \(\beta\). _Remark 3.2_.: The numbers that we are considering look similar to Donaldson invariants [4, 25]. ### Important remark: meaningfulness of the problem. Syzygies of holomorphicity equations It is widely believed that the problem of holomorphic maps from a higher-dimensional source has virtual dimension equal to \(-\infty\) and that is why it should not be studied on the same footing as holomorphic maps from a \(1\)-dimensional source. However, this argument is not quite correct due to syzygies, as we will outline below. Let \((X,J^{X})\) and \((Y,J^{Y})\) be two almost complex manifolds. A map \(f\colon X\to Y\) is called _pseudo-holomorphic_ if its differential intertwines the two almost complex structures, i.e. if \[J^{Y}\circ f_{*}=f_{*}\circ J^{X}. \tag{21}\] Consequently, a pseudo-holomorphic map \(f\) intertwines the Nijenhuis tensors of \(X\) and \(Y\): \[\partial_{\mu}f^{a}\ (N_{J^{X}})_{\bar{\rho}\bar{\sigma}}^{\mu}=(N_{J^{Y}} \circ f)_{\bar{b}\bar{c}}^{a}\ \overline{\partial_{\rho}f^{b}}\ \overline{\partial_{\sigma}f^{c}}. \tag{22}\] A sufficient condition to satisfy the constraints (22) is the integrability of the source and target complex structures \(J^{X}\) and \(J^{Y}\), which we will assume henceforth. The space of holomorphic maps \(X\to Y\) between two complex manifolds naively has negative infinite virtual dimension if \(\dim_{\mathbb{C}}X\geq 2\). Indeed, if \(\dim_{\mathbb{C}}X=k\) and \(\dim_{\mathbb{C}}Y=n\), then if \(z^{i}\) are local complex coordinates on \(X\), the holomorphicity equations \[\bar{\partial}f^{a}(z)=0 \tag{23}\] yield \(nk\) pointwise conditions for the \(n\) variables \(f^{a}\). Therefore \[\dim_{\mathrm{vir}}\mathrm{Map}(X,Y)=(\#\mathrm{variables}-\#\mathrm{equations}) \cdot\#\mathrm{points}=n(1-k)\cdot\infty\] which equals \(-\infty\) when \(k\geq 2\). However, in higher dimensions (\(k\geq 2\)) there exist syzygies (linear relations among the conditions (23)) which render the dimension of the space of holomorphic maps finite. Since the holomorphicity equations (23) give point-wise conditions, syzygies can be expressed in terms of integrals. For \(\dim_{\mathbb{C}}X=k\geq 2\), we call \(\sigma_{a}\in\Omega^{(k,k-1)}\) a _syzygy_ if \[\int_{X}\sigma_{a}\wedge\bar{\partial}f^{a}=0, \tag{24}\] independently of whether or not \(\bar{\partial}f^{a}=0\). Syzygies in the above sense are thus given by \(\bar{\partial}\)-closed forms. If \(k\) is large enough, syzygies are determined only up to \(\bar{\partial}\)-exact forms. This redundancy usually indicates the presence of syzygies among syzygies. _Remark 3.3_.: In principle it is possible to construct the full tower of syzygies by constructing the Koszul-Tate resolution of an appropriate bundle: Let \(\mathcal{E}\) be the vector bundle over the space of smooth maps \(\mathrm{Map}^{\mathrm{sm}}(X,Y)\), whose fiber above a map \(f\) is given by \[\mathcal{E}_{f}=\Omega^{0,1}(X,f^{*}T^{1,0}Y).\] There exists a natural section \(s\in\Gamma(\mathcal{E})\), which takes a map \(f\) to its Dolbeault differential \(\bar{\partial}f\). The solutions of equation (23) are thus the zero set of the section \(s\). Syzygies (and syzygies for syzygies and so forth) can now be described in terms of Tate generators of the Koszul-Tate resolution of the sheaf of functions on the zero-locus of \(s\). 
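Let us spell out the integration by parts behind the claim above that the syzygies (24) are exactly the \(\bar{\partial}\)-closed forms (a short check, at the level of the local formula with \(f^{a}\) smooth functions). Since \(f^{a}\sigma_{a}\) is a form of type \((k,k-1)\), one has \(d(f^{a}\sigma_{a})=\bar{\partial}(f^{a}\sigma_{a})\) for degree reasons, so Stokes' theorem on the compact manifold \(X\) gives \[0=\int_{X}\bar{\partial}(f^{a}\sigma_{a})=\int_{X}(\bar{\partial}f^{a})\wedge\sigma_{a}+\int_{X}f^{a}\,\bar{\partial}\sigma_{a}=-\int_{X}\sigma_{a}\wedge\bar{\partial}f^{a}+\int_{X}f^{a}\,\bar{\partial}\sigma_{a},\] where we used that \(\sigma_{a}\) has odd total degree \(2k-1\). Hence \(\int_{X}\sigma_{a}\wedge\bar{\partial}f^{a}=\int_{X}f^{a}\,\bar{\partial}\sigma_{a}\), and this vanishes for every smooth \(f\) precisely when \(\bar{\partial}\sigma_{a}=0\).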
### The case of maps \(\mathbb{P}^{k}\to\mathbb{P}^{n}\) Consider the Enumerative Problem A in the case \(X=\mathbb{P}^{k}\), \(Y=\mathbb{P}^{n}\), with \(n\geq k\). Instead of specifying the homotopy class \(\delta\) of maps between the two projective spaces we will be specifying their degree \(d\). We will also restrict ourselves to cycles \(c_{i}^{X},c_{i}^{Y}\) given by intersection of several hyperplanes in general position. The space14 Footnote 14: More appropriately, \(\mathcal{M}\) is a cycle in \(\mathrm{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\) with \(\mathbb{Z}\)-coefficients. From this viewpoint, \(p\) in (26) should be replaced with the pushforward \(p_{*}\) of cycles. \[\mathcal{M}(\mathbb{P}^{k},\mathbb{P}^{n};\{c_{i}^{X},c_{i}^{Y}\}|d) \tag{25}\] of degree \(d\) holomorphic maps \(\mathbb{P}^{k}\to\mathbb{P}^{n}\), such that the images of \(c_{i}^{X}\) intersect \(c_{i}^{Y}\) for \(i=1,\dots,l\), can be represented as follows: \[\mathcal{M}=p\left(\bigcap_{i=1}^{l}\mathrm{ev}_{i}^{-1}c_{i}^{Y}\right)\quad \subset\mathrm{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n}). \tag{26}\] Here \(p\colon\operatorname{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\times\prod_{i=1}^{l}c _{i}^{X}\to\operatorname{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\) is the projection onto the first factor and \[\operatorname{ev}_{i}\colon\operatorname{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n} )\times\prod_{i=1}^{l}c_{i}^{X}\to\mathbb{P}^{n} \tag{27}\] is the evaluation of a map on a point of the \(i\)-th cycle. It follows that the expected (or "virtual") complex dimension of the space (25) is \[\dim_{\operatorname{vir}}\mathcal{M}(\mathbb{P}^{k},\mathbb{P}^{n};\{c_{i}^{X},c_{i}^{Y}\}|d)=\mathfrak{n}_{k,n,d}+\sum_{i=1}^{l}\dim c_{i}^{X}-\sum_{i=1}^{ l}\operatorname{codim}c_{i}^{Y}. \tag{28}\] We assume that if the virtual dimension vanishes, \(\mathcal{M}\) is actually a finite set (for source/target cycles in general position). It is true in all examples that we encounter in this paper. In this case, the KM number (20) counts the points of \(\mathcal{M}\). Note that the evaluation map (27) is invariant under the automorphism group of the source, \(\operatorname{Aut}(\mathbb{P}^{k})=\operatorname{PSL}(k+1,\mathbb{C})\). Hence, in order to have \(\dim\mathcal{M}=0\), we need to assume that the configuration of source cycles \(\{c_{i}^{X}\}_{i=1,\dots,l}\) has discrete stabilizer in \(\operatorname{Aut}(\mathbb{P}^{k})\). In particular, if the source cycles are \(l_{1}\) points in general position and \(l_{2}\) copies of \(\mathbb{P}^{k}\) (i.e. we have \(l_{1}\) fixed points and \(l_{2}\) freely moving points), then we need to assume \(l_{1}\geq k+2\).15 Footnote 15: Note that \(\operatorname{PSL}(k+1,\mathbb{C})\) acts \(3\)-transitively on \(\mathbb{P}^{k}\) for \(k=1\) and “generically \((k+2)\)-transitively” if \(k\geq 2\). More precisely: for two configurations of \(k+2\) points _in general position_ on \(\mathbb{P}^{k}\) there is a unique element of \(\operatorname{PSL}(k+1,\mathbb{C})\) mapping the first configuration to the second. ### Easy enumerative problem For maps between projective spaces, consider the case when all source cycles \(c_{i}^{X}\) are points \(x_{1},\dots,x_{l}\in\mathbb{P}^{k}\) in general position (i.e., in the terminology of Section 3.1, we have only "fixed points" on the source). 
Assume that for each \(i=1,\dots,l\), the target cycle \(c_{i}^{Y}\) is the intersection of hyperplanes \[c_{i}^{Y}=\{y\in\mathbb{C}^{n+1}\backslash\{0\}\mid H_{i,\alpha_{i}}(y)=0\}/ \mathbb{C}^{*} \tag{29}\] specified by nonzero covectors \(H_{i,\alpha_{i}}\in(\mathbb{C}^{n+1})^{\vee}\), with \(\alpha_{i}=1,\dots,\operatorname{codim}c_{i}^{Y}\). Then solutions of the Enumerative Problem A are the nonzero solutions (modulo \(\mathbb{C}^{*}\)) of the system of homogeneous linear equations \[\sum_{a=0}^{n}H_{i,\alpha_{i},a}P^{a}(x_{i}^{0},\dots,x_{i}^{k})=0,\quad i=1, \dots,l,\ \ \alpha_{i}=1,\dots,\operatorname{codim}c_{i}^{Y}\] on coefficients of the polynomials \(P^{a}\) determining the map. Thus we have exactly one solution modulo \(\mathbb{C}^{*}\) if the dimension (28) vanishes, and no solutions or a continuous family of solutions otherwise. In summary, we have the following **Proposition 3.4**.: _Let the source cycles \(\{c_{i}^{X}\}\) be points in general position in \(\mathbb{P}^{k}\) and let the target cycles be intersections of hyperplanes in \(\mathbb{P}^{n}\). Then the answer to the Enumerative Problem A is:_ \[\operatorname{KM}(\mathbb{P}^{k},\mathbb{P}^{n};\{c_{i}^{X},c_{i}^{Y}\}_{i=1, \dots,l}|d)=\left\{\begin{array}{ll}1&\text{if }\mathfrak{n}_{k,n,d}=\sum_{i=1}^{l} \operatorname{codim}c_{i}^{Y},\\ 0&\text{otherwise}\end{array}\right. \tag{30}\] ### Toward higher quantum cohomology and a mysterious theta-function In the case \(X=\mathbb{P}^{1}\), the numbers (30) summed over \(d\) with weight \(q^{d}\) organize into the quantum cohomology of the target \(\mathbb{P}^{n}\), i.e. into the commutative associative product \(*\) on \(H^{\bullet}(\mathbb{P}^{n})[[q]]\) such that \[\langle\alpha_{1}*\alpha_{2}*\cdots*\alpha_{l-1},\alpha_{l}\rangle=\sum_{d \geq 0}q^{d}\text{KM}(\mathbb{P}^{1},\mathbb{P}^{n};\{\text{point}_{i},\alpha_{ i}^{\vee}\}_{i=1,\dots,l}|d), \tag{31}\] where \(\alpha_{i}\in H^{\bullet}(\mathbb{P}^{n})\) are target cohomology classes and \(c_{i}^{Y}=\alpha_{i}^{\vee}\) are cycles in the target representing the Poincare duals of \(\alpha_{i}\);16\(\langle,\rangle\) is the standard Poincare pairing on cohomology. In particular, the "easy KM numbers" (30) for \(l=3\) give the structure constants of the quantum product \(*\) and the case \(l\geq 4\) is recovered as the iterated quantum product.17 Footnote 16: We are implicitly extending the numbers (30) by linearity to general target cycles. Footnote 17: In this discussion one can replace the target \(\mathbb{P}^{n}\) with any compact complex manifold \(Y\); the KM numbers with source cycles being points will be more complicated, but will still arrange into a commutative associative ring structure on \(H^{\bullet}(Y)[[q_{1},\dots,q_{m}]]\), where \(m=\text{rk}\,H^{2}(Y)\). In particular, the quantum cohomology ring of \(\mathbb{P}^{n}\) can be identified with \[\mathbb{C}[[q]][\omega]/(\omega^{n+1}-q) \tag{32}\] - the quotient of the ring of polynomials in the variable \(\omega\) (the class of the Fubini-Study 2-form) by the ideal generated by \(\omega^{n+1}-q\). For instance, in this ring one has \(\omega^{n}*\omega=q\) (while the classical cup product is \(\omega^{n}\smile\omega=0\)). 
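The ring (32) is small enough to experiment with directly. Here is a minimal Python sketch (ours, using sympy; not part of the original text) that realizes the quantum product as polynomial multiplication followed by reduction modulo \(\omega^{n+1}-q\):

```python
from sympy import symbols, rem

q, w = symbols('q w')  # w stands for the Fubini-Study class omega

def quantum_product(p1, p2, n):
    # Product in C[[q]][w] / (w^{n+1} - q), cf. (32):
    # multiply representatives, then reduce modulo w^{n+1} - q
    return rem(p1 * p2, w**(n + 1) - q, w)

n = 2  # target P^2
assert quantum_product(w**n, w, n) == q          # w^n * w = q (quantum correction)
assert quantum_product(w, w, n) == w**2          # classical below top degree
assert quantum_product(w**2, w**2, n) == q * w   # w^2 * w^2 = q w
```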
One can reformulate this structure in terms of a weak18 Frobenius algebra \(\mathbb{C}[[q]][x]\) - the algebra of polynomials in \(x\) (a formal variable identified with the class \(\omega\)) equipped with the standard product and the trace (counit) \(\eta\colon\mathbb{C}[[q]][x]\to\mathbb{C}[[q]]\) defined by Footnote 18: By “weak” we mean that the inner product induced by the counit is degenerate. \[\eta(p(x)):=\frac{1}{2\pi i}\oint_{\gamma}\frac{p(x)dx}{x^{n+1}-q}=\sum_{d \geq 0}q^{d}\frac{1}{2\pi i}\oint_{\gamma}\frac{p(x)dx}{x^{(n+1)(d+1)}},\] with \(p(x)\) a polynomial and \(\gamma\) a closed contour going around the origin. This counit induces a (degenerate) pairing \((p_{1},p_{2}):=\eta(p_{1}p_{2})\). One can then identify the quantum cohomology ring (32) as the quotient of \(\mathbb{C}[[q]][x]\) by the kernel of the pairing \((,)\). The cyclic quantum product (31) can then be written as \[\eta(p_{1}p_{2}\cdots p_{l})=\frac{1}{2\pi i}\oint_{\gamma}\frac{p_{1}(x)\cdots p_{l}(x)dx}{x^{n+1}-q},\] with \(p_{i}(x)\) the polynomials corresponding to the cohomology classes \(\alpha_{i}\). Next, consider the case \(X=\mathbb{P}^{k}\) with \(k\geq 2\). Consider the following multilinear operation ("cyclic higher quantum product") on target cohomology, defined by analogy with (31): \[\mathsf{CQP}(\alpha_{1},\dots,\alpha_{l}):=\sum_{d\geq 0}q^{d}\,\text{KM}(\mathbb{P}^{k},\mathbb{P}^{n};\{\text{point}_{i},\alpha_{i}^{\vee}\}_{i=1,\dots,l}|d). \tag{33}\] We define the _higher quantum cohomology_ of \(Y=\mathbb{P}^{n}\) parametrized by \(X=\mathbb{P}^{k}\) as the Frobenius algebra \(\mathcal{A}_{\mathbb{P}^{k},\mathbb{P}^{n}}:=\mathbb{C}[[q]][x]\) with the standard product and the counit \[\eta(p(x)):=\sum_{d\geq 0}q^{d}\frac{1}{2\pi i}\oint_{\gamma}\frac{p(x)dx}{x^{ \mathfrak{n}_{k,n,d}+1}}. \tag{34}\] Note that \(\eta\) acts on monomials as \[\eta(x^{j})=\left\{\begin{array}{ll}q^{d}&\text{if }j=\mathfrak{n}_{k,n,d}, \\ 0&\text{otherwise.}\end{array}\right.\] **Example 3.5**.: For instance, for \(k=n=2\), \(\eta\) maps monomials as \[x^{2}\mapsto 1,\;x^{8}\mapsto q,\;x^{17}\mapsto q^{2},\;x^{29}\mapsto q^{3}, \;x^{44}\mapsto q^{4},\dots\] while monomials of other degrees \(j\not\in\{2,8,17,29,44,\dots\}\) are mapped to zero. _Remark 3.6_.: For \(k=2\) (i.e. for maps \(\mathbb{P}^{2}\to\mathbb{P}^{n}\)), one can write the counit (34) in terms of the Jacobi theta function \(\theta_{10}\) (in Mumford's notation) as follows: \[\eta(p(x))=\frac{1}{2\pi i}\oint_{\gamma}p(x)F(q,x),\] where \[F(q,x)=\sum_{d\geq 0}\frac{q^{d}}{x^{(n+1)\frac{(d+2)(d+1)}{2}}}dx=\text{reg}_{q=0}\left(q^{-1}\sum_{d^{\prime}\in\mathbb{Z}}\frac{q^{d^{\prime}}}{x^{(n+1)\frac{d^{\prime}(d^{\prime}+1)}{2}}}dx\right)=\text{reg}_{q=0}\Big{(}x^{\frac{n+1}{8}}q^{-\frac{3}{2}}\sum_{d^{\prime}\in\mathbb{Z}}q^{d^{\prime}+\frac{1}{2}}x^{-\frac{n+1}{2}(d^{\prime}+\frac{1}{2})^{2}}\,dx\Big{)}.\] Here \(\text{reg}_{q=0}\) stands for the operation of subtracting the negative part of the Laurent expansion in \(q\); the variables \(z,\tau\) are related to \(q,x\) by \[x^{-\frac{n+1}{2}}=e^{\pi i\tau},\;\;q=e^{2\pi iz}.\] Thus, one has \[\eta(p(x))=-\frac{1}{n+1}\mathsf{P}\oint_{\tilde{\gamma}}d\tau\,p(e^{-\frac{2 \pi i\tau}{n+1}})e^{-2\pi i\tau(\frac{1}{n+1}+\frac{1}{8})}e^{-3\pi iz}\theta_ {10}(z,\tau), \tag{35}\] where \(\mathsf{P}\) is the projection to nonnegative Fourier modes in the variable \(z\); \(\tilde{\gamma}\) is the image of the contour \(\gamma\) in the plane of the modular parameter \(\tau\). 
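As a cross-check of Example 3.5, one can implement the counit (34) on polynomials literally as coefficient extraction. The sketch below (ours; the helper `eta` and its `d_max` cutoff are our choices, not from the text) reproduces the monomial table for \(k=n=2\):

```python
from math import comb
from sympy import symbols, Poly

q, x = symbols('q x')

def eta(p, k, n, d_max=10):
    # Counit (34): eta(x^j) = q^d if j = n_{k,n,d} for some d, else 0
    dims = {(n + 1) * comb(d + k, k) - 1: d for d in range(d_max + 1)}
    coeffs = Poly(p, x).all_coeffs()[::-1]  # coeffs[j] = coefficient of x^j
    return sum(c * q**dims[j] for j, c in enumerate(coeffs) if j in dims)

# k = n = 2 reproduces Example 3.5:
assert eta(x**2, 2, 2) == 1
assert eta(x**8, 2, 2) == q
assert eta(x**17, 2, 2) == q**2
assert eta(x**29, 2, 2) == q**3
assert eta(x**3, 2, 2) == 0   # 3 is not of the form n_{2,2,d}
```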
The following is an obvious consequence of (30). **Corollary 3.7**.: _One has_ \[\mathsf{CQP}(\alpha_{1},\dots,\alpha_{l})=\eta(p_{1}(x)\cdots p_{l}(x)), \tag{36}\] _where \(\alpha_{i}\) are cohomology classes of \(\mathbb{P}^{n}\) and \(p_{i}(x)\) are the corresponding polynomials._ **Lemma 3.8**.: _For \(k\geq 2\), the pairing on \(\mathcal{A}_{\mathbb{P}^{k},\mathbb{P}^{n}}\) induced by the counit, \((p_{1},p_{2})=\eta(p_{1}p_{2})\), has zero kernel._ Proof.: Assume that a polynomial \[p(x)=\sum_{i=0}^{\delta}p_{i}(x)q^{i}\] (where \(p_{i}\) are polynomials in \(x\) independent of \(q\)) is in the kernel of \((,)\), i.e., that \(\eta(p(x)x^{j})=0\) for all \(j\geq 0\). Assume that \(p_{\delta}\) is nonzero. Choose \(d\) large enough such that one has * \(\mathfrak{n}_{k,n,d}\geq\deg p_{\delta}\), * \(\mathfrak{n}_{k,n,d}+\deg p_{i}<\mathfrak{n}_{k,n,d+1}\) for \(i<\delta\). The fact that it is possible to satisfy the second condition relies on the assumption \(k\geq 2\) - then the gaps between consecutive numbers in the sequence \(\{\mathfrak{n}_{k,n,d}\}_{d=0,1,2,\ldots}\) are growing. The expression \[\eta(p(x)x^{\mathfrak{n}_{k,n,d}-\deg p_{\delta}}) \tag{37}\] contains the term \(q^{\delta+d}\cdot(\text{top coefficient of }p_{\delta})\) coming from \(p_{\delta}\) which cannot be canceled by anything from \(p_{<\delta}(x)\). Thus, (37) is nonzero. Hence, a nonzero \(p(x)\) cannot be in the kernel of \((,)\). In particular, Lemma 3.8 implies that unlike in the case of the usual quantum product \((k=1)\), \(\mathcal{A}_{\mathbb{P}^{k},\mathbb{P}^{n}}\) does not have a finite-dimensional quotient isomorphic to the cohomology of the target with a \(q\)-deformed product. Considering the higher quantum product as a \(2\to 1\) operation \(*\), if we identify \(x^{j}\) with cohomology classes \(\omega^{j}\) of \(\mathbb{P}^{n}\) for \(j=0,\ldots,n\), we should say that the quantum product of several cohomology classes \(\xi=\omega^{j_{1}}*\cdots*\omega^{j_{l}}\) is \(\omega^{j_{1}+\cdots+j_{l}}\) if \(j_{1}+\cdots+j_{l}\leq n\), otherwise \(\xi\) does not correspond to an element of \(H^{\bullet}(\mathbb{P}^{n})\) (but corresponds to an element of \(\mathcal{A}_{\mathbb{P}^{k},\mathbb{P}^{n}}\)). In other words, \(H^{\bullet}(\mathbb{P}^{n})\) is identified with a subspace of \(\mathcal{A}_{\mathbb{P}^{k},\mathbb{P}^{n}}\) which is not closed under the quantum product. In the language of QFT (a higher-dimensional A-model localizing to holomorphic maps \(X\to Y\)), we expect the higher quantum cohomology ring to describe the OPE algebra of a class of observables: elements of \(H^{\bullet}(Y)\) correspond to the usual evaluation observables, while other elements of \(\mathcal{A}_{\mathbb{P}^{k},\mathbb{P}^{n}}\) correspond to a new type of observables.19 In particular, OPEs of evaluation observables can contain the "new" observables. Footnote 19: Perhaps, a higher-dimensional counterpart of the “tangency observables” known in the A-model. _Remark 3.9_.: The discussion above generalizes straightforwardly to maps \(\mathbb{P}^{k_{1}}\times\cdots\times\mathbb{P}^{k_{m}}\to\mathbb{P}^{n}\) (cf. Remark 2.8). In this case, one introduces formal parameters \(q_{1},\ldots,q_{m}\) so that a map of multi-degree \((d_{1},\ldots,d_{m})\) is counted with weight \(q_{1}^{d_{1}}\cdots q_{m}^{d_{m}}\). 
The higher quantum cohomology in this case is the Frobenius algebra \(\mathcal{A}_{\mathbb{P}^{k_{1}}\times\cdots\times\mathbb{P}^{k_{m}},\mathbb{P} ^{n}}=\mathbb{C}[[q_{1},\ldots,q_{m}]][x]\) with the standard product and the counit \[\eta(p(x))=\sum_{d_{1},\ldots,d_{m}\geq 0}q_{1}^{d_{1}}\cdots q_{m}^{d_{m}} \frac{1}{2\pi i}\oint_{\gamma}\frac{p(x)dx}{x^{\mathfrak{n}_{(k_{1},\ldots,k_{m}),n,(d_{1},\ldots,d_{m})}+1}}\] with the exponents \(\mathfrak{n}_{(k_{1},\ldots,k_{m}),n,(d_{1},\ldots,d_{m})}\) as in (9). For instance, for maps \(\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{n}\) one has \(\mathcal{A}_{\mathbb{P}^{1}\times\mathbb{P}^{1},\mathbb{P}^{n}}=\mathbb{C}[[q _{1},q_{2}]][x]\) with \[\eta(p(x))=\sum_{d_{1},d_{2}\geq 0}q_{1}^{d_{1}}q_{2}^{d_{2}}\frac{1}{2\pi i} \oint_{\gamma}\frac{p(x)dx}{x^{(n+1)(d_{1}+1)(d_{2}+1)}}.\] ## 4. Counting quasimaps ("QM numbers") ### Formulation of the problem We now describe the _quasimap counting problem_. Let \(X\) be a compact complex manifold and \(Y=\mathbb{C}^{N}\mathbin{/\!\!/}G\). We denote by \(\pi\colon\mathbb{C}^{N}\setminus\boldsymbol{\Gamma}\to Y\) the quotient map. We want to count quasimaps \(f=(\mathcal{P},\underline{f})\) subject to certain conditions. Namely, fix a natural number \(l\) and closed submanifolds \(c_{1}^{X},\ldots,c_{l}^{X}\subset X\) and \(c_{1}^{Y},\ldots,c_{l}^{Y}\subset Y\). We have that \(\overline{\pi^{-1}(c_{i}^{Y})}\) is a \(G\)-space and we can consider the inclusion of fiber bundles \[\iota\colon\mathcal{P}\times_{G}\overline{\pi^{-1}(c_{i}^{Y})}\to\mathcal{P} \times_{G}\mathbb{C}^{N}. \tag{38}\] We denote \(\widetilde{c_{i}^{Y}}=\mathcal{P}\times_{G}\overline{\pi^{-1}(c_{i}^{Y})}\). To condense the notation, we denote \[\mathcal{D}=(\{c_{i}^{X}\}_{i=1}^{l},\{c_{i}^{Y}\}_{i=1}^{l}|\mathcal{P}) \tag{39}\] and call it the _quasimap counting data_. **Enumerative Problem B**.: _Given a quasimap counting data (39), we consider the set_ \[\mathrm{QMap}(X,Y,\mathcal{D})\colon=\!\left\{f\in\mathrm{QMap}(X,Y)\colon f= (\mathcal{P},\underline{f}),\ \underline{f}(c_{i}^{X})\cap\widetilde{c_{i}^{Y}}\neq\varnothing\right\}. \tag{40}\] _If this set is finite we call the quasimap data stable and we define the QM number_ \[\mathrm{QM}(X,Y,\mathcal{D})\colon=\#\mathrm{QMap}(X,Y,\mathcal{D}). \tag{41}\] _Remark 4.1_.: Suppose \(f\in\mathrm{QMap}(X,Y,\mathcal{D})\) and \(\mathrm{PQL}(f)=\varnothing\). Then \(f\) defines a holomorphic map \(f\colon X\to Y\) such that \(f(c_{i}^{X})\cap c_{i}^{Y}\neq\varnothing\), whose homotopy class \(\delta\) is determined by the principal bundle \(\mathcal{P}\). _Remark 4.2_.: We can extend the QM number to collections of _cycles_\(c_{i}^{X}\), \(c_{i}^{Y}\) by multilinearity. #### 4.1.1. Main example Let us consider the main example of quasimaps \(\mathbb{P}^{k}\not\to\mathbb{P}^{n}\). In this case \(\mathcal{P}=O(d)\) for some \(d\geq 1\) and we will denote the quasimap data simply by \(\mathcal{D}=(\{c_{i}^{X}\}_{i=1}^{l},\{c_{i}^{Y}\}_{i=1}^{l}|d).\) Quasimaps \(f=(O(d),P^{\alpha}(x))\) are given by a collection of \(n+1\) homogeneous polynomials in \(k+1\) variables of degree \(d\). We consider target cycles \(c_{i}^{Y}\subset\mathbb{P}^{n}\) such that the closure of the preimage \(\overline{\pi^{-1}(c_{i}^{Y})}\) is a linear subspace of the same codimension and hence can be written as an intersection of \(n_{i}^{Y}=\operatorname{codim}c_{i}^{Y}\) hyperplanes \(H_{i}^{j}\). 
Then, quasimaps \(f\in\mathrm{QMap}(\mathbb{P}^{k},\mathbb{P}^{n},\{c_{i}^{X}\}_{i=1}^{l},\{c_{i }^{Y}\}_{i=1}^{l}|d)\) are given by solutions to the system of equations \[H_{i}^{j}(P^{\alpha}(x_{i}))=0 \tag{42}\] subject to the condition \(x_{i}\in c_{i}^{X}\). Here the coefficients of these equations are the coefficients of \(H_{i}^{j}\), the unknowns are the coefficients of the homogeneous polynomials \(P^{\alpha}\) and the points \(x_{i}\). In particular, the virtual dimension of \(\operatorname{QMap}(\mathbb{P}^{k},\mathbb{P}^{n},\{c_{i}^{X}\}_{i=1}^{l},\{c_{i }^{Y}\}_{i=1}^{l}|d)\) is given by \[\dim_{\operatorname{vir}}\operatorname{QMap}(\mathbb{P}^{k}, \mathbb{P}^{n},\{c_{i}^{X}\}_{i=1}^{l},\{c_{i}^{Y}\}_{i=1}^{l}|d)=\\ =\mathfrak{n}_{k,n,d}+\sum_{i=1}^{l}\dim c_{i}^{X}-\sum_{i=1}^{l}\operatorname{ codim}c_{i}^{Y}, \tag{43}\] cf. (28). **Example 4.3**.: Consider the example \(k=1,n=2\) and \(d\geq 1\). For \(i=1,2,3\) we let \(c_{i}^{X}\) be points in \(\mathbb{P}^{1}\) and \(c_{i}^{Y}\) be lines in \(\mathbb{P}^{2}\). Let \(l^{\prime}\) be a natural number. For \(i=4,\ldots,l^{\prime}+3=:l\), we let \(c_{i}^{X}=\mathbb{P}^{1}\) and \(c_{i}^{Y}\) be points in \(\mathbb{P}^{2}\). Then the virtual dimension of the quasimap space \(\operatorname{QMap}(\mathbb{P}^{k},\mathbb{P}^{n},(c_{i}^{X})_{i=1}^{l},(c_{ i}^{Y})_{i=1}^{l}|d)\) is \[\underbrace{3(d+1)-1}_{\mathfrak{n}_{1,2,d}}+\underbrace{3\cdot 0+l^{\prime}\cdot 1}_{ \sum\dim c_{i}^{X}}-\underbrace{(2l^{\prime}+3)}_{\sum\operatorname{ codim}c_{i}^{Y}}=3d+2-l^{\prime}-3=3d-1-l^{\prime}. \tag{44}\] We interpret this example as follows: By fixing the images of three points in \(\mathbb{P}^{1}\) to lie on lines we "gauge fix" the \(\operatorname{PSL}(2,\mathbb{C})\) action on \(\mathbb{P}^{1}\) (notice that this reduces the dimension of the quasimap space by \(3=\dim\operatorname{PSL}(2,\mathbb{C})\)). To have a valid quasimap count we need to demand that the quasimap passes through \(3d-1\) points in the target \(Y\). In this way we recover the zero virtual dimension condition for counting holomorphic maps. ### Explicit answer from intersection theory In the main example, we can easily compute the QM number from intersection theory. Namely, we observe that given \(H\in(\mathbb{C}^{n+1})^{\vee}\), the map \((P^{\alpha},x)\mapsto H(P^{\alpha}(x))\) can be interpreted as a section of the line bundle \[L_{d}:=\mathcal{O}(1)\boxtimes\mathcal{O}(d)\to\operatorname{QMap}_{d}( \mathbb{P}^{k},\mathbb{P}^{n})\times\mathbb{P}^{k}. \tag{45}\] Equation (42) can then be interpreted as the statement that such a section has a zero. We will use this observation to reformulate the quasimap count as the count of zeros of a section. Namely, given a collection of submanifolds \(c_{i}^{X}\subset\mathbb{P}^{k},i=1,\ldots,l\), we define the compact complex manifold \[\operatorname{Var}=\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n}) \times c_{1}^{X}\times\ldots\times c_{l}^{X}. \tag{46}\] We have the obvious maps \(p_{i}\colon\operatorname{Var}\to\operatorname{QMap}_{d}(\mathbb{P}^{k}, \mathbb{P}^{n})\times\mathbb{P}^{k}\) given by composing the inclusion \(c_{i}^{X}\hookrightarrow\mathbb{P}^{k}\) and the projection to the \(i\)th factor. We then have the following **Proposition 4.4**.: _Given the quasimap counting data \(\mathcal{D}=(\{c_{i}^{X}\}_{i=1}^{l},\{c_{i}^{Y}\}_{i=1}^{l}|d)\) we define the following vector bundle \(E\) over \(\operatorname{Var}\)_ \[E\equiv E(\mathcal{D}):=\bigoplus_{i=1}^{l}\oplus_{j=1}^{\operatorname{ codim}c_{i}^{Y}}p_{i}^{*}L_{d}. 
\tag{47}\] _Writing \(\overline{\pi^{-1}(c_{i}^{Y})}=\cap_{j=1}^{\operatorname{ codim}c_{i}^{Y}}H_{i}^{j}\), one has a section \(\sigma\) of \(E\) given by_ \[\sigma\colon\underbrace{(\underline{f},x_{1},\ldots,x_{l})}_{\in\operatorname{ Var}}\mapsto\underbrace{\Big{(}\Big{(}H_{i}^{j}(\underline{f}(x_{i}))\Big{)}_{i=1} ^{l}\Big{)}_{j=1}^{\operatorname{ codim}c_{i}^{Y}}}_{\in E_{\underline{f},x_{1},\ldots,x_{l}}}.\] _Then_ \[p(\sigma(\operatorname{Var})\cap E_{0})=\operatorname{QMap}(\mathbb{P}^{k}, \mathbb{P}^{n},\mathcal{D}), \tag{48}\] _where \(E_{0}\) denotes the zero section of \(E\) and \(p\) is the projection to the first factor in the r.h.s. of (46)._ _Remark 4.5_.: The base space \(\operatorname{Var}\) has dimension \[\dim\operatorname{Var}=\mathfrak{n}_{k,n,d}+\sum_{i}\dim c_{i}^{X},\] while the vector bundle \(E\) has rank \[\operatorname{rk}(E)=\sum_{i}\sum_{j=1}^{\operatorname{codim}c_{i}^{Y}} \operatorname{rk}(p_{i}^{*}L_{d})=\sum_{i}\operatorname{codim}c_{i}^{Y}.\] The virtual dimension can thus be expressed as \[\dim_{\operatorname{vir}}\operatorname{QMap}(\mathbb{P}^{k},\mathbb{P}^{n}, \mathcal{D})=\dim\operatorname{Var}-\operatorname{rk}(E).\] Observing that the number of zeroes of a generic section of a vector bundle \(E\) is given by the Euler number \(e(E)\) of that vector bundle, we obtain the following **Corollary 4.6**.: _For generic stable quasimap data \(\mathcal{D}\), we have_ \[\operatorname{QM}(\mathbb{P}^{k},\mathbb{P}^{n},\mathcal{D})=e(E(\mathcal{D} )). \tag{49}\] The Euler number can be readily computed from the fact that the Euler class \([\tilde{e}]\) of a complex vector bundle is its top Chern class and that the Euler class is multiplicative under the Whitney sum of vector bundles \[\tilde{e}(E\oplus E^{\prime})=\tilde{e}(E)\wedge\tilde{e}(E^{\prime}).\] Let us denote by \(H\in\Omega^{2}_{cl}(\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n}))\) a representative of the class Poincare dual to a hyperplane in \(\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})=\mathbb{P}^{\mathfrak{n}_{k,n,d}}\) and by \(h\in\Omega^{2}_{cl}(\mathbb{P}^{k})\) - a representative of the class Poincare dual to a hyperplane in \(\mathbb{P}^{k}\). Then the first Chern class of \(L_{d}\) is \[c_{1}(L_{d})=H+d\,h. \tag{50}\] (We suppress in the notation the pullbacks along the projections from \(\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\times\mathbb{P}^{k}\) to the first or second factor.) In particular, we have the following result. **Corollary 4.7**.: _For stable quasimap counting data \(\mathcal{D}\), the \(\operatorname{QM}\) number is given by the following integral_ \[\operatorname{QM}(\mathbb{P}^{k},\mathbb{P}^{n},\mathcal{D})=\int_{ \operatorname{Var}}\tilde{e}(E)=\int_{\operatorname{Var}}\bigwedge_{i=1}^{l} (H+d\,h_{i})^{\operatorname{codim}c_{i}^{Y}}. \tag{51}\] _Here \(h_{i}=p_{i}^{*}h\)._ _Remark 4.8_.: Sometimes we will work with the space \(\widetilde{\operatorname{Var}}=\operatorname{QMap}(X,Y)\times X^{\times l}\) instead. 
Then, for arbitrary representatives \(\delta_{1},\dots,\delta_{l}\) of classes Poincare dual to \(c_{1}^{X},\dots,c_{l}^{X}\), we have \[\begin{split}\operatorname{QM}(\mathbb{P}^{k},\mathbb{P}^{n}, \mathcal{D})&=\int_{\widetilde{\operatorname{Var}}}\pi_{1}^{*} \delta_{1}\wedge\dots\wedge\pi_{l}^{*}\delta_{l}\wedge\tilde{e}(E)\\ &=\int_{\widetilde{\operatorname{Var}}}\pi_{1}^{*}\delta_{1} \wedge\dots\wedge\pi_{l}^{*}\delta_{l}\wedge\bigwedge_{i=1}^{l}(H+d\,\pi_{i}^{* }h)^{\operatorname{codim}\,c_{i}^{Y}}.\end{split} \tag{52}\] where \(\pi_{i}\colon\widetilde{\operatorname{Var}}\to\mathbb{P}^{k}\) is the projection to the \(i\)th copy of \(\mathbb{P}^{k}\). ### QM numbers as sums of map counts and proper quasimap numbers Let us consider again the general case of quasimaps \(f\colon X\not\to Y\). Imposing stable counting data \(\mathcal{D}\) on the decomposition \[\operatorname{QMap}(X,Y)=\operatorname{Map}(X,Y)\sqcup\operatorname{QMap}^{ \operatorname{pr}}(X,Y),\] we obtain \[\operatorname{QMap}(X,Y,\mathcal{D})=\operatorname{Map}(X,Y,\mathcal{D}) \sqcup\operatorname{QMap}^{\operatorname{pr}}(X,Y,\mathcal{D}). \tag{53}\] This immediately implies the following statement: \[\operatorname{QM}(X,Y,\mathcal{D})=\operatorname{KM}(X,Y,\mathcal{D})+\# \operatorname{QMap}^{\operatorname{pr}}(X,Y,\mathcal{D}), \tag{54}\] where \(\operatorname{KM}(X,Y,\mathcal{D})\) is the Kontsevich-Manin number defined in (20). **Definition 4.9**.: Given stable quasimap counting data \(\mathcal{D}\) we denote the number of proper quasimaps in \(\operatorname{QMap}(X,Y,\mathcal{D})\) by \[\operatorname{PQM}(X,Y,\mathcal{D}):=\#\operatorname{QMap}^{\operatorname{pr} }(X,Y,\mathcal{D}). \tag{55}\] We restate (54) in this terminology: **Proposition 4.10**.: _For \(X,Y\) as in Section 4.1 and stable quasimap counting data \(\mathcal{D}\), the \(\operatorname{QM}\) number is the sum of the Kontsevich-Manin and the proper quasimap numbers,_ \[\operatorname{QM}(X,Y,\mathcal{D})=\operatorname{KM}(X,Y,\mathcal{D})+ \operatorname{PQM}(X,Y,\mathcal{D}). \tag{56}\] ## 5. Counting quasimaps in non-stable cases via excess intersection theory ### Quasi-stable case As we will see in the next section, in many examples the counting data turns out to be _unstable_, i.e. the zero locus of the section \(\sigma\) has components of positive dimension. In this case the QM number can be defined by the integral (51), i.e. \[\operatorname{QM}(X,Y,\mathcal{D}):=\int_{\operatorname{Var}}\tilde{e}(E). \tag{57}\] The right hand side no longer has an interpretation as counting the zeros of the section \(\sigma\). However, one can still make sense of the formula \[\operatorname{QM}(X,Y,\mathcal{D})=\operatorname{KM}(X,Y,\mathcal{D})+ \operatorname{PQM}(X,Y,\mathcal{D})\] via _excess intersection theory_[6] (see also the nice introduction in [13, Chapter 8]). We denote by \(\operatorname{Z}(\sigma)\) the zero set of the section \(\sigma\) restricted to proper quasimaps. Then, suppose that \[Z\subseteq\operatorname{Z}(\sigma)\] is a connected component, possibly of positive dimension. If \(Z\) is a smooth submanifold of \(\operatorname{Var}\), we can define the _excess bundle_ on \(Z\) by \[B_{Z}=\frac{E|_{Z}}{\sigma_{*}(N_{Z})}, \tag{58}\] where \(N_{Z}\) is the normal bundle of \(Z\) in \(\operatorname{Var}\) that we identify as a subbundle of \(E\) via the section \(\sigma\). 
We give a special name to configurations where all components of the zero set are of this form: **Definition 5.1**.: In the situation of the Enumerative Problem B, suppose that \(\operatorname{KM}(X,Y,\mathcal{D})\) is finite and \(\operatorname{Z}(\sigma)\) is a union of connected components that are submanifolds defined by transverse intersections, \[\operatorname{Z}(\sigma)=\bigsqcup_{i}Z_{i}.\] Then, we call the counting data \(\mathcal{D}\)_quasi-stable_. For quasi-stable counting data, we have the following result. **Proposition 5.2**.: _Suppose the counting data is quasi-stable and denote by \(Z_{i}\) the connected components of \(\operatorname{Z}(\sigma)\). Let \(c(B_{Z_{i}})\) denote the total Chern class of the bundle \(B_{Z_{i}}\). Then_ \[\operatorname{QM}(X,Y,\mathcal{D})=\int_{\operatorname{Var}}\tilde{e}(E)= \operatorname{KM}(X,Y,\mathcal{D})+\sum_{i}\int_{Z_{i}}c(B_{Z_{i}}).\] Proof.: For a holomorphic vector bundle \(E\to\operatorname{Var}\), the self-intersection class of the zero section is the Euler class of \(E\) capped with the fundamental class of \(\operatorname{Var}\), \[[E_{0}]\cdot[E_{0}]=c_{\operatorname{rk}E}\frown[\operatorname{Var}]\in A_{0} (\operatorname{Var}),\] see e.g. [1, Example 2.9]. Here \(c_{\operatorname{rk}E}\) denotes the \(\operatorname{rk}E\)-th Chern class of \(E\). Since the image \(\sigma(\operatorname{Var})\) of a holomorphic section \(\sigma\colon\operatorname{Var}\to E\) is rationally equivalent to the zero section, the intersection product \([\sigma(\operatorname{Var})]\cdot[E_{0}]=c_{\operatorname{rk}E}\frown[ \operatorname{Var}]\) as well. Then, we have that \[\sigma(\operatorname{Var})\cap E_{0}=\widehat{\operatorname{Map}}(X,Y, \mathcal{D})\sqcup\bigsqcup_{i}Z_{i}. \tag{59}\] In formula (59), \(\widehat{\operatorname{Map}}(X,Y,\mathcal{D})\) is the intersection of the zero set of \(\sigma\) with \(\operatorname{Map}_{d}(X,Y)\times\prod_{i}c_{i}^{X}\). By a Bezout-like theorem, on the one hand the degree of \([\sigma(\operatorname{Var})]\cdot[E_{0}]\) is the quasimap number \(\operatorname{QM}(X,Y,\mathcal{D})\). On the other hand, the intersection product \([\sigma(\operatorname{Var})]\cdot[E_{0}]\) decomposes (in Fulton's terminology, this is the "canonical decomposition") as the sum of pieces supported on the connected components in the r.h.s. of (59). From [6, Proposition 9.1.1], the contribution to \([\sigma(\operatorname{Var})]\cdot[E_{0}]\) supported on any smooth component \(Z_{i}\) is precisely \[c(E|_{Z_{i}})c(N_{Z_{i}})^{-1}\frown[Z_{i}]=c(B_{Z_{i}})\frown[Z_{i}].\] Passing to degrees of \(0\)-cycles and summing over connected components, we obtain the statement. Then we define the proper quasimap number of \(Z_{i}\) by \[\operatorname{PQM}(X,Y,\mathcal{D};Z_{i}):=\int_{Z_{i}}c(B_{Z_{i}}) \tag{60}\] and the total proper quasimap number as \[\operatorname{PQM}(X,Y,\mathcal{D}):=\sum_{i}\operatorname{PQM}(X,Y,\mathcal{ D};Z_{i}). \tag{61}\] In this way, we obtain the generalization of Proposition 4.10: **Proposition 5.3**.: _For X,Y as in Section 4.1 and quasi-stable counting data \(\mathcal{D}\), we have_ \[\operatorname{QM}(X,Y,\mathcal{D})=\operatorname{KM}(X,Y,\mathcal{D})+ \operatorname{PQM}(X,Y,\mathcal{D}). \tag{62}\] We remark that because of the splitting principle, for any component \(Z\subset\operatorname{QMap}(X,Y,\mathcal{D})\) we can compute the total Chern class \(c(B_{Z})\) appearing in (60) as \[c(B_{Z})=\frac{c(E|_{Z})}{c(N_{Z})}=\frac{c(E|_{Z})c(Z)}{c(\operatorname{Var})| _{Z}}. 
\tag{63}\] For later convenience, let us write out \(c(B_{Z})\) in detail: The external product \(E_{1}\boxtimes E_{2}\to X_{1}\times X_{2}\) of two vector bundles \(E_{i}\to X_{i}\), \(i=1,2\), is defined by the tensor product \(\operatorname{pr}_{1}^{*}E_{1}\otimes\operatorname{pr}_{2}^{*}E_{2}\), where \(\operatorname{pr}_{i}\colon X_{1}\times X_{2}\to X_{i}\) is the projection to the first, respectively the second factor. Now recall from Section 4.2 that for a given quasimap counting data \(\mathcal{D}\), we defined the space of variables \[\operatorname{Var}=\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n}) \times c_{1}^{X}\times\cdots\times c_{l}^{X}\] and the vector bundle of equations \(E\to\operatorname{Var}\) \[E=\bigoplus_{i=1}^{l}\oplus_{j=1}^{\operatorname{codim}c_{i}^{Y}}p_{i}^{*}L_ {d}. \tag{64}\] By the Whitney sum formula and naturality of the total Chern class, one has \[c(E|_{Z})=\prod_{i=1}^{l}\prod_{j=1}^{\operatorname{codim}c_{i}^{Y}}p_{i}^{*} c(L_{d})|_{Z}, \tag{65}\] where \[p_{i}^{*}c(L_{d})|_{Z}=(1+H+d\,h_{i})|_{Z}. \tag{66}\] Moreover, since \(\operatorname{QMap}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})=\mathbb{P}^{n_{k,n,d}}\), \[T\operatorname{Var}=\pi_{\mathbb{P}^{n_{k,n,d}}}^{*}T\mathbb{P}^{n_{k,n,d}} \oplus\bigoplus_{i=1}^{l}\pi_{c_{i}^{X}}^{*}Tc_{i}^{X}, \tag{67}\] where \(\pi_{\mathbb{P}^{n_{k,n,d}}}\colon\mathrm{Var}\to\mathbb{P}^{n_{k,n,d}}\) and \(\pi_{c_{i}^{X}}\colon\mathrm{Var}\to c_{i}^{X}\) denote the projections to the components of \(\mathrm{Var}\). Hence, \[\begin{split} c(B_{Z})&=\frac{\prod_{i=1}^{l}\prod_{j= 1}^{\mathrm{codim}\,c_{i}^{Y}}(1+H+d\,h_{i})|_{Z}\cdot c(Z)}{\pi_{\mathbb{P}^{n_ {k,n,d}}}^{*}c(\mathbb{P}^{n_{k,n,d}})\big{|}_{Z}\,\prod_{i=1}^{l}\pi_{c_{i}^{X} }^{*}c(c_{i}^{X})\big{|}_{Z}}\\ &=\frac{\prod_{i=1}^{l}\prod_{j=1}^{\mathrm{codim}\,c_{i}^{Y}}(1+ H+d\,h_{i})|_{Z}\cdot c(Z)}{(1+H)^{n_{k,n,d}+1}\big{|}_{Z}\,\prod_{i=1}^{l}\pi_{c_{i}^{ X}}^{*}c(c_{i}^{X})\big{|}_{Z}}.\end{split} \tag{68}\] ### General unstable case. First appearance of Segre classes In the general case, connected components \(Z\subset\mathrm{Z}(\sigma)\) need not be smooth submanifolds of \(\mathrm{Var}\). In this case, \(Z\) does not have a well-defined normal bundle and hence we cannot apply the formulas given above directly. To generalize them to the unstable case, first note that if \(\mathcal{D}\) is quasi-stable we can rewrite (60) as \[\mathrm{PQM}(X,Y,\mathcal{D};Z_{i})=\int_{Z_{i}}c(B_{Z_{i}})=\int_{\mathrm{ Var}}c(E)c(N_{Z_{i}})^{-1}\eta_{Z_{i}},\] where \(\eta_{Z_{i}}\) denotes a representative of the class Poincare dual to \(Z_{i}\). For the unstable case, \(c(N_{Z_{i}})^{-1}\eta_{Z_{i}}\) should be replaced with the Segre class \(s(Z_{i},\mathrm{Var})\) of \(Z_{i}\) in \(\mathrm{Var}\). Recall (see e.g. [1]) that for a variety \(Y\), and a closed embedding \(X\subset Y\), there is a Segre class \(s^{\vee}(X,Y)\in A_{*}(X)\) (the Chow ring of \(X\)) uniquely characterized by the properties that 1. for regular embeddings \(s^{\vee}(X,Y)=c^{\vee}(N_{X}Y)^{-1}\cap[X]\) and 2. for \(f\colon Y^{\prime}\to Y\) a proper, onto, birational morphism of varieties, we have (69) \[s^{\vee}(X,Y)=(f|_{f^{-1}(X)})_{*}s^{\vee}(f^{-1}(X),Y^{\prime}).\] We will denote without further comment20 by \(s(Z,\mathrm{Var})\in\Omega^{\bullet}(\mathrm{Var})\) a representative of a class Poincare dual to the pushforward of the Segre class to the Chow ring of \(\mathrm{Var}\). 
We then define the PQM number of \(Z_{i}\) as Footnote 20: Our notation is somewhat opposite to the usual algebraic geometry conventions because we work in the differential form formalism. \[\mathrm{PQM}(X,Y,\mathcal{D};Z_{i}):=\int_{\mathrm{Var}}c(E)s(Z_{i},\mathrm{ Var}). \tag{70}\] Defining the total PQM number again by (61), we see that with this definition Proposition 4.10 extends to the unstable case. In general, computation of Segre classes is a hard problem, we will defer examples of such computations to the next paper. ## 6. Enumerative examples In this section we consider various examples for the count of quasimaps and the decomposition of the QM number into holomorphic maps and proper quasimaps. We consider a number of examples. In these examples we will meet 1- and 2-dimensional sources. For 2-dimensional sources we also study 1-dimensional cycles in the source. We will meet both freckles and scars. Examples range in complexity. In the simplest cases, it is just computation of the number of freckle configurations. In more complicated cases, we have to apply excess intersection theory. In the simplest (quasi-stable) cases it easily doable. In more complicated cases we have to invoke Segre classes. However, there are two examples where PQL locus is so peculiar, see Figures 15 and 12 that we postpone computation to a subsequent paper. ### Quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{2}\) Let us first consider quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{2}\) of degree \(1\) and \(2\), so that \[\dim\operatorname{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{2}) =5,\] \[\dim\operatorname{QMap}_{2}(\mathbb{P}^{1},\mathbb{P}^{2}) =8.\] #### 6.1.1. \(0+2=2\) - An example with no holomorphic maps Take \(d=1\). The minimal amount of cycles we need for a stable quasimap counting data is \(l=3\), with \(c_{i}^{X}\) a point for \(i=1,2\), \(c_{3}^{X}=\mathbb{P}^{1}\) and \(c_{i}^{Y}\) a point for \(i=1,2,3\), cf. Figure 1A. This counting data is stable and \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{2},\mathcal{D})=\int_{\mathbb{P }^{5}\times\mathbb{P}^{1}_{(3)}}H^{2}H^{2}(H+p_{3}^{*}h)^{2}=2,\] with notations \(H,h\) as in (51); the subscript in \(\mathbb{P}^{1}_{(3)}\) reminds that it is the third source cycle. However, for \(c_{i}^{Y}\) in general position there is no degree \(1\) holomorphic map \(\mathbb{P}^{1}\to\mathbb{P}^{2}\) passing through all of them, as three points in \(\mathbb{P}^{2}\) are generically not contained in a line. Hence, there must be exactly two proper quasimaps for this counting data. Indeed, a degree \(1\) quasimap sending two fixed points in \(\mathbb{P}^{1}\) to two fixed points in \(\mathbb{P}^{2}\) is, up to a change of coordinates, of the form \[\underline{f}(x^{0}:x^{1})=(ax^{0}:bx^{1}:0),\] Figure 1. Configuration of source and target cycles in the \(0+2=2\) example. We have two fixed points \(c_{1}^{X}=(0:1)=0\), \(c_{2}^{X}=(1:0)=\infty\) in the source and one running point \(x\) (the circle around \(x\) indicates it is running.) Number of lines \(\lambda_{i}\) through a source point denotes the codimension of corresponding target cycle \(c_{i}^{Y}\) which is given as intersection of \(\lambda_{i}\) hyperplanes in the target \(\mathbb{P}^{2}\) (in this case, \(\lambda_{i}\equiv 2\).) where \((a:b)\in\mathbb{P}^{1}\) are projective coordinates (put differently, this corresponds to fixing \(c_{1}^{X}=(1:0),c_{2}^{X}=(0:1),c_{1}^{Y}=(1:0:0),c_{2}^{Y}=(0:1:0)\)). 
In particular, either \(a\) or \(b\) can be zero, in which the quasimap has a freckle, at the point \((1:0)\) for \(a=0\) or at the point \((0:1)\) for \(b=0\). Explicitly, the two proper quasimaps are given by \[\underline{f}_{1}(x^{0}:x^{1}) =(x^{0}:0:0),\] \[\underline{f}_{2}(x^{0}:x^{1}) =(0:x^{1}:0).\] That is, the two proper quasimap solutions to the enumerative problem have a freckle at \(c_{i}^{X}\), \(i=1,2\). Note that in this case, the proper quasimap solves the equation at the running point exactly when the running point hits the freckle, i.e. sits equally at \(c_{i}^{X}\). The situation is schematically depicted in Figure 1B. One might think that those violate the condition \(f(c_{i}^{X})=c_{i}^{Y}\), but this is not the case: Indeed, as a quasimap \(\underline{f}\) maps to \(\mathbb{C}^{3}\) (rather than \(\mathbb{P}^{2}\)) and we ask that it intersects the _lifts_ of the target cycles, evaluating \(\underline{f}\) at the freckle yields \(0\) which lies in the intersection of all lines defining the cycle \(c_{i}^{Y}\). In general, one observes the _freckle principle_: at the freckle _all_ equations are satisfied. _Remark 6.1_.: Note that in this example there is a \(1_{\mathbb{C}}\)-dimensional subgroup of \(\operatorname{PSL}(2,\mathbb{C})\) preserving the source cycles. #### 6.1.2. \(1+1=2\) For the next example, let again \(d=1\). We consider \(l=4\) source cycles with \(c_{i}^{X}\) a point for \(i=1,2,3\) and \(c_{4}^{X}=\mathbb{P}^{1}\), i.e. \(3\) fixed and \(1\) running point. For the target cycles we choose hyperplanes (i.e. lines) for \(c_{1}^{Y},c_{2}^{Y}\) and points for \(c_{3}^{Y},c_{4}^{Y}\). Then, we can compute the quasimap number as follows: \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{2},\mathcal{D})=\int_{\mathbb{P }^{5}\times\mathbb{P}^{1}_{(4)}}H\,H\,H^{2}\,(H+p_{4}^{*}h)^{2}=2. \tag{71}\] We know that there is a unique holomorphic map sending the points \(c_{i}^{X}\mapsto c_{i}^{Y}\) for \(i=3,4.\) Hence the decomposition of the QM number into holomorphic maps and proper quasimaps is \[\operatorname{QM}=2=\underbrace{1}_{\operatorname{KM}}+\underbrace{1}_{ \operatorname{PQM}}. \tag{72}\] It is instructive to derive this decomposition directly. Namely, consider a quasi-map \(\mathbb{P}^{1}\to\mathbb{P}^{2}\) given by three degree \(1\) homogeneous polynomials \[\underline{f}(x^{0}:x^{1})=(P^{0}(x^{0}:x^{1}),P^{1}(x^{0}:x^{1}),P^{2}(x^{0}:x^ {1})),\] with \(P^{i}(x^{0}:x^{1})=a_{0}^{i}x^{0}+a_{1}^{i}x^{1}\). We fix the source cycles to be points \[c_{1}^{X}=(0:1),\ c_{2}^{X}=(1:1),\ c_{3}^{X}=(1:0)\] and the target cycles to be the hyperplanes \[c_{1}^{Y}=(0:y^{1}:y^{2}),\ c_{2}^{Y}=(y^{0}:y^{1}:y^{0}+y^{1})\] and \(c_{3}^{Y}=(1:0:0)\), \(c_{4}^{Y}=q\) a generic point in \(\mathbb{P}^{2}\). These quasimaps are parametrized by \[\underline{f}(x^{0}:x^{1})=(ax^{0}:bx^{1}:(a+b)x^{1}).\] When \(a=0\), then \(\underline{f}\) defines a proper quasimap. Indeed, the quasimap \[\underline{f}(x^{0}:x^{1})=(0:bx^{1}:bx^{1})\] vanishes at \(c_{3}^{X}=(1:0)\) and sends the complement to \(c_{1}^{Y}\cap c_{1}^{Y}=(0:1:1)\). We see that in this case \[\operatorname{PQM}(\mathbb{P}^{1},\mathbb{P}^{2},\mathcal{D})=1.\] For \(a\neq 0\), we can use the \(\mathbb{C}^{*}\) action on quasimaps to set it to \(1\) so that \[\underline{f}(x^{0}:x^{1})=(x^{0}:bx^{1}:(1+b)x^{1}). 
\tag{73}\] The last condition \(f(c_{4}^{X})=q\) fixes the value of \(b\) and therefore \[\operatorname{KM}(\mathbb{P}^{1},\mathbb{P}^{2},\mathcal{D})=1.\] _Remark 6.2_.: This example has a generalization to the case when the dimension \(n\) of the target is bigger than \(2\) (see Section 6.4.1). However, in that case the map is no longer uniquely fixed on the complement of the freckle, because the intersection of two generic hypersurfaces \(c_{1}^{Y}\) and \(c_{2}^{Y}\) has positive dimension in \(\mathbb{P}^{n}\) for \(n>2\). Figure 2. Configuration of source and target cycles in the \(1+1=2\) example. We have three fixed points \(c_{1}^{X}=(0:1)=0\), \(c_{2}^{X}=(1:1)=1,c_{3}^{X}=(0:1)=\infty\) in the source and one running point \(x\). Target cycles are hyperplanes \(c_{1}^{Y},c_{2}^{Y}\) and points \(c_{3}^{Y},c_{4}^{Y}\). #### 6.1.3. \(1+3=4\) We consider again the case \(d=1\). Consider the counting data described in Example 4.3, i.e. \(l=3d-1+3=5\), with \(c_{i}^{X}\) a point for \(i=1,2,3\) and \(c_{i}^{X}=\mathbb{P}^{1}\) for \(i=4,5\), \(c_{i}^{Y}\) a line for \(i=1,2,3\) and \(c_{i}^{Y}\) a point for \(i=4,5\), cf. Figure 3A. The quasimap number for this problem is \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{2},\mathcal{D})=\int_{\mathbb{P} ^{5}\times\mathbb{P}^{1}_{(4)}\times\mathbb{P}^{1}_{(5)}}H^{3}(H+p_{4}^{*}h)^{ 2}(H+p_{5}^{*}h)^{2}=4. \tag{74}\] We claim that in this case there are three proper quasimaps and a unique holomorphic map satisfying the conditions of the enumerative problem, so that \[\underbrace{4}_{\operatorname{QM}}=\underbrace{1}_{\operatorname{KM}}+ \underbrace{3}_{\operatorname{PQM}}.\] Indeed, let us choose \(c_{i}^{X}=\{0,1,\infty\}\) as above and lines \(c_{i}^{Y}=\{y^{i}=0\}\), so that our quasimap satisfies \[P^{i}(c_{i}^{X})=0.\] It is easy to see that such a quasi-map is given by \[\underline{f}(x^{0}:x^{1})=(ax^{0}:b(x^{1}-x^{0}):cx^{1}), \tag{75}\] where the parameters \(a,b,c\) form a two-dimensional projective space, \((a:b:c)\in\mathbb{P}^{2}\). In particular, the quasimap has a freckle if and only if two out of the three parameters vanish, and in this case the freckle will be at one of the fixed points \(c_{i}^{X}\). If at most one of \(a,b,c\) is zero, then \(\underline{f}\) defines a holomorphic Figure 3. \(4=1+3\) enumerative problem map \(f\colon\mathbb{P}^{1}\to\mathbb{P}^{2}\), which is uniquely fixed by the requirement that it passes through the points \(c_{4}^{Y}\) and \(c_{5}^{Y}\). _Remark 6.3_.: Maps \(\mathbb{P}^{1}\to\mathbb{P}^{2}\) of degree 1 describe parametrized lines in \(\mathbb{P}^{2}\). By demanding that the three points \(c_{i}^{X}\), \(i=1,2,3\) have images in prescribed lines, one fixes the parametrization. Geometrically, we are thus counting lines through two given points in \(\mathbb{P}^{2}\) and it is clear that there exists a unique such map. ### Higher-dimensional target #### 6.2.1. \(2+0=2\) - An example with no freckles Let \(k=1\), \(n=3\), \(d=1\) - i.e. we are counting quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{3}\) of degree 1. We have \(\dim\operatorname{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{3})=7.\) Consider \(c_{i}^{X}\) to be a point for \(i=1,2,3\) and \(c_{4}^{X}=\mathbb{P}^{1}\). As target cycles, consider \(c_{i}^{Y}\) to be a line \((\operatorname{codim}c_{i}^{Y}=2)\). Then \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{3},\mathcal{D})=\int_{\mathbb{P} ^{7}\times\mathbb{P}^{1}_{(4)}}H^{2}H^{2}H^{2}(H+p_{4}^{*}h)^{2}=2. 
\tag{76}\] Again, we let \(c_{i}^{X}=\{0,1,\infty\}\) for \(i=1,2,3\) and we consider the lines \[c_{1}^{Y} =\{y^{1}=y^{2}=0\},\] \[c_{2}^{Y} =\{y^{1}-y^{3}=y^{2}-y^{4}=0\},\] \[c_{3}^{Y} =\{y^{3}=y^{4}=0\}.\] These lines are generic in the sense that they have trivial pairwise intersections. Such quasimaps are of the form \[\underline{f}(x^{0}:x^{1})=(ax^{0},bx^{0},ax^{1},bx^{1})\] and such quasimaps have no freckle. Taking a fourth line in general position with respect to the first three (i.e. it does not intersect any of them) yields a quadratic equation for the quasimap parameter which then determines the running point. ### Higher-dimensional source Next, we consider some examples with source \(\mathbb{P}^{2}\). #### 6.3.1. \(1+0=1\) As an easy example, consider the case \(k=n=2\), \(d=1\), i.e. degree 1 quasimaps \(\mathbb{P}^{2}\not\to\mathbb{P}^{2}\). Here, we have \(\dim\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^{2})=8\). The simplest configuration is \(l=4\) with source and target cycles being 4 fixed points. The QM number is 1, and indeed there is a unique degree 1 holomorphic map \(f\colon\mathbb{P}^{2}\to\mathbb{P}^{2}\) mapping four fixed points in \(\mathbb{P}^{2}\) to other four fixed points in \(\mathbb{P}^{2}\).21 #### 6.3.2. \(1+2=3\) We consider degree \(1\) quasimaps from \(X=\mathbb{P}^{2}\) to \(Y=\mathbb{P}^{3}\). Then \[\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^{3})=\mathbb{P}^{11}. \tag{77}\] We consider five cycles \(c_{i}^{X}\) in the source and five cycles \(c_{i}^{Y}\) in the target. Consider the following quasimap data \(\mathcal{D}\): Out of the five cycles in the source we consider four to be fixed and one to be running, i.e. \(c_{i}^{X}=\operatorname{p_{i}}\) for \(i=1,\dots,4\) and \(c_{5}^{X}=\mathbb{P}^{2}\). Moreover, let three out of the five cycles in the target be points and the remaining two be lines, i.e. \(c_{j}^{Y}=\text{point for }i=1,2,3\) and \(c_{j}^{Y}=\ell_{j}\) for \(j=4,5\). Let us first consider quasimaps that pass through a line, i.e. that map the running point to a line in the target. The situation is depicted in Figure 4A The QM number of this problem is given by \[\operatorname{QM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D})=\int_{\mathbb{P} ^{11}\times\mathbb{P}^{2}_{(5)}}H^{3}H^{3}H^{2}(H+p_{5}^{*}h)^{2}=1. \tag{78}\] This corresponds to a unique holomorphic map (i.e. \(\operatorname{KM}=1\)). Its image is the unique plane in the target through three points \(\operatorname{p_{1}},\operatorname{p_{2}},\operatorname{p_{3}}\). It automatically passes through the lines \(\ell_{4},\ell_{5}\). The condition that the four fixed points on the source have prescribed images fixes the parametrization (i.e. gauge-fixes the source automorphism group \(\operatorname{PSL}(3,\mathbb{C})\)). Now, let us consider a slightly modified problem, where the running point is mapped not to a line but to a point, cf. Figure 4B. In this case the QM number is \[\operatorname{QM}=\int_{\mathbb{P}^{11}\times\mathbb{P}^{2}_{(5)}}H^{3}H^{3}H^ {2}H^{2}(H+p_{5}^{*}h)^{3}=3. \tag{79}\] However, the geometric situation has not changed, i.e. there still exists a unique map solving the constraints. Therefore we expect that the proper quasimap contribution to the problem is \(2\). To compute the proper quasimap contribution, recall from Section 2.4 that the one-freckle stratum has complex codimension \[\operatorname{codim}\operatorname{QMap}^{1}_{1}(\mathbb{P}^{2},\mathbb{P}^{3} )=3+1-2=2, \tag{80}\] Figure 4. 
Quasiplanes in \(\mathbb{P}^{3}\) passing through three points and two lines. that is the one-freckle stratum is a complex nine dimensional subspace in QMap. Consider the situation where the freckle and the running point collide with one of the two cycles which get mapped to a point, cf. Figure 5. Such a configuration has codimension \(2\) in the freckle stratum (because we fix the position of the freckle in \(\mathbb{P}^{2}\)) and therefore dimension \(7\). This is precisely the number of remaining equations, so generically there is a unique quasimap with a freckle at \(c_{1}^{X}\) that satisfies all equations. Similarly, there is such a quasimap with a freckle at \(c_{2}^{X}\). However, when the freckle sits at \(c_{3}^{X}\) or \(c_{4}^{X}\) there are generically no solutions since we have \(8\) remaining equations. Therefore, the total PQM number is \(2\) as expected. ### Quasi-stable examples In this subsection we consider examples where the naive quasimap count is degenerate, i.e. even though we are considering a balanced configuration of cycles, there are positive-dimensional components in the space of quasimaps satisfying all the equations. Intuitively, we encounter the following situation: for each pair of source and target cycles \(c_{i}^{X},c_{i}^{Y}\) we have \(\dim c_{i}^{X}\) moduli and \(\operatorname{codim}c_{i}^{Y}\) equations, i.e. by adding such a pair we expect to reduce the dimension of the space of quasimaps by \(d_{i}=\operatorname{codim}c_{i}^{Y}-\dim c_{i}^{X}.\) For _proper_ quasimaps, however, the equations become void when the running points on \(c_{i}^{X}\) hit the freckle. If in this way, we lose more \(d_{i}\) than the codimension of the corresponding freckle stratum, we obtain a family of such quasimaps. For example, for \(k=1,n=2\), the freckle stratum has codimension \(2\) (common zero locus of three polynomials). A running point (sent to a point) has \(d=1\), so if we have \(3\) running points colliding in a freckle, we obtain a family of positive dimension. #### 6.4.1. A family of degenerate examples: \(1+(n-1)=n\) Consider the case \(k=1,n=N,d=1\) and \(l=4\), i.e. degree \(1\) quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{N}\), with \(l=4\) given cycles in the source and the target. We have \(\dim\operatorname{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{N})=2N+1\). The following is a balanced configuration: We fix three points \(c_{1}^{X},c_{2}^{X},c_{3}^{X}\) and keep one running point \(c_{4}^{X}\) in the source, while we take \(c_{1}^{Y},c_{2}^{Y}\) to be hyperplanes in \(\mathbb{P}^{N}\) and \(c_{3}^{Y},c_{4}^{Y}\) fixed points, as depicted in Figure 6. Figure 5. One of the two \(1\)-freckle configurations Then the QM number is \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{N},\mathcal{D})=\int_{\mathbb{P}^{2N+ 1}\times\mathbb{P}^{1}_{(4)}}H\,H\,H^{N}(H+p_{4}^{*}h)^{N}=N. \tag{81}\] However, there is a unique holomorphic map, whose image is the unique line through \(c_{3}^{Y}\) and \(c_{4}^{Y}\). Therefore we expect that proper quasimaps contribute to the QM number with \(N-1\). However, there exist a family of proper quasimaps: consider the situation where the freckle and the running point collide with \(c_{3}^{X}\), cf. Figure 7. On the complement of the freckle, the quasimap defines a degree zero, hence a constant map. This map is constraint to map \(c_{1}^{X}\) and \(c_{2}^{X}\) to the hyperplane \(c_{1}^{Y}\) and \(c_{2}^{Y}\) respectively. 
Since the map is constant, it maps both \(c_{1}^{X}\) and \(c_{2}^{X}\) to the intersection \(c_{1}^{Y}\cap c_{2}^{Y}\). Since two hyperplanes meet in a cycle of codimension \(2\), i.e. of dimension \(N-2\), if \(N>2\), there exists a moduli for the map and hence for the proper quasimaps. Figure 6. Pictorial representation of the enumerative problem. Here \(N\) lines run through \(c_{3}^{X}\) and \(c_{4}^{X}\) each representing a hyperplane in \(\mathbb{P}^{N}\). Figure 7. The unique quasimap stratum is where the freckle and the running point sit at \(c_{3}^{X}\). In fact this modulus is a \(\mathbb{P}^{N-2}\subset\mathbb{P}^{2N+1}\), which is the unique connected component of \[\operatorname{QMap^{pr}}(X,Y,\mathcal{D})=Z=\mathbb{P}^{N-2}\times\{c_{3}^{X}\} \subset\mathbb{P}^{2N+1}\times\mathbb{P}^{1}_{(4)}.\] _Remark 6.4_.: To be fully precise one has \[Z=\mathbb{P}^{N-2}\times\{c_{1}^{X}\}\times\{c_{2}^{X}\}\times\{c_{3}^{X}\} \times\{c_{3}^{X}\}\subset\operatorname{Var}.\] Note that any vector bundle over a point is trivial and therefore has total Chern class equal to \(1\). Points in \(Z\) therefore contribute trivially, i.e. by a factor of \(1\) at the appropriate place, to \(c(B)\), cf. (68). Hence, fixing \(\{c_{i}^{X}\}_{i=1}^{3}\) to be points, effectively reduces the problem over \(\operatorname{Var}\) to a problem over \(\mathbb{P}^{N-2}\). We can compute its PQM number by first computing the Chern class of the excess bundle using (68). Let \(\zeta\) denote the generator of \(H^{2}(Z)=H^{2}(\mathbb{P}^{N-2})\). Then \[\begin{split} c(B)&=\frac{\left(1+p_{\mathbb{P}^{2 N+1}}^{*}c_{1}(\mathcal{O}(1)_{\operatorname{QMap}})\right)^{2N+2}\Big{|}_{Z}c( \mathbb{P}^{N-2})}{c(\mathbb{P}^{2N+1}\times\mathbb{P}^{1}_{(4)})|_{Z}}\\ &=\frac{(1+H)^{2N+2}|z(1+\zeta)^{N-1}}{(1+H)^{2N+2}|_{Z}}\\ &=\frac{(1+\zeta)^{2N+2}(1+\zeta)^{N-1}}{(1+\zeta)^{2N+2}}=(N-1) \zeta^{N-2}+\dots,\end{split} \tag{82}\] where the dots denote terms of lower than top degree. From this, we get \[\operatorname{PQM}(X,Y,\mathcal{D})=\operatorname{PQM}(X,Y,\mathcal{D};Z)= \int_{\mathbb{P}^{N-2}}(N-1)\zeta^{N-2}=N-1. \tag{83}\] #### 6.4.2. \(1+(k-1)=k\) We can generalize this example by fixing only \(K<N\) equations at the running point. In order to obtain a balanced problem, we then fix \(N_{i}\) equations at \(c_{i}^{X}\), with \(N_{1}+N_{2}+N_{3}+K=2N+2\). See Figure 8. Figure 8. Pictorial representation of the enumerative problem. Here \(N_{i}\) lines run through \(c_{i}^{X}\) and \(K\) lines through \(c_{4}^{X}\) each representing a hyperplane in \(\mathbb{P}^{N}\). \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{N},\mathcal{D})=\int_{\mathbb{P}^{2N+ 1}\times\mathbb{P}^{1}_{(4)}}H^{N_{1}}H^{N_{2}}H^{N_{3}}(H+p_{4}^{*}h)^{K}=K. \tag{84}\] _Remark 6.5_.: Surprisingly, the QM numbers violate the symmetry that is well-known in Gromov-Witten theory. Observe that the QM number (84) depends on \(K\). Naively, this seems to indicate that the QM counting problem is _not_ invariant w.r.t. automorphisms of the source: naively, one can apply a Mobius transformation which is constant at points \(c_{1,2}^{X}\), stops \(c_{4}^{X}\) and makes it non-moving, and as a result makes \(c_{3}^{X}\) a moving point - but the corresponding QM number is \(N_{3}\), not \(K\)! 
The problem with this argument is that it implicitly assumes genericity of the configuration, in particular that the moving point doesn't collide with any of the stationary points (otherwise, one cannot separate points in this non-generic configuration by a Mobius transformation). On the other hand there are freckle contributions exactly from configurations when the moving point and the freckle collide with one of the stationary points. For example, consider a special case of the example above, with \(N=2\), cf. Figure 9. The QM numbers of the configurations depicted in Figure 9A and Figure 9B are \(1\) and \(2\) respectively, while \(\operatorname{KM}=1\) in both cases (a single line through two points in \(\mathbb{P}^{2}\)). In Figure 9C we show the degenerate configuration of Figure 9B - a proper quasimap, which contributes the additional \(1\) to the QM number. Importantly, the running point (together with the freckle) collapses with \(c_{2}^{X}\). This is clearly a non-generic situation and hence cannot be reached from a generic situation, such as shown in Figure 9A, by action of a Mobius transformation. Now, let us first assume that \(N_{2}+N_{3}>N\). Then there are two freckle strata \(Z_{i}\), where the freckle and the running point sit at \(c_{i}^{X}\) for \(i=2,3\), and its dimension is \(d_{i}=N-N_{j}-N_{k}\), with \(\{i,j,k\}=\{1,2,3\}\) (the freckle at \(c_{1}^{X}\) is prohibited by \(N_{2}+N_{3}>N\)). The contributi Figure 9. An apparent “contradiction” to Möbius invariance of QM numbers. PQM is \[\int_{Z_{i}}c(B_{Z_{i}})=\int_{Z_{i}}\frac{(1+\zeta_{i})^{2N+2}(1+\zeta_{i})^{d_{i }+1}}{(1+\zeta_{i})^{2N+2}}=d_{i}+1, \tag{85}\] where \(\zeta_{i}\) is the generator of \(H^{2}(Z_{i})=H^{2}(\mathbb{P}^{d_{i}})\). Therefore, we obtain the total PQM contribution \[d_{2}+d_{3}+2=2N-2N_{1}-N_{2}-N_{3}+2=2N+1-(N_{1}-N_{2}-N_{3})-N_{1}=K-N_{1}.\] Here we have used that \(N_{1}+N_{2}+N_{3}=2N+2-K\). In particular, the KM number is \(N_{1}\). #### 6.4.3. Conic through five points: \(1+6+9=16\) In this example we will recover the fact that there is a unique conic through five points in general position, using quasimap counting machinery. We set \(k=1\), \(n=2\) and \(d=2\). Remember that \(\dim\operatorname{QMap}_{2}(\mathbb{P}^{1},\mathbb{P}^{2})=8\), so we can fix \(l=5\), with source cycles \(c_{i}^{X}\) points for \(i=1,2,3\) and \(\mathbb{P}^{1}\) otherwise, and \(c_{i}^{Y}\) a collection of points (see Figure 10A). The QM number is then \[\operatorname{QM}(\mathbb{P}^{1},\mathbb{P}^{2},\mathcal{D})=\int_{\mathbb{P }^{8}\times\mathbb{P}^{1}_{(4)}\times\mathbb{P}^{1}_{(5)}}(H^{2})^{3}\wedge_{ i=4}^{5}(H+2\,p_{i}^{*}h)^{2}=16. \tag{86}\] Let us fix again \(c_{i}^{X}=\{0,1,\infty\}\) as before, and take \(c_{1}^{Y}=(0:0:1),c_{2}^{Y}=(0:1:0),c_{3}^{Y}=(1:0:0)\). Such degree 2 quasimaps can be parametrized by \((a:b:c)\in\mathbb{P}^{2}\) by \[\underline{f}(x^{0}:x^{1})=(a((x^{0})^{2}-x^{0}x^{1}):bx^{0}x^{1}:c(x^{0}x^{1 }-(x^{1})^{2})). \tag{87}\] We notice that \(\underline{f}\) has \(k\) freckles if \(k\) of the parameters \(a,b,c\) are zero. That is, for \(a,b,c\) nonzero we obtain a degree 2 holomorphic map \(f\colon\mathbb{P}^{1}\to\mathbb{P}^{2}\), which is uniquely fixed by the requirement that it passes through the five points \(c_{i}^{Y}\). If one out of the three parameters is zero, then \(\underline{f}\) has a freckle at one of the \(c_{i}^{X}\), \(i=1,2,3\), and both running points sit at this freckle - there are three such configurations, depicted in Figure 10B. 
For each of these there is actually a \(\mathbb{P}^{1}\) of solutions \(Z_{i}\), since there are four remaining equations, but the dimension of the space of degree 1 quasimaps is 5. However, if two of the parameters vanish, then \(\underline{f}\) has two freckles at two of the \(c_{i}^{X}\), \(i=1,2,3\). There are 3 possible configurations for the 2-freckle locus, however, we have to take into account the 2 running points: They can sit at the two 2 freckles in any configuration, however, if they go to the same freckle, sitting at \(c_{i}^{X}\), then that is actually part of the stratum \(Z_{i}\). See Figure 10C. The set of proper quasimaps, \(\mathrm{QMap}(X,Y,\mathcal{D})^{\mathrm{pr}}\), admits the following stratification: \[\mathrm{QMap}(X,Y,\mathcal{D})^{\mathrm{pr}}=\sqcup_{i\neq j=1}^{3}Z_{ij}\sqcup_ {i=1}^{3}Z_{i}\subset\mathbb{P}^{8}\times\mathbb{P}^{1}\times\mathbb{P}^{1}, \tag{88}\] where the six components \(Z_{ij}\) are given by \[Z_{ij}=\{(\underline{f}_{(i,j)},c_{i}^{X},c_{j}^{X})\}. \tag{89}\] Here \(\underline{f}_{(i,j)}\) is the unique quasimap that has freckles at fixed points \(c_{i}^{X}\) and \(c_{j}^{X}\) and on the complement is given by \(c_{k}^{Y}\). Since all those components are points, they contribute with \(1\) to \(\mathrm{PQM}(X,Y,\mathcal{D})\) so that the total contribution of \(\sqcup_{i\neq j=1}^{3}Z_{ij}\) is \(6\). The components \(Z_{i}\) are given by \[Z_{i}=\tilde{Z}_{i}\times\{c_{i}^{X}\}\times\{c_{i}^{X}\}, \tag{90}\] where \(\tilde{Z}_{i}\cong\mathbb{P}^{1}\) is the stratum of quasimaps \(\underline{f}=P\cdot\underline{f}_{1}\) with \(P\) vanishing at \(c_{i}^{X}\), and \(\underline{f}_{1}\) the degree \(1\) quasimap satisfying the equations given at \(c_{j}^{X}\) for \(j\in\{1,2,3\}\setminus\{i\}\). Here \(\tilde{Z}_{i}\cong\mathbb{P}^{1}\) because there is a \(\mathbb{P}^{5}\) of degree \(1\) quasimaps \(\underline{f}_{1}\) on which we impose \(4\) linear equations. We notice that this counting data is quasi-stable. The PQM number of the stratum \(Z_{i}=\tilde{Z}_{i}\times\{c_{i}^{X}\}\times\{c_{i}^{X}\}\) is computed by the Chern class of the excess bundle according to (68): \[c\left(B_{Z_{i}}\right) =\frac{\left(1+p_{\mathrm{ps}}^{*}c_{1}(\mathcal{O}(1))\right)^{ 10}\lvert_{Z_{i}}c(\mathbb{P}^{1})}{c(\mathbb{P}^{8})\lvert_{Z_{i}}}\] \[=\frac{(1+H)^{10}\lvert_{Z_{i}}(1+\zeta_{i})^{2}}{(1+H)^{9}\lvert _{Z_{i}}}=1+3\zeta_{i},\] Figure 10. Quasimap count for degree \(2\) quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{2}\). where again \(\zeta_{i}\) denotes the generator of \(H^{2}(Z_{i})=H^{2}(\mathbb{P}^{1})\). Hence, \[\operatorname{PQM}(X,Y,\mathcal{D};\sqcup_{i=1}^{3}Z_{i})=\sum_{i=1}^{3} \operatorname{PQM}(X,Y,\mathcal{D};Z_{i})=\sum_{i=1}^{3}\int_{Z_{i}}c\left(B_{Z _{i}}\right)=3\cdot 3=9.\] In total we therefore have \(\operatorname{PQM}(X,Y,\mathcal{D})=6+9=15\), and therefore we obtain that \[\operatorname{KM}(X,Y,\mathcal{D})=\operatorname{QM}(X,Y,\mathcal{D})- \operatorname{PQM}(X,Y,\mathcal{D})=1.\] We remark that in \(Z_{i}\), there are quasimaps with a second freckle (which has to be located at another fixed point \(c_{j}^{X}\)) but these contributions are not counted separately. ### Unstable examples: the need for Segre class computations To compute the PQM number in non-quasistable examples (with the zero-locus \(Z\subset\operatorname{Var}\) of the section \(\sigma\) not given by a disjoint union of smooth submanifolds), one needs to compute Segre classes, see Section 5.2. 
We defer such computations to a future paper and restrict ourselves to the analysis of the locus \(Z\) in several examples. #### 6.5.1. The simplest non-quasistable example Consider degree \(1\) quasimaps \(\mathbb{P}^{1}\not\to\mathbb{P}^{3}\) with \(l=5\) source/target cycles: \(3\) fixed points mapping to planes and two running points mapping to lines, see Figure 11. Figure 11. An unstable quasimap counting data for \(\operatorname{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{3})\). In this example \(\operatorname{QM}=9\), \(\operatorname{KM}=1\). One has a running freckle stratum \(Z_{r}\) (Figure 11C) and 3 fixed freckle strata \(Z_{i}\) (Figure 11B), each of them is a \(\mathbb{P}^{1}\). These strata intersect when the running freckle hits a fixed point, see Figure 11D, therefore \(Z=Z_{2}\cup Z_{3}\cup Z_{4}\cup Z_{r}\) is singular. Computing the contributions of the components independently, without taking intersections into account, we would have obtained \[\underbrace{3+3+3}_{Z_{2},Z_{3},Z_{4}}+\underbrace{5}_{Z_{r}}\neq\underbrace {8}_{\operatorname{QM}-\operatorname{KM}}. \tag{91}\] This shows that due to the non-trivial intersection of the strata, one cannot treat the \(Z_{(I)}\) independently and one is led to the problem to compute the Segre class of \(Z=\bigcup_{I}Z_{(I)}\). #### 6.5.2. An unstable example with a scar Consider degree 1 quasimaps \(\mathbb{P}^{2}\not\rightarrow\mathbb{P}^{N}\), with the same configuration as in Figure 4B, but now we put \(N\) equations at \(c_{1}^{X},c_{2}^{X},c_{5}^{X}\). The total number of equations is then \(3N+4\), whereas \(\dim\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^{N})=3N+2\). Hence, \[\dim\operatorname{Var}=\dim\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^ {N})+2,\] so that the counting data is balanced. The QM number of this configuration is \(\binom{N}{2}\). The 1-freckle stratum now has codimension \(N+1-2\), so \(\dim\operatorname{QMap}_{1}^{1}=2N+3\). In particular, the 1-freckle strata are again given by configurations where the freckle and the running point collide with a fixed point, see Figure 12A. These configurations are strata of dimension \(2N+1-N-4=N-3\), in particular, for \(N>3\) they have position dimension and the counting data is not stable. However, in this case there is also a _scar_ stratum: we can have two freckles, each at one of the fixed points \(c_{1}^{X},c_{2}^{X}\). In this case, the line through this two points forms a scar (cf. Section 2.4) and the running point can be at any point on the scar, as shown in Figure 12B. This stratum is of the form \(\mathbb{P}^{N-4}\times\mathbb{P}^{1}\), where the second factor describes the position of the running point along the scar, and intersects both 1-freckle strata in a \(\mathbb{P}^{N-4}\) when this running point hits either \(c_{1}^{X}\) or \(c_{2}^{X}\). #### 6.5.3. An example with a lot of stuff Consider again the setup of example 6.4.3 but with a third running point added. We thus have \(k=1\), \(n=2\), \(d=2\) but now \(l=6\) and we take source cycles \(c_{i}^{X}\) to be points for \(i=1,2,3\) and \(c_{i}^{X}=\mathbb{P}^{1}\) for \(i=4,5,6\). To compensate for the extra constraint introduced by the extra running point we take \(c_{1}^{Y}\) to be a line and \(c_{i}^{Y}\) to be a point for \(i>1\). The configuration is sketched in Figure 13. Since a line intersects a conic in two points, the KM number of this configuration is 2. 
On the other hand, the QM number is \[\int_{\mathbb{P}^{8}\times\mathbb{P}^{1}_{(4)}\times\mathbb{P}^{1}_{(5)}\times \mathbb{P}^{1}_{(6)}}H^{5}\cdot\wedge_{i=4}^{6}(H+2\,p_{i}^{*}h)^{2}=64.\] Hence we ought to find that the proper quasimap contribution is PQM = 62. Figure 12. Unstable example with a scar. The space of proper quasimaps can be decomposed into the strata dipicted in Figure 14. There is a unique stratum of the form shown in Figure 14A and 14C, two strata of the form depicted in Figure 14B, six strata of the situation shown in Figure 14D and 14E and three strata of the situation shown in Figure 14F. We label the strata according to their graphical depiction by \(Z_{(A)}\) to \(Z_{(F)}\). In situation \(Z_{(A)}\), on the complement of the freckle the quasimap has degree 1. Since \(\dim\operatorname{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{2})=5\), the quasimap is uniquely fixed by imposing the remaining 5 equations. The only degree of freedom left is the Figure 13. Pictorial representation of the enumerative problem. position of the freckle, hence \(Z_{(A)}\cong\mathbb{P}^{1}\). This \(\mathbb{P}^{1}\) is embedded into \(\mathrm{Var}\) diagonally: if \(y=(y^{0}:y^{1})\in\mathbb{P}^{1}\) denotes the position of the freckle, then \[\begin{split} Z_{(A)}&\hookrightarrow\mathrm{Var}= \mathbb{P}^{8}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathbb{P}^{1}\\ y&\mapsto(\underline{f},\ y,\ y,\ y)\end{split} \tag{92}\] where \(\mathbb{P}^{8}=\mathrm{QMap}_{2}(\mathbb{P}^{1},\mathbb{P}^{2})\) and \[\underline{f}=(Q(x)P_{1}(x):Q(x)P_{3}(x):Q(x)P_{3}(x)). \tag{93}\] Here, the \(P_{i}(x)\) are homogeneous degree 1 polynomials of \(x=(x^{0}:x^{1})\) and \(Q(x)=(y^{1}x^{0}-y^{0}x^{1})\). In situation \(Z_{(B)}\), on the complement of the freckle, one is faced with a degree 1 quasimap subject to three constraints. Since \(\mathrm{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{2})=\mathbb{P}^{5}\), this means that \(Z_{(B)}\cong\mathbb{P}^{2}\times\{c_{3}^{X}\}\times\{c_{3}^{X}\}\times\{c_{3 }^{X}\}\subset\mathrm{Var}\). The stratum \(Z_{(C)}\) is analyzed analogously to the stratum \(Z_{(B)}\). However, now one has to impose four constraints on the complement of the freckle resulting in \(Z_{(C)}\cong\mathbb{P}^{1}\times\{c_{1}^{X}\}\times\{c_{1}^{X}\}\times\{c_{1 }^{X}\}\subset\mathrm{Var}\). For \(Z_{(D)}\) the quasimap on the complement of the freckle is constant and hence uniquely fixed by imposing the remaining two equations. For example, with the notation of Figure 14, if \(c_{i}^{X}=y_{i}=(y_{i}^{0}:y_{i}^{1})\) then \[Z_{(D)}=\{(\underline{f},c_{1}^{X},c_{3}^{X},c_{3}^{X})\}\subset\mathrm{Var}, \tag{94}\] where \[\underline{f}(x)=(aQ(y_{1})Q(y_{3}):bQ(y_{1})Q(y_{3}):cQ(y_{1})Q(y_{3})), \tag{95}\] with \(Q(y)=(y^{1}x^{0}-y^{0}x^{1})\) and \(c_{2}^{Y}=(a:b:c)\in\mathbb{P}^{2}\). By the same reasoning, the stratum \(Z_{(E)}\) is likewise a point, which can be described analogously as \[Z_{(E)}=\{(\underline{f},c_{1}^{X},c_{1}^{X},c_{3}^{X})\}\subset\mathrm{Var}, \tag{96}\] where \[\underline{f}(x)=(aQ(y_{1})Q(y_{3}):bQ(y_{1})Q(y_{3}):cQ(y_{1})Q(y_{3})). \tag{97}\] Finally, in the situation \(Z_{(F)}\), the quasimap on the complement of the freckle is again constant, but is mapped to a line \(\mathbb{P}^{1}\subset\mathbb{P}^{2}\), rather than a point. Therefore, \(Z_{(F)}\cong\mathbb{P}^{1}\times\{c_{2}^{X}\}\times\{c_{3}^{X}\}\times\{c_{3 }^{X}\}\subset\mathrm{Var}\). It is important to note that the strata \(Z_{(I)}\) are not all disjoint. 
Indeed \(Z_{(A)}\) intersects \(Z_{(B)}\) and \(Z_{(C)}\) non-trivially, cf. Figure 15. If we were to compute the contributions of the strata \(Z_{(I)}\) independently (without taking intersections into account) we would find \[\sum_{I\in\{A,\ldots,F\}}\operatorname{PQM}(\mathbb{P}^{1},\mathbb{P}^{2}, \mathcal{D};Z_{(I)})=10+20+4+6+6+12=58\neq\underbrace{62}_{\operatorname{QM-KM }}. \tag{98}\] Again, the correct computation should take intersection into account and involve the Segre class. A quasi-stable example with non-trivial source cycles (a computation where scars and freckles work together) We consider degree 1 quasimaps from \(\mathbb{P}^{2}\) to \(\mathbb{P}^{3}\) with \(\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^{3})=\mathbb{P}^{11}\). As quasimap counting data, we consider three points \(\{c_{i}^{X}\}_{i=1}^{3}\) and two lines \(\{c_{i}^{X}\}_{i=4}^{5}\) in the source and likewise three points \(\{c_{i}^{Y}\}_{i=1}^{3}\) and two lines \(\{c_{i}^{Y}\}_{i=4}^{5}\) in the target. We will represent a line in the source by a dashed line and a point moving on that line by a box around the moving point. As before, we denote by a solid line a hyperplane in the target. We then consider the following three situations, cf. Figure 16: * each of the three points \(\{c_{i}^{X}\}_{i=1}^{3}\) is mapped to a point while each of the two lines \(\{c_{i}^{X}\}_{i=4}^{5}\) pass through a line * two of the three points \(\{c_{i}^{X}\}_{i=1}^{3}\) are mapped to a point, while the other is mapped to a line; one of the lines \(\{c_{i}^{X}\}_{i=4}^{5}\) passes by a point, while the other passes through a line * two of the three points \(\{c_{i}^{X}\}_{i=1}^{3}\) are mapped to a line, while the other is mapped to a point; each line \(\{c_{i}^{X}\}_{i=4}^{5}\) passes through a point **Situation \(D_{(1)}\).** In this case the QM number is given by \[\operatorname{QM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D}_{(1)})=\int_{\mathbb{ P}^{11}\times\mathbb{P}^{1}_{(4)}\times\mathbb{P}^{1}_{(5)}}H^{9}(H+p_{4}^{*}h)^{2}(H+p_ {5}^{*}h)^{2}=4. \tag{99}\] **Proposition 6.6**.: _In the present situation, there exists an unique holomorphic map, i.e._ \[\operatorname{KM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D}_{(1)})=1.\] Proof by geometry.: There is a unique plane \(\Pi\) through three points \(c_{1}^{Y},c_{2}^{Y},c_{3}^{Y}\) in \(\mathbb{P}^{3}\). It passes through the lines \(c_{4}^{Y},c_{5}^{Y}\). Its parametrization is uniquely fixed by the data \(\mathcal{D}\). More explicitly: fix some map \(f_{0}\colon\mathbb{P}^{2}\to\mathbb{P}^{3}\) with image \(\Pi\). The preimages of \(c_{i}^{Y}\), \(i=1,\dots,5\) are five points on \(\mathbb{P}^{2}\). The problem of finding a \(\operatorname{PSL}(3,\mathcal{C})\) transformation \(g\) moving those points to points \(c_{1}^{X},c_{2}^{X},c_{3}^{X}\) and lines \(c_{4}^{X},c_{5}^{X}\) is a linear problem and has a unique solution. Then \(f=f_{0}\circ g^{-1}\) is the desired (unique) holomorphic map. Proof by counting proper quasimaps.: Note that a quasimap satisfying all conditions cannot have a freckle. Indeed, assume there exists a freckle. Recall that the one-freckle stratum \(\operatorname{QMap}^{1}_{1}(\mathbb{P}^{2}.\mathbb{P}^{3})\) has complex codimension \(2\), cf. 2.4. In order to have a chance to solve all equations, the freckle must sit at the intersection of the lines \(c_{4}^{X}\) and \(c_{5}^{X}\), see Figure 17A. Its position is hence fixed, which imposes two more equations. The space of possible once-freckled quasimaps has therefore dimension 7. 
However, we must impose 9 further equations since the remaining three points are each mapped to a point, cf. Figure 17A. We thus conclude that there cannot be a freckle. However, the quasimap can have a scar. Suppose that the scar passes through two of the fixed points, say through \(c_{1}^{X}\) and \(c_{3}^{X}\). Note that the scar intersects the two lines \(\{c_{i}^{X}\}_{i=4}^{5}\). Hence on the scar all equations but the three equations demanded at \(c_{2}^{X}\) are satisfied, cf. Figure 17B. Since away from the scar the quasi map is constant, the remaining three equations fixes the scarred quasimap uniquely. Since the scar can pass through any two of the three points \(\{c_{i}^{X}\}_{i=1}^{3}\), there exists three such one-scar strata and hence three proper quasimaps. This allows us to conclude that \[\operatorname{KM}=\operatorname{QM}-\operatorname{PQM}=4-3\cdot\underbrace{1 }_{\operatorname{scar\,configurations}}=1.\] **Situation \(D_{(2)}\).** In this case the QM number is given by \[\operatorname{QM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D}_{(2)})=\int_{ \mathbb{P}^{11}\times_{(4)}^{\mathbb{P}^{1}}(5)}H^{8}(H+p_{4}^{*}h)^{3}(H+p_{5 }^{*}h)^{2}=6. \tag{100}\] **Proposition 6.7**.: _In the present situation, there exists again an unique holomorphic map, i.e._ \[\operatorname{KM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D}_{(2)})=1.\] Proof by counting proper quasimaps.: By the same arguments as for the counting quasimap data \(\mathcal{D}_{(1)}\), a proper quasimap cannot have a freckle. If it would have a freckle, it must again sit at the intersection of the lines \(c_{4}^{X}\) and \(c_{5}^{X}\). Such strata has again dimension 7 while we still have to impose 8 equations. As before, a proper quasimap can admit a scar. There are two situations to consider: _Case 1._ Assume that the scar passes by the two points which are mapped to points, say \(c_{1}^{X}\) and \(c_{2}^{X}\), cf. Figure 18A. Figure 17. PQL for the quasimap counting data \(\mathcal{D}_{(1)}\). On the complement of the scar the quasimap is a constant map to \(\mathbb{P}^{3}\) upon which we impose 2 equations. This strata is thus a \(Z=\mathbb{P}^{1}\subset\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^{3})\) and its contribution must be calculated by excess intersection theory. Since the position of \(x\) and \(y\) are fixed to be the intersection points of the scar and the line \(c_{4}^{X}\) resp. \(c_{5}^{X}\), one has \(c(E|_{Z})=(1+\zeta)^{13}\) where \(\zeta\) denotes the generator of \(H^{2}(Z)\). Furthermore, it follows that \(c(\operatorname{Var})|_{Z}=c(\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P} ^{3})|_{Z})=(1+\zeta)^{12}\). This allows us to calculate the contribution of \(Z\) according to equation (63): \[\int_{Z}c(B_{Z})=\int_{Z}\frac{c(E|_{Z})c(Z)}{c(\operatorname{Var})|_{Z}}=\int _{\mathbb{P}^{1}}\frac{(1+\zeta)^{13}(1+\zeta)^{2}}{(1+\zeta)^{12}}=3. \tag{101}\] _Case 2._ Assume that the scar passes by the point which is mapped to a line and one of the points that is mapped to a point, say through \(c_{3}^{X}\) and \(c_{1}^{X}\), cf. Figure 18B. On the complement of the scar the quasimap is uniquely fixed by imposing the three equations at \(c_{2}^{X}\). Such a stratum hence contributes with 1 and there exist two such strata (the scar can pass through \(c_{3}^{X}\) and either \(c_{1}^{X}\) or \(c_{2}^{X}\)). 
This allows us to conclude \[\operatorname{KM}=\operatorname{QM}-\operatorname{PQM}=6-\underbrace{3}_{ \operatorname{scar\,through}\,c_{1}^{X},c_{2}^{X}}-2\cdot\underbrace{1}_{ \operatorname{scar\,through}\,c_{3}^{X},c_{1}^{X}\,{\rm or}\,2}=1. \tag{102}\] **Situation \(D_{(3)}\).** In this situation, the QM number is given by \[\operatorname{QM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D}_{(3)})=\int_{ \mathbb{P}^{11}\times\mathbb{P}^{1}_{(4)}\times\mathbb{P}^{1}_{(5)}}H^{7}(H+ p_{4}^{*}h)^{3}(H+p_{5}^{*}h)^{3}=9. \tag{103}\] **Proposition 6.8**.: _In the present situation, there exists again an unique holomorphic map, i.e._ \[\operatorname{KM}(\mathbb{P}^{2},\mathbb{P}^{3},\mathcal{D}_{(3)})=1.\] _Proof by counting proper quasimaps._ Unlike for the quasimap counting data \(\mathcal{D}_{(1)}\) and \(\mathcal{D}_{(2)}\), a proper quasimap now may admit a freckle: As before, the frecke sits at the intersection of the lines \(c_{4}^{X}\) and \(c_{5}^{X}\). The one-freckle stratum Figure 18. PQL of the counting data \(\mathcal{D}_{(2)}\). has hence dimension 7. But unlike to before, we now only impose exactly 7 equations which determines the once-freckled map uniquely. On the other hand, the quasimap may admit a scar. There are again two cases to consider: _Case 1._ The scar passes through the two points which are mapped to a line, \(c_{1}^{X}\) and \(c_{3}^{X}\), cf. Figure 19A. Since the quasimap away from the scar is constant, it is uniquely fixed by the remaining equations imposed at \(c_{2}^{X}\). This stratum hence contributes with 1. _Case 2._ The scar passes through the point which is mapped to a point and through one of the points which is mapped to a line, say \(c_{1}^{X}\), cf. Figure 19B. In this case, we impose only two equations on the complement of the scar, and hence the stratum is a \(\mathbb{P}^{1}\subset\operatorname{QMap}_{1}(\mathbb{P}^{2},\mathbb{P}^{3}, \mathcal{D}_{(2)})\). This situation is equivalent to the situation we encountered in the proof of Proposition 6.7. As follows from an analogous computation, the excess contribution of this stratum is given by 3. Note that we have two such strata, namely one when the scar passes by \(c_{2}^{X}\) and \(c_{1}^{X}\), and another when the scar passes by \(c_{2}^{X}\) and \(c_{3}^{X}\). In total we conclude \[\operatorname{KM}=\operatorname{QM}-\operatorname{PQM}=9-\underbrace{1}_{ \operatorname{freckle}}-\underbrace{1}_{\operatorname{scar\,through}\,c_{1}^{X},c_{3}^{X}}-2\cdot\underbrace{3}_{\operatorname{scar\,through}\,c_{2}^{X},c_{1 \operatorname{or}3}^{X}}=1. \tag{104}\] ## 7. Smooth conjecture ### The conjecture Fix cycles \(c_{i}^{X}\) in the source \(X=\mathbb{P}^{k}\) and cycles \(c_{i}^{Y}\) in the target \(Y=\mathbb{P}^{n}\), \(i=1,\ldots,l\). We are interested in the solution \[\operatorname{KM}(\mathbb{P}^{k},\mathbb{P}^{n};\{c_{i}^{X},c_{i}^{Y}\}|d) \tag{105}\] of Enumerative Problem A for degree \(d\) holomorphic maps. Let \[\operatorname{ev}_{i}\colon\operatorname{Maps}_{d}(\mathbb{P}^{k},\mathbb{P }^{n})\times\mathbb{P}_{1}^{k}\times\cdots\times\mathbb{P}_{l}^{k}\to\mathbb{ P}^{n}\] Figure 19. PQL of the counting data \(\mathcal{D}_{(3)}\). be the evaluation of a map at the \(i\)-th source point, \(i=1,\ldots,l\). **Conjecture 7.1** (Smooth conjecture).: * (Strong version.) 
For any _smooth_ representatives \(\alpha_{i}^{X}\in\Omega_{cl}(\mathbb{P}^{k}),\alpha_{i}^{Y}\in\Omega_{cl}( \mathbb{P}^{n})\) of Poincare dual cohomology classes of the homology classes of cycles \(c_{i}^{X,Y}\), the integrals (106a) \[N^{\mathrm{C},\mathrm{S}}(\alpha_{1}^{X},\alpha_{1}^{Y};\cdots; \alpha_{l}^{X},\alpha_{l}^{Y}|d):=\int\limits_{\mathrm{Maps}_{d}(\mathbb{P}^{ k},\mathbb{P}^{n})\times c_{1}^{X}\times\cdots\times c_{l}^{X}}\;\prod_{i=1}^{l} \mathrm{ev}_{i}^{*}(\alpha_{i}^{Y}),\] (106b) \[N^{\mathrm{S},\mathrm{S}}(\alpha_{1}^{X},\alpha_{1}^{Y};\cdots; \alpha_{l}^{X},\alpha_{l}^{Y}|d):=\int\limits_{\mathrm{Maps}_{d}(\mathbb{P}^{ k},\mathbb{P}^{n})\times\mathbb{P}^{k}_{1}\times\cdots\times\mathbb{P}^{k}_{l}}\; \prod_{i=1}^{l}(\alpha_{i}^{X}\wedge\mathrm{ev}_{i}^{*}(\alpha_{i}^{Y}))\] are both convergent and equal to the number (105).22 Footnote 22: Superscripts \(C,S\) stand for “cycles on the source, smooth representatives on the target”; \(S,S\) stands for “smooth representatives on both source and target.” * (Specialized version.) Assume that cycles \(c_{i}^{X},c_{i}^{Y}\) are in complex codimension \(n_{i}^{X},n_{i}^{Y}\) in the source/target and assume that their homology classes are \(d_{i}^{X,Y}\) times the generator of the respective homology group.23 Then one has that the integrals (107a) \[\prod_{i=1}^{l}d_{i}^{Y}\cdot\int_{\mathrm{Maps}_{d}(\mathbb{P}^{ k},\mathbb{P}^{n})\times c_{1}^{X}\times\cdots\times c_{l}^{X}}\prod_{i=1}^{l} \mathrm{ev}_{i}^{*}(\omega_{Y}^{\wedge n_{i}^{Y}}),\] (107b) \[\prod_{i=1}^{l}(d_{i}^{X}d_{i}^{Y})\cdot\int_{\mathrm{Maps}_{d}( \mathbb{P}^{k},\mathbb{P}^{n})\times\mathbb{P}^{k}_{1}\times\cdots\times \mathbb{P}^{k}_{l}}\prod_{i=1}^{l}(\omega_{X}^{\wedge n_{i}^{X}}\wedge\mathrm{ ev}_{i}^{*}(\omega_{Y}^{\wedge n_{i}^{Y}}))\] both exist and are equal to the number (105). Here \(\omega_{X},\omega_{Y}\) are the Fubini-Study 2-forms on the source and the target, respectively. Footnote 23: Recall that \(H_{2j}(\mathbb{P}^{n},\mathbb{Z})=\mathbb{Z}\) for \(j=0,\ldots,n\) and zero otherwise. ### Numerical evidence for the smooth conjecture For simplicity, we will consider a modified version of the integral formula (106a). We will consider the case where \(c_{i}^{X}\) are points for \(i=1,\ldots,k+2\), and \(c_{i}^{X}=\mathbb{P}^{k}\) for \(i>k+2\). We reduce the space of maps by demanding that a map \(f\) sends the cycles \(c_{i}^{X}\) (for \(i=1,\ldots,k+2\)) to the cycles \(c_{i}^{Y}\). We will call the resulting space of maps the _reduced_ space of maps \(\mathrm{Map}_{d}^{\mathrm{red}}(\mathbb{P}^{k},\mathbb{P}^{n})\). It is a section of the \(\mathrm{PSL}(k+1,\mathbb{C})\)-action on \(\mathrm{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\) (cf. footnote 15). Transitioning to the integral over \(\mathrm{Map}_{d}^{\mathrm{red}}(\mathbb{P}^{k},\mathbb{P}^{n})\) corresponds to choosing \(\alpha_{i}^{Y}=\delta_{c_{i}^{Y}}\) to be distributional delta-forms on the cycles \(c_{i}^{Y}\) in (106a) for \(i=1,\ldots,k+2\) and performing the fiber integral in \(\mathrm{Map}_{d}(\mathbb{P}^{k},\mathbb{P}^{n})\to\mathrm{Map}_{d}(\mathbb{P}^ {k},\mathbb{P}^{n})/\mathrm{PSL}(k+1,\mathbb{C})\simeq\mathrm{Map}_{d}^{ \mathrm{red}}(\mathbb{P}^{k},\mathbb{P}^{n})\). In particular, the integrals in Conjecture 7.1 can now be expressed over \(\mathrm{Map}_{d}^{\mathrm{red}}(\mathbb{P}^{k},\mathbb{P}^{n})\), e.g. 
\[N^{C,S}(\alpha_{k+3}^{X},\alpha_{k+3}^{Y};\ldots;\alpha_{l}^{X},\alpha_{l}^{Y} \mid d)=\int_{\mathrm{Map}_{d}^{\mathrm{red}}(\mathbb{P}^{k},\mathbb{P}^{n}) \times\mathbb{P}^{k}_{k+3}\times\cdots\times\mathbb{P}^{k}_{l}}\prod_{i=k+3}^{l }\mathrm{ev}_{i}^{*}(\alpha_{i}^{Y}).\] Let now \(k=1\), \(n=2\). Let \((x^{0}:x^{1})\) be homogeneous coordinates on the source and \((y^{0}:y^{1}:y^{2})\) homogeneous coordinates on the target. We denote by \(z=x^{1}/x^{0}\) and \(w=x^{0}/x^{1}\) the affine coordinates on the source. We will also write \(d^{2}z=\frac{i}{2}dz\wedge d\bar{z}\) for the real measure on \(\mathbb{C}\). In the examples below we choose the forms on the target \(\alpha_{i}^{Y}\) to be appropriate powers of the Fubini-Study form: \[\alpha_{i}^{Y}=\omega_{Y}^{\text{\rm codim}\,c_{i}^{Y}}\qquad\text{for $i=k+3,\dots,l$.} \tag{108}\] Finally, we will always consider the Fubini-Study form \(\omega\in\Omega^{2}(\mathbb{P}^{2})\) to be normalized by \[\int_{\mathbb{P}^{2}}\omega=1.\] #### 7.2.1. 1 or 2 We start with the example of Section 6.1.2. Fix \[c_{1}^{X}=(0:1),\quad c_{2}^{X}=(1:1),\quad c_{3}^{X}=(1:0),\quad c_{4}^{X}= \mathbb{P}^{1}\] and \[c_{1}^{Y}=(0:y^{1}:y^{2}),\quad c_{2}^{Y}=(y^{0}:y^{1}:y^{0}+y^{1}),\quad c_{3 }^{Y}=(1:0:0),\quad c_{4}^{Y}=q\] where \(q\) is a fixed point in \(\mathbb{P}^{2}\). By our convention (108), we set \(\alpha_{4}^{Y}=\omega^{2}\). The map counting data is depicted in Figure 20 below. We impose the condition that \(f\colon c_{i}^{X}\mapsto c_{i}^{Y}\) for \(i=1,2,3\), i.e. we opt to compute the integral \[N^{C,S}(1,\omega^{2}\mid 1)=\int_{\text{Map}^{\text{red}}_{1}(\mathbb{P}^{1}, \mathbb{P}^{2})\times\mathbb{P}^{1}}\text{ev}_{4}^{*}\omega^{2}. \tag{109}\] As we have shown in Section 6.1.2, cf. (73), a quasimap which sends \(c_{i}^{X}\) to \(c_{i}^{Y}\), for \(i=1,2,3\), is parametrized by \[\underline{f}(x^{0}:x^{1})=(x^{0}:bx^{1}:(1+b)x^{1})\] and defines an proper map \(f\) for \(b\neq 0\). Thus \(\text{Map}^{\text{red}}_{1}(\mathbb{P}^{1},\mathbb{P}^{2})=\mathbb{C}^{*}\). Let us consider the chart \(x_{0}\neq 0\) and \(b\neq 0\). In this chart, we can write the map \(f\) as \[f_{b}(z)=(bz,(1+b)z). \tag{110}\] Figure 20. Map counting data. Since the point \(b=0\) has zero measure, we can compute (109) by \[N^{C,S}(1\mid\omega^{2})=\int_{\mathbb{C}^{2}}\operatorname{ev}(f_{b},z)^{*} \omega^{2},\] where now \[\operatorname{ev}(f_{b},z)^{*}\omega^{2}=\frac{2}{\pi^{2}}\frac{|z|^{2}}{(1+|bz |^{2}+|(1+b)z|^{2})^{3}}\ d^{2}b\wedge d^{2}z. \tag{111}\] This integral evaluates exactly to \(1\), which is the KM number of the problem. #### 7.2.2. 1 or 4 Let us revisit the example of Section 6.1.3. Fix \[c_{1}^{X}=(0:1),\quad c_{2}^{X}=(1:1),\quad c_{3}^{X}=(1:0),\quad c_{4}^{X}=c_{ 5}^{X}=\mathbb{P}^{1},\] as well as \[c_{i}^{Y}=\{y^{i}=0\},\quad c_{4}^{Y}=p,\quad c_{5}^{Y}=q,\] where \(i=1,2,3\) and \(p,q\in\mathbb{P}^{2}\) are two fixed points. By convention (108), we set \(\alpha_{4}^{Y}=\alpha_{5}^{Y}=\omega^{2}\). The quasimap counting data is depicted in Figure 21 below. We again impose the condition that \(f\colon c_{i}^{X}\mapsto c_{i}^{Y}\) for \(i=1,2,3\), so that \[N^{C,S}(1,\omega^{2};1,\omega^{2}\mid 1)=\int_{\operatorname{Map}_{1}^{\operatorname{ red}}(\mathbb{P}^{1},\mathbb{P}^{2})\times\mathbb{P}^{1}}\operatorname{ev}_{4}^{*} \omega^{2}\wedge\operatorname{ev}_{5}^{*}\omega^{2}. 
\tag{112}\] A quasimap \(\underline{f}\in\operatorname{QMap}_{1}(\mathbb{P}^{1},\mathbb{P}^{2})\) that maps \(c_{i}^{X}\) to \(c_{i}^{Y}\) for \(i=1,2,3\), can be described by a point \((a:b:c)\in\mathbb{P}^{2}\), cf. (75) \[\underline{f}(x^{0}:x^{1})=(ax^{0},b(x^{1}-x^{0}),cx^{1}),\] which defines a proper map if at most one of the coefficients \(a,b,c\) vanishes. The space of reduced maps is therefore \(\operatorname{Map}_{1}^{\operatorname{red}}(\mathbb{P}^{1},\mathbb{P}^{2})=( \mathbb{C}^{*})^{2}\). In the charts \(x^{1}\neq 0\) of the source \(\mathbb{P}^{1}\) and \(c\neq 0\) of \(\mathbb{P}^{2}\), the map takes the form \[f_{\alpha,\beta}(w)=(\alpha w,\beta(1-w)), \tag{113}\] where \(\alpha=a/c\) and \(\beta=b/c\). One then finds that \[\operatorname{ev}(f_{\alpha,\beta},w_{1})^{*}\omega^{2}\wedge\operatorname{ ev}(f_{\alpha,\beta},w_{2})^{*}\omega^{2}=\frac{4}{\pi^{4}}\frac{|\alpha|^{2}| \beta|^{2}|w_{1}-w_{2}|^{2}}{(F(w_{1})F(w_{2}))^{3}}d\mu_{\mathbb{C}^{4}}, \tag{114}\] Figure 21. Map counting data. where \[d\mu_{\mathbb{C}^{4}}=d^{2}w_{1}\wedge d^{2}w_{2}\wedge d^{2}\alpha\wedge d^{2}\beta\] and \[F(w)=1+|\alpha|^{2}|w|^{2}+|\beta|^{2}|1-w|^{2}.\] Recall that in order to omit quasimaps, we need to have \(\alpha\cdot\beta\neq 0\). It follows that \[N^{C,S}(1,\omega^{2};1,\omega^{2}\mid 1)=\int_{\mathbb{C}^{4}}\operatorname{ev}(f _{\alpha,\beta},w_{1})^{*}\omega^{2}\wedge\operatorname{ev}(f_{\alpha,\beta}, w_{2})^{*}\omega^{2} \tag{115}\] which evaluates numerically to \[N^{C,S}\approx 1.13106\] (using Mathematica's built-in NIntegrate with "Infinity" as integration boundaries). It is quite close to 1, which is again the KM number of the problem. #### 7.2.3. 1 or 16 We now turn to the example of degree 2 maps from \(\mathbb{P}^{1}\) to \(\mathbb{P}^{2}\). As in Section 6.4.3, we fix \[c_{1}^{X}=(0:1),\quad c_{2}^{X}=(1:1),\quad c_{3}^{X}=(1:0),\quad c_{4}^{X}=c_ {5}^{X}=\mathbb{P}^{1},\] and \[c_{1}^{Y}=(1:0:0),\quad c_{2}^{Y}=(0:1:0)\quad c_{3}^{Y}=(0:0:1),\quad c_{4}^{ Y}=p,\quad c_{5}^{Y}=q,\] where \(p\) and \(q\) are again two points in \(\mathbb{P}^{2}\). By convention (108), we again set \(\alpha_{4}^{Y}=\alpha_{5}^{Y}=\omega^{2}\). Consider the problem of maps which send \(c_{i}^{X}\) to \(c_{i}^{Y}\) as shown in Figure 22 below: Any such map can be parametrized by a point \((a:b:c)\in\mathbb{P}^{2}\), cf. (87) \[\underline{f}(x^{0}:x^{1})=(ax^{0}(x^{0}-x^{1}):bx^{0}x^{1}:cx^{1}(x^{0}-x^{ 1})).\] In the charts \(x^{0}\neq 0\), \(b\neq 0\), the map is given by \[f_{\alpha,\beta}(z)=(\alpha(1-1/z),\beta(1-z)),\qquad\alpha\neq 0, \tag{116}\] Figure 22. Map counting data. where \(\alpha=a/b\) and \(\beta=c/b\). 
The space of reduced maps is therefore \(\operatorname{Map}_{2}^{\operatorname{red}}(\mathbb{P}^{1},\mathbb{P}^{2})=\mathbb{C}^{*}\times\mathbb{C}\), so that \[N^{C,S}(1,\omega^{2};1,\omega^{2}\mid 2)=\int_{\operatorname{Map}_{2}^{\operatorname{red}}(\mathbb{P}^{1},\mathbb{P}^{2})\times\mathbb{P}^{1}\times\mathbb{P}^{1}}\operatorname{ev}_{4}^{*}\omega^{2}\wedge\operatorname{ev}_{5}^{*}\omega^{2}.\] One then finds \[\operatorname{ev}(f_{\alpha,\beta},z_{1})^{*}\omega^{2}\wedge\operatorname{ev}(f_{\alpha,\beta},z_{2})^{*}\omega^{2}=\frac{4}{\pi^{4}}\frac{|\alpha|^{2}|\beta|^{2}|z_{1}|^{2}|z_{2}|^{2}|1-z_{1}|^{2}|1-z_{2}|^{2}|z_{1}-z_{2}|^{2}}{(F(z_{1})F(z_{2}))^{3}}d\mu_{\mathbb{C}^{4}},\tag{117}\] where \[F(z)=|z|^{2}+|\alpha|^{2}|1-z|^{2}+|\beta|^{2}|z|^{2}|1-z|^{2}.\] This integral evaluates numerically to \[N^{C,S}\approx 1.14587,\] again using Mathematica's built-in NIntegrate with "Infinity" as integration boundaries. This is again close to \(1\), which is the KM number. ## Concluding remarks (1) There is another approach to the enumerative numbers studied in this text, based on a higher-dimensional generalization of Morse-Bott-Floer theory. In particular, holomorphic maps of \(k\)-dimensional toric manifolds may be considered as \(k\)-Morse theory on the space of \(k\)-loops. This approach may lead to a higher-dimensional generalization of the WDVV equation, where the important intermediate result is the relation between 2-Morse theory and the algebra of the infrared [7, 12, 23]. We are going to explore it in a subsequent paper. (2) We gave arguments showing that theories with higher-dimensional source may be considered similarly to theories with 1-dimensional source. Therefore, a natural question is the tropicalization of such theories, similar to the tropicalization of Gromov-Witten invariants [20, 21, 15, 10, 8]. (3) The examples shown in this paper can be seen as exhibiting phenomena involving 0-dimensional defects (freckles) and \(2_{\mathbb{R}}\)-dimensional defects (scars) in a 4-dimensional holomorphic gauged linear model. A quantum field theory treatment of this problem will appear elsewhere.
2306.10552
Weighted Subsequential ergodic theorems on Orlicz spaces
For a semifinite von Neumann algebra M, the individual convergence of subsequential, \mathcal{Z}(M)-valued (\mathcal{Z}(M) being the center of M) weighted ergodic averages is studied in noncommutative Orlicz spaces. In the process, we also derive a maximal ergodic inequality corresponding to such averages in noncommutative L^p (1 \leq p < \infty) spaces using the weak (1,1) inequality obtained by Yeadon.
Panchugopal Bikram, Diptesh Saha
2023-06-18T13:21:09Z
http://arxiv.org/abs/2306.10552v1
# Weighted subsequential ergodic theorems on Orlicz spaces ###### Abstract. For a semifinite von Neumann algebra \(M\), the individual convergence of subsequential, \(\mathcal{Z}(M)\)-valued (\(\mathcal{Z}(M)\) being the center of \(M\)) weighted ergodic averages is studied in noncommutative Orlicz spaces. In the process, we also derive a maximal ergodic inequality corresponding to such averages in noncommutative \(L^{p}\) (\(1\leq p<\infty\)) spaces using the weak \((1,1)\) inequality obtained by Yeadon. Key words and phrases: maximal ergodic inequality, individual ergodic theorems, Besicovitch weights, vector valued weights, noncommutative Orlicz spaces 2010 Mathematics Subject Classification: Primary: 46L55, 47A35; Secondary: 46L51, 46L52 ## 1. Introduction The connection between ergodic theory and von Neumann algebras dates back to the very inception of the theory of operator algebras. The study of pointwise ergodic theorems plays a central role in classical ergodic theory and has a very deep connection with statistical physics as well. However, the study of analogous ergodic theorems in the noncommutative setting originated only in the pioneering work of Lance [10] in 1976. After that the theory flourished, and many authors extended the results of Lance in various directions. We refer here to [1], [2], [3] and the references therein. Yeadon [11] first studied ergodic theorems in the predual of a semifinite von Neumann algebra. He proved a maximal ergodic theorem in the noncommutative \(L^{1}\) space, which still appears frequently in modern proofs of noncommutative ergodic theorems. The corresponding maximal ergodic theorem was extended to noncommutative \(L^{p}\) (\(1<p<\infty\)) spaces in the celebrated work [2]. As a consequence, the analogous individual ergodic theorems are also proved in the same article. On the other hand, an alternative approach based solely on Yeadon's weak \((1,1)\) inequality was adopted by various authors to prove various individual ergodic theorems on noncommutative \(L^{p}\) spaces. In [12], the author introduced the notions of noncommutative uniform continuity and bilateral uniform continuity in measure at zero and provided an alternative proof of the individual ergodic theorems from [2]. Several attempts have been made since then to improve these results. One natural generalisation is towards the proof of subsequential ergodic theorems. In [13], a first attempt was made to prove an individual ergodic theorem along the so-called uniform sequences in the von Neumann algebra setting. Simultaneously, weighted ergodic theorems also became an interesting area of research. In [14], the authors studied the convergence of standard ergodic averages for actions of free groups and also for the weighted averages. Several other related works are available in the literature. The reader may look into [1], [2], [15], [16], [17] and the references therein. Another extension of these results which has been studied extensively is in the realm of symmetric spaces, in particular, the Orlicz spaces. It is known that the class of Orlicz spaces is significantly wider than the class of \(L^{p}\) spaces. The first account of the study of individual ergodic theorems in the case of noncommutative Orlicz spaces is found in [10]. In [10], ergodic theorems for weighted averages are studied in fully symmetric spaces. In this article we study various ergodic theorems associated with (vector valued) weighted ergodic averages along some special subsequences in noncommutative Orlicz spaces.
Before this, ergodic averages with vector valued weights have been studied in [11]. Very recently, in [12], the author studied convergence of (scalar) weighted ergodic averages along subsequences in noncommutative \(L^{p}\) (\(1\leq p<\infty\)) spaces. Our aim in this article is to establish an individual ergodic theorem for positive Dunford-Schwartz operators (see Definition 2.12) with von Neumann algebra valued Besicovitch weighted (see Definition 3.1) ergodic averages along subsequences of density one in Orlicz spaces (see Theorem 3.15). Our proof is essentially based upon the notion of bilateral uniform continuity in measure for normed linear spaces. Now we describe the layout of this article. In §2, we collect all the materials which are essential for this article. In particular, we recall some basic facts about von Neumann algebras \(M\) with a faithful normal semifinite trace \(\tau\) and the space of \(\tau\)-measurable operators. We also discuss a few topologies on this space. After that, we recollect the definition of noncommutative Orlicz spaces and some of their properties which are essential for this article. We also define Dunford-Schwartz operators and bilateral uniform equicontinuity in measure (b.u.e.m.) at zero of sequences, and end this section by recalling a few important theorems regarding these notions. §3 begins with the appropriate definition of subsequential weighted ergodic averages. Then we prove a suitable form of a maximal ergodic inequality and use it to prove that the sequence of averages under study is b.u.e.m. at zero, which essentially helps us to obtain a convergence result on \(L^{1}\cap M\). Finally, our main result is achieved. ## 2. Preliminaries Throughout this article we assume that \(M\) is a semifinite von Neumann algebra with a faithful, normal, semifinite (f.n.s.) trace \(\tau\), represented on a separable Hilbert space \(\mathcal{H}\). Let \(\mathcal{P}(M)\) (resp. \(\mathcal{P}_{0}(M)\)) denote the collection of all (resp. all non-zero) projections in the von Neumann algebra \(M\). For each \(e\in\mathcal{P}(M)\) we write \(e^{\perp}\) for the projection \(1-e\), where \(1\) denotes the identity element of \(M\). Let \(\mathcal{B}(\mathcal{H})\) denote the space of all bounded operators on the Hilbert space \(\mathcal{H}\). A closed densely defined operator \(x:\mathcal{D}_{x}\subseteq\mathcal{H}\to\mathcal{H}\) is called affiliated to \(M\) if \(y^{\prime}x\subseteq xy^{\prime}\) for all \(y^{\prime}\in M^{\prime}\), where \(M^{\prime}\) denotes the commutant of \(M\), which is a von Neumann algebra in its own right. Equivalently, one can define \(x\) to be affiliated to \(M\) if \(u^{\prime}x=xu^{\prime}\) holds for all unitaries \(u^{\prime}\) in \(M^{\prime}\). When \(x\) is affiliated to \(M\), it is denoted by \(x\eta M\). The center of the von Neumann algebra \(M\) is defined by \(M\cap M^{\prime}\) and is denoted by \(\mathcal{Z}(M)\). Now we recall that for two positive, self-adjoint operators \(x,y\) defined on \(\mathcal{H}\), \(x\leq y\) is defined as: \(\mathcal{D}_{y}\subseteq\mathcal{D}_{x}\) and \(\left\|x^{1/2}\xi\right\|^{2}\leq\left\|y^{1/2}\xi\right\|^{2}\) for all \(\xi\in\mathcal{D}_{y}\). **Proposition 2.1**.: _Let \(x\) be a positive, self-adjoint operator affiliated to \(M\) and let \(z\in\mathcal{Z}(M)_{+}\) be such that \(z\leq C\) for some constant \(C>0\). Then \(0\leq zx\leq Cx\)._ Proof.: First observe that \(\mathcal{D}_{zx}=\mathcal{D}_{x}\subseteq\mathcal{D}_{x^{1/2}}\).
Also, \(\mathcal{D}_{zx}\subseteq\mathcal{D}_{(zx)^{1/2}}\). Let \(\xi\in\mathcal{D}_{x}\). Then \[\left\|(zx)^{1/2}\xi\right\|^{2}=\left\langle zx\xi,\xi\right\rangle =\left\langle xz\xi,\xi\right\rangle\text{ ( since }zx\subset xz)\] \[=\left\langle x^{1/2}z\xi,x^{1/2}\xi\right\rangle\text{ ( since }\xi\in\mathcal{D}_{x^{1/2}})\] \[=\left\langle zx^{1/2}\xi,x^{1/2}\xi\right\rangle\text{ ( since }x\eta M)\] \[\leq C\left\|x^{1/2}\xi\right\|^{2}.\] A closed, densely defined operator \(x\) affiliated to \(M\) is said to be \(\tau\)-measurable if for every \(\epsilon>0\) there is a projection \(e\) in \(M\) such that \(e\mathcal{H}\subseteq\mathcal{D}_{x}\) and \(\tau(e^{\perp})<\epsilon\). The set of all \(\tau\)-measurable operators associated to \(M\) is denoted by \(L^{0}(M,\tau)\) or simply \(L^{0}\). For all \(\epsilon,\delta>0\), let us define the following neighborhoods of zero. \[\mathcal{N}(\epsilon,\delta):=\{x\in L^{0}:\exists\;e\in\mathcal{P}(M)\text{ such that }\left\|xe\right\|\leq\epsilon\text{ and }\tau(e^{\perp})\leq\delta\},\text{ and}\] \[\mathcal{N}^{\prime}(\epsilon,\delta):=\{x\in L^{0}:\exists\;e\in\mathcal{P}(M )\text{ such that }\left\|exe\right\|\leq\epsilon\text{ and }\tau(e^{\perp})\leq\delta\}.\] It is established in [15, Theorem 2.2] that the families \(\{\mathcal{N}(\epsilon,\delta):\epsilon>0,\delta>0\}\) and \(\{\mathcal{N}^{\prime}(\epsilon,\delta):\epsilon>0,\delta>0\}\) generate same topology on \(L^{0}\), and it is termed as measure topology in the literature. It is also well-known that \(L^{0}\) becomes a complete, metrizable topological \(*\)-algebra with respect to the measure topology containing \(M\) as a dense subspace [see [11, Theorem 4.12]]. In this article, we also deal with so called almost uniform (a.u.) and bilateral almost uniform (b.a.u) convergence of sequences in \(L^{0}\). We describe it in the following definition. **Definition 2.2**.: A sequence of operators \(\{x_{n}\}_{n\in\mathbb{N}}\subset L^{0}\) converges a.u. (resp. b.a.u.) to \(x\in L^{0}\) if for all \(\delta>0\) there exists a projection \(e\in M\) such that \[\tau(e^{\perp})<\delta\text{ and }\lim_{n\to\infty}\|(x_{n}-x)e\|=0\text{ (resp. }\tau(e^{\perp})<\delta\text{ and }\lim_{n\to\infty}\|e(x_{n}-x)e\|=0).\] Now we recall the following useful lemma from [14, Lemma 3]. **Lemma 2.3**.: _If a sequence \(\{a_{n}\}_{n\in\mathbb{N}}\subset M\) is such that for every \(\epsilon>0\) there is a b.a.u. (a.u. ) convergent sequence \(\{b_{n}\}_{n\in\mathbb{N}}\subset M\) and a positive integer \(N_{0}\) satisfying \(\|a_{n}-b_{n}\|<\epsilon\) for all \(n\geq N_{0}\), then \(\{a_{n}\}_{n\in\mathbb{N}}\) converges b.a.u. (a.u.)._ Next we provide a brief description of noncommutative Orlicz spaces. We follow [16] as our main references. ### Noncommutative Orlicz spaces Let \(M\) be a von Neumann algebra equipped with a f.n.s trace \(\tau\) as mentioned above. The trace \(\tau\) is extended to the positive cone \(L^{0}_{+}\) of \(L^{0}\) as follows. Suppose \(x\in L^{0}_{+}\) with the spectral decomposition \(x=\int_{0}^{\infty}\lambda de_{\lambda}\). Then \(\tau(x)\) is defined by \[\tau(x):=\int_{0}^{\infty}\lambda d\tau(e_{\lambda}).\] For \(0<p\leq\infty\), the noncommutative \(L^{p}\)-space associated to \((M,\tau)\) is defined as \[L^{p}(M,\tau):=\begin{cases}\{x\in L^{0}:\|x\|:=\tau(|x|^{p})^{1/p}<\infty\}& \text{ for }p\neq\infty\\ (M,\|\cdot\|)&\text{ for }p=\infty\end{cases}\] where, \(|x|=(x^{*}x)^{1/2}\). 
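As a quick illustration of the definition just given, consider the simplest semifinite setting \(M=M_{n}(\mathbb{C})\) with \(\tau=\mathrm{Tr}\). The following Python sketch (our illustration, not taken from the paper) computes \(\tau(|x|^{p})^{1/p}\) in two equivalent ways:

```python
import numpy as np
from scipy.linalg import sqrtm

# Noncommutative L^p norm in M = M_2(C) with tau = Tr:
# tau(|x|^p)^{1/p}, where |x| = (x* x)^{1/2}.
x = np.array([[1.0, 2.0], [0.0, 1.0j]])
absx = sqrtm(x.conj().T @ x)                       # |x|
p = 4
via_trace = np.trace(np.linalg.matrix_power(absx, p)).real ** (1 / p)
via_svd = (np.linalg.svd(x, compute_uv=False) ** p).sum() ** (1 / p)
print(via_trace, via_svd)                          # the two values agree
```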
From here onwards we will simply write \(L^{p}\) for noncommutative \(L^{p}\)-spaces. Let \(x\in L^{0}\). Consider the spectral decomposition \(|x|=\int_{0}^{\infty}sde_{s}\). The distribution function of \(x\) is defined by \[(0,\infty):s\mapsto\lambda_{s}(x):=\tau(e_{s}^{\perp}(|x|))\in[0,\infty]\] and the generalised singular number of \(x\) is defined by \[(0,\infty):t\mapsto\mu_{t}(x):=\inf\{s>0:\lambda_{s}(x)\leq t\}\in[0,\infty].\] Note that both the functions are decreasing and continuous from right on \((0,\infty)\). Among many other properties of generalised singular number, here we recall the following ones which will be used later. **Proposition 2.4**.: _Let \(a,b,c\in L^{0}\). Then_ 1. \(\mu_{t}(f(|a|))=f(\mu_{t}(a))\)_,_ \(t>0\) _and for any continuous increasing function_ \(f\) _on_ \([0,\infty)\) _with_ \(f(0)\geq 0\)_._ 2. \(\mu_{t}(bac)\leq\|b\|\,\|c\|\,\mu_{t}(a)\) _for all_ \(t>0\)_._ 3. \(\tau(f(|a|))=\int_{0}^{\infty}f(\mu_{t}(a))dt\) _for any continuous increasing function_ \(f\) _on_ \([0,\infty)\) _with_ \(f(0)=0\)_._ Proof.: For the proofs we refer to [12, Lemma 2.5 and Corollary 2.8]. **Definition 2.5**.: A convex function \(\Phi:[0,\infty)\to[0,\infty)\) which is continuous at \(0\) with \(\Phi(0)=0\) and \(\Phi(t)>0\) when \(t\neq 0\) is called an Orlicz function. It is to be noted that the convexity of the function \(\Phi\) and continuity at \(0\) imply that the function is continuous on \([0,\infty)\). Moreover, it is also evident that \(\Phi(\lambda t)\leq\lambda\Phi(t)\) whenever \(0\leq\lambda\leq 1\) and \(t\in[0,\infty)\), which implies \(\Phi(t_{1})<\Phi(t_{2})\) for all \(0\leq t_{1}<t_{2}\). Hence the function \(\Phi\) is increasing. The following result from [10, Lemma 2.1] is crucial. **Lemma 2.6**.: _Let \(\Phi\) be an Orlicz function. Then for all \(\delta>0\) there exists \(u>0\) satisfying the condition_ \[u\cdot\Phi(t)\geq t\ \text{ whenever }t\geq\delta.\] _In particular, \(\lim_{t\to\infty}\Phi(t)=\infty\)._ Now let \(\Phi\) be an Orlicz function and consider \(x\in L^{0}_{+}\) with the spectral decomposition \(x=\int_{0}^{\infty}\lambda d(e_{\lambda})\). Then by means of functional calculus, we have \[\Phi(x)=\int_{0}^{\infty}\Phi(\lambda)de_{\lambda}.\] The noncommutative Orlicz space associated to \((M,\tau)\) for the Orlicz function \(\Phi\) is defined as \[L^{\Phi}=L^{\Phi}(M,\tau):=\Big{\{}x\in L^{0}:\tau\Big{(}\Phi\Big{(}\frac{|x|} {\lambda}\Big{)}\Big{)}<\infty\text{ for some }\lambda>0\Big{\}}.\] The space \(L^{\Phi}\) is equipped with the norm (called Luxemburg norm) \[\|x\|:=\inf\Big{\{}\lambda>0:\tau\Big{(}\Phi\Big{(}\frac{|x|}{\lambda}\Big{)} \Big{)}\leq 1\Big{\}},\ x\in L^{\Phi}.\] It follows from [13, Proposition 2.5] that \(L^{\Phi}\) equipped with the norm defined above is a Banach space. We now prove the following result. **Proposition 2.7**.: _Suppose \(x\in L^{\Phi}\), then_ 1. _if_ \(a,b\in M\)_, then_ \(axb\in L^{\Phi}\)_. Moreover,_ \(\left\|axb\right\|_{\Phi}\leq\left\|a\right\|\left\|b\right\|\left\|x\right\|_{\Phi}\) _and_ 2. 
_if \(\left\|x\right\|_{\Phi}\leq 1\), then \(\tau(\Phi(\left|x\right|))\leq\left\|x\right\|_{\Phi}\)._ Proof.: \((i)\) Let \(\lambda>0\) and observe that \[\begin{split}\tau\Big(\Phi\Big(\frac{|axb|}{\|a\|\,\|b\|\,\lambda}\Big)\Big)&=\int_{0}^{\infty}\Phi\Big(\mu_{t}\Big(\frac{axb}{\|a\|\,\|b\|\,\lambda}\Big)\Big)dt\quad[\text{by }(iii)\text{ of Proposition 2.4}]\\ &\leq\int_{0}^{\infty}\Phi\Big(\mu_{t}\Big(\frac{x}{\lambda}\Big)\Big)dt\quad[\text{by }(ii)\text{ of Proposition 2.4}]\\ &=\tau\Big(\Phi\Big(\frac{|x|}{\lambda}\Big)\Big)\quad[\text{by }(iii)\text{ of Proposition 2.4}].\end{split}\tag{2.1}\] Then, note that \[\inf\Big\{\lambda>0:\tau\Big(\Phi\Big(\frac{|axb|}{\lambda}\Big)\Big)\leq 1\Big\}=\inf\Big\{\|a\|\,\|b\|\,\lambda>0:\tau\Big(\Phi\Big(\frac{|axb|}{\|a\|\,\|b\|\,\lambda}\Big)\Big)\leq 1\Big\}=\|a\|\,\|b\|\inf\Big\{\lambda>0:\tau\Big(\Phi\Big(\frac{|axb|}{\|a\|\,\|b\|\,\lambda}\Big)\Big)\leq 1\Big\}.\] Therefore, by Eq. (2.1) we have \[\|axb\|_{\Phi}=\inf\Big\{\lambda>0:\tau\Big(\Phi\Big(\frac{|axb|}{\lambda}\Big)\Big)\leq 1\Big\}\leq\|a\|\,\|b\|\inf\Big\{\lambda>0:\tau\Big(\Phi\Big(\frac{|x|}{\lambda}\Big)\Big)\leq 1\Big\}=\|a\|\,\|b\|\,\|x\|_{\Phi}.\] Proof of \((ii)\): it follows immediately from [17, Proposition 2.2]. Let us now recall that a Banach space \((E,\left\|\cdot\right\|)\subset L^{0}\) is called fully symmetric if \[x\in E,\ y\in L^{0},\ \int_{0}^{s}\mu_{t}(y)dt\leq\int_{0}^{s}\mu_{t}(x)dt\ \forall\ s>0\ \Rightarrow\ y\in E\ \text{ and }\left\|y\right\|\leq\left\|x\right\|,\] and a fully symmetric space \((E,\left\|\cdot\right\|)\subseteq L^{0}\) is said to have the Fatou property if \[x_{\alpha}\in E_{+},\ x_{\alpha}\leq x_{\beta}\ \text{ for }\alpha\leq\beta\ \text{ and }\sup_{\alpha}\left\|x_{\alpha}\right\|<\infty\ \Rightarrow\ \exists\ x=\sup_{\alpha}x_{\alpha}\in E\ \text{ and }\left\|x\right\|=\sup_{\alpha}\left\|x_{\alpha}\right\|.\] Now the following proposition holds. **Proposition 2.8**.: \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\) _is a fully symmetric space with the Fatou property and an exact interpolation space for the Banach couple \((L^{1},M)\)._ Proof.: The proof follows from [17, Corollary 2.2]. As a consequence we remark the following. **Remark 2.9**.: It follows from [16, Theorem 4.1] and Proposition 2.8 that the unit ball of \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\) is closed in the measure topology. **Definition 2.10**.: An Orlicz function \(\Phi\) is said to satisfy the \(\Delta_{2}\) condition if there exists \(d>0\) such that \[\Phi(2t)\leq d\Phi(t)\ \text{ for all }t\geq 0.\] Observe that for every \(1\leq p<\infty\), \(\Phi(u)=\frac{u^{p}}{p}\), \(u\geq 0\), is an Orlicz function which satisfies the \(\Delta_{2}\) condition. Also, in this case \(L^{\Phi}=L^{p}\) for all \(1\leq p<\infty\). **Proposition 2.11**.: _Let \(\Phi\) be an Orlicz function satisfying the \(\Delta_{2}\) condition. Then the linear subspace \(L^{1}\cap M\) is dense in \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\)._ Proof.: For the proof we refer to [17, Proposition 2.3].
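To make the observation \(L^{\Phi}=L^{p}\) for \(\Phi(u)=u^{p}/p\) concrete, the following Python sketch (ours, in the toy setting \(M=M_{3}(\mathbb{C})\), \(\tau=\mathrm{Tr}\)) computes the Luxemburg norm by bisection and checks that it equals \(p^{-1/p}\|x\|_{p}\), so the two norms are proportional and the spaces coincide:

```python
import numpy as np

# Luxemburg norm ||x||_Phi = inf{ lam > 0 : tau(Phi(|x|/lam)) <= 1 } for
# Phi(t) = t^p / p in M = M_3(C), tau = Tr; expected value: p^(-1/p) ||x||_p.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
s = np.linalg.svd(x, compute_uv=False)       # singular values = mu_t(x)
p = 3.0

def tau_phi(lam):                            # tau(Phi(|x| / lam))
    return np.sum((s / lam) ** p / p)

lo, hi = 1e-9, 1e9                           # bisection for the infimum
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if tau_phi(mid) <= 1 else (mid, hi)

norm_p = np.sum(s ** p) ** (1 / p)           # ||x||_p = tau(|x|^p)^(1/p)
print(hi, p ** (-1 / p) * norm_p)            # the two values agree
```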
**Definition 2.12**.: A linear map \(T:L^{1}+M\to L^{1}+M\) is called Dunford-Schwartz operator if it contracts both \(L^{1}\) and \(M\), i.e, \[\left\|Tx\right\|_{\infty}\leq\left\|x\right\|_{\infty}\ \ \forall\ x\in M\ \text{and}\ \left\|Tx\right\|_{1}\leq\left\|x\right\|_{1}\ \ \forall\ x\in L^{1}.\] If in addition \(T(x)\geq 0\) for all \(x\geq 0\) then we call \(T\) is a positive Dunford-Schwartz operator. We write \(T\in DS\) (resp. \(T\in DS^{+}\)) to denote \(T\) is a Dunford-Schwartz operator (resp. positive Dunford-Schwartz operator). Let \(T\in DS\). Then observe that for an Orlicz function \(\Phi\) the space \(L^{\Phi}\) is an exact interpolation space for the Banach couple \((L^{1},M)\) (by Proposition 2.8). Therefore we have \[T(L^{\Phi})\subseteq L^{\Phi}\ \text{and}\ \left\|T\right\|\leq 1.\] **Definition 2.13**.: Let \((X,\left\|\cdot\right\|)\) be a normed linear space and \(Y\subseteq X\) be such that the zero of \(X\) is a limit point of \(Y\). A family of maps \(A_{\alpha}:X\to L^{0}\), \(\alpha\in I\), is called uniformly equicontinuous in measure (u.e.m) [ bilaterally uniformly equicontinuous in measure (b.u.e.m)] at zero on \(Y\) if for all \(\epsilon,\delta>0\), there exists \(\gamma>0\) such that for all \(x\in Y\) with \(\left\|x\right\|<\gamma\) there exists \(e\in\mathcal{P}(M)\) such that \[\tau(e^{\perp})<\epsilon\ \text{and}\ \sup_{\alpha\in I}\left\|A_{\alpha}(x)e \right\|_{\infty}<\delta\ \text{(respectively,}\ \sup_{\alpha\in I}\left\|eA_{\alpha}(x)e \right\|_{\infty}<\delta).\] Now we recall the following significant result from [14, Theorem 2.1] which will play an important role in our studies. **Theorem 2.14**.: _Let \((X,\left\|\cdot\right\|)\) be a Banach space and \(A_{n}:X\to L^{0}\) be a sequence of additive maps. If the sequence \(\left\{A_{n}\right\}_{n\in\mathbb{N}}\) is b.u.e.m (u.e.m.) at zero on \(X\), then the set_ \[\left\{x\in X:\left\{A_{n}(x)\right\}\ \text{converges b.a.u (a.u.)}\right\}\] _is closed in \(X\)._ We end this section with a brief introduction to density and lower density of a sequence of natural numbers. **Definition 2.15**.: A sequence \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) of natural numbers is said to have density (resp, lower density) \(d\) if \[\lim_{n\to\infty}\frac{\left|\left\{0,1,\ldots,n\right\}\cap\mathbf{k}\right| }{n+1}=d\ \text{(resp,}\ \liminf_{n\to\infty}\frac{\left|\left\{0,1,\ldots,n\right\}\cap\mathbf{k} \right|}{n+1}=d).\] **Remark 2.16**.: We remark that if a sequence \(\mathbf{k}\) has density \(d\), then \(\lim_{n\to\infty}\frac{k_{n}}{n}=\frac{1}{d}\). Moreover, we recall from [11, Lemma 40] that a sequence \(\mathbf{k}\) has lower density \(d\) if and only if \(\sup_{n\in\mathbb{N}}\frac{k_{n}}{n}<\infty\). ## 3. Convergence along sequence of density one Throughout this section \(M\) is assumed to be a semifinite von Neumann algebra with f.n.s trace \(\tau\) and \(T\in DS^{+}\). In this section, we will study the convergence of ergodic averages with \(M\)-valued Besicovitch weights (see Definition 3.1 and Definition 3.11) along sequence of density one. In particular, we will prove the b.a.u. convergence of sequences of such averages in the spaces \(L^{\Phi}\) for some Orlicz function \(\Phi\). Convergence of usual vector valued weighted averages in norm and b.a.u. topology has already been studied in [11]. In this section, we also extend some of these results. We begin with few definitions of ergodic averages. **Definition 3.1**.: Let \(T\in DS^{+}\). 
For \(\{b_{j}\}_{j\in\mathbb{N}}\subset M\) and \(\{d_{j}\}_{j\in\mathbb{N}}\subset M\) and any sequence \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) of natural numbers, define \[A_{n}(\{b_{j}\},\{d_{j}\},x):=\frac{1}{n}\sum_{j=0}^{n-1}T^{j}(b _{j}xd_{j}),\qquad A_{n}(\{b_{j}\},x):=\frac{1}{n}\sum_{j=0}^{n-1}T^{j}(b_{j} x);\] \[\text{and }A_{n}^{\mathbf{k}}(\{b_{j}\},\{d_{j}\},x)):=\frac{1}{n} \sum_{j=0}^{n-1}T^{k_{j}}(b_{k_{j}}xd_{k_{j}}),\qquad A_{n}^{\mathbf{k}}(\{b_ {j}\},x):=\frac{1}{n}\sum_{j=0}^{n-1}T^{k_{j}}(b_{k_{j}}x)\] for all \(n\in\mathbb{N}\) and \(x\in L^{1}+M\). Here we observe that when the sequence \(\{b_{j}\}_{j\in\mathbb{N}}\) consists of only scalars \(\beta:=\{\beta_{j}\}_{j\in\mathbb{N}}\) and the set \(\{d_{j}\}_{j\in\mathbb{N}}\) consists of only identity of \(M\), then the averages mentioned above will be denoted by \(A_{n}^{\beta}(x)\) and \(A_{n}^{\beta,\mathbf{k}}(x)\) respectively for \(x\in L^{1}+M\). Convergence of such averages are studied in [10]. Let us now recall the following maximal ergodic theorem from [13]. This result is crucial in obtaining a maximal ergodic inequality in the form required for our purpose. **Theorem 3.2**.: _Let \(T\in DS^{+}\). Then for all \(x\in L^{1}_{+}\) and \(\epsilon>0\) there exists \(e\in\mathcal{P}(M)\) such that_ \[\tau(e^{\perp})\leq\frac{\|x\|_{1}}{\epsilon}\text{ and }\sup_{n\in\mathbb{N}} \|eA_{n}(\{1\},x)e\|\leq\epsilon.\] Although the following lemma is a part of the proof of Theorem 2.1 in [10], we include the proof here for the sake of completeness. **Lemma 3.3**.: _Let \(1\leq p<\infty\), \(x\in L^{p}_{+}\) and \(\epsilon>0\). Then there exists \(e\in\mathcal{P}(M)\) such that_ \[\tau(e^{\perp})\leq\Big{(}\frac{\|x\|_{p}}{\epsilon}\Big{)}^{p} \text{ and }\sup_{n\in\mathbb{N}}\|eA_{n}(\{1\},x)e\|_{\infty}\leq 2\epsilon\] Proof.: Consider the spectral decomposition of \(x=\int_{0}^{\infty}\lambda de_{\lambda}\). Note that since \(\lambda\geq\epsilon\Rightarrow\lambda\leq\epsilon^{1-p}\lambda^{p}\), we have \[\int_{\epsilon}^{\infty}\lambda de_{\lambda}\leq\epsilon^{1-p} \int_{\epsilon}^{\infty}\lambda^{p}de_{\lambda}\leq\epsilon^{1-p}x^{p}.\] Therefore, we obtain \[x=\int_{0}^{\epsilon}\lambda de_{\lambda}+\int_{\epsilon}^{ \infty}\lambda de_{\lambda}\leq x_{\epsilon}+\epsilon^{1-p}x^{p},\] where \(x_{\epsilon}=\int_{0}^{\epsilon}\lambda de_{\lambda}\). Now since \(x^{p}\in L^{1}_{+}\), it follows from Theorem 3.2 that there exist \(e\in\mathcal{P}(M)\) such that \[\tau(e^{\perp})\leq\frac{\left\|x^{p}\right\|_{1}}{\epsilon^{p}}=\Big{(}\frac{ \left\|x\right\|_{p}}{\epsilon}\Big{)}^{p}\text{ and }\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{1\},x^{p})e\right\|\leq \epsilon^{p}.\] Consequently, for all \(n\in\mathbb{N}\) we have \[0\leq eA_{n}(\{1\},x)e\leq eA_{n}(\{1\},x_{\epsilon})e+\epsilon^{1-p}eA_{n}( \{1\},x^{p})e.\] Since \(x_{\epsilon}\in M\) and \(\left\|T(x_{\epsilon})\right\|_{\infty}\leq\left\|x_{\epsilon}\right\|_{\infty}\leq\epsilon\), we conclude that \[\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{1\},x)e\right\|_{\infty}\leq 2\epsilon.\] Now the following result holds. **Theorem 3.4**.: _Let \(\{b_{j}\}_{j\in\mathbb{N}}\) be a bounded sequence in \(\mathcal{Z}(M)\) and \(x\in L^{p}\)\((1\leq p<\infty)\). 
Then for all \(\epsilon>0\) there exists \(e\in\mathcal{P}(M)\) such that_ \[\tau(e^{\perp})\leq 4\Big(\frac{\left\|x\right\|_{p}}{\epsilon}\Big)^{p}\ \text{ and }\ \sup_{n\in\mathbb{N}}\left\|eA_{n}(\{b_{j}\},x)e\right\|_{\infty}\leq 48C\epsilon,\] _where \(C=\sup_{j\in\mathbb{N}}\left\|b_{j}\right\|_{\infty}\)._ Proof.: First consider \(x\in L^{p}_{+}\) and observe that if \(b_{j}=1\) for all \(j\in\mathbb{N}\), then it follows from Lemma 3.3 that for all \(\epsilon>0\) there exists \(e\in\mathcal{P}(M)\) such that \[\tau(e^{\perp})\leq\Big(\frac{\left\|x\right\|_{p}}{\epsilon}\Big)^{p}\ \text{ and }\ \sup_{n\in\mathbb{N}}\left\|eA_{n}(\{1\},x)e\right\|_{\infty}\leq 2\epsilon.\tag{3.1}\] Now consider \(\{b_{j}\}_{j\in\mathbb{N}}\) to be a bounded sequence in \(\mathcal{Z}(M)\) with \(\left\|b_{j}\right\|_{\infty}\leq C\) for all \(j\in\mathbb{N}\). Then we have \(0\leq\text{Re}(b_{j})+C\leq 2C\) and similarly \(0\leq\text{Im}(b_{j})+C\leq 2C\) for all \(j\in\mathbb{N}\). Therefore, by Proposition 2.1, we must have for all \(j\in\mathbb{N}\) \[0\leq(\text{Re}(b_{j})+C)x\leq 2Cx\ \text{ and }\ 0\leq(\text{Im}(b_{j})+C)x\leq 2Cx.\] Also, for all \(j\in\mathbb{N}\), we have \[T^{j}(b_{j}x)=T^{j}((\text{Re}(b_{j})+C)x)+iT^{j}((\text{Im}(b_{j})+C)x)-(1+i)CT^{j}(x).\] Then Eq. (3.1) implies that for all \(\epsilon>0\) there exists \(e\in\mathcal{P}(M)\) such that \[\tau(e^{\perp})\leq\Big(\frac{\left\|x\right\|_{p}}{\epsilon}\Big)^{p}\ \text{ and }\ \sup_{n\in\mathbb{N}}\left\|eA_{n}(\{b_{j}\},x)e\right\|_{\infty}\leq 6C\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{1\},x)e\right\|_{\infty}\leq 12C\epsilon.\tag{3.2}\] For \(x\in L^{p}\), write \(x=(x_{1}-x_{2})+i(x_{3}-x_{4})\), where \(x_{l}\in L^{p}_{+}\) and \(\left\|x_{l}\right\|_{p}\leq\left\|x\right\|_{p}\) for all \(l\in\{1,\ldots,4\}\). Therefore, it follows from Eq. (3.2) that there exist projections \(e_{l}\in M\) such that \[\tau(e_{l}^{\perp})\leq\Big(\frac{\left\|x\right\|_{p}}{\epsilon}\Big)^{p}\ \text{ and }\ \sup_{n\in\mathbb{N}}\left\|e_{l}A_{n}(\{b_{j}\},x_{l})e_{l}\right\|_{\infty}\leq 12C\epsilon\ \text{ for all }l\in\{1,\ldots,4\}.\] Now consider \(e=\wedge_{l=1}^{4}e_{l}\) to obtain the required result. Before we move on to our next theorem, we need to fix some notation. From here onwards, \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) will always denote a strictly increasing sequence of natural numbers. For any sequence \(\{b_{j}\}_{j\in\mathbb{N}}\subset M\), \(n\in\mathbb{N}\) and \(x\in L^{1}+M\), recall the definitions of \(A_{n}(\{b_{j}\},x)\) and \(A_{n}^{\mathbf{k}}(\{b_{j}\},x)\) from Definition 3.1. **Theorem 3.5**.: _Let \(\{b_{j}\}_{j\in\mathbb{N}}\) be a bounded sequence in \(\mathcal{Z}(M)\). If the strictly increasing sequence \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) of natural numbers has lower density \(d>0\), then the sequences \(\{A_{n}(\{b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) and \(\{A_{n}^{\mathbf{k}}(\{b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) are b.u.e.m. at zero on \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\)._ Proof.: It is enough to show that the sequences \(\{A_{n}(\{b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) and \(\{A_{n}^{\mathbf{k}}(\{b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) are b.u.e.m. at zero on \((L^{\Phi}_{+},\left\|\cdot\right\|_{\Phi})\). Now fix \(\epsilon,\delta>0\). Then by Lemma 2.6, there exists \(t>0\) satisfying the condition \[t\cdot\Phi(\lambda)\geq\lambda\ \text{ whenever }\lambda\geq\frac{\delta}{2C}.\] Choose \(0<\gamma<\min\{1,\frac{\delta\epsilon}{4\times 96Ct}\}\).
Let \(x\in L^{\Phi}_{+}\) with \(\left\|x\right\|_{\Phi}<\gamma\) and let \(x=\int_{0}^{\infty}\lambda de_{\lambda}\) be its spectral decomposition. Then we can write \[x=\int_{0}^{\frac{\delta}{2C}}\lambda de_{\lambda}+\int_{\frac{\delta}{2C}}^{\infty}\lambda de_{\lambda}\leq x_{\delta}+t\int_{\frac{\delta}{2C}}^{\infty}\Phi(\lambda)de_{\lambda}\leq x_{\delta}+t\Phi(x),\] where \(x_{\delta}=\int_{0}^{\frac{\delta}{2C}}\lambda de_{\lambda}\) and \(\Phi(x)=\int_{0}^{\infty}\Phi(\lambda)de_{\lambda}\). Observe that \(\left\|x_{\delta}\right\|\leq\frac{\delta}{2C}\), and since \(T\) is a positive Dunford-Schwartz operator we must have \[\sup_{n\in\mathbb{N}}\left\|A_{n}(\{b_{j}\},x_{\delta})\right\|\leq\frac{C\delta}{2C}=\frac{\delta}{2},\] where \(C=\sup_{j\in\mathbb{N}}\left\|b_{j}\right\|_{\infty}\). Also, since \(\left\|x\right\|_{\Phi}<\gamma<1\), by Proposition 2.7 we have \(\left\|\Phi(x)\right\|_{1}\leq\left\|x\right\|_{\Phi}\). Furthermore, since \(\Phi(x)\in L^{1}_{+}\), by Theorem 3.4 we find \(e\in\mathcal{P}(M)\) satisfying \[\tau(e^{\perp})<\frac{4\times 96Ct\left\|\Phi(x)\right\|_{1}}{\delta}\leq\frac{4\times 96Ct\left\|x\right\|_{\Phi}}{\delta}<\epsilon\] and \[\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{b_{j}\},\Phi(x))e\right\|<\frac{48C\delta}{96Ct}=\frac{\delta}{2t}.\] Therefore, \[\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{b_{j}\},x)e\right\|\leq\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{b_{j}\},x_{\delta})e\right\|+t\cdot\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{b_{j}\},\Phi(x))e\right\|<\frac{\delta}{2}+t\cdot\frac{\delta}{2t}=\delta.\] Hence, the sequence \(\{A_{n}(\{b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) is b.u.e.m. at zero on \((L^{\Phi}_{+},\left\|\cdot\right\|_{\Phi})\). To show that the sequence \(\{A_{n}^{\mathbf{k}}(\{b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) is b.u.e.m. at zero on \((L^{\Phi}_{+},\left\|\cdot\right\|_{\Phi})\), we first consider the sequence \(\{c_{j}\}_{j\in\mathbb{N}}\), where \(c_{j}:=\chi_{\mathbf{k}}(j)\) for all \(j\in\mathbb{N}\). Observe that for all \(n\in\mathbb{N}\), \[A_{n}^{\mathbf{k}}(\{b_{j}\},x)=\frac{k_{n-1}+1}{n}A_{k_{n-1}+1}(\{c_{j}b_{j}\},x).\tag{3.3}\] By the first part of the proof, the sequence \(\{A_{n}(\{c_{j}b_{j}\},\cdot)\}_{n\in\mathbb{N}}\) is b.u.e.m. at zero on \((L^{\Phi}_{+},\left\|\cdot\right\|_{\Phi})\). Let \(K=\sup_{n\in\mathbb{N}}\frac{k_{n}}{n}\). It follows from Remark 2.16 that \(0<K<\infty\). Let \(\epsilon,\delta>0\), and let \(\gamma>0\) be such that for all \(x\in L^{\Phi}_{+}\) with \(\left\|x\right\|_{\Phi}<\gamma\) there exists \(e\in\mathcal{P}(M)\) such that \[\tau(e^{\perp})<\epsilon\ \ \text{and}\ \ \sup_{n\in\mathbb{N}}\left\|eA_{n}(\{c_{j}b_{j}\},x)e\right\|_{\infty}<\frac{\delta}{K}.\] Consequently, \[\sup_{n\in\mathbb{N}}\left\|eA^{\mathbf{k}}_{n}(\{b_{j}\},x)e\right\|_{\infty}=\sup_{n\in\mathbb{N}}\frac{k_{n-1}+1}{n}\left\|eA_{k_{n-1}+1}(\{c_{j}b_{j}\},x)e\right\|_{\infty}\leq K\sup_{n\in\mathbb{N}}\left\|eA_{n}(\{c_{j}b_{j}\},x)e\right\|_{\infty}<K\cdot\frac{\delta}{K}=\delta.\] This completes the proof. **Corollary 3.6**.: _Let \(\{\beta_{j}\}_{j\in\mathbb{N}}\subset l^{\infty}(\mathbb{C})\). If the strictly increasing sequence \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) of natural numbers has lower density \(d>0\), then the sequences \(\{A^{\beta}_{n}\}_{n\in\mathbb{N}}\) and \(\{A^{\beta,\mathbf{k}}_{n}\}_{n\in\mathbb{N}}\) are b.u.e.m. at zero on \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\)._ **Remark 3.7**.: Let \(\{\beta_{j}\}_{j\in\mathbb{N}}\subset l^{\infty}(\mathbb{C})\).
Note that it follows from [11, Proposition 3.1] that the sequences \(\{A^{\beta}_{n}\}_{n\in\mathbb{N}}\) and \(\{A^{\beta,\mathbf{k}}_{n}\}_{n\in\mathbb{N}}\) are b.u.e.m. at zero on \(L^{p}\) (\(1\leq p<\infty\)), where the sequence \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) is of lower density \(d>0\). Therefore, Corollary 3.6 substantially improves Proposition 3.1 of [11]. As a consequence we prove the following proposition, which is an important ingredient in proving our main result. **Proposition 3.8**.: _Let \(\{b_{j}\}_{j\in\mathbb{N}}\) be a bounded sequence in \(\mathcal{Z}(M)\). If the strictly increasing sequence \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) of natural numbers has lower density \(d>0\), then the sets_ \[\mathcal{S}^{\{b_{j}\}}:=\left\{x\in L^{\Phi}:\left\{A_{n}(\{b_{j}\},x)\right\}\text{ converges }b.a.u\right\}\ \text{ and }\ \mathcal{S}^{\{b_{j}\},\mathbf{k}}:=\left\{x\in L^{\Phi}:\left\{A^{\mathbf{k}}_{n}(\{b_{j}\},x)\right\}\text{ converges }b.a.u\right\}\] _are closed in \(L^{\Phi}\)._ Proof.: Since \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\) is a Banach space and \(\left\{A_{n}(\{b_{j}\},\cdot)\right\}\) and \(\left\{A^{\mathbf{k}}_{n}(\{b_{j}\},\cdot)\right\}\) are sequences of additive maps, the result follows immediately from Theorem 3.5 and Theorem 2.14. **Remark 3.9**.: Let \(\beta:=\{\beta_{j}\}_{j\in\mathbb{N}}\subset l^{\infty}(\mathbb{C})\) and \(\mathbf{k}:=\{k_{j}\}_{j\in\mathbb{N}}\) be as stated in Proposition 3.8. Then it is evident from Proposition 3.8 that the sets \[\mathcal{S}^{\beta}:=\left\{x\in L^{\Phi}:\left\{A^{\beta}_{n}(x)\right\}\text{ converges }b.a.u\right\}\ \text{ and }\ \mathcal{S}^{\beta,\mathbf{k}}:=\left\{x\in L^{\Phi}:\left\{A^{\beta,\mathbf{k}}_{n}(x)\right\}\text{ converges }b.a.u\right\}\] are closed in \(L^{\Phi}\). In what follows, \(U(M)\) will always denote the group of unitary operators in \(M\), and \(\sigma(x)\) will denote the spectrum of an operator \(x\in M\). Let us define \[U_{f}:=\{u\in U(M):\sigma(u)\text{ is finite}\}.\] **Definition 3.10**.: Let \(U_{0}\subseteq U(M)\). A function \(\psi:\mathbb{N}\to M\) is called a trigonometric polynomial over \(U_{0}\) if for some \(m\in\mathbb{N}\) there exist \(\{z_{j}\}_{1}^{m}\subset\mathbb{C}\) and \(\{u_{j}\}_{1}^{m}\subset U_{0}\) such that \[\psi(k)=\sum_{j=1}^{m}z_{j}u_{j}^{k},\quad k\in\mathbb{N}.\] For a trigonometric polynomial \(\psi\) over \(U_{0}\) as defined above, it is clear that \(\left\|\psi\right\|\leq\sum_{j=1}^{m}|z_{j}|\). **Definition 3.11**.: Let \(U_{0}\subseteq U(M)\). A sequence \(\{b_{j}\}\subset M\) is called \(U_{0}\)-Besicovitch if for all \(\epsilon>0\) there exists a trigonometric polynomial \(\psi\) over \(U_{0}\) such that \[\limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}\left\|b_{j}-\psi(j)\right\|_{\infty}\leq\epsilon.\] A \(U_{0}\)-Besicovitch sequence \(\{b_{j}\}\) is called bounded if \(\sup_{j\in\mathbb{N}}\left\|b_{j}\right\|_{\infty}<\infty\). Now we recall the following result from [2] regarding the convergence of sequences of ergodic averages, and immediately after that we extend it to the case of ergodic averages along a sequence of density \(1\). **Theorem 3.12**.: _Let \(\{b_{j}\}\) and \(\{d_{j}\}\) be \(U_{f}\)-Besicovitch sequences such that at least one of them is bounded. Then the averages \(A_{n}(\{b_{j}\},\{d_{j}\},x)\) converge a.u. for all \(x\in L^{1}\cap M\)._ Proof.: For the proof we refer to [2, Theorem 5.1].
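The setting of Theorem 3.12 can be simulated in a finite-dimensional toy model. The sketch below is our own illustration (assumptions: \(M=M_{4}(\mathbb{C})\), \(\tau=\mathrm{Tr}\), \(T(x)=uxu^{*}\) for a unitary \(u\), which contracts both \(\|\cdot\|_{1}\) and \(\|\cdot\|_{\infty}\) and is positive, hence \(T\in DS^{+}\); the scalar weights \(\beta_{j}=e^{2\pi\mathrm{i}\theta j}\) form a trigonometric polynomial and are therefore Besicovitch). In finite dimensions a.u. convergence reduces to norm convergence, which is visible numerically:

```python
import numpy as np

# Toy model: T(x) = u x u*, weights beta_j = exp(2*pi*i*theta*j).
rng = np.random.default_rng(1)
g = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
u, _ = np.linalg.qr(g)                          # random unitary
x = rng.standard_normal((4, 4)); x = x + x.conj().T    # x in L^1 and M
theta = np.sqrt(2) - 1                          # irrational frequency

def avg(n):                                     # (1/n) sum_j beta_j T^j(x)
    a, y = np.zeros((4, 4), dtype=complex), x.astype(complex)
    for j in range(n):
        a += np.exp(2j * np.pi * theta * j) * y
        y = u @ y @ u.conj().T                  # y = T^{j+1}(x)
    return a / n

for n in (100, 1000, 10000):
    print(n, np.linalg.norm(avg(2 * n) - avg(n), 2))   # differences shrink
```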
**Theorem 3.13**.: _Let \(\{b_{j}\}\) and \(\{d_{j}\}\) be \(U_{f}\)-Besicovitch sequences, at least one of which is bounded, and let \(\mathbf{k}=\{k_{j}\}\) be a strictly increasing sequence of natural numbers of density \(1\). Then the sequence of averages \(A_{n}^{\mathbf{k}}(\{b_{j}\},\{d_{j}\},x)\) converges a.u. for all \(x\in L^{1}\cap M\)._ Proof.: Without loss of generality we assume that \(\{d_{j}\}\) is bounded and define \(C:=\sup_{j}\left\|d_{j}\right\|<\infty\). Fix \(\epsilon>0\) and let \(\psi_{1}(\cdot)=\sum_{i=1}^{m}z_{i}u_{i}^{(\cdot)}\) and \(\psi_{2}(\cdot)=\sum_{i=1}^{l}w_{i}v_{i}^{(\cdot)}\) be such that \(\{z_{i}\},\{w_{i}\}\subset\mathbb{C}\), \(\{u_{i}\},\{v_{i}\}\subset U_{f}\) and \[\limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}\left\|b_{j}-\psi_{1}(j)\right\|_{\infty}\leq\epsilon,\quad\limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}\left\|d_{j}-\psi_{2}(j)\right\|_{\infty}\leq\epsilon,\tag{3.4}\] and set \(C_{1}:=\sup_{k\in\mathbb{N}}\left\|\psi_{1}(k)\right\|_{\infty}\leq\sum_{i=1}^{m}|z_{i}|<\infty\). Let \(x\in L^{1}\cap M\). Note that by Theorem 3.12 the averages \(A_{n}(\{b_{j}\},\{d_{j}\},x)\) converge a.u. In particular, the averages \(A_{n}(\{\psi_{1}(j)\},\{\psi_{2}(j)\},x)\) converge a.u., and hence so does the subsequence \(A_{k_{n}}(\{\psi_{1}(j)\},\{\psi_{2}(j)\},x)\). Define \[M_{n}(\{\psi_{1}(j)\},\{d_{j}\},x):=\frac{1}{k_{n}}\sum_{j=0}^{n-1}T^{k_{j}}(\psi_{1}(k_{j})xd_{k_{j}}),\ n\in\mathbb{N}.\] Now, since \(\{k_{0},\ldots,k_{n-1}\}=\mathbf{k}\cap[0,k_{n})\), we have \[\begin{split}&\left\|A_{k_{n}}(\{\psi_{1}(j)\},\{\psi_{2}(j)\},x)-M_{n}(\{\psi_{1}(j)\},\{d_{j}\},x)\right\|\\ &\quad=\Big\|\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}T^{j}(\psi_{1}(j)x\psi_{2}(j))-\frac{1}{k_{n}}\sum_{j=0}^{n-1}T^{k_{j}}(\psi_{1}(k_{j})xd_{k_{j}})\Big\|\\ &\quad\leq\Big\|\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}T^{j}(\psi_{1}(j)x\psi_{2}(j))-\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}T^{j}(\psi_{1}(j)xd_{j})\Big\|+\Big\|\frac{1}{k_{n}}\sum_{0\leq j<k_{n},\ j\notin\mathbf{k}}T^{j}(\psi_{1}(j)xd_{j})\Big\|.\end{split}\] The first norm on the right-hand side is bounded by \(C_{1}\left\|x\right\|\cdot\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}\left\|d_{j}-\psi_{2}(j)\right\|_{\infty}\), hence by \(2\epsilon\left\|x\right\|C_{1}\) for all large \(n\) by Eq. (3.4). The second norm involves only the \(k_{n}-n\) indices \(0\leq j<k_{n}\) with \(j\notin\mathbf{k}\), and is therefore bounded by \(\frac{k_{n}-n}{k_{n}}C_{1}C\left\|x\right\|\), which tends to \(0\) as \(n\to\infty\) because \(\mathbf{k}\) has density \(1\). In the same way, the first estimate in Eq. (3.4) gives, for all large \(n\), \[\Big\|M_{n}(\{\psi_{1}(j)\},\{d_{j}\},x)-\frac{n}{k_{n}}A_{n}^{\mathbf{k}}(\{b_{j}\},\{d_{j}\},x)\Big\|\leq\frac{C\left\|x\right\|}{k_{n}}\sum_{j=0}^{k_{n}-1}\left\|b_{j}-\psi_{1}(j)\right\|_{\infty}\leq 2\epsilon\left\|x\right\|C,\] and \(\frac{n}{k_{n}}\to 1\) by Remark 2.16. Combining these estimates, \(\left\|A_{n}^{\mathbf{k}}(\{b_{j}\},\{d_{j}\},x)-A_{k_{n}}(\{\psi_{1}(j)\},\{\psi_{2}(j)\},x)\right\|\) can be made arbitrarily small for all sufficiently large \(n\). Since the sequence \(A_{k_{n}}(\{\psi_{1}(j)\},\{\psi_{2}(j)\},x)\) converges a.u., Lemma 2.3 implies that \(A_{n}^{\mathbf{k}}(\{b_{j}\},\{d_{j}\},x)\) converges a.u. **Theorem 3.15**.: _Assume that the Orlicz function \(\Phi\) satisfies the \(\Delta_{2}\) condition. Let \(\mathbf{k}:=\{k_{j}\}\) be a sequence of density \(1\) and let \(\{b_{j}\}_{j\in\mathbb{N}}\) be a bounded \(U_{f}\)-Besicovitch sequence in \(\mathcal{Z}(M)\). Then for every \(x\in L^{\Phi}\) the sequence \(\{A_{n}^{\mathbf{k}}(\{b_{j}\},x)\}\) converges b.a.u. to some \(\hat{x}\in L^{\Phi}\)._ Proof.: Define \(\mathcal{S}^{\{b_{j}\},\mathbf{k}}:=\left\{x\in L^{\Phi}:\left\{A_{n}^{\mathbf{k}}(\{b_{j}\},x)\right\}\text{ converges }b.a.u\right\}\). Note that, by Proposition 3.8, the set \(\mathcal{S}^{\{b_{j}\},\mathbf{k}}\) is closed in \(L^{\Phi}\). By Theorem 3.13 (applied with \(d_{j}=1\)) we have \(L^{1}\cap M\subseteq\mathcal{S}^{\{b_{j}\},\mathbf{k}}\), and since \(L^{1}\cap M\) is dense in \(L^{\Phi}\) by Proposition 2.11, we conclude \(\mathcal{S}^{\{b_{j}\},\mathbf{k}}=L^{\Phi}\). Let \(x\in L^{\Phi}\). Then by Proposition 2.7, \(\{A_{n}^{\mathbf{k}}(\{b_{j}\},x)\}_{n\in\mathbb{N}}\subset L^{\Phi}\). Also there exists \(\hat{x}\in L^{0}\) such that \(A_{n}^{\mathbf{k}}(\{b_{j}\},x)\) converges b.a.u. to \(\hat{x}\), hence in measure. Now since \(\left\|T\right\|\leq 1\), we observe that for all \(n\in\mathbb{N}\), \[\left\|A_{n}^{\mathbf{k}}(\{b_{j}\},x)\right\|_{\Phi}\leq\frac{1}{n}\sum_{j=0}^{n-1}\left\|b_{k_{j}}\right\|\left\|x\right\|_{\Phi}\leq C\left\|x\right\|_{\Phi},\] where \(C=\sup_{j\in\mathbb{N}}\left\|b_{j}\right\|<\infty\).
Therefore, for all \(n\in\mathbb{N}\), \(A_{n}^{\mathbf{k}}(\{b_{j}\},x)\) belongs to the closed ball of \((L^{\Phi},\left\|\cdot\right\|_{\Phi})\) of radius \(C\left\|x\right\|_{\Phi}\). Consequently, by Remark 2.9, \(\hat{x}\in L^{\Phi}\). **Remark 3.16**.: 1. Following Definitions 3.10 and 3.11, one can also define scalar valued Besicovitch sequences. In particular, a scalar valued trigonometric polynomial is a function \(P:\mathbb{N}\to\mathbb{C}\) satisfying \[P(k)=\sum_{j=1}^{s}r_{j}\lambda_{j}^{k},\ k\in\mathbb{N},\] for some \(\{r_{j}\}_{j=1}^{s}\subset\mathbb{C}\) and \(\{\lambda_{j}\}_{j=1}^{s}\subset\mathbb{C}^{1}\), where \(\mathbb{C}^{1}:=\{z\in\mathbb{C}:|z|=1\}\). A sequence \(\{\beta_{j}\}_{j=1}^{\infty}\) of complex numbers is called a Besicovitch sequence if for all \(\epsilon>0\) there exists a trigonometric polynomial \(P\) such that \[\limsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}|\beta_{j}-P(j)|<\epsilon.\] The sequence \(\{\beta_{j}\}_{j=1}^{\infty}\) is bounded if \(\sup_{j\in\mathbb{N}}|\beta_{j}|<\infty\). 2. Very recently, in [1, Corollary 3.2], the author proved the conclusion of Theorem 3.15 when \(x\in L^{p}\) (\(1\leq p<\infty\)) under the hypothesis that the Besicovitch weights are scalar valued. Hence our theorem generalises Corollary 3.2 of [1].
2307.07307
Verification of Quantum Systems using Barrier Certificates
Various techniques have been used in recent years for verifying quantum computers, that is, for determining whether a quantum computer/system satisfies a given formal specification of correctness. Barrier certificates are a recent novel concept developed for verifying properties of dynamical systems. In this article, we investigate the usage of barrier certificates as a means for verifying behaviours of quantum systems. To do this, we extend the notion of barrier certificates from real to complex variables. We then develop a computational technique based on linear programming to automatically generate polynomial barrier certificates with complex variables taking real values. Finally, we apply our technique to several simple quantum systems to demonstrate their usage.
Marco Lewis, Paolo Zuliani, Sadegh Soudjani
2023-07-14T12:32:46Z
http://arxiv.org/abs/2307.07307v1
# Verification of Quantum Systems using Barrier Certificates ###### Abstract Various techniques have been used in recent years for verifying quantum computers, that is, for determining whether a quantum computer/system satisfies a given formal specification of correctness. Barrier certificates are a recent novel concept developed for verifying properties of dynamical systems. In this article, we investigate the usage of barrier certificates as a means for verifying behaviours of quantum systems. To do this, we extend the notion of barrier certificates from real to complex variables. We then develop a computational technique based on linear programming to automatically generate polynomial barrier certificates with complex variables taking real values. Finally, we apply our technique to several simple quantum systems to demonstrate their usage. Keywords: barrier certificates, dynamical systems, quantum systems ## 1 Introduction Quantum computers are powerful devices that allow certain problems to be solved faster than classical computers. The research area focusing on the formal verification of quantum devices and software has witnessed the extension of verification techniques from classical systems [6, 19] to the quantum realm. Classical techniques that have been used include theorem provers [11, 15], Binary Decision Diagrams [4, 26], SMT solvers [5, 22] and other tools [12, 23]. Quantum systems evolve according to the Schrödinger equation from some initial state. However, the initial state may not be known completely in advance. One can prepare a quantum system by making observations on the quantum objects, leaving the quantum system in a basis state, but this omits the global phase, which is not necessarily known after measurement. Further, the system could be disturbed through some external influence before it begins evolving. This can slightly change the quantum state from the basis state to a state in superposition, or possibly an entangled state. By taking these uncertain factors into account, a set of possible initial states from which the system evolves can be constructed. From this initial set, we can check whether the system evolves according to some specified behaviour, such as reaching or avoiding a particular set of states. As an example, consider a two-qubit system that evolves according to a Hamiltonian \(\hat{H}\) implementing the controlled-NOT operation. Through measurement, and factoring in noise, we know the system starts close to \(|10\rangle\). The controlled-NOT operation keeps the first qubit value the same, and so we want to verify that, as the system evolves via \(\hat{H}\), the quantum state does not evolve close to \(|00\rangle\) or \(|01\rangle\). The main purpose of this work is to study the application of a technique called _barrier certificates_, used for verifying properties of classical dynamical systems, to check properties of quantum systems similar to the one mentioned above. The concept of barrier certificates has been developed and used in Control Theory to study the safety of dynamical systems starting from a given set of initial states on real domains [18]. This technique can ensure that, given a set of initial states from which the system can start and a set of unsafe states, the system will not enter the unsafe set. This is achieved by finding a _barrier_ separating the unsafe set from the initial set. Barrier certificates can be defined for both deterministic and stochastic systems in discrete and continuous time [2, 14].
The concept has also been used for verification and synthesis against complicated logical requirements beyond safety and reachability [13]. The conditions under which a function is a barrier certificate can be automatically and efficiently checked using SMT solvers [3]. Such functions can also be found automatically using learning techniques, even for non-trivial dynamical systems [17]. Dynamical systems are naturally defined on real domains (\(\mathbb{R}^{n}\)). To handle dynamical systems on complex domains (\(\mathbb{C}^{n}\)), one would need to decompose the system into its real and imaginary parts and use the techniques available for real systems. This has two disadvantages: first, it doubles the number of variables used in the analysis; second, the analysis may be easier to perform directly with complex variables than with their real components. As quantum systems use complex values, it is desirable to have a technique that performs the reachability analysis using complex variables. In this paper, we explore the problem of safety verification in quantum systems by extending barrier certificates from real to complex domains. Our extension is inspired by a technique developed by Fang and Sun [9], who studied the stability of complex dynamical systems using Lyapunov functions (where the goal is to check if a system eventually stops moving). Further, we provide an algorithm to generate barrier certificates for quantum systems and use it to generate barriers for several examples. ## 2 Background ### 2.1 Safety Analysis We begin by introducing the problem of safety for dynamical systems with real state variables \(x\in\mathbb{R}^{n}\). More details can be found in [18]. A continuous dynamical system is described by \[\dot{x}=\frac{\mathrm{d}x}{\mathrm{d}t}=f(x),\quad f:\mathbb{R}^{n}\to\mathbb{R}^{n},\] where the evolution of the system is restricted to \(X\subseteq\mathbb{R}^{n}\) and \(f\) is usually Lipschitz continuous to ensure existence and uniqueness of the solution of the differential equation. The set \(X_{0}\subseteq X\) is the set of initial states and the unsafe set \(X_{u}\subseteq X\) is the set of values that the dynamics \(x(t)\) should avoid. These sets lead to the idea of safety for real continuous dynamical systems: Definition 1 (Safety): A system, \(\dot{x}=f(x)\), evolving over \(X\subseteq\mathbb{R}^{n}\) is considered safe if the system cannot reach the unsafe set, \(X_{u}\subseteq X\), from the initial set, \(X_{0}\subseteq X\). That is, for all \(t\in\mathbb{R}_{+}\) and \(x(0)\in X_{0}\), we have \(x(t)\notin X_{u}\). The safety problem is to determine if a given system is safe or not. Numerous techniques have been developed to solve this problem [10]. Barrier certificates are discussed in Section 2.2. Here, we describe two other common techniques. **Abstract Interpretation.** One way to perform reachability analysis of a system is to give an abstraction [7, 8] of the system's evolution. Given an initial abstraction that over-approximates the evolution of the system, the abstraction is refined based on false bugs. False bugs are generated when the current abstraction enters the unsafe space but the actual system does not. This method has been investigated for quantum programs in [25], where the authors can verify programs using up to 300 qubits. **Backward and Forward Reachability.** A second approach is to start from the unsafe region and reverse the evolution of the system from there.
A system is considered unsafe if the reversed evolution enters the initial region. This is backward reachability. Conversely, forward reachability starts from the initial region, and the system is considered safe if the reachable region does not enter the unsafe region. Both backward and forward reachability are discussed in [16, 20, 21]. ### 2.2 Barrier Certificates Barrier certificates [18] are another technique used for safety analysis. This technique attempts to divide the reachable region from the unsafe region by putting constraints on the initial and unsafe sets, and on how the system evolves. The benefit of barrier certificates over other techniques is that one does not need to compute the system's dynamics at all to guarantee safety, unlike in abstract interpretation and backward (or forward) reachability. A barrier certificate is a differentiable function, \(B:\mathbb{R}^{n}\to\mathbb{R}\), that determines safety through the properties that \(B\) has. Generally, a barrier certificate needs to meet the following conditions: \[B(x)\leq 0,\forall x\in X_{0}\tag{1}\] \[B(x)>0,\forall x\in X_{u}\tag{2}\] \[x(0)\in X_{0}\implies B(x(t))\leq 0,\forall t\in\mathbb{R}_{+}.\tag{3}\] Essentially, these conditions split the evolution space into an (over-approximated) reachable region and an unsafe region, encapsulated by Conditions (1) and (2) respectively. These regions are separated by a "barrier", which is the contour along \(B(x)=0\). Condition (3) prevents the system from evolving into the unreachable region and needs to be satisfied for the system to be safe. However, Condition (3) can be replaced with stronger conditions that are easier to check. As an example, we give the definition of one simple type of barrier certificate. Definition 2 (Convex Barrier Certificate): For a system \(\dot{x}=f(x)\), \(X\subseteq\mathbb{R}^{n}\), \(X_{0}\subseteq X\) and \(X_{u}\subseteq X\), a function \(B:\mathbb{R}^{n}\rightarrow\mathbb{R}\) that obeys the following conditions: \[B(x)\leq 0,\forall x\in X_{0}\] \[B(x)>0,\forall x\in X_{u}\] \[\frac{\mathrm{d}B}{\mathrm{d}x}f(x)\leq 0,\forall x\in X,\tag{4}\] is a convex barrier certificate. Note that in Condition (4): \(\frac{\mathrm{d}B}{\mathrm{d}x}\frac{\mathrm{d}x}{\mathrm{d}t}=\frac{\mathrm{d}B}{\mathrm{d}t}\). This condition can be viewed as a constraint on the evolution of the barrier as the system evolves over time. Now, if a system has a barrier certificate, then the system is safe. We state the safety theorem for convex barrier certificates. Theorem 2.1: _If a system, \(\dot{x}=f(x)\), has a convex barrier certificate, \(B:\mathbb{R}^{n}\rightarrow\mathbb{R}\), then the system is safe [18]._ Proofs of Theorem 2.1 are standard and can be found in, _e.g.,_ [18]. The intuition behind the proof is that since the system starts in the non-positive region and the value of \(B\) can never increase along trajectories, the system can never enter the positive region. Since the unsafe set is within the positive region of the barrier, this set can therefore never be reached. Thus, the system cannot evolve into the unsafe set, and so the system is safe. Figure 1 shows an example of a dynamical system with a barrier based on the convex condition. Remark 1: The term "convex" is used for these barriers as the set of barrier certificates satisfying the conditions in Definition 2 is convex. In other words, if \(B_{1}\) and \(B_{2}\) are barrier certificates for a system, the function \(\lambda B_{1}+(1-\lambda)B_{2}\) is also a barrier certificate for any \(\lambda\in[0,1]\). See [18] or the proof of Proposition 1 in Appendix B for (similar) details. There are a variety of different barrier certificates to choose from, with different benefits; _e.g.,_ the convex condition given above is simple but may not work for complicated or nonlinear systems. In comparison, the non-convex condition given in [18] changes Condition (4) to \(\frac{\mathrm{d}B}{\mathrm{d}x}f(x)\leq 0\) for all \(x\in X\) with \(B(x)=0\) (instead of for all \(x\in X\)). This is a weaker condition, allowing more functions to be suitable barrier certificates. However, a different computational method is required because the set of such barrier certificates is non-convex. Each type of barrier certificate requires its own proof that, if the system has a satisfying barrier certificate, then the system is safe. It should be noted that Theorem 2.1 only gives a one-way implication: a system does not necessarily have a barrier certificate even if it is safe. In [24], the authors showed that the converse holds for systems defined on a compact manifold when using convex barrier certificates. ## 3 Complex-valued Barrier Certificates Now we wish to extend the use of barrier certificates to a complex space (\(\mathbb{C}^{n}\)). We use \(\mathrm{i}=\sqrt{-1}\) as the imaginary unit in the rest of the paper. The complex dynamical systems considered are of the form \[\dot{z}=\frac{\mathrm{d}z}{\mathrm{d}t}=f(z),\quad f:\mathbb{C}^{n}\to\mathbb{C}^{n},\] which evolve in \(Z\subseteq\mathbb{C}^{n}\). The initial and unsafe sets are defined in the usual way, except that now we have \(Z_{0}\subseteq Z\) and \(Z_{u}\subseteq Z\), respectively. The notion of safety for this system is similar to Definition 1. Definition 3 (Safety): A complex system, \(\dot{z}=f(z)\), with \(Z\subseteq\mathbb{C}^{n}\), \(Z_{0}\subseteq Z\) and \(Z_{u}\subseteq Z\), is considered safe if for any \(z(0)\in Z_{0}\) and all \(t\in\mathbb{R}^{+}\), \(z(t)\notin Z_{u}\). Figure 1: Example adapted from Section V-A in [18]. The initial region is the green circle centred at \((1.5,0)\), and the system evolves according to the dynamical system given by the differential equations \(\dot{x}=[x_{2},-x_{1}+\frac{1}{3}x_{1}^{3}-x_{2}]\). The unsafe region is the red circle centred at \((-1,-1)\) and is separated from the initial region by a barrier, the dashed purple line defined by \(B(x)=0\) where \(B(x)=-13+7x_{1}^{2}+16x_{2}^{2}-6x_{1}^{2}x_{2}^{2}-\frac{7}{6}x_{1}^{4}-3x_{1}x_{2}^{3}+12x_{1}x_{2}-\frac{12}{3}x_{1}^{3}x_{2}\).
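The conditions of Definition 2 can at least be falsified numerically by sampling. The Python sketch below is our own sanity check of the Figure 1 data, not the verification method of this paper, and sampling is no substitute for a proof. Two assumptions are ours: the circle radii are not stated in the caption and are taken here to be \(0.5\), and \(B\) is transcribed from the caption, so a failed check would point to these assumptions rather than to unsafety.

```python
import numpy as np
import sympy as sp

# Sampling-based check of Conditions (1), (2) and (4) for the Figure 1 system.
x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([x2, -x1 + sp.Rational(1, 3) * x1**3 - x2])
B = (-13 + 7*x1**2 + 16*x2**2 - 6*x1**2*x2**2 - sp.Rational(7, 6)*x1**4
     - 3*x1*x2**3 + 12*x1*x2 - sp.Rational(12, 3)*x1**3*x2)
lieB = (sp.Matrix([B]).jacobian([x1, x2]) * f)[0]      # dB/dx · f(x)
Bn, Ln = sp.lambdify((x1, x2), B), sp.lambdify((x1, x2), lieB)

rng = np.random.default_rng(0)
def disk(cx, cy, r, n=5000):                           # uniform samples in a disk
    t, s = rng.uniform(0, 2*np.pi, n), r * np.sqrt(rng.uniform(0, 1, n))
    return cx + s*np.cos(t), cy + s*np.sin(t)

print('(1) B <= 0 on X0:', np.all(Bn(*disk(1.5, 0.0, 0.5)) <= 0))
print('(2) B >  0 on Xu:', np.all(Bn(*disk(-1.0, -1.0, 0.5)) > 0))
gx, gy = rng.uniform(-3, 3, 50000), rng.uniform(-3, 3, 50000)
print('(4) dB/dt <= 0 on X:', np.all(Ln(gx, gy) <= 1e-9))
```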
There are a variety of different barrier certificates to choose from, with different benefits: _e.g.,_ the convex condition given above is simple but may not work for complicated or nonlinear systems. In comparison, the non-convex condition given in [18] changes Condition (4) such that \(\frac{\mathrm{d}B}{\mathrm{d}x}f(x)\leq 0\) is only required for all \(x\in X\) with \(B(x)=0\) (instead of for all \(x\in X\)). This is a weaker condition, allowing more functions to be suitable barrier certificates. However, a different computational method is required because the set of such barrier certificates is non-convex. Each type of barrier certificate requires its own proof that the existence of a satisfying certificate implies safety.

It should be noted that Theorem 1 is only a one-way implication: a system does not necessarily have a barrier certificate even if it is safe. In [24], the authors showed that the converse holds for systems defined on a compact manifold when using convex barrier certificates.

## 3 Complex-valued Barrier Certificates

Now we wish to extend the use of barrier certificates to a complex space (\(\mathbb{C}^{n}\)). We use \(\mathrm{i}=\sqrt{-1}\) as the imaginary unit in the rest of the paper. The complex dynamical systems considered are of the form \[\dot{z}=\frac{\mathrm{d}z}{\mathrm{d}t}=f(z),\quad f:\mathbb{C}^{n}\to\mathbb{C}^{n},\] which evolves in \(Z\subseteq\mathbb{C}^{n}\). The initial and unsafe sets are defined in the usual way, except that now we have \(Z_{0}\subseteq Z\) and \(Z_{u}\subseteq Z\), respectively. The notion of safety for this system is similar to Definition 1.

Definition 3 (Safety): A complex system, \(\dot{z}=f(z)\), with \(Z\subseteq\mathbb{C}^{n}\), \(Z_{0}\subseteq Z\) and \(Z_{u}\subseteq Z\), is considered safe if, for any \(z(0)\in Z_{0}\) and all \(t\in\mathbb{R}_{+}\), \(z(t)\notin Z_{u}\).

Figure 1: Example adapted from Section V-A in [18]. The initial region is the green circle centred at \((1.5,0)\) and the system evolves according to the dynamical system given by differential equations \(\dot{x}=[x_{2},-x_{1}+\frac{1}{3}x_{1}^{3}-x_{2}]\). The unsafe region is the red circle centred at \((-1,-1)\) and is separated from the initial region by a barrier, the dashed purple line defined by \(B(x)=0\) where \(B(x)=-13+7x_{1}^{2}+16x_{2}^{2}-6x_{1}^{2}x_{2}^{2}-\frac{7}{6}x_{1}^{4}-3x_{1}x_{2}^{3}+12x_{1}x_{2}-\frac{12}{3}x_{1}^{3}x_{2}\).

Whilst it is easy to extend the safety problem and the required definitions to the complex plane, extending the notion of barrier certificates requires particular attention. Conditions (1), (2) and (3) are changed respectively to \[B(z) \leq 0,\forall z\in Z_{0}; \tag{5}\] \[B(z) >0,\forall z\in Z_{u}; \tag{6}\] \[z(0)\in Z_{0}\implies B(z(t)) \leq 0,\forall t\in\mathbb{R}_{+}. \tag{7}\]

Many barrier certificates use a differential condition to achieve Condition (7), which restricts the class of functions that can be used, because differentiable complex functions must satisfy the Cauchy-Riemann equations. For our purposes, we consider a holomorphic function, \(g(z):\mathbb{C}^{n}\to\mathbb{C}\), to be a function whose partial derivatives, \(\frac{\partial g(z)}{\partial z_{j}}\), exist and satisfy the Cauchy-Riemann equations (for several variables).
That is, for \(z_{j}=x_{j}+\mathrm{i}y_{j}\) and \(g(z)=g(x,y)=u(x,y)+\mathrm{i}v(x,y)\), we require \[\frac{\partial u}{\partial x_{j}}=\frac{\partial v}{\partial y_{j}}\qquad\frac{\partial u}{\partial y_{j}}=-\frac{\partial v}{\partial x_{j}}.\]

Adapting a technique developed by Fang and Sun [9] allows us to reason about barrier certificates in the complex plane. We begin by introducing a family of complex functions that are key to our technique.

Definition 4 (Conjugate-flattening function): A function, \(b:\mathbb{C}^{n}\times\mathbb{C}^{n}\to\mathbb{C}\), is conjugate-flattening if \(\forall z\in\mathbb{C}^{n},b(z,\overline{z})\in\mathbb{R}\).

Definition 5 (Complex-valued barrier function): A function, \(B:\mathbb{C}^{n}\to\mathbb{R}\), is a complex-valued barrier function if \(B(z)=b(z,\overline{z})\) where \(b:\mathbb{C}^{n}\times\mathbb{C}^{n}\to\mathbb{C}\) is a conjugate-flattening, holomorphic function.

Suppose now that we have a system that evolves over time, \(z(t)\). To use the complex-valued barrier function, \(B(z(t))\), for barrier certificates, we require the derivative of \(B\) with respect to \(t\). Calculating this derivative reveals that \[\frac{\mathrm{d}B(z(t))}{\mathrm{d}t}=\frac{\mathrm{d}b(z(t),\overline{z(t)})}{\mathrm{d}t}=\left.\frac{\mathrm{d}b(z,u)}{\mathrm{d}z}\right|_{u=\overline{z}}\frac{\mathrm{d}z}{\mathrm{d}t}+\left.\frac{\mathrm{d}b(z,u)}{\mathrm{d}u}\right|_{u=\overline{z}}\overline{\frac{\mathrm{d}z}{\mathrm{d}t}}=\left.\frac{\mathrm{d}b(z,u)}{\mathrm{d}z}\right|_{u=\overline{z}}f(z)+\left.\frac{\mathrm{d}b(z,u)}{\mathrm{d}u}\right|_{u=\overline{z}}\overline{f(z)}, \tag{8}\] where \(\frac{\mathrm{d}b(z,u)}{\mathrm{d}z}=\left[\frac{\partial b(z,u)}{\partial z_{1}},\frac{\partial b(z,u)}{\partial z_{2}},\ldots,\frac{\partial b(z,u)}{\partial z_{n}}\right]\) is the gradient of \(b(z,u)\) with respect to \(z\), and the gradient with respect to \(u\) is defined in a similar way.

Given Equation (8), barrier certificates that include a differential condition can be extended to the complex domain quite naturally. For example, the convex barrier certificate is extended to the complex domain as follows.

Definition 6 (Complex-valued Convex Barrier Certificate): For a system \(\dot{z}=f(z)\), \(Z\subseteq\mathbb{C}^{n}\), \(Z_{0}\subseteq Z\) and \(Z_{u}\subseteq Z\), a complex-valued barrier function \(B:\mathbb{C}^{n}\to\mathbb{R}\), \(B(z)=b(z,\overline{z})\), that obeys the following conditions, \[b(z,\overline{z}) \leq 0,\forall z\in Z_{0} \tag{9}\] \[b(z,\overline{z}) >0,\forall z\in Z_{u} \tag{10}\] \[\left.\frac{\mathrm{d}b(z,u)}{\mathrm{d}z}\right|_{u=\overline{z}}f(z)+\left.\frac{\mathrm{d}b(z,u)}{\mathrm{d}u}\right|_{u=\overline{z}}\overline{f(z)} \leq 0,\forall z\in Z, \tag{11}\] is a complex-valued convex barrier certificate.

With this definition, we can ensure the safety of complex dynamical systems:

Theorem 2: _If a complex system, \(\dot{z}=f(z)\), has a complex-valued convex barrier certificate, \(B:\mathbb{C}^{n}\to\mathbb{R}\), then the system is safe._

Proposition 1: _The set of complex-valued barrier certificates satisfying the conditions of Definition 6 is convex._

The proofs of these results are given in Appendices A and B, respectively.
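As a small illustration of Equation (8), the following sympy sketch (our own toy choices: \(b(z,u)=zu\), so that \(B(z)=|z|^{2}\), under the dynamics \(\dot{z}=-\mathrm{i}z\)) treats \(u\) as an independent variable standing for \(\overline{z}\) and substitutes \(u=\overline{z}\) only after differentiating:

```python
import sympy as sp

z, u = sp.symbols('z u', complex=True)

b = z * u                 # conjugate-flattening: b(z, conj(z)) = |z|^2 is real
f = -sp.I * z             # toy dynamics zdot = -i z (our own choice)

# Eq. (8): dB/dt = (db/dz)|_{u=conj(z)} f(z) + (db/du)|_{u=conj(z)} conj(f(z))
dBdt = (sp.diff(b, z).subs(u, sp.conjugate(z)) * f
        + sp.diff(b, u).subs(u, sp.conjugate(z)) * sp.conjugate(f))
print(sp.simplify(dBdt))  # -> 0: B(z) = |z|^2 is invariant under this flow
```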
## 4 Generating Satisfiable Barrier Certificates for Quantum Systems

We now describe how to compute a complex-valued barrier function. Throughout, let \(\dot{z}=f(z)\), \(Z\subseteq\mathbb{C}^{n}\), \(Z_{0}\subseteq Z\) and \(Z_{u}\subseteq Z\) be defined as before. We introduce a general family of functions that will be used as "templates" for complex barrier certificates.

Definition 7: A \(k\)-degree polynomial function is a complex function, \(b:\mathbb{C}^{n}\to\mathbb{C}\), such that \[b(z_{1},\ldots,z_{n})=\sum_{\boldsymbol{\alpha}\in A_{n,k}}a_{\boldsymbol{\alpha}}z^{\boldsymbol{\alpha}} \tag{12}\] where \(A_{n,k}:=\{\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{n}:\sum_{j=1}^{n}\alpha_{j}\leq k\}\), \(a_{\boldsymbol{\alpha}}\in\mathbb{C}\), and \(z^{\boldsymbol{\alpha}}=\prod_{j=1}^{n}z_{j}^{\alpha_{j}}\).

The family of \(k\)-degree polynomial functions consists of the polynomials in which no individual term has degree higher than \(k\). Note that \(k\)-degree polynomial functions are holomorphic. Further, some \(k\)-degree polynomials are conjugate-flattening. For example, the 2-degree polynomial \(b(z_{1},u_{1})=z_{1}u_{1}\) is conjugate-flattening since \(z\overline{z}=\left|z\right|^{2}\), whereas the 1-degree polynomial \(b(z_{1},u_{1})=z_{1}\) is not. Thus, a subset of this family of functions is suitable to be used for barrier certificates as complex-valued barrier functions.

The partial derivative of the polynomials in Equation (12) is required for ensuring the function meets Condition (11). The partial derivative of the function is \[\frac{\partial b}{\partial z_{j}}=\sum_{\boldsymbol{\alpha}\in A_{n,k}}a_{\boldsymbol{\alpha}}\alpha_{j}z_{j}^{-1}z^{\boldsymbol{\alpha}}. \tag{13}\]

We write \[B(a,z):=b(a,z,\overline{z}):=\sum_{\begin{subarray}{c}(\boldsymbol{\alpha},\boldsymbol{\beta})\in A_{2n,k}\\ \boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\\ \boldsymbol{\beta}=(\alpha_{n+1},\ldots,\alpha_{2n})\end{subarray}}a_{\boldsymbol{\alpha},\boldsymbol{\beta}}z^{\boldsymbol{\alpha}}\overline{z}^{\boldsymbol{\beta}},\] where \(a=(a_{\boldsymbol{\alpha},\boldsymbol{\beta}})\in\mathbb{R}^{|A_{2n,k}|}\) is a vector of real coefficients to be found and \(\overline{z}^{\boldsymbol{\beta}}=\prod_{j=1}^{n}\overline{z_{j}^{\alpha_{n+j}}}\). The coefficient vector is found by solving the following (polynomial) feasibility problem: \[\begin{split}\textbf{find}\ a^{T}\\ \textbf{subject to}\ B(a,z)&\leq 0,\forall z\in Z_{0}\\ B(a,z)&>0,\forall z\in Z_{u}\\ \frac{\mathrm{d}B(a,z)}{\mathrm{d}t}&\leq 0,\forall z\in Z\\ B(a,z)&\in\mathbb{R}\\ &-1\leq a_{\boldsymbol{\alpha},\boldsymbol{\beta}}\leq 1.\end{split} \tag{14}\]

The coefficients, \(a_{\boldsymbol{\alpha},\boldsymbol{\beta}}\in\mathbb{R}\), are restricted to the range \([-1,1]\) since any barrier certificate \(B(a,z)\) can be normalised by dividing \(B\) by the coefficient of greatest weight, \(m=\max|a_{\boldsymbol{\alpha},\boldsymbol{\beta}}|\); the resulting function \(\frac{1}{m}B(a,z)\) is still a barrier certificate. A barrier certificate generated from these polynomial inequalities can then freely be scaled up by multiplying it by a constant.

### An Algorithmic Solution

One approach to solving the inequalities in (14) is to convert the system to real numbers and solve using sum-of-squares (SOS) optimisation [18]; another is to use SMT solvers to find a satisfiable set of coefficients; it is also possible to use neural-network-based approaches to find possible barriers [1, 17].
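Building the template \(B(a,z)\) is mechanical. The following sympy sketch (our own helper, not code from the authors' repository) enumerates the exponent tuples in \(A_{2n,k}\) and attaches one real coefficient to each monomial \(z^{\boldsymbol{\alpha}}\overline{z}^{\boldsymbol{\beta}}\):

```python
import itertools
import sympy as sp

def barrier_template(n, k):
    """Return (variables, coefficients, template) for the k-degree template
    B(a, z) = sum over (alpha, beta) of a_{alpha,beta} z^alpha zbar^beta."""
    z = sp.symbols(f'z0:{n}', complex=True)
    gens = list(z) + [sp.conjugate(v) for v in z]     # the 2n generators
    coeffs, template = [], sp.Integer(0)
    for exps in itertools.product(range(k + 1), repeat=2 * n):
        if sum(exps) > k:                             # keep total degree <= k
            continue
        a = sp.Symbol('a_' + '_'.join(map(str, exps)), real=True)
        coeffs.append(a)
        template += a * sp.prod([g**e for g, e in zip(gens, exps)])
    return z, coeffs, template

z, a, B = barrier_template(n=1, k=2)
print(len(a))   # 6 monomials for n=1, k=2: 1, z, zbar, z^2, z*zbar, zbar^2
```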
We consider, as a special case, an approach where \(\frac{\mathrm{d}B(a,z)}{\mathrm{d}t}=0\) rather than \(\frac{\mathrm{d}B(a,z)}{\mathrm{d}t}\leq 0\), which allows the problem to be turned into a linear program. This restriction means we consider only a subset of barrier certificates, but these still ensure the safety of the system. It is motivated by the fact that simple quantum systems of interest exhibit periodic behaviour; that is, for all \(t\in\mathbb{R}_{+}\), \(z(t)=z(t+T)\) for some \(T\). The barrier must then also exhibit periodic behaviour,4 and this can be achieved by setting \(\frac{\mathrm{d}B(a,z)}{\mathrm{d}t}=0\). Whilst there are other properties that ensure a function is periodic, these would involve non-polynomial terms such as trigonometric functions. Further, linear programs tend to be solved faster than SOS methods, because SOS programs are solved through semidefinite programming techniques, which are extensions of linear programs and therefore harder to solve.

Footnote 4: The barrier being periodic can be seen by interpreting the barrier as a function over time: \(B(t)=B(z(t))=B(z(t+T))=B(t+T),\forall t\in\mathbb{R}_{+}\).

We begin by transforming the differential constraint, \(\frac{\mathrm{d}B(a,z)}{\mathrm{d}t}=0\). To obey the third condition for the complex-valued convex barrier certificate, we can substitute terms in Equation (8) with the partial derivatives from Equation (13). Essentially, one ends up with an equation of the form \[(\mathbf{A}a)^{\top}\zeta=0,\] where \(\zeta\) is a vector of all possible polynomial terms in \(z_{j},\overline{z_{j}}\) with degree at most \(k\),5 and \(\mathbf{A}\) is a matrix of constant values. By setting \(\mathbf{A}a=\vec{0}\), the constraint is satisfied. Therefore, each row of the resulting system, \((\mathbf{A}a)_{j}=0\), is added as a constraint to a linear program.

Footnote 5: _e.g.,_ for \(k=2\) acceptable terms include \(z_{j}^{a},z_{j}z_{l},z_{j}\overline{z_{l}},\overline{z_{j}}^{a},\overline{z_{j}}\,\overline{z_{l}}\) for \(0\leq a\leq 2\).

To transform the real constraint \((B(a,z)\in\mathbb{R})\), note that if \(x\in\mathbb{C}\), then \(x\in\mathbb{R}\) if and only if \(x=\overline{x}\). Therefore, \(B(a,z)-\overline{B(a,z)}=0\) and we have \[B(a,z)-\overline{B(a,z)}=\sum_{\begin{subarray}{c}(\alpha_{j})\in A_{2n,k}\\ \boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\\ \boldsymbol{\beta}=(\alpha_{n+1},\ldots,\alpha_{2n})\end{subarray}}a_{\boldsymbol{\alpha},\boldsymbol{\beta}}z^{\boldsymbol{\alpha}}\overline{z}^{\boldsymbol{\beta}}-\sum_{\begin{subarray}{c}(\alpha_{j})\in A_{2n,k}\\ \boldsymbol{\alpha}^{\prime}=(\alpha_{1},\ldots,\alpha_{n})\\ \boldsymbol{\beta}^{\prime}=(\alpha_{n+1},\ldots,\alpha_{2n})\end{subarray}}\overline{a}_{\boldsymbol{\alpha}^{\prime},\boldsymbol{\beta}^{\prime}}z^{\boldsymbol{\beta}^{\prime}}\overline{z}^{\boldsymbol{\alpha}^{\prime}}=\sum_{\begin{subarray}{c}(\alpha_{j})\in A_{2n,k}\\ \boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\\ \boldsymbol{\beta}=(\alpha_{n+1},\ldots,\alpha_{2n})\end{subarray}}(a_{\boldsymbol{\alpha},\boldsymbol{\beta}}-\overline{a}_{\boldsymbol{\beta},\boldsymbol{\alpha}})z^{\boldsymbol{\alpha}}\overline{z}^{\boldsymbol{\beta}}.\] The whole polynomial is equal to \(0\) if all of its coefficients are \(0\).
Thus, taking the coefficients and noting that the \(a_{\boldsymbol{\alpha},\boldsymbol{\beta}}\) are real gives the transformed constraints \(a_{\boldsymbol{\alpha},\boldsymbol{\beta}}=a_{\boldsymbol{\beta},\boldsymbol{\alpha}}\) for \(\boldsymbol{\alpha}=(\alpha_{j})_{j=1}^{n},\boldsymbol{\beta}=(\alpha_{j})_{j=n+1}^{2n},(\alpha_{j})\in A_{2n,k}\). These constraints on the coefficients are then also added to the linear program.

The final constraints we need to transform are the constraints on the initial and unsafe sets: \(B(a,z)\leq 0\) for \(z\in Z_{0}\) and \(B(a,z)>0\) for \(z\in Z_{u}\), respectively. We begin by noting that \(B(a,z)=c+b(a,z,\overline{z})\) where \(b(a,z,\overline{z})\) is a \(k\)-degree polynomial (with coefficients \(a\)) and \(c\in\mathbb{R}\) is a constant. The constant \(c\) is not involved in the differential and real constraint steps, since \(c\) does not appear in the differential term and is cancelled out in the real constraint \((c-\overline{c}=c-c=0)\). Considering the initial and unsafe constraints, we require that \[\forall z\in Z_{0},\ c+b(a,z,\overline{z})\leq 0,\,\text{and}\] \[\forall z\in Z_{u},\ c+b(a,z,\overline{z})>0.\] Therefore, \(c\) is bounded by \[\max_{z\in Z_{u}}-b(a,z,\overline{z})<c\leq\min_{z\in Z_{0}}-b(a,z,\overline{z}).\] Setting \(c=\min_{z\in Z_{0}}-b(a,z,\overline{z})\) and then checking that \(\max_{z\in Z_{u}}-b(a,z,\overline{z})<c\) ensures the initial and unsafe constraints are met for the barrier. The final computation is given in Algorithm 1. Note that the algorithm can fail, since the function \(b\) may divide the state space in such a way that a section of \(Z_{0}\) lies on the same contour as a section of \(Z_{u}\). This means that either the function \(b\) is unsuitable or the system is inherently unsafe.
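To illustrate how the rows of \(\mathbf{A}a=\vec{0}\) arise, the following sympy sketch (our own toy instance, not the paper's implementation) imposes \(\frac{\mathrm{d}B}{\mathrm{d}t}=0\) on a degree-2 template for the complex linear system \(\dot{z}_{0}=-\mathrm{i}z_{0}\), \(\dot{z}_{1}=\mathrm{i}z_{1}\), treating \(u_{j}\) as an independent variable standing for \(\overline{z_{j}}\) as in Section 3; each vanishing monomial coefficient yields one linear constraint on \(a\):

```python
import sympy as sp

z0, z1, u0, u1 = sp.symbols('z0 z1 u0 u1', complex=True)
f = {z0: -sp.I*z0, z1: sp.I*z1}        # toy dynamics (our own choice)
fbar = {u0: sp.I*u0, u1: -sp.I*u1}     # conjugated dynamics for u = conj(z)

monos = [z0*u0, z1*u1, z0*u1, u0*z1]   # degree-2 conjugate-mixed monomials
a = sp.symbols('a0:4', real=True)
b = sum(ai*m for ai, m in zip(a, monos))

# Eq. (8), with dB/dt required to vanish identically.
dbdt = sum(sp.diff(b, v)*fv for v, fv in f.items()) \
     + sum(sp.diff(b, v)*fv for v, fv in fbar.items())

# Every monomial coefficient of db/dt must be zero: the system A a = 0.
eqs = [sp.Eq(c, 0) for c in sp.Poly(dbdt, z0, z1, u0, u1).coeffs()]
print(sp.solve(eqs, a))                # {a2: 0, a3: 0}
```

Here the cross terms \(z_{0}\overline{z_{1}}\) and \(\overline{z_{0}}z_{1}\) are eliminated, leaving a template spanned by \(|z_{0}|^{2}\) and \(|z_{1}|^{2}\) (plus the constant \(c\)); the remaining freedom is fixed by the constraints on \(Z_{0}\) and \(Z_{u}\) and the choice of \(c\).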
## 5 Application to Quantum Systems

We consider quantum systems that evolve within Hilbert spaces \(\mathcal{H}^{n}=\mathbb{C}^{2^{n}}\) for \(n\in\mathbb{N}\). We use the computational basis states \(\left|j\right\rangle\in\mathcal{H}^{n}\), for \(0\leq j<2^{n}\), as an orthonormal basis within the space, where \((\left|j\right\rangle)_{l}=\delta_{jl}\).6 General quantum states, \(\left|\phi\right\rangle\in\mathcal{H}^{n}\), can then be written in the form \[\left|\phi\right\rangle=\sum_{j=0}^{2^{n}-1}z_{j}\left|j\right\rangle,\] where \(z_{j}\in\mathbb{C}\) and \(\sum_{j=0}^{2^{n}-1}\left|z_{j}\right|^{2}=1\).7 Quantum states therefore reside on the unit sphere of \(\mathbb{C}^{2^{n}}\). For simplicity, we consider quantum systems that evolve according to the Schrödinger equation \[\frac{\mathrm{d}\left|\phi\right\rangle}{\mathrm{d}t}=-\mathrm{i}\hat{H}\left|\phi\right\rangle,\] where \(\hat{H}\) is a Hamiltonian, a complex matrix such that \(\hat{H}=\hat{H}^{\dagger}=\overline{\hat{H}^{\top}}\), and \(\left|\phi\right\rangle\) is a quantum state.8 In the rest of this section, we make use of Algorithm 1 in order to find suitable barrier certificates for operations that are commonly used in quantum computers.

Footnote 6: \(\delta_{jl}\) is the Kronecker delta, which is \(1\) if \(j=l\) and \(0\) otherwise.

Footnote 7: For readers familiar with the Dirac notation, \(z_{j}=\langle j|\phi\rangle\) and \(\overline{z_{j}}=\langle\phi|j\rangle\).

Footnote 8: We set the Planck constant \(\hbar=1\) in the Schrödinger equation.

### Hadamard Operation Example

The evolution of the Hadamard operation, \(H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\), is given by the Hamiltonian \(\hat{H}_{H}=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\), and \(\left|\phi\right\rangle\) is one qubit, \(z_{0}\left|0\right\rangle+z_{1}\left|1\right\rangle\). We have \(z(t)=\begin{pmatrix}z_{0}(t)\\ z_{1}(t)\end{pmatrix}\) and \[\dot{z}=-\mathrm{i}\hat{H}_{H}z=-\mathrm{i}\begin{pmatrix}z_{0}+z_{1}\\ z_{0}-z_{1}\end{pmatrix}.\] The system evolves over the surface of the unit sphere, \(Z=\{(z_{0},z_{1})\in\mathbb{C}^{2}:\left|z_{0}\right|^{2}+\left|z_{1}\right|^{2}=1\}\). The initial set is defined as \(Z_{0}=\{(z_{0},z_{1})\in Z:\left|z_{0}\right|^{2}\geq 0.9\}\) and the unsafe set as \(Z_{u}=\{(z_{0},z_{1})\in Z:\left|z_{0}\right|^{2}\leq 0.1\}\). Note that the definitions of \(Z_{0}\) and \(Z_{u}\) are restricted by \(Z\); therefore \(\left|z_{1}\right|^{2}\leq 0.1\) and \(\left|z_{1}\right|^{2}\geq 0.9\) for \(Z_{0}\) and \(Z_{u}\) respectively.

A barrier function computed by our Algorithm 1 is \[B(z)=\frac{11}{5}-3z_{0}\overline{z_{0}}-z_{0}\overline{z_{1}}-\overline{z_{0}}z_{1}-z_{1}\overline{z_{1}}.\] By rearranging and using properties of the complex conjugate, we find that \[B(z)=2(\tfrac{1}{10}-\left|z_{0}\right|^{2}+\tfrac{1}{2}-\mathrm{Re}\{z_{0}\overline{z_{1}}\}).\] The derivation is given in Appendix C. The first term of the barrier \((\tfrac{1}{10}-\left|z_{0}\right|^{2})\) acts as a restriction on how close \(\left|\phi\right\rangle\) remains to \(\left|0\right\rangle\) as it evolves, whereas the second term \((\tfrac{1}{2}-\mathrm{Re}\{z_{0}\overline{z_{1}}\})\) is a restriction on the phase of the quantum state. Next, we verify that \(B\) is indeed a barrier certificate.

Proposition 2: _The system evolving according to the Hadamard dynamics above, with initial set \(Z_{0}\) and unsafe set \(Z_{u}\), is safe._

The proposition is proved in Appendix D. A visualisation of the example system and its associated barrier on a Bloch-sphere representation is given in Figure 2.
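The differential condition of this certificate can also be checked mechanically. The following sympy sketch (our own sanity check, mirroring Equation (8) with \(u_{j}\) standing for \(\overline{z_{j}}\)) confirms that \(\frac{\mathrm{d}B}{\mathrm{d}t}\) vanishes identically along the Hadamard dynamics, so Condition (11) holds with equality:

```python
import sympy as sp

z0, z1, u0, u1 = sp.symbols('z0 z1 u0 u1', complex=True)

# Conjugate-flattening form b(z, u) of the barrier above (u_j = conj(z_j)).
b = sp.Rational(11, 5) - 3*z0*u0 - z0*u1 - u0*z1 - z1*u1

# Hadamard dynamics zdot = -i H_H z.
f0, f1 = -sp.I*(z0 + z1), -sp.I*(z0 - z1)

# Eq. (8): dB/dt = sum_j (db/dz_j) f_j + (db/du_j) conj(f_j), at u = conj(z).
dBdt = (sp.diff(b, z0)*f0 + sp.diff(b, z1)*f1
        + sp.diff(b, u0)*sp.conjugate(f0) + sp.diff(b, u1)*sp.conjugate(f1))
dBdt = dBdt.subs({u0: sp.conjugate(z0), u1: sp.conjugate(z1)})
print(sp.expand(dBdt))    # -> 0: the differential condition holds everywhere
```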
### Phase Operation Example

The evolution of the phase operation \(S=\begin{pmatrix}1&0\\ 0&\mathrm{i}\end{pmatrix}\) is given by the Hamiltonian \(\hat{H}_{S}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\) for a single qubit \(z_{0}\left|0\right\rangle+z_{1}\left|1\right\rangle\). Thus, the evolution of the system for \(z(t)=\begin{pmatrix}z_{0}(t)\\ z_{1}(t)\end{pmatrix}\) is \[\dot{z}=-\mathrm{i}\begin{pmatrix}z_{0}\\ -z_{1}\end{pmatrix}. \tag{15}\] Again, \(Z\) represents the unit sphere as described previously. Two pairs of initial and unsafe regions are given. The first pair \(Z^{1}=(Z_{0}^{1},Z_{u}^{1})\) is given by \[Z_{0}^{1}=\{(z_{0},z_{1})\in Z:\left|z_{0}\right|^{2}\geq 0.9\},\ \ Z_{u}^{1}=\{(z_{0},z_{1})\in Z:\left|z_{1}\right|^{2}>0.11\};\] and the second pair \(Z^{2}=(Z_{0}^{2},Z_{u}^{2})\) is given by \[Z_{0}^{2}=\{(z_{0},z_{1})\in Z:\left|z_{1}\right|^{2}\geq 0.9\},\ \ \ Z_{u}^{2}=\{(z_{0},z_{1})\in Z:\left|z_{0}\right|^{2}>0.11\}.\] The pair \(Z^{1}\) starts with a system that is close to the \(\left|0\right\rangle\) state and ensures that the system cannot evolve towards the \(\left|1\right\rangle\) state; the pair \(Z^{2}\) captures the same behaviour with the roles of \(\left|0\right\rangle\) and \(\left|1\right\rangle\) exchanged.

The system is shown to be safe for each pair of regions by the following barriers computed by Algorithm 1: \[B_{1}(z)=0.9-z_{0}\overline{z_{0}},\ \ \ B_{2}(z)=0.9-z_{1}\overline{z_{1}},\] where \(B_{1}\) is the barrier for \(Z^{1}\) and \(B_{2}\) is the barrier for \(Z^{2}\).9 The system with the different pairs of regions can be seen on Bloch spheres in Figure 3. Again, both functions \(B_{1}\) and \(B_{2}\) are valid barrier certificates.

Footnote 9: These barriers can similarly be written using the Dirac notation.

**Proposition 3**: _The system given by Equation (15) with the set of initial states \(Z_{0}^{1}\) and the unsafe set \(Z_{u}^{1}\) is safe._

**Proposition 4**: _The system given by Equation (15) with the set of initial states \(Z_{0}^{2}\) and the unsafe set \(Z_{u}^{2}\) is safe._

The proofs are omitted as they are similar to the proof given for Proposition 2. These barriers give bounds on how the system evolves, _i.e.,_ the system may only change the phase of the quantum state and not its amplitude. This can be applied in general by combining barriers to show how a (disturbed) system is restricted in its evolution.

Figure 2: System evolution on a Bloch sphere. The initial state of the system is \(\sqrt{0.9}\left|0\right\rangle+\mathrm{i}\sqrt{0.1}\left|1\right\rangle\) (the black dot) and evolves according to the black line (in an anti-clockwise rotation with a period of \(t=\pi\)). The green surface around the north pole (\(\left|0\right\rangle\)) is the initial region, \(Z_{0}\), and the red surface around the south pole (\(\left|1\right\rangle\)) is the unsafe region, \(Z_{u}\). The blue surface is the plane of the barrier function where \(B(z)=0\), with \(x<-z\) being the unsafe region.

Figure 3: State evolution of Equation (15) demonstrated on a Bloch sphere.

### Controlled-NOT Operation Example

The final example we consider is the controlled-NOT (CNOT) operation acting on two qubits: a control qubit, \(|\phi_{c}\rangle\), and a target qubit, \(|\phi_{t}\rangle\), with the full quantum state being \(|\phi_{c}\phi_{t}\rangle\). The CNOT operation performs the NOT operation on the target qubit (\(|0\rangle\rightarrow|1\rangle\) and \(|1\rangle\rightarrow|0\rangle\)) if the control qubit is set to \(|1\rangle\), and does nothing if the control qubit is set to \(|0\rangle\). The CNOT operation and its associated Hamiltonian are given by \[\text{CNOT}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix},\quad\hat{H}_{\text{CNOT}}=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&1&-1\\ 0&0&-1&1\end{pmatrix}.\] The system \(z(t)=(z_{j}(t))_{j=0,\ldots,3}\) evolves according to \[\dot{z}=-\mathrm{i}\begin{pmatrix}0\\ 0\\ z_{2}-z_{3}\\ -z_{2}+z_{3}\end{pmatrix}.\] This system evolves over \(Z=\{(z_{0},\ldots,z_{3})\in\mathbb{C}^{4}:\sum_{j=0}^{3}|z_{j}|^{2}=1\}\). Using this as our system, various initial and unsafe regions can be set up to reason about the behaviour of the CNOT operation.

**Control in \(|\mathbf{0}\rangle\)**: Here we consider the following initial and unsafe regions \[Z_{0}=\{(z_{j})_{j=0}^{3}\in\mathbb{C}^{4}:\left|z_{0}\right|^{2}\geq 0.9\},\] \[Z_{u}=\{(z_{j})_{j=0}^{3}\in\mathbb{C}^{4}:\left|z_{1}\right|^{2}+\left|z_{2}\right|^{2}+\left|z_{3}\right|^{2}\geq 0.11\}.\] The initial set, \(Z_{0}\), encapsulates the quantum states that start in the \(|00\rangle\) state with high probability, and \(Z_{u}\) captures the states that are not in the initial region with probability greater than \(0.11\).
These regions capture the behaviour that the quantum state should not change much when the control qubit is in the \(|0\rangle\) state. Using Algorithm 1, the barrier \(B(z)=0.9-z_{0}\overline{z_{0}}\) can be generated to show that the system is safe. A similar example can be considered where the initial state \(|00\rangle\) is replaced with \(|01\rangle\) instead (swap \(z_{0}\) and \(z_{1}\) in \(Z_{0}\) and \(Z_{u}\)). The behaviour that the state of the system should not change much is still desired; the function \(B(z)=0.9-z_{1}\overline{z_{1}}\) is computed as a barrier to show this behaviour is met.

**Control in \(|\mathbf{1}\rangle\)**: Now consider when the initial region has the control qubit near the state \(|1\rangle\). The following regions are considered: \[Z_{0}=\{(z_{j})_{j=0}^{3}\in\mathbb{C}^{4}:\left|z_{2}\right|^{2}\geq 0.9\},\] \[Z_{u}=\{(z_{j})_{j=0}^{3}\in\mathbb{C}^{4}:\left|z_{0}\right|^{2}+\left|z_{1}\right|^{2}\geq 0.11\}.\] This system starts close to the \(|10\rangle\) state, and the evolution should do nothing to the control qubit. Note that the specified behaviour does not capture the NOT behaviour on the target qubit. Our Algorithm 1 considers this system safe by outputting the barrier certificate \(B(z)=0.9-z_{2}\overline{z_{2}}-z_{3}\overline{z_{3}}\). This is also the barrier if the system were to start in the \(|11\rangle\) state instead.

## 6 Conclusions

In this paper, we extended the theory of barrier certificates to handle complex variables. We then showed how one can automatically generate simple complex-valued barrier certificates using polynomial functions and linear programming techniques. Finally, we explored the application of the developed techniques by investigating properties of time-independent quantum systems.

There are numerous directions for this research to take. In particular, one can consider (quantum) systems that are time-dependent, have a control component, or are discrete-time, _i.e.,_ quantum circuits. Data-driven approaches for generating barrier certificates based on measurements of a quantum system can also be considered. A final challenge to consider is how to verify large quantum systems. Techniques such as Trotterization allow Hamiltonians to be simulated either by simpler Hamiltonians of the same size or by Hamiltonians of lower dimension; how barrier certificates can ensure safety of such systems is a route to explore.

## Acknowledgements

M. Lewis is supported by the UK EPSRC (project reference EP/T517914/1). The work of S. Soudjani is supported by the following grants: EPSRC EP/V043676/1, EIC 101070802, and ERC 101089047.

Data availability. The public repository with an implementation of the algorithm from Section 4 and the case studies from Section 5 is available on GitHub: [https://github.com/marco-lewis/quantum-barrier-certificates](https://github.com/marco-lewis/quantum-barrier-certificates).
2306.10234
Federated Few-shot Learning
Federated Learning (FL) enables multiple clients to collaboratively learn a machine learning model without exchanging their own local data. In this way, the server can exploit the computational power of all clients and train the model on a larger set of data samples among all clients. Although such a mechanism is proven to be effective in various fields, existing works generally assume that each client preserves sufficient data for training. In practice, however, certain clients may only contain a limited number of samples (i.e., few-shot samples). For example, the available photo data taken by a specific user with a new mobile device is relatively rare. In this scenario, existing FL efforts typically encounter a significant performance drop on these clients. Therefore, it is urgent to develop a few-shot model that can generalize to clients with limited data under the FL scenario. In this paper, we refer to this novel problem as federated few-shot learning. Nevertheless, the problem remains challenging due to two major reasons: the global data variance among clients (i.e., the difference in data distributions among clients) and the local data insufficiency in each client (i.e., the lack of adequate local data for training). To overcome these two challenges, we propose a novel federated few-shot learning framework with two separately updated models and dedicated training strategies to reduce the adverse impact of global data variance and local data insufficiency. Extensive experiments on four prevalent datasets that cover news articles and images validate the effectiveness of our framework compared with the state-of-the-art baselines. Our code is provided at https://github.com/SongW-SW/F2L.
Song Wang, Xingbo Fu, Kaize Ding, Chen Chen, Huiyuan Chen, Jundong Li
2023-06-17T02:25:56Z
http://arxiv.org/abs/2306.10234v3
# Federated Few-shot Learning

###### Abstract.

Federated Learning (FL) enables multiple clients to collaboratively learn a machine learning model without exchanging their own local data. In this way, the server can exploit the computational power of all clients and train the model on a larger set of data samples among all clients. Although such a mechanism is proven to be effective in various fields, existing works generally assume that each client preserves sufficient data for training. In practice, however, certain clients may only contain a limited number of samples (i.e., few-shot samples). For example, the available photo data taken by a specific user with a new mobile device is relatively rare. In this scenario, existing FL efforts typically encounter a significant performance drop on these clients. Therefore, it is urgent to develop a few-shot model that can generalize to clients with limited data under the FL scenario. In this paper, we refer to this novel problem as _federated few-shot learning_. Nevertheless, the problem remains challenging due to two major reasons: the global data variance among clients (i.e., the difference in data distributions among clients) and the local data insufficiency in each client (i.e., the lack of adequate local data for training). To overcome these two challenges, we propose a novel federated few-shot learning framework with two separately updated models and dedicated training strategies to reduce the adverse impact of global data variance and local data insufficiency. Extensive experiments on four prevalent datasets that cover news articles and images validate the effectiveness of our framework compared with the state-of-the-art baselines. Our code is provided1.

Footnote 1: [https://github.com/SongW-SW/F2L](https://github.com/SongW-SW/F2L)

Keywords: Federated Learning; Few-shot Learning; Knowledge Distillation

## 1. Introduction

The volume of valuable data is growing massively with the rapid development of mobile devices [4, 34]. Recently, researchers have developed various machine learning methods [5, 62] to analyze and extract useful information from such large-scale real-world data. Among these methods, Federated Learning (FL) is an effective solution, which aims to collaboratively optimize a centralized model over data distributed across a large number of clients [7, 13, 22, 63]. In particular, FL trains a global model on a server by aggregating the local models learned on each client [2]. Moreover, by avoiding the direct exchange of private data, FL can provide effective protection of local data privacy for clients [31]. As an example, in Google Photo Categorization [12, 33], the server aims to learn an image classification model from photos distributed among a large number of clients, i.e., mobile devices. In this case, FL can effectively conduct learning tasks without revealing private photos to the server. In fact, new learning tasks (e.g., novel photo classes) are constantly emerging over time [51, 60]. In consequence, FL can easily encounter a situation where the server needs to solve a new task with limited available data as the reference. In the previous example of Google Photo Categorization, as illustrated in Fig.
1, the server may inevitably need to deal with novel photo classes such as the latest electronic products, where only limited annotations are available. Nevertheless, existing FL works generally assume sufficient labeled samples for model training, which inevitably leads to unsatisfying classification performance for new tasks with limited labeled samples [14]. Therefore, to improve the practicality of FL in realistic scenarios, it is important to solve this problem by learning an FL model that can achieve satisfactory performance on new tasks with limited samples. In this paper, we refer to this novel problem setting as _federated few-shot learning_. Recently, many few-shot learning frameworks [15, 53, 56, 55] have been proposed to deal with new tasks with limited samples. Typically, the main idea is to learn meta-knowledge from _base classes_ with abundant samples (e.g., photo classes such as portraits). Then such meta-knowledge is generalized to _novel classes_ with limited samples (e.g., photo classes such as new electronic products), where novel classes are typically disjoint from base classes. However, as illustrated in Fig. 1, it remains challenging to conduct few-shot learning under the federated setting due to the following reasons. First, due to the _global data variance_ (i.e., the differences in data distributions across clients), the aggregation of local models on the server side will disrupt the learning of meta-knowledge in each client (Zhou et al., 2017). Generally, the meta-knowledge is locally learned from different classes in each client and thus is distinct among clients, especially under the non-IID scenario, where the data variance can be even larger among clients compared with the IID scenario. Since the server will aggregate the local models from different clients and then send back the aggregated model, the learning of meta-knowledge in each client will be potentially disrupted. Second, due to the _local data insufficiency_ in clients, it is non-trivial to learn meta-knowledge from each client. In FL, each client only preserves a relatively small portion of the total data (Beng et al., 2017; Chen et al., 2018). However, meta-knowledge is generally learned from data in a variety of classes (Zhou et al., 2017; Chen et al., 2018). As a result, it is difficult to learn meta-knowledge from data with less variety, especially in the non-IID scenario, where each client only has a limited amount of classes. To effectively solve the aforementioned challenges, we propose a novel Federated Few-shot Learning framework, named F\({}^{2}\)L. First, we propose a decoupled meta-learning framework to mitigate the disruption from the aggregated model on the server. Specifically, the proposed framework retains a unique _client-model_ for each client to learn meta-knowledge and a shared _server-model_ to learn client-invariant knowledge (e.g., the representations of samples), as illustrated in Fig. 2. Specifically, the client-model in each client is updated locally and will not be shared across clients, while the server-model can be updated across clients and sent to the server for aggregation. Such a design decouples the learning of meta-knowledge (via client-model) from learning client-invariant knowledge (via server-model). In this way, we can mitigate the disruption from the aggregated model on the server caused by global data variance among clients. 
Second, to compensate for local data insufficiency in each client, we propose to leverage global knowledge learned from all clients with two dedicated update strategies. In particular, we first transfer the learned meta-knowledge in client-model to server-model by maximizing the mutual information between their outputs (i.e., _local-to-global knowledge transfer_). Then we propose a partial knowledge distillation strategy for each client to selectively extract useful knowledge from server-model (i.e., _global-to-local knowledge distillation_). In this manner, each client can leverage the beneficial knowledge in other clients to learn meta-knowledge from more data. In summary, our contributions are as follows:

* **Problem**. We investigate the challenges of learning meta-knowledge in the novel problem of federated few-shot learning from the perspectives of _global data variance_ and _local data insufficiency_. We also discuss the necessity of tackling these challenges.
* **Method**. We develop a novel federated few-shot learning framework F\({}^{2}\)L with three essential strategies: (1) a decoupled meta-learning framework to mitigate disruption from the aggregated model on the server; (2) mutual information maximization for local-to-global knowledge transfer; (3) a novel partial knowledge distillation strategy for global-to-local knowledge distillation.
* **Experiments**. We conduct experiments on four few-shot classification datasets covering both news articles and images under the federated scenario. The results further demonstrate the superiority of our proposed framework.

## 2. Preliminaries

### Problem Definition

In FL, given a set of \(I\) clients, i.e., \(\{\mathbb{C}^{(i)}\}_{i=1}^{I}\), where \(I\) is the number of clients, each \(\mathbb{C}^{(i)}\) owns a local dataset \(\mathcal{D}^{(i)}\). The main objective of FL is to learn a global model over the data across all clients (i.e., \(\{\mathcal{D}^{(i)}\}_{i=1}^{I}\)) without the direct exchange of data among clients. Following the conventional FL strategy (Zhou et al., 2017; Chen et al., 2018), a server \(\mathbb{S}\) will aggregate locally learned models from all clients into a global model. Under the prevalent few-shot learning scenario, we consider a supervised setting in which the data samples for client \(\mathbb{C}^{(i)}\) are from its local dataset: \((x,y)\in\mathcal{D}^{(i)}\), where \(x\) is a data sample, and \(y\) is the corresponding label. We first denote the entire set of classes on all clients as \(\mathcal{C}\). Depending on the number of labeled samples in each class, \(\mathcal{C}\) can be divided into two categories: base classes \(\mathcal{C}_{b}\) and novel classes \(\mathcal{C}_{n}\), where \(\mathcal{C}=\mathcal{C}_{b}\cup\mathcal{C}_{n}\) and \(\mathcal{C}_{b}\cap\mathcal{C}_{n}=\emptyset\). In general, the number of labeled samples in \(\mathcal{C}_{b}\) is sufficient, while it is small in \(\mathcal{C}_{n}\) (Zhou et al., 2017; Chen et al., 2018). Correspondingly, each local dataset can be divided into a base dataset \(\mathcal{D}^{(i)}_{b}=\{(x,y)\in\mathcal{D}^{(i)}:y\in\mathcal{C}_{b}\}\) and a novel dataset \(\mathcal{D}^{(i)}_{n}=\{(x,y)\in\mathcal{D}^{(i)}:y\in\mathcal{C}_{n}\}\). In the few-shot setting, the evaluation of the model's generalizability to novel classes \(\mathcal{C}_{n}\) is conducted on \(\mathcal{D}^{(i)}_{n}\), which contains only limited labeled samples. The data samples in \(\mathcal{D}^{(i)}_{b}\) will be used for training.
Then we can formulate the studied problem of federated few-shot learning as follows:

Definition 1 (Federated Few-shot Learning): _Given a set of \(I\) clients \(\{\mathbb{C}^{(i)}\}_{i=1}^{I}\) and a server \(\mathbb{S}\), federated few-shot learning aims to learn a global model after aggregating model parameters locally learned from \(\mathcal{D}^{(i)}_{b}\) in each client, such that the model can accurately predict labels for unlabeled samples (i.e., query set \(\mathcal{Q}\)) in \(\mathcal{D}^{(i)}_{n}\) with only a limited number of labeled samples (i.e., support set \(\mathcal{S}\))._

More specifically, if the support set \(\mathcal{S}\) consists of exactly \(K\) labeled samples for each of \(N\) classes from \(\mathcal{D}^{(i)}_{n}\), and the query set \(\mathcal{Q}\) is sampled from the same \(N\) classes, the problem is defined as Federated \(N\)-way \(K\)-shot Learning. Essentially, the objective of federated few-shot learning is to learn a globally shared model across clients that can be fast adapted to data samples in \(\mathcal{D}_{n}^{(i)}\) with only limited labeled samples. Therefore, the crucial part is to effectively learn meta-knowledge from the base datasets \(\{\mathcal{D}_{b}^{(i)}\}_{i=1}^{I}\) in all clients. Such meta-knowledge is generalizable to novel classes unseen during training and thus can be utilized to classify data samples in each \(\mathcal{D}_{n}^{(i)}\), which consists of only limited labeled samples.

Figure 1: The two challenges of federated few-shot learning as an example in Google Photo Categorization: local data insufficiency and global data variance.

### Episodic Learning

In practice, we adopt the prevalent episodic learning framework for model training and evaluation, which has proven to be effective in various few-shot learning scenarios.
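For reference, sampling one \(N\)-way \(K\)-shot episode from a client's local data can be sketched as follows (a minimal Python illustration of the setup above; the function name and the list-of-pairs data format are our own assumptions):

```python
import random
from collections import defaultdict

def sample_episode(dataset, N, K, Q):
    """Sample one N-way K-shot episode: a support set with K labeled samples
    for each of N classes, plus a query set with Q samples per class."""
    by_class = defaultdict(list)
    for x, y in dataset:                    # dataset: list of (sample, label)
        by_class[y].append(x)
    eligible = [c for c, xs in by_class.items() if len(xs) >= K + Q]
    classes = random.sample(eligible, N)    # pick the N task classes
    support, query = [], []
    for c in classes:
        xs = random.sample(by_class[c], K + Q)
        support += [(x, c) for x in xs[:K]]
        query += [(x, c) for x in xs[K:]]
    return support, query
```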
In this regard, we can update client-model with our global-to-local knowledge distillation strategy. For the update of server-model, we conduct one step of update based on the support set and parameters of client-model: \[\phi^{t,s+1}_{i}=\phi^{t,s}_{i}-\alpha_{\phi}\nabla_{\phi}\mathcal{L}_{\phi} \left(\mathcal{S}^{t,s}_{i};\{\phi^{t,s}_{i},\psi^{t,s}_{i}\}\right), \tag{5}\] where \(\mathcal{L}_{\phi}\) is the loss for the server-model, and \(\alpha_{\phi}\) is the meta-learning rate for \(\phi\). In this manner, we can update the server-model with our local-to-global knowledge transfer strategy. After repeating the above updates for \(\tau\) steps, the final parameters of server-model \(\phi^{t,r}_{i}\) is used as \(\widetilde{\phi}^{t}_{i}\) in Eq. (2) and sent back to the server for aggregation, while the client-model (with parameters \(\psi^{t,r}_{i}\)) will be kept locally. By doing this, we can decouple the learning of local meta-knowledge in client-model while learning client-invariant knowledge in server-model to avoid disruption from the server. ### Local-to-Global Knowledge Transfer With our decoupled meta-learning framework, we can mitigate the disruption to the learning of local meta-knowledge in each client. Nevertheless, we still need to transfer the learned meta-knowledge to server-model (i.e., Local-to-global Knowledge Transfer), so that it can be further leveraged by other clients to handle the local data insufficiency issue. Specifically, to effectively transfer local meta-knowledge, we propose to maximize the mutual information between representations learned from server-model encoder \(q_{\phi}\) and client-model encoder \(q_{\psi}\). In this way, the server-model can maximally absorb the information in the learned local meta-knowledge. #### 3.2.1. Mutual Information Maximization Given a meta-training task \(\mathcal{T}=\{\mathcal{S},\mathcal{Q}\}\), as described in Sec. 3.1, the server-model encoder \(q_{\phi}\) and client-model encoder \(q_{\psi}\) will output \(\mathbf{h}_{\phi}\) and \(\mathbf{h}_{\psi}\) for each sample, respectively. By stacking the learned representations of samples in the support set \(\mathcal{S}\) (\(|\mathcal{S}|=D\), where \(D=N\times K\)), we can obtain the representations of support samples learned by the server-model, i.e., \(\mathbf{H}_{\phi}\in\mathbb{R}^{D\times k}\), and the client-model, i.e., \(\mathbf{H}_{\psi}\in\mathbb{R}^{D\times k}\). For simplicity, we omit the annotations of round \(t\), step \(s\), and client \(i\). The objective of maximizing the information between \(\mathbf{H}_{\phi}\) and \(\mathbf{H}_{\psi}\) can be formally represented as follows: \[\max_{\phi}I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})=\max_{\phi}\sum_{i=1}^{D} \sum_{j=1}^{D}p(\mathbf{h}^{i}_{\phi};\mathbf{h}^{j}_{\psi};\phi)\log\frac{p( \mathbf{h}^{j}_{\psi}|\mathbf{h}^{i}_{\phi};\phi)}{p(\mathbf{h}^{j}_{\phi}; \phi)}, \tag{6}\] where \(\mathbf{h}^{i}_{\phi}\) (or \(\mathbf{h}^{i}_{\psi}\)) is the \(i\)-th row of \(\mathbf{H}_{\phi}\) (or \(\mathbf{H}_{\psi}\)). Since the mutual information \(I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})\) is difficult to obtain and thus infeasible to be maximized (Kang et al., 2018), we re-write it to achieve a more feasible form: \[I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})=\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h }^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)p(\mathbf{h}^{j}_{\psi};\phi)\log\frac{ p(\mathbf{h}^{j}_{\psi}|\mathbf{h}^{i}_{\phi};\phi)}{p(\mathbf{h}^{j}_{\psi}; \phi)}. 
### Local-to-Global Knowledge Transfer

With our decoupled meta-learning framework, we can mitigate the disruption to the learning of local meta-knowledge in each client. Nevertheless, we still need to transfer the learned meta-knowledge to server-model (i.e., local-to-global knowledge transfer), so that it can be further leveraged by other clients to handle the local data insufficiency issue. Specifically, to effectively transfer local meta-knowledge, we propose to maximize the mutual information between representations learned by the server-model encoder \(q_{\phi}\) and the client-model encoder \(q_{\psi}\). In this way, the server-model can maximally absorb the information in the learned local meta-knowledge.

#### 3.2.1. Mutual Information Maximization

Given a meta-training task \(\mathcal{T}=\{\mathcal{S},\mathcal{Q}\}\), as described in Sec. 3.1, the server-model encoder \(q_{\phi}\) and client-model encoder \(q_{\psi}\) will output \(\mathbf{h}_{\phi}\) and \(\mathbf{h}_{\psi}\) for each sample, respectively. By stacking the learned representations of samples in the support set \(\mathcal{S}\) (\(|\mathcal{S}|=D\), where \(D=N\times K\)), we can obtain the representations of support samples learned by the server-model, i.e., \(\mathbf{H}_{\phi}\in\mathbb{R}^{D\times k}\), and the client-model, i.e., \(\mathbf{H}_{\psi}\in\mathbb{R}^{D\times k}\). For simplicity, we omit the annotations of round \(t\), step \(s\), and client \(i\). The objective of maximizing the mutual information between \(\mathbf{H}_{\phi}\) and \(\mathbf{H}_{\psi}\) can be formally represented as follows: \[\max_{\phi}I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})=\max_{\phi}\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}^{i}_{\phi},\mathbf{h}^{j}_{\psi};\phi)\log\frac{p(\mathbf{h}^{j}_{\psi}|\mathbf{h}^{i}_{\phi};\phi)}{p(\mathbf{h}^{j}_{\psi};\phi)}, \tag{6}\] where \(\mathbf{h}^{i}_{\phi}\) (or \(\mathbf{h}^{i}_{\psi}\)) is the \(i\)-th row of \(\mathbf{H}_{\phi}\) (or \(\mathbf{H}_{\psi}\)). Since the mutual information \(I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})\) is difficult to obtain and thus infeasible to maximize directly (Kang et al., 2018), we re-write it to achieve a more feasible form: \[I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})=\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)p(\mathbf{h}^{j}_{\psi};\phi)\log\frac{p(\mathbf{h}^{j}_{\psi}|\mathbf{h}^{i}_{\phi};\phi)}{p(\mathbf{h}^{j}_{\psi};\phi)}. \tag{7}\]

Since the support set \(\mathcal{S}\) of size \(D\) is randomly sampled, we can assume that the prior probability \(p(\mathbf{h}^{j}_{\psi};\phi)\) follows a uniform distribution and set it as \(p(\mathbf{h}^{j}_{\psi};\phi)=1/D\). According to Bayes' theorem, Eq. (7) becomes: \[I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})=\frac{1}{D}\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)\left(\log(p(\mathbf{h}^{j}_{\psi}|\mathbf{h}^{i}_{\phi};\phi))+\log D\right). \tag{8}\] We next present alternative strategies to estimate \(p(\mathbf{h}^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)\) and \(p(\mathbf{h}^{j}_{\psi}|\mathbf{h}^{i}_{\phi};\phi)\) in detail.

#### 3.2.2. Estimation of \(p(\mathbf{h}^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)\)

Since the client-model is fine-tuned on the support set \(\mathcal{S}\) of the meta-task \(\mathcal{T}\), we can leverage the classification results of the client-model to estimate \(p(\mathbf{h}^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)\). We denote \(C(j)\) as the set of sample indices in the support set \(\mathcal{S}\) that share the same class as the \(j\)-th sample (including itself), i.e., \(C(j)\equiv\{k:y_{k}=y_{j},k=1,2,\ldots,D\}\). Here, we first set \(p(\mathbf{h}^{i}_{\phi}|\mathbf{h}^{j}_{\psi};\phi)=0\) for all \(i\notin C(j)\), since we assume the client-model can only infer representations from the same class.

Figure 2: The illustration of our decoupled meta-learning framework. \(\psi\) denotes the client-model, which will be locally kept in each client. \(\phi\) denotes the server-model, which will be aggregated and sent to the server.
Specifically, we normalize the distances with a softmax function: \[p(\mathbf{h}_{\psi}^{j}|\mathbf{h}_{\phi}^{i};\phi)=\frac{\exp\left(-\|\mathbf{ h}_{\phi}^{i}-\mathbf{h}_{\psi}^{j}\|_{2}^{2}/2\right)}{\sum_{k\in C(i)}\exp \left(-\|\mathbf{h}_{\phi}^{i}-\mathbf{h}_{\psi}^{k}\|_{2}^{2}/2\right)}\;. \tag{10}\] Then if we further apply the \(\ell_{2}\) normalization to both \(\mathbf{h}_{\phi}^{i}\) and \(\mathbf{h}_{\psi}^{j}\), we can obtain \(\|\mathbf{h}_{\phi}^{i}-\mathbf{h}_{\psi}^{j}\|_{2}^{2}/2=1-\mathbf{h}_{\phi}^ {i}\cdot\mathbf{h}_{\psi}^{j}\). Moreover, since the value of \(\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}_{\phi}^{i}|\mathbf{h}_{\psi}^{j};\phi)\) equals a constant \(D\), the term \(\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}_{\phi}^{i}|\mathbf{h}_{\psi}^{j};\phi) \cdot\log(D)/D\) in Eq. (8) is also a constant and thus can be ignored in the objective: \[\frac{1}{D}\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}_{\phi}^{i}|\mathbf{h}_{ \psi}^{j};\phi)\log\left(D\right)=\frac{1}{D}\cdot D\cdot\log\left(D\right)= \log\left(D\right). \tag{11}\] Combining the above equations, the optimal server-model parameter \(\phi^{*}\) for the final optimization objective (i.e., \(\max_{\phi}I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})\)) can be obtained as follows: \[\phi^{*}=\operatorname*{argmax}_{\phi}I(\mathbf{H}_{\phi};\mathbf{H}_{\psi})= \operatorname*{argmin}_{\phi}\mathcal{L}_{MI}. \tag{12}\] Here \(\mathcal{L}_{MI}\) is defined as follows: \[\mathcal{L}_{MI} =\frac{1}{D}\sum_{i=1}^{D}\sum_{j=1}^{D}p(\mathbf{h}_{\phi}^{i}| \mathbf{h}_{\psi}^{j};\phi)\log(p(\mathbf{h}_{\psi}^{j}|\mathbf{h}_{\phi}^{i}; \phi))\] \[=\frac{1}{D}\sum_{j=1}^{D}\sum_{i\in C(j)}\frac{\mathbf{p}_{\psi} ^{i}(y_{j})\left(\mathbf{h}_{\phi}^{i}\cdot\mathbf{h}_{\psi}^{j}\right)}{\sum_ {k\in C(j)}\mathbf{p}_{\psi}^{k}(y_{j})}\] \[+\frac{\mathbf{p}_{\psi}^{i}(y_{j})}{\sum_{k\in C(j)}\mathbf{p}_{ \psi}^{k}(y_{j})}\log\left(\sum_{k\in C(i)}\exp\left(\mathbf{h}_{\phi}^{i} \cdot\mathbf{h}_{\psi}^{k}\right)\right), \tag{13}\] where we exchange the order of summation over \(i\) and \(j\) for clarity. It is noteworthy that \(\mathcal{L}_{MI}\) is different from the InfoNCE loss (Han et al., 2017; Wang et al., 2018), which considers different augmentations of samples, while \(\mathcal{L}_{MI}\) focuses on the classes of samples in \(\mathcal{S}\). Moreover, \(\mathcal{L}_{MI}\) also differs from the supervised contrastive loss (Wang et al., 2019), which combines various augmentations of samples and label information. In contrast, our loss targets at transferring the meta-knowledge by maximally preserving the mutual information between representations learned by the server-model and the client-model. More differently, the term \(\mathbf{p}_{\psi}^{i}(y_{j})/\sum_{k\in C(j)}\mathbf{p}_{\psi}^{k}(y_{j})\) acts as an adjustable weight that measures the importance of a sample to its class. Combining the objective Figure 3. An illustration of the overall process of our framework \(\mathbf{F}^{2}\)L. Specifically, each client receives the server-model from the server at the beginning of each round. To perform one step of local update, each client first samples a meta-task (2-way 2-shot in the illustration), which consists of a support set and a query set, from the local data. Then the server-model and the client-model will both compute output for the support samples and query samples. After that, the server-model and the client-model are updated via mutual information maximization and knowledge distillation, respectively. 
Finally, the server-model is sent back to the server for aggregation, while the client-model is locally preserved by each client.

Combining the objective described in Eq. (13) and the standard cross-entropy loss, we can obtain the final loss for the server-model: \[\mathcal{L}_{\phi}=(1-\lambda_{MI})\mathcal{L}_{CE}(\mathcal{S})+\lambda_{MI}\mathcal{L}_{MI}, \tag{14}\] where \(\mathcal{L}_{CE}(\mathcal{S})\) is defined as follows: \[\mathcal{L}_{CE}(\mathcal{S})=-\frac{1}{D}\sum_{i=1}^{D}\sum_{j=1}^{|\mathcal{C}_{b}|}y_{c_{j}}^{i}\log\mathbf{p}_{\phi}^{i}(c_{j}), \tag{15}\] where \(\mathbf{p}_{\phi}^{i}(c_{j})\in\mathbb{R}\) denotes the classification probability for the \(i\)-th support sample belonging to the \(j\)-th class \(c_{j}\) in \(\mathcal{C}_{b}\), computed by the server-model. Here \(y_{c_{j}}^{i}=1\) if the \(i\)-th support sample belongs to \(c_{j}\), and \(y_{c_{j}}^{i}=0\) otherwise. Moreover, \(\lambda_{MI}\in[0,1]\) is an adjustable hyper-parameter to control the weight of \(\mathcal{L}_{MI}\).
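For concreteness, Eq. (13) can be computed as in the following numpy sketch (our own illustration; the names and shapes are assumptions, and the actual training code is in the authors' repository):

```python
import numpy as np

def l_mi(H_phi, H_psi, P_psi, y):
    """Numpy sketch of the transfer loss L_MI in Eq. (13).
    H_phi, H_psi: (D, k) l2-normalised support representations from the
    server- and client-model; P_psi: (D, N) client-model class probabilities;
    y: (D,) integer array with the class index of each support sample."""
    D = len(y)
    S = H_phi @ H_psi.T                  # pairwise products h_phi^i . h_psi^j
    loss = 0.0
    for j in range(D):
        C = np.where(y == y[j])[0]       # C(j): indices sharing the j-th class
        w = P_psi[C, y[j]] / P_psi[C, y[j]].sum()   # Eq. (9) weights over C(j)
        for w_i, i in zip(w, C):
            # -log p(h_psi^j | h_phi^i) after l2 normalisation (Eq. (10))
            loss += w_i * (-S[i, j] + np.log(np.exp(S[i, C]).sum()))
    return loss / D
```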
### Global-to-Local Knowledge Distillation

With the learned meta-knowledge in each client transferred from the client-model to the server-model, other clients can leverage such meta-knowledge to deal with the local data insufficiency issue. However, since each meta-task only contains \(N\) classes, directly extracting meta-knowledge from the server-model can inevitably involve meta-knowledge from other classes, which can be harmful to the learning of local meta-knowledge for these \(N\) classes in each client. Instead, we propose a partial knowledge distillation strategy to selectively extract useful knowledge from the server-model, i.e., global-to-local knowledge distillation.

#### 3.3.1. Partial Knowledge Distillation

Specifically, we focus on the output classification probabilities of the server-model regarding the \(N\) classes in the support set \(\mathcal{S}\), while ignoring the other classes. In this regard, we can extract the information that is crucial for learning local meta-knowledge from these \(N\) classes and also reduce the irrelevant information from other classes. Particularly, we consider the same meta-task \(\mathcal{T}=\{\mathcal{S},\mathcal{Q}\}\). We denote the output probabilities for the \(i\)-th query sample \(q_{i}\) in \(\mathcal{Q}\) (with label \(y_{i}\)) of the server-model and the client-model as \(\mathbf{p}_{\phi}^{i}\in\mathbb{R}^{|\mathcal{C}_{b}|}\) and \(\mathbf{p}_{\psi}^{i}\in\mathbb{R}^{N}\), respectively. It is noteworthy that the \(N\) classes in this meta-task, denoted as \(\mathcal{C}_{m}\), are sampled from the base classes \(\mathcal{C}_{b}\) (i.e., \(|\mathcal{C}_{m}|=N\) and \(\mathcal{C}_{m}\subset\mathcal{C}_{b}\)). Therefore, the output of the server-model (i.e., \(\mathbf{p}_{\phi}^{i}\)) will include the probabilities of the classes in \(\mathcal{C}_{m}\). In particular, we enforce the probabilities of the classes in \(\mathcal{C}_{m}\) from the client-model to be consistent with the probabilities of the same classes from the server-model. As a result, the learning of local meta-knowledge can leverage the information of data in the same \(N\) classes from other clients, which is encoded in the server-model. In this regard, we can handle the local data insufficiency issue by involving information from other clients while reducing the irrelevant information from other classes not in \(\mathcal{C}_{m}\).

In particular, by utilizing the output of the server-model as the soft target for the client-model, we can achieve an objective as follows: \[\mathcal{L}_{KD}=-\frac{1}{Q}\sum_{i=1}^{Q}\sum_{j=1}^{N}\mathbf{q}_{\phi}^{i}(c_{j})\log\mathbf{q}_{\psi}^{i}(c_{j}), \tag{16}\] where \(c_{j}\) is the \(j\)-th class in \(\mathcal{C}_{m}\) (i.e., the \(N\) classes in meta-task \(\mathcal{T}\)), and \(\mathbf{q}_{\phi}^{i}(c_{j})\) and \(\mathbf{q}_{\psi}^{i}(c_{j})\) are the knowledge distillation values for \(c_{j}\) from the server-model and the client-model, respectively. Specifically, the values of \(\mathbf{q}_{\phi}^{i}(c_{j})\) and \(\mathbf{q}_{\psi}^{i}(c_{j})\) are obtained via the softmax normalization: \[\mathbf{q}_{\phi}^{i}(c_{j})=\frac{\exp(\mathbf{z}_{\phi}^{i}(c_{j})/T_{i})}{\sum_{k=1}^{N}\exp(\mathbf{z}_{\phi}^{i}(c_{k})/T_{i})}, \tag{17}\] \[\mathbf{q}_{\psi}^{i}(c_{j})=\frac{\exp(\mathbf{z}_{\psi}^{i}(c_{j})/T_{i})}{\sum_{k=1}^{N}\exp(\mathbf{z}_{\psi}^{i}(c_{k})/T_{i})}, \tag{18}\] where \(\mathbf{z}_{\phi}^{i}(c_{j})\) and \(\mathbf{z}_{\psi}^{i}(c_{j})\) are the logits (i.e., output before softmax normalization) of class \(c_{j}\) from the server-model and the client-model, respectively, and \(T_{i}\) is the temperature parameter for the \(i\)-th query sample. In this way, we can ensure that \(\sum_{j=1}^{N}\mathbf{q}_{\phi}^{i}(c_{j})=\sum_{j=1}^{N}\mathbf{q}_{\psi}^{i}(c_{j})=1\).

#### 3.3.2. Adaptive Temperature Parameter

Generally, a larger value of \(T_{i}\) denotes that the client-model focuses more on extracting information from the other classes in \(\mathcal{C}_{m}\) (Gordner, 2017) (i.e., \(\{c|c\in\mathcal{C}_{m},c\neq y_{i}\}\)), denoted as negative classes. Since the classification results of the server-model can be erroneous, we should adaptively adjust the value of \(T_{i}\) for each meta-task to reduce the adverse impact of extracting misleading information from the server-model. Although negative classes can carry useful information for classification, such information is generally noisier when the output probabilities of these negative classes are smaller. Therefore, to estimate the importance degree of each negative class, we consider the maximum output logit over the negative classes to reduce potential noise. Particularly, if the probability of a negative class from the server-model is significantly larger than that of the other classes, we can conjecture that this class is similar to \(y_{i}\) and thus potentially contains crucial information to distinguish them. Specifically, the temperature parameter \(T_{i}\) for the \(i\)-th query sample \(q_{i}\) is computed as follows: \[T_{i}=\sigma\left(\frac{\max_{c\in\mathcal{C}_{m}\setminus\{y_{i}\}}\exp(\mathbf{z}_{\phi}^{i}(c))}{\exp(\mathbf{z}_{\phi}^{i}(y_{i}))}\right), \tag{19}\] where \(\sigma(\cdot)\) denotes the Sigmoid function, and \(y_{i}\) is the label of \(q_{i}\). In this way, the temperature parameter \(T_{i}\) will increase when the ratio between the largest probability among the negative classes and the probability for \(y_{i}\) is larger. As a result, the client-model will focus more on the negative class information.
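A per-sample numpy sketch of Eqs. (16)-(19) (our own illustration; the logits and names are placeholders, and we assume the server-model logits have already been restricted to the \(N\) task classes \(\mathcal{C}_{m}\)) reads:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def partial_kd(z_server, z_client, y):
    """Partial KD loss for one query sample: z_server and z_client are
    logits over the N task classes C_m; y is the true class index."""
    # Eq. (19): adaptive temperature from the strongest negative class.
    ratio = np.exp(np.delete(z_server - z_server[y], y)).max()
    T = 1.0 / (1.0 + np.exp(-ratio))               # sigmoid
    # Eqs. (17)-(18): temperature-scaled soft targets and predictions.
    q_server, q_client = softmax(z_server / T), softmax(z_client / T)
    # Eq. (16): cross-entropy against the server-model's soft target.
    return -np.sum(q_server * np.log(q_client + 1e-12))

print(partial_kd(np.array([2.0, 0.5, -1.0]), np.array([1.2, 0.3, 0.0]), y=0))
```

Averaging this quantity over the query set and weighting it with \(\lambda_{KD}\) as in Eq. (20) below yields the client-model objective.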
Then by further incorporating the cross-entropy loss on the query set \(\mathcal{Q}\), we can obtain the final loss for the client-model: \[\mathcal{L}_{\psi}=(1-\lambda_{KD})\mathcal{L}_{CE}(\mathcal{Q})+\lambda_{KD} \mathcal{L}_{KD}, \tag{20}\] where \(\mathcal{L}_{CE}(\mathcal{Q})\) is defined as follows: \[\mathcal{L}_{CE}(\mathcal{Q})=-\frac{1}{Q}\sum_{i=1}^{Q}\sum_{j=1}^{N}y_{c_{j}}^ {i}\log\mathbf{p}_{\psi}^{i}(c_{j}), \tag{21}\] where \(\mathbf{p}_{\psi}^{i}(c_{j})\) is the probability of the \(i\)-th query sample belonging to class \(c_{j}\) computed by the client-model. \(y_{c_{j}}^{i}=1\) if the \(i\)-th query sample belongs to \(c_{j}\), and \(y_{c_{j}}^{i}=0\) otherwise. Moreover, \(\lambda_{KD}\in[0,1]\) is an adjustable hyper-parameter that controls the weight of \(\mathcal{L}_{KD}\). In this manner, the client-model can selectively learn useful knowledge from both the local and global perspectives, i.e., global-to-local knowledge distillation. ### Overall Learning Process With the proposed losses \(\mathcal{L}_{\phi}\) and \(\mathcal{L}_{\psi}\), in each round, we can conduct meta-training on each client \(\mathbb{C}^{(i)}\) by sampling \(\tau\) meta-training tasks from the local base dataset \(\mathcal{D}_{b}^{(i)}\). The detailed process is described in Algorithm 1. After \(T\) rounds of meta-training on all the clients, we obtain a model that accommodates comprehensive meta-knowledge for federated few-shot learning. For the meta-test phase, since we have aggregated the learned local meta-knowledge from each client into the server-model, we can leverage the server-model to generate data representations for classification. Specifically, during evaluation, for each meta-test task \(\mathcal{T}=\{\mathcal{S},\mathcal{Q}\}\) sampled from the local novel datasets \(\{\mathcal{D}_{n}^{(i)}\}_{i=1}^{I}\) in all clients, we follow the same process as meta-training, including fine-tuning, except that the meta-update process is omitted. The output of the client-model will be used for classification. ## 4. Experiments In this part, we conduct extensive experiments to evaluate our framework F\({}^{2}\)L on four few-shot classification datasets covering both news articles and images under the federated scenario. ### Datasets In this section, we introduce the four prevalent real-world datasets used in our experiments, covering both news articles and images: **20 Newsgroup**(Kang et al., 2019), **Huffpost**(Kang et al., 2019; Wang et al., 2020), **FC100**(Wang et al., 2020), and **miniImageNet**(Wang et al., 2020). In particular, 20 Newsgroup and Huffpost are online news article datasets, while FC100 and miniImageNet are image datasets. The details are as follows: * **20 Newsgroup**(Kang et al., 2019) is a text dataset that consists of informal discourse from news discussion forums. There are 20 classes for documents in this dataset, where each class belongs to one of six top-level categories. The classes are split as 8/5/7 for training/validation/test, respectively. * **Huffpost**(Kang et al., 2019; Wang et al., 2020) is a text dataset containing news headlines published on HuffPost2 between 2012 and 2018. Generally, the headlines are significantly shorter and less grammatical than those in the 20 Newsgroup dataset. Moreover, each headline belongs to one of 41 classes, which are split as 20/5/16 for training/validation/test, respectively.
Footnote 2: [https://www.huffpost.com/](https://www.huffpost.com/) * **FC100**(Wang et al., 2020) is an image classification dataset based on CIFAR-100 (Kang et al., 2019). Specifically, this dataset contains 100 image classes, where each class maintains 600 images with a low \(32\times 32\) resolution. The classes are split as 60/20/20 for training/validation/test, respectively. * **miniImageNet**(Wang et al., 2020) is an image dataset extracted from the full ImageNet dataset (Kang et al., 2019). This dataset consists of 100 image classes, and each class maintains 600 images with a resolution of \(84\times 84\). The classes are split as 64/16/20 for training/validation/test, respectively. ### Experimental Settings To validate the performance of our framework F\({}^{2}\)L, we conduct experiments with the following baselines for a fair comparison: * _Local._ This baseline is non-distributed, which means we train an individual model for each client on the local data. The meta-test process is conducted on all meta-test tasks, and the averaged results of all models are reported. * _FL-MAML._ This baseline leverages the MAML (Krizhevsky et al., 2014) strategy to perform meta-learning on each client. The updated model parameters will be sent back to the server for aggregation. * _FL-Proto._ This baseline uses ProtoNet (Wang et al., 2020) as the model in each client. The classification is based on the Euclidean distances between query samples and support samples. * _FedFSL_(Wang et al., 2020). This method combines MAML and an adversarial learning strategy (Krizhevsky et al., 2014; Wang et al., 2020) to construct a consistent feature space. The aggregation is based on FedAvg (Wang et al., 2020). During meta-training, we perform updates for the client-model and the server-model according to Algorithm 1. Finally, the server-model that achieves the best result on validation will be used for meta-test. Then during meta-test, we evaluate the server-model on a series of 100 randomly sampled meta-test tasks from the local novel datasets \(\{\mathcal{D}_{n}^{(i)}\}_{i=1}^{I}\) in all clients. For consistency, the class split of \(\mathcal{C}_{b}\) and \(\mathcal{C}_{n}\) is identical for all baseline methods. The classification accuracy over these meta-test tasks will be averaged as the final results. The specific parameter settings are provided in Appendix C.3. For the specific choices of the encoder and classifier in the server-model and client-model (i.e., \(q_{\phi}\), \(f_{\phi}\), \(q_{\psi}\), and \(f_{\psi}\)) and the model parameters, we provide further details in Appendix C.1. Note that for a fair comparison, we utilize the same encoder for all methods. ### Overall Evaluation Results We present the overall performance comparison of our framework and the baselines on federated few-shot learning in Table 1. Specifically, we conduct experiments under two few-shot settings: 5-way 1-shot and 5-way 5-shot. Moreover, to demonstrate the robustness of our framework under different data distributions, we partition the data in both IID and non-IID settings. For the IID partition, the samples of each class are uniformly distributed to all clients. For the non-IID partition, we follow the prevailing strategy (Zhu et al., 2019; Wang et al., 2020) and distribute samples to all clients based on the Dirichlet distribution with its concentration parameter set as 1.0. The evaluation metric is the average classification accuracy over ten repetitions.
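For concreteness, a minimal NumPy sketch of this Dirichlet label-skew partition is given below; the function name and defaults are ours, chosen to match the concentration parameter of 1.0 used above.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=1.0, seed=0):
    """Non-IID label-skew split: for every class, draw client proportions
    from Dir(alpha) and hand out that class's samples accordingly.
    Larger alpha approaches the uniform (IID) partition."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, chunk in enumerate(np.split(idx, cut_points)):
            client_indices[client_id].extend(chunk.tolist())
    return [np.array(ix) for ix in client_indices]
```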
From the overall results, we can obtain the following observations: * Our framework F\({}^{2}\)L outperforms all other baselines on various news article and image datasets under different few-shot settings (1-shot and 5-shot) and data distributions (IID and non-IID). The results validate the effectiveness of our framework on federated few-shot learning. * Conventional few-shot methods such as Prototypical Network (Wang et al., 2020) and MAML (Krizhevsky et al., 2014) exhibit similar performance compared with the Local baseline. The result demonstrates that directly applying few-shot methods to federated learning brings less competitive improvements over local training. This is because such methods are not proposed for federated learning and thus lead to unsatisfactory training performance under the federated setting. * The performance of all methods degrades to different extents when the data distribution is changed from IID to non-IID. The main reason is that the variety of classes in each client results in a more complex class distribution and brings difficulties to the classification task. Nevertheless, by effectively transferring the meta-knowledge among clients, our framework is capable of alleviating such a problem under the non-IID scenario. * When increasing the value of \(K\) (i.e., more support samples in each class), all methods achieve considerable performance gains. In particular, our framework \(\mathrm{F}^{2}\mathrm{L}\) obtains better results compared to other baselines, due to our decoupled meta-learning framework, which promotes the learning of meta-knowledge from the support samples. ### Ablation Study In this part, we conduct an ablation study on FC100 and Huffpost to validate the effectiveness of three crucial designs in \(\mathrm{F}^{2}\mathrm{L}\) (similar results are observed on the other datasets). First, we remove the decoupled strategy so that the client-model will also be sent to the server for aggregation. We refer to this variant as \(\mathrm{F}^{2}L|M\). Second, we remove the local-to-global knowledge transfer module, so that the meta-knowledge in the client-model cannot be effectively transferred to the server-model. This variant is referred to as \(\mathrm{F}^{2}L|T\). Third, we eliminate the global-to-local knowledge distillation loss. In this way, the client-model cannot leverage the global knowledge in the server-model for learning meta-knowledge. We refer to this variant as \(\mathrm{F}^{2}L|A\). The overall ablation study results are presented in Fig. 4. From the results, we observe that \(\mathrm{F}^{2}\mathrm{L}\) outperforms all variants, which verifies the effectiveness of the three designs in \(\mathrm{F}^{2}\mathrm{L}\). Specifically, removing the design of local-to-global knowledge transfer leads to significant performance degradation. This result demonstrates that such a design can effectively aggregate learned meta-knowledge among clients and thus bring performance improvements. More significantly, without our decoupled strategy, the performance deteriorates rapidly when federated few-shot learning is conducted in the non-IID scenario. This phenomenon verifies the importance of mitigating the disruption from the server in the presence of complex data distributions among clients. ### Parameter Sensitivity Study #### 4.5.1. Effect of \(\lambda_{MI}\) and \(\lambda_{KD}\) In this section, we further conduct experiments to study the sensitivity of several parameters in our framework \(\mathrm{F}^{2}\mathrm{L}\).
During the process of transferring and distilling meta-knowledge, we introduce two novel losses, \(\mathcal{L}_{MI}\) and \(\mathcal{L}_{KD}\), respectively, along with the traditional cross-entropy loss. To empirically evaluate the impact brought by different values of \(\lambda_{MI}\) and \(\lambda_{KD}\) in Eq. (14) and Eq. (20), we adjust the values of \(\lambda_{MI}\) and \(\lambda_{KD}\) from 0 to 1 and present the results in Fig. 5. \begin{table} \begin{tabular}{c|c|c|c|c||c|c|c|c} \hline Dataset & \multicolumn{4}{c||}{20 Newsgroup} & \multicolumn{4}{c}{Huffpost} \\ \hline Distribution & \multicolumn{2}{c|}{IID} & \multicolumn{2}{c||}{Non-IID} & \multicolumn{2}{c|}{IID} & \multicolumn{2}{c}{Non-IID} \\ \hline Setting & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline \hline Local & \(31.53\pm 1.68\) & \(42.73\pm 1.51\) & \(29.64\pm 1.81\) & \(41.01\pm 2.40\) & \(34.02\pm 1.67\) & \(49.95\pm 1.54\) & \(33.09\pm 2.28\) & \(47.18\pm 1.43\) \\ \hline FL-MAML & \(32.89\pm 1.86\) & \(44.34\pm 1.66\) & \(31.60\pm 1.44\) & \(43.84\pm 1.97\) & \(37.47\pm 1.43\) & \(52.85\pm 1.43\) & \(36.01\pm 2.17\) & \(50.56\pm 2.08\) \\ \hline FL-Proto & \(35.62\pm 2.07\) & \(46.04\pm 1.92\) & \(32.79\pm 1.41\) & \(43.82\pm 1.85\) & \(37.87\pm 1.23\) & \(51.90\pm 1.43\) & \(34.05\pm 1.35\) & \(50.52\pm 1.33\) \\ \hline FedFSL & \(36.56\pm 1.41\) & \(46.37\pm 1.82\) & \(35.84\pm 1.49\) & \(45.89\pm 1.72\) & \(39.18\pm 1.42\) & \(53.81\pm 1.36\) & \(37.86\pm 1.46\) & \(52.18\pm 1.82\) \\ \hline \(\mathrm{F}^{2}\mathrm{L}\) & \(\mathbf{39.80\pm 1.80}\) & \(\mathbf{49.64\pm 1.32}\) & \(\mathbf{39.00\pm 1.36}\) & \(\mathbf{49.44\pm 1.98}\) & \(\mathbf{42.12\pm 2.12}\) & \(\mathbf{57.88\pm 2.17}\) & \(\mathbf{41.64\pm 1.81}\) & \(\mathbf{57.12\pm 1.87}\) \\ \hline \end{tabular} \begin{tabular}{c|c|c|c|c||c|c|c|c} \hline Dataset & \multicolumn{4}{c||}{FC100} & \multicolumn{4}{c}{miniImageNet} \\ \hline Distribution & \multicolumn{2}{c|}{IID} & \multicolumn{2}{c||}{Non-IID} & \multicolumn{2}{c|}{IID} & \multicolumn{2}{c}{Non-IID} \\ \hline Setting & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline \hline Local & \(33.45\pm 1.68\) & \(50.89\pm 1.56\) & \(32.40\pm 1.76\) & \(50.29\pm 2.24\) & \(47.82\pm 1.68\) & \(64.30\pm 1.59\) & \(46.81\pm 2.03\) & \(64.06\pm 1.45\) \\ \hline FL-MAML & \(34.10\pm 1.29\) & \(50.66\pm 1.68\) & \(36.06\pm 1.78\) & \(50.35\pm 1.57\) & \(49.74\pm 1.40\) & \(65.55\pm 1.57\) & \(47.64\pm 1.36\) & \(63.56\pm 1.13\) \\ \hline FL-Proto & \(36.11\pm 1.49\) & \(54.74\pm 2.05\) & \(35.54\pm 1.71\) & \(52.31\pm 1.76\) & \(51.32\pm 1.41\) & \(66.67\pm 2.06\) & \(50.82\pm 1.82\) & \(65.09\pm 1.90\) \\ \hline FedFSL & \(39.38\pm 1.95\) & \(52.25\pm 1.84\) & \(38.60\pm 2.00\) & \(53.90\pm 1.80\) & \(55.75\pm 2.06\) & \(70.59\pm 1.97\) & \(53.52\pm 2.01\) & \(69.56\pm 1.86\) \\ \hline \(\mathrm{F}^{2}\mathrm{L}\) & \(\mathbf{42.52\pm 2.06}\) & \(\mathbf{58.60\pm 2.09}\) & \(\mathbf{42.56\pm 2.25}\) & \(\mathbf{59.52\pm 2.14}\) & \(\mathbf{56.72\pm 1.79}\) & \(\mathbf{74.23\pm 2.32}\) & \(\mathbf{56.16\pm 2.05}\) & \(\mathbf{73.24\pm 2.02}\) \\ \hline \end{tabular} \end{table} Table 1. The overall federated few-shot learning results of various models on four datasets under IID and Non-IID settings (5-way), where accuracy and standard deviation are reported in %. The best results are presented in bold. Figure 4.
Ablation study of our framework on FC100 and Huffpost. I-\(K\) (or N-\(K\)) denotes the setting of 5-way \(K\)-shot under IID (or non-IID) distributions. M denotes the decoupled framework, T means the local-to-global knowledge transfer, and A denotes the global-to-local knowledge distillation. From the results, we can observe that the performance generally increases with a larger value of \(\lambda_{MI}\), while decreasing as \(\lambda_{MI}\) approaches 1. The results indicate the importance of transferring learned local meta-knowledge, while also demonstrating that the cross-entropy loss is necessary. On the other hand, the performance first increases and then degrades when a larger value of \(\lambda_{KD}\) is used. That being said, although partial knowledge distillation can enable each client to benefit from the global data, a larger \(\lambda_{KD}\) can potentially lead to more irrelevant information when learning local meta-knowledge. #### 4.5.2. Effect of Client Number In this section, we study the robustness of our framework under the scenario with a varying number of clients. In particular, we keep the total training data unchanged, which means that with more clients participating in the training process, each client preserves fewer training samples. As a result, the training performance will inevitably be reduced. Specifically, we partition the total training data into \(I=1,2,5,10,20\), and \(50\) clients. Note that \(I=1\) denotes the setting of completely centralized training. The results on FC100 with 1-shot and 5-shot settings are presented in Fig. 6 (we have similar results for other datasets and omit them for brevity). From the results, we can observe that all methods encounter a performance drop in the presence of more clients. Nevertheless, our framework \(\mathrm{F}^{2}\mathrm{L}\) can reduce the adverse impact brought by more clients through effectively leveraging the global knowledge learned from all clients. Consequently, the performance degradation is less significant for \(\mathrm{F}^{2}\mathrm{L}\). ## 5. Related Work ### Few-shot Learning The objective of Few-shot Learning (FSL) is to learn transferable meta-knowledge from tasks with abundant information and generalize such knowledge to novel tasks that consist of only limited labeled samples (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Li et al., 2017). Existing few-shot learning works can be divided into two categories: _metric-based_ methods and _optimization-based_ methods. The metric-based methods aim to learn generalizable metric functions to classify query samples by matching them with support samples (Zhu et al., 2017; Li et al., 2017; Li et al., 2018). For instance, Prototypical Networks (Zhu et al., 2017) learn a prototype representation for each class and conduct predictions based on the Euclidean distances between query samples and the prototypes (sketched below). Relation Networks (Zhu et al., 2017) learn relation scores for classification in a non-linear manner. On the other hand, optimization-based approaches generally optimize model parameters based on the gradients calculated from few-shot samples (Zhu et al., 2017; Li et al., 2017; Li et al., 2018; Li et al., 2018). As an example, MAML (Li et al., 2018) proposes to optimize model parameters based on gradients on support samples to achieve fast generalization. In addition, the LSTM-based meta-learner (Li et al., 2018) adjusts the step size to adaptively update parameters during meta-training.
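For illustration, the prototype rule used by Prototypical Networks (and by the FL-Proto baseline) can be written in a few lines. This is a generic PyTorch sketch with names of our choosing, not code from any of the cited works.

```python
import torch

def protonet_logits(support_emb, support_labels, query_emb, n_way):
    """Prototypical-network classification, sketched: each class prototype
    is the mean embedding of its support samples, and a query is scored by
    the negative squared Euclidean distance to every prototype."""
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                                  # (n_way, d)
    return -torch.cdist(query_emb, prototypes) ** 2    # (n_query, n_way)
```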
### Federated Learning Federated Learning (FL) enables multiple clients to collaboratively train a model without exchanging the local data explicitly (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2018). As a classic example, FedAvg (Zhu et al., 2017) performs stochastic gradient descent (SGD) on each client to update model parameters and sends them to the server. The server averages the received model parameters to achieve a global model for the next round. FedProx (Li et al., 2018) incorporates a proximal term into the local update of each client to reduce the distance between the global model and the local model. To deal with the non-IID problem in FL, recent works also focus on personalization in FL (Krizhevsky et al., 2014; Li et al., 2018; Li et al., 2018). For instance, FedMeta (Li et al., 2018) incorporates MAML (Li et al., 2018) into the local update process in each client for personalization. FedRep (Li et al., 2018) learns shared representations among clients. Moreover, FedFSL (Li et al., 2018) proposes to combine MAML and an adversarial learning strategy (Krizhevsky et al., 2016; Li et al., 2017) to learn a consistent feature space. ## 6. Conclusion In this paper, we study the problem of federated few-shot learning, which aims at learning a federated model that can achieve satisfactory performance on new tasks with limited labeled samples. Nevertheless, it remains difficult to perform federated few-shot learning due to two challenges: global data variance and local data insufficiency. To tackle these challenges, we propose a novel federated few-shot learning framework \(\mathrm{F}^{2}\mathrm{L}\). In particular, we handle global data variance by decoupling the learning of local meta-knowledge. Then we leverage the global knowledge that is learned from all clients to tackle the local data insufficiency issue. We conduct extensive experiments on four prevalent few-shot learning datasets under the federated setting, covering both news articles and images. The experimental results further validate the superiority of our framework \(\mathrm{F}^{2}\mathrm{L}\) over other state-of-the-art baselines. ## 7. Acknowledgements The work in this paper is supported by the National Science Foundation under grants (IIS-2006844, IIS-2144209, IIS-2223769, CNS-2154962, and BCS-2228534), the Commonwealth Cyber Initiative awards (VV-1Q23-007 and HV-2Q23-003), the JP Morgan Chase Faculty Research Award, the Cisco Faculty Research Award, the Jefferson Lab subcontract 23-D0163, and the UVA 4-VA collaborative research grant. Figure 5. The results with different values of \(\lambda_{MI}\) and \(\lambda_{KD}\) on Huffpost under the non-IID setting. Figure 6. The results of non-IID federated 1-shot and 5-shot learning on FC100 regarding the number of clients.
2305.09235
Synthetic data, real errors: how (not) to publish and use synthetic data
Generating synthetic data through generative models is gaining interest in the ML community and beyond, promising a future where datasets can be tailored to individual needs. Unfortunately, synthetic data is usually not perfect, resulting in potential errors in downstream tasks. In this work we explore how the generative process affects the downstream ML task. We show that the naive synthetic data approach -- using synthetic data as if it is real -- leads to downstream models and analyses that do not generalize well to real data. As a first step towards better ML in the synthetic data regime, we introduce Deep Generative Ensemble (DGE) -- a framework inspired by Deep Ensembles that aims to implicitly approximate the posterior distribution over the generative process model parameters. DGE improves downstream model training, evaluation, and uncertainty quantification, vastly outperforming the naive approach on average. The largest improvements are achieved for minority classes and low-density regions of the original data, for which the generative uncertainty is largest.
Boris van Breugel, Zhaozhi Qian, Mihaela van der Schaar
2023-05-16T07:30:29Z
http://arxiv.org/abs/2305.09235v2
# Synthetic Data, Real Errors: How (Not) to Publish and Use Synthetic Data ###### Abstract Generating synthetic data through generative models is gaining interest in the ML community and beyond, promising a future where datasets can be tailored to individual needs. Unfortunately, synthetic data is usually not perfect, resulting in potential errors in downstream tasks. In this work we explore how the generative process affects the downstream ML task. We show that the naive synthetic data approach--using synthetic data as if it is real--leads to downstream models and analyses that do not generalize well to real data. As a first step towards better ML in the synthetic data regime, we introduce Deep Generative Ensemble (DGE)--a framework inspired by Deep Ensembles that aims to implicitly approximate the posterior distribution over the generative process model parameters. DGE improves downstream model training, evaluation, and uncertainty quantification, vastly outperforming the naive approach on average. The largest improvements are achieved for minority classes and low-density regions of the original data, for which the generative uncertainty is largest. Machine Learning, Synthetic Data, Real Errors ## 1 Introduction Data is the foundation of most science. Recent advances in deep generative modelling have seen a steep rise in methods that aim to replace real data with synthetic data. The general idea is that synthetic data resembles the real data, while guaranteeing privacy (Ho et al., 2021; Yoon et al., 2020; Jordon et al., 2019; van Breugel et al., 2023), improving fairness (Xu et al., 2018, 2019; van Breugel et al., 2021), augmenting the dataset size (Antoniou et al., 2017; Dina et al., 2022; Das et al., 2022; Bing et al., 2022), or simulating distributional shifts (Yoon et al., 2018). Often the aim is to be able to use the synthetic data in place of the real data for some downstream task, e.g. statistical analyses or training an ML supervised model. The hope is that downstream results are equally valid in the real world--e.g. a prediction model trained on synthetic data will do well on real data. Evidently, whether this is true will rely entirely on how well the synthetic data describes the real data. This brings us to the focus of this work: how do we do good ML on synthetic data, given that the generative process underlying the synthetic data is not perfect. If we are the data publisher, how should we create and publish synthetic data for it to be most useful to downstream users? And if we are the downstream user, can we create models that are more robust to potential errors, evaluate models reliably using synthetic data, and how do we estimate uncertainty for the downstream results? If we envision a future where synthetic data plays a significant role in research, these are pertinent questions. Let us first highlight why this is an important topic in practice. First, synthetic data is not perfect. Deep generative models may fail in numerous ways, with consequences such as mode collapse, noisy data, memorisation of training data, and poor coverage (van Breugel and van der Schaar, 2023). Figure 1: Synthetic data is not perfect, which affects downstream ML tasks, e.g. training a prediction model. The naive synthetic data approach generates one synthetic dataset and treats it like it is real. We propose using an ensemble of generative models for capturing the generative uncertainty, implicitly encoded into different synthetic data distributions.
Second, in addition to synthetic data being imperfect, even just quantifying the quality of generative models is hard, because this requires comparing distributions as a whole--a notoriously difficult task (Alaa et al., 2022). These two points make it hard to guarantee the data is 'good' or 'bad'. Third, even if we were able to measure the quality of synthetic data accurately, in general it is not at all trivial how we would use this information for estimating the influence of the generative process on the downstream result--e.g. when training a neural network, the influence of individual training samples is highly complex. Since the downstream user usually has no access to real data (see Figure 1), they cannot verify results on real data. Let us give a real example of what could go wrong when using synthetic data. _Example._ We take the SEER prostate cancer dataset and generate synthetic data using CTGAN with different numbers of hidden layers. Subsequently, we use a train-test split on the synthetic data and train a random forest for classification. We compare the train-on-synthetic-test-on-synthetic (TSTS) accuracy and the train-on-synthetic-test-on-real (TSTR) accuracy, the latter measured on a real hold-out test set (Jordon et al., 2021). Figure 2 displays the results over 10 runs. We see that the real performance of the downstream models is comparable across different generative model depths--an indicator that the utility of the synthetic data is similar. On the other hand, the TSTS performance indicates a large preference for the data generated using the deeper generator, with the corresponding TSTS vastly overestimating the TSTR. Also note the TSTS estimates have an undesirably high variance, due to the added randomness in the generative process. **Contributions.** Through this work we explore how--and how _not_--to perform basic ML tasks when using synthetic data. The contributions are as follows. 1. We investigate how the standard, naive way of using synthetic data--using it like it is real data--yields poor downstream models, poor downstream model evaluation, and poor uncertainty quantification, due to it ignoring errors in the generation process _itself_. 2. We introduce Deep Generative Ensemble (DGE) as a simple synthetic data framework for alleviating these concerns through generating multiple synthetic datasets. An advantage of DGE is its flexibility and compatibility with different deep generative base models (e.g. VAE, GAN, diffusion models). 3. We investigate why and when DGE provides better downstream models, evaluation, selection, and better downstream uncertainty quantification. 4. Furthermore, we explore how DGE improves upon the naive approach especially in low-density regions. This is important, since these regions may correspond to underrepresented groups in the population. Section 3 is mostly targeted at data publishers, focusing on synthetic data issues and how DGE aims to fix these. Section 4 explores experimentally how the naive approach fails, and describes how DGE-generated synthetic data can be used by data users for better downstream ML. In Section 5 we highlight takeaways for both groups. ## 2 Related Work Generative ensembles and dropout. Deep generative models--notoriously GANs (Goodfellow et al., 2014)--often lack diversity in their generated samples, i.e. a GAN that aims to generate patient hospital records may produce a lot of old people but hardly any young people.
A large body of work (Tolstikhin et al., 2017; Grover and Ermon, 2017; Ghosh et al., 2017; Hoang et al., 2018) aims to fix diversity issues by ensembling GANs. Some of these approaches use boosting (Tolstikhin et al., 2017; Grover and Ermon, 2017), others use multiple generators (Ghosh et al., 2017; Hoang et al., 2018), discriminators (Nguyen et al., 2017; Ghosh et al., 2017), or dropout (Mordido et al., 2018). These approaches are entirely focused on generative performance, and do not consider any application for, or insight into, improving some downstream ML task. Most importantly, they do not consider how data should be published, and these methods still result in a single synthetic dataset. In Section 4 we explore why publishing a single synthetic dataset does not suffice, even if it is generated by an ensemble. Uncertainty quantification in ML. Uncertainty quantification has gained significant attention in recent deep learning literature, see (Abdar et al., 2021) for an overview. One of the more popular methods is Deep Ensembles (Lakshminarayanan et al., 2016), which provides a straightforward approach to supervised model uncertainty estimation: train multiple networks, create predictions using each network, and consider the variance between the different networks. Even though this approach is simple, Deep Ensembles have been shown to perform very positively in comparison to fully Bayesian methods, possibly due to their ability to capture uncertainty at a more global level of the weight space (Fort et al., 2019). Figure 2: Conclusions drawn from synthetic data do not always transfer to real data. Note that the other very popular UQ method of Monte Carlo dropout (Gal and Ghahramani, 2015) can also be seen as an ensemble method, but where the networks share most of the parameters. To the best of our knowledge, we are the first to apply UQ to generative models and their downstream task. We note that there are works that consider the opposite: applying generative models to UQ (Bohm et al., 2019; Phan et al., 2019; Sensoy et al., 2020; Liu et al., 2022). The aim of these methods is to improve UQ using some form of synthetic data, which is entirely tangential to our work, which is interested in the uncertainty in the generative process itself. See Table 1 for an overview. ## 3 Modelling Generative Uncertainty ### Set-up Let \(X,Y\) be random variables on \(\mathcal{X},\mathcal{Y}\) denoting features and label, respectively, and let us be given real data \(\mathcal{D}_{r}=(\mathbf{x}^{(i)},y^{(i)})_{i=1}^{n_{R}}\) from distribution \(p_{r}(X,Y)\). Let \(G_{\theta}\) be a generator parameterised by \(\theta\) that outputs samples with distribution \(p_{\theta}(X,Y)\). We denote samples from \(G_{\theta}\) by \(\mathcal{D}_{s}=(\mathbf{x}^{(i)},y^{(i)})_{i=1}^{n_{S}}\). In the typical generative modelling setting, the objective is to minimise: \[\theta=\operatorname*{arg\,min}_{\theta}D(p_{\theta},p_{r}) \tag{1}\] for some divergence metric \(D\) (e.g. KL-divergence or Wasserstein distance). Though in the limit of infinite training data and capacity some generative models may be guaranteed to achieve \(p_{\theta}=p_{r}\) (e.g. for GANs (Goodfellow et al., 2014)), in practice \(p_{\theta}\) is only an approximation. Evidently, inaccuracies in \(p_{\theta}\) imply that \(\mathcal{D}_{s}\) has a different distribution from \(\mathcal{D}_{r}\), and hence this affects any downstream task \(T\) we may perform using \(\mathcal{D}_{s}\). This task \(T\) can depend directly on the synthetic data--e.g.
estimating the density at some point--or indirectly--e.g. estimating treatment effects or making predictions by first training some supervised ML model \(g\). Thus, the variable \(T\) is a random variable itself, due to it depending on random \(\mathcal{D}_{s}\), as well as possible training randomness (e.g. if it is a prediction of a downstream neural network). In any case, we want to take into account the uncertainty in \(\theta\) when performing this task. ### Influence of Data on Downstream Task To account for the synthetic data generation process when computing downstream \(T\), let us consider the distribution of \(T\). Denoting the distribution of downstream \(T\) w.r.t. the real data \(\mathcal{D}_{r}\) as \(p(T|\mathcal{D}_{r})\), we can write: \[p(T|\mathcal{D}_{r})=\int p(T|\mathcal{D}_{s})p(\mathcal{D}_{s}|\theta)p( \theta|\mathcal{D}_{r})d\mathcal{D}_{s}d\theta. \tag{2}\] Let us look at the right-hand-side terms in turn. \(p(T|\mathcal{D}_{s})\) is the distribution of \(T\) conditional on the synthetic data, which is entirely dependent on the downstream task and something we have no control over as a data publisher. The term \(p(\mathcal{D}_{s}|\theta)\) is the output of the synthetic data generator for some \(\theta\), which we can sample from, but usually have no explicit expression for (in the case of deep generative models). Lastly, \(p(\theta|\mathcal{D}_{r})\) is the distribution over the generative modelling parameters given the real data. This is the term we are most interested in; it reflects the uncertainty we have in the generative model parameters themselves. Computing the integral in Eq. 2 exactly is intractable for most settings of interest (e.g. if the synthetic data is generated using a GAN). However, if a data user had expressions for all of the terms in Eq. 2, they could use Monte Carlo integration for computing any statistic of interest (e.g. the variance). That is, we sample \(\hat{\theta}^{k}\sim p(\theta|\mathcal{D}_{r})\), sample sufficiently large \(\mathcal{D}_{s}^{k}\sim p(\mathcal{D}_{s}|\theta^{k})\), and sample \(\hat{T}^{k}\sim p(T|\mathcal{D}_{s}^{k})\) for \(k=1,...,K\), \(K\in\mathbb{N}\). This allows us to approximate statistics of interest, for example the empirical mean \(\text{E}_{T\sim\hat{p}(T|\mathcal{D}_{r})}[T]=\frac{1}{K}\sum_{k}\hat{T}^{k}\) and variance \(\text{Var}_{T\sim\hat{p}(T|\mathcal{D}_{r})}(T)=\frac{1}{K-1}\sum_{k}(\hat{T}^ {k}-\text{E}_{T\sim\hat{p}(T|\mathcal{D}_{r})}[T])^{2}\). Evidently, there is a trade-off when choosing \(K\): a larger \(K\) will give more accurate estimates, but also larger computational costs. We will study this further in the experiments. \begin{table} \begin{tabular}{l l c c c c} \hline \hline Method & Works & (i) & (ii) & (iii) & (iv) \\ \hline Ensembles of generative models & (Tolstikhin et al., 2017; Grover and Ermon, 2017; Ghosh et al., 2017; Hoang et al., 2018) & ✓ & \(\times\) & \(\times\) & \(\times\) \\ Dropout-GAN & (Mordido et al., 2018) & ✓ & \(\times\) & \(\times\) & \(\times\) \\ Deep Ensembles & (Lakshminarayanan et al., 2016) & \(\times\) & ✓ & \(\times\) & \(\times\) \\ MC dropout & (Gal and Ghahramani, 2015) & ✓ & \(\times\) & \(\times\) & \(\times\) \\ Generative models for UQ & (Bohm et al., 2019; Phan et al., 2019; Sensoy et al., 2020; Liu et al., 2022) & ✓ & ✓ & \(\times\) & \(\times\) \\ \hline Deep Generative Ensemble (DGE) & & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison to related work.
(i) Focuses on generative models, (ii) Considers downstream ML tasks, (iii) Considers error in the generative process, (iv) Provides guidelines to synthetic data publishers and users. ### Modelling the Posterior over \(\theta\) So how do we model \(p(\theta|\mathcal{D}_{r})\)? The Bayesian approach could motivate us to parameterise the forward generative process, giving \(p(\mathcal{D}_{r}|\theta)=\prod_{i}p_{\theta}(\mathbf{x}^{(i)},y^{(i)})\), and some prior \(p(\theta)\), which would allow computing the posterior over \(\theta\): \(p(\theta|\mathcal{D}_{r})=\frac{p(\theta)p(\mathcal{D}_{r}|\theta)}{\int p( \theta)p(\mathcal{D}_{r}|\theta)d\theta}\). Computing the denominator is intractable for deep generative models. Consequently, we need to approximate this further. We draw inspiration from the supervised uncertainty quantification (UQ) literature, which aims to estimate \(p(\phi|\mathcal{D})\) for some predictive model parameterised by \(\phi\) trained on data \(\mathcal{D}\). We borrow a popular technique: Deep Ensembles. ### Approximating the Posterior: Deep Generative Ensemble (DGE) Deep Ensembles (Lakshminarayanan et al., 2016) assumes that we can approximate \(p(\theta|\mathcal{D}_{r})\) as the empirical distribution over the training process of some deep neural network. In the generative setting, this means we choose a deep generative model class (e.g. VAE, GAN, diffusion model, normalizing flow), train the generative model \(K\) times, giving \(K\) local solutions \(\hat{\theta}^{k}\) to Eq. 1, and we approximate: \(p_{DGE}(\theta|\mathcal{D}_{r})=\frac{1}{K}\sum_{k}\delta(\theta=\hat{\theta} ^{k})\), after which we can use this distribution for computing any downstream statistic of interest. This is a strong assumption and indeed a crude Bayesian approximation of the true posterior--see (Wilson & Izmailov, 2021) for an in-depth discussion. Nonetheless, Deep Ensembles have a solid track record in predictive UQ, often outperforming more traditional Bayesian methods (Fort et al., 2019). **Choosing the Baselines.** An advantage of DGE is that it allows for different generative model classes. In this paper we focus on tabular data, because many high-stakes applications of synthetic data are predominantly tabular, e.g. credit scoring and medical forecasting (Borisov et al., 2021; Shwartz-Ziv & Armon, 2022). Additionally, nearly 79% of data scientists work with tabular data on a daily basis, compared to only 14% who work with modalities such as images (Kaggle, 2017). We choose a GAN architecture, specifically CTGAN (Xu et al., 2019), for its widespread use, and its high expected diversity between individually trained models--cf. VAEs, which tend to learn fuzzier distributions (Theis et al., 2015). We use (i) random initialization for each generative model, and (ii) the same real data for training each base model, as this has been shown to perform well in the supervised domain (Lakshminarayanan et al., 2016). ## 4 Empirical Study: the Effect of DGE on Downstream ML In this section we consider fundamental supervised learning tasks, and how these can be applied in conjunction with synthetic data. We consider using DGE versus the naive synthetic data approach.
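To make the recipe of Sections 3.3-3.4 concrete, the sketch below implements DGE end to end. It is a minimal illustration under assumed interfaces: `train_generator` and `fit_classifier` are hypothetical stand-ins for generator training (e.g. CTGAN) and downstream model fitting, not functions from any specific library.

```python
import numpy as np

def dge(train_generator, fit_classifier, D_real, K=20, n_synth=10_000):
    """Deep Generative Ensemble, sketched: K independently initialised
    generator trainings stand in for K draws from p(theta | D_r); each
    draw yields one synthetic dataset and one downstream classifier."""
    classifiers = []
    for k in range(K):
        G_k = train_generator(D_real, seed=k)    # theta^k ~ p(theta | D_r)
        D_k = G_k.sample(n_synth)                # D_s^k  ~ p(D_s | theta^k)
        classifiers.append(fit_classifier(D_k))  # T^k    ~ p(T | D_s^k)
    return classifiers

def predict_with_uncertainty(classifiers, X):
    """Monte Carlo estimate in the spirit of Eq. 2: the ensemble mean is
    the prediction, and the spread across generators reflects the
    generative component of the uncertainty."""
    probs = np.stack([clf.predict_proba(X) for clf in classifiers])
    return probs.mean(axis=0), probs.std(axis=0)
```

Crucially, under this recipe the \(K\) synthetic datasets would be published separately rather than concatenated, so that downstream users can reproduce this kind of aggregation.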
All experimental details can be found in Appendix A.1 Footnote 1: Code and seeded experiments are available at [https://github.com/bwanbreugel/deep_generative_ensemble](https://github.com/bwanbreugel/deep_generative_ensemble) ### Synthetic Data for Model Training Let us start by considering how we can train better predictive models on synthetic data. We define "better" in terms of a lower generalization error. Choose some predictive loss function \(\mathcal{L}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) and predictor \(g_{\phi}:\mathcal{X}\rightarrow\mathcal{Y}\) parameterised by \(\phi\). The generalization error is defined as \(\text{Err}(g_{\phi},p_{r})=\mathbb{E}_{p_{r}}\mathcal{L}(g_{\phi}(X),Y)\). In the case of classification and a proper scoring rule \(\mathcal{L}\), the optimal solution is \(g_{\phi}(x)=p_{r}(Y|X=x)\). Because we do not have data from the real distribution, we cannot minimise the error w.r.t. the real distribution directly. Instead, we aim to choose \(\phi\) that minimises: \[\mathbb{E}_{\theta}[\text{Err}(g_{\phi},p_{\theta})]=\mathbb{E}_{\theta}[ \mathbb{E}_{(X,Y)\sim p_{\theta}(X,Y)}\mathcal{L}(g_{\phi}(X),Y)]. \tag{3}\] The typical synthetic data approach for model training uses a single synthetic dataset. This will yield high variance in the trained model, because we effectively minimise w.r.t. \(p_{\theta_{1}}(Y|X)\) for a single \(\theta_{1}\sim p(\theta|\mathcal{D}_{r})\). Instead, we use an empirical estimate as given by Eq. 2: we train a predictive model on each synthetic dataset individually, and average predictions. Let us show how this improves performance. **Datasets.** We use a range of datasets with different characteristics of interest: Scikit-learn's Two-moons and Circles toy datasets--simple two-dimensional datasets that we will later use for visualising the learnt models; UCI's Breast Cancer and Adult Census Income (Asuncion & Newman, 2007)--the former a very small dataset such that synthetic data generation is hard, the latter a large dataset with mixed categorical and numerical features; and SEER (Duggan et al., 2016) and Kaggle's Covid-19 dataset (Ministry of Health of Mexico, 2020), two large medical datasets with some highly unbalanced categorical features. **Set-up.** We first show that downstream models trained on DGE synthetic data perform better on real data than baselines. We compare against a classifier trained on a single synthetic dataset (Naive (S)) and a pseudo-oracle trained on the real, generative training data (\(\mathcal{D}_{r}\)-model). For fair evaluation, we also include the use of an ensemble of classifiers (Naive (E)) and use the same MLP architecture for all predictive models. Lastly, we also include a naive generative ensemble approach that concatenates all synthetic datasets (Naive (C)). We consider the TSTR AUC performance, computed on a hold-out dataset of real data that has not been used for training the generative model. We use CTGAN (Xu et al., 2019b) with the same hyperparameters and architecture in all experiments. In Appendix B we include experiments for other models, showing similar results, and so too do the CelebA and CIFAR-10 results in Appendix D. See Appendix A for experimental details. **Results.** See Table 2. Training the downstream models on an ensemble of synthetic datasets achieves almost \(\mathcal{D}_{r}\)-model performance on real data. In contrast, the naive baseline is often a few percent lower.
This performance increase is _not_ merely due to ensembles generally being more robust, since the _Naive (ensemble)_ method does not perform as well as \(\text{DGE}_{20}\), despite using 20 base models. Note that the performance of DGE with \(K=20\) is higher on average, but even for \(K=5\) we find a significant advantage over the naive (i.e. \(K=1\)) baseline. These results are unsurprising. When the generative model is erroneous--e.g. it overfits or underfits--we expect the naive method to perform poorer than the DGE method, since the different models in the DGE are unlikely to make the same mistakes. Inevitably, the effect of generative model overfitting is dependent on the downstream task. A simpler downstream task--or simpler downstream model--is less prone to copying the generative overfitting. We will explore this further in Section 4.2.2. **Takeaway**: By generating multiple synthetic datasets (e.g. \(K=5\)), training a prediction model on each, and averaging predictions, one can achieve better performance on real data compared to the naive approach of training a model on a single synthetic dataset. The largest gains are observed when the generative model tends to overfit. ### Synthetic Data for Model Evaluation and Selection Model evaluation and selection (ME/MS) is key to machine learning. ME aims to estimate a model's generalisation error \(\text{Err}(g_{\phi},p_{r})\), while MS aims to choose the model (among a list of models) that minimises this error. Estimating the generalization error is usually achieved through train-test splits or cross-validation (Hastie et al., 2001). Unfortunately, this approach is not possible when we are in the synthetic data regime _where an ML practitioner has no access to real data_--see Figure 1. The naive approach is to have a single synthetic dataset and treat it like real data--i.e. one trains a predictive model \(f\) on \(\mathcal{D}^{k}_{s,train}\) and evaluates it on \(\mathcal{D}^{k}_{s,test}\), e.g. for \(k=1\). This induces bias, since the estimate is taken w.r.t. the same generative model. It also has a high variance, because it is an estimate w.r.t. a single draw of \(\theta^{k}\). Can we do better? As a closer alternative to using an independent real test set--which we do not have--we evaluate w.r.t. the other test sets, i.e. \(\cup_{i\neq k}\mathcal{D}^{i}_{s,test}\). This reduces the bias and variance, due to us not using the same model parameters \(\theta^{k}\) for training and evaluation. Let us explore in turn how this influences model evaluation and selection. #### 4.2.1 Model Evaluation **Set-up.** We split the real data into a training and a test set, and as before train \(K\) generative models on the training set, providing \(\{\mathcal{D}^{k}_{s}\}_{k=1}^{K}\). Subsequently, we split up each \(\mathcal{D}^{k}_{s}\) into a training and a test set for the downstream model, \(\mathcal{D}^{k}_{s,train}\) and \(\mathcal{D}^{k}_{s,test}\). We use an MLP as downstream model \(g\), trained on a single synthetic training set. We compare the \(g\) performance evaluation of the naive approach, our DGE approach, and pseudo-oracle evaluation (Oracle)--the latter being the performance of \(g\) on a hold-out real dataset. We report results over 20 runs. **Results.** In Table 3 we see that the DGE and naive evaluation approaches perform very differently. We see the naive approach overestimates the model performance.
This is due to a synthetic data variant of data leakage: overfitting in the generative model is copied by the trained model, but is also reflected in the synthetic test set. DGE evaluation suffers less from this bias, since the test set is from a different generative model, i.e. different draws of \(\theta^{k}\). Conversely, generative errors often cause DGE to underestimate the downstream performance. Figure 3 shows the same story by varying the generator complexity. An underfitted generative model affects both approaches similarly--slightly underestimated performances. On the other hand, an overfitted generative model leads to significantly overestimated performance by the naive approach. Let us explore how the downstream task plays its part, through considering different downstream models. Figure 3: Varying the generator size for the SEER dataset shows that model evaluation becomes overestimated for the naive approach when the generative model starts overfitting. DGE is more robust to this. **Takeaway**: Using train-test splits on single synthetic datasets can lead to significantly overestimated real-world performance, due to train and test split both evaluating on the same (potentially erroneous) generative model. On the other hand, DGE tends to underestimate performance, due to uncertainty in the generative model process leading to different--and aggregated fuzzier--label distributions. #### 4.2.2 Model Selection **Set-up.** In this part, we ask the question: can we decide on which predictive model to use? We use the same set-up as before, but repeat the experiment for the following models: logistic regression, random forest, 5-nearest neighbour, XGBoost, SVM, and a deep MLP--see Appendix A for experimental details. We consider the ranking of models by different approaches (naive, DGE, and oracle). **Results.** We see that DGE ranks models similarly to the oracle, whereas the naive approach favours complex models. The latter is again explained by the naive approach's positive bias towards a model that captures the generative model's overfitting. Congeniality plays a big role this time; the naive approach is inclined to choose a predictive model that is similar to the generative model's parameterisation. Like most deep generative models, CTGAN uses a neural network architecture for implicitly encoding the label distribution, so it is predictable that this can be best replicated by a deep MLP as downstream model.2 The naive approach becomes even worse when the amount of synthetic data increases, since for more complex models this allows better learning of the generative label distribution--see Appendix C for experiments. Footnote 2: This argument is not entirely straightforward. Since conditional generators like CTGAN usually model the conditional feature distribution \(p(X|Y=y)\) using an NN, it is not necessarily true that the output \(p(Y|X)\) itself falls in the same model class as the generator. Nonetheless, we do expect the generator output to show similar behaviour (e.g. ReLu artifacts) as the underlying NN, since \(p(Y|X)=p(X|Y)p(Y)/(\sum_{y}p(X|Y=y)p(Y=y))\). **Takeaway**: The naive approach consistently selects more complex models that can copy the generative model's \(p_{\theta}(Y|X)\), but which generalise poorly to real data. DGE has a more accurate estimate of real-world performance due to evaluating on other synthetic datasets, which leads to it selecting simpler downstream models that generalise better to real-world data.
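A minimal sketch of this cross-generator evaluation rule follows. It assumes scikit-learn-style binary classifiers and hypothetical variable names of our choosing.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dge_evaluate(model_k, synth_test_sets, k):
    """Score a model trained on D_s^k against the synthetic test splits of
    the *other* generators (union over i != k of D^i_{s,test}), so the
    model is never graded on data from the generator it was fitted on."""
    aucs = []
    for i, (X_test, y_test) in enumerate(synth_test_sets):
        if i == k:
            continue  # exclude the generator used for training
        aucs.append(roc_auc_score(y_test, model_k.predict_proba(X_test)[:, 1]))
    return float(np.mean(aucs))
```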
### Model Uncertainty Going one step further than evaluation, we look at downstream predictive uncertainty quantification. Using synthetic data like it is real does not account for uncertainty in the generative process itself, which leads to underestimated downstream uncertainty. We focus on classification and define uncertainty in terms of the estimated probability of the predicted label. We show the difference between generative and predictive uncertainty. \begin{table} \begin{tabular}{l l l l l l l|l} \hline & Moons & Circles & Adult Income & Breast Cancer & SEER & Covid-19 & Mean \\ \hline \(\mathcal{D}_{r}\)-model & \(0.996\pm 0.0\) & \(0.868\pm 0.0\) & \(0.87\pm 0.0\) & \(0.993\pm 0.0\) & \(0.907\pm 0.0\) & \(0.928\pm 0.001\) & \(0.927\) \\ Naive (S) & \(0.981\pm 0.006\) & \(0.801\pm 0.054\) & \(0.821\pm 0.006\) & \(0.975\pm 0.008\) & \(0.885\pm 0.006\) & \(0.869\pm 0.02\) & \(0.889\) \\ Naive (E) & \(0.981\pm 0.006\) & \(0.802\pm 0.053\) & \(0.837\pm 0.004\) & \(0.978\pm 0.009\) & \(0.888\pm 0.006\) & \(0.895\pm 0.015\) & \(0.897\) \\ Naive (C) & \(0.985\pm 0.001\) & \(0.862\pm 0.005\) & \(0.852\pm 0.007\) & \(0.974\pm 0.011\) & \(0.906\pm 0.001\) & \(0.895\pm 0.005\) & \(0.912\) \\ DGE\({}_{5}\) & \(0.982\pm 0.002\) & \(0.853\pm 0.016\) & \(0.871\pm 0.003\) & \(0.986\pm 0.003\) & \(0.903\pm 0.002\) & \(0.926\pm 0.004\) & \(0.92\) \\ DGE\({}_{10}\) & \(0.983\pm 0.001\) & \(0.861\pm 0.008\) & \(0.883\pm 0.002\) & \(0.986\pm 0.003\) & \(0.906\pm 0.001\) & \(0.935\pm 0.003\) & \(0.926\) \\ DGE\({}_{20}\) & \(0.984\pm 0.001\) & \(0.865\pm 0.003\) & \(0.889\pm 0.001\) & \(0.987\pm 0.003\) & \(0.906\pm 0.001\) & \(0.942\pm 0.001\) & \(0.929\) \\ \hline \end{tabular} \end{table} Table 2: **Using an ensemble of synthetic datasets for downstream model training improves real-world performance.** AUC performance of different approaches on different datasets, when trained on synthetic data and tested on real data. For the naive methods, we report the median performance across 20 synthetic datasets. Naive (S) uses a single classifier, Naive (E) uses an ensemble of classifiers, though both are trained on a single synthetic dataset. Naive (C) uses all 20 synthetic datasets but naively concatenates them before training a classifier. Note that DGE\({}_{K}\) gives consistently better performance on average, even for \(K=5\). \begin{table} \begin{tabular}{l l l l l l|l} \hline & Moons & Circles & Adult Income & SEER & Covid-19 & Mean \\ \hline Oracle & \(0.775\pm 0.14\) & \(0.508\pm 0.036\) & \(0.785\pm 0.015\) & \(0.711\pm 0.108\) & \(0.912\pm 0.014\) & \(0.738\) \\ \hline Naive & \(0.892\pm 0.072\) & \(0.819\pm 0.132\) & \(0.784\pm 0.028\) & \(0.877\pm 0.061\) & \(0.832\pm 0.042\) & \(0.841\) \\ DGE\({}_{5}\) & \(0.703\pm 0.132\) & \(0.518\pm 0.07\) & \(0.773\pm 0.01\) & \(0.743\pm 0.129\) & \(0.819\pm 0.022\) & \(0.711\) \\ DGE\({}_{10}\) & \(0.744\pm 0.139\) & \(0.522\pm 0.094\) & \(0.774\pm 0.01\) & \(0.772\pm 0.088\) & \(0.81\pm 0.017\) & \(0.724\) \\ DGE\({}_{20}\) & \(0.753\pm 0.138\) & \(0.506\pm 0.045\) & \(0.775\pm 0.01\) & \(0.769\pm 0.069\) & \(0.815\pm 0.016\) & \(0.724\) \\ \hline \end{tabular} \end{table} Table 3: **Naïve synthetic data model evaluation overestimates real-world performance.** Performance of a fixed model, evaluated using different approaches. The naive approach overestimates performance, whereas the DGE approach slightly underestimates it.
**Set-up.** To separate generative and predictive uncertainty, we include a Deep Ensembles _predictive_ model (Lakshminarayanan et al., 2016) that provides uncertainty on the predictive level. Effectively, we compare the sample mean and variance in \(\hat{P}(Y=1|x)\) of (i) a DGE approach, in which each synthetic dataset is used for training a predictive model, (ii) a naive approach, in which one single synthetic dataset is used to train \(K\) predictive models in a Deep Ensembles fashion, (iii) Naive (C), in which all synthetic datasets are concatenated and a Deep Ensembles is trained on this dataset. We add the toy dataset Gaussian for visualization--see Appendix A for details--and remove the Breast Cancer dataset due to an insufficient number of real test samples for accurate assessment. **Results.** First, let us draw the confidence-accuracy curves for different methods on the real-world datasets, see Figure 4. We see that DGE makes confident predictions more successfully and more consistently than the naive approach. DGE performs on par with the \(\mathcal{D}_{r}\)-model, and in some cases outperforms it; this is likely due to the generative models effectively augmenting the data, which has been shown to increase downstream robustness (Antoniou et al., 2017; Dina et al., 2022; Das et al., 2022; Bing et al., 2022). Let us try to understand why uncertainty quantification on a single synthetic dataset does not suffice, by separating generative and predictive uncertainty. Specifically, let us plot the sample standard deviation of the different classifiers in the naive Deep Ensembles approach versus the \(\text{DGE}_{20}\) approach, see Figure 5, and nota bene the different colorbar scales. We include the Naive (C) baseline, which is a naive approach that simply concatenates all synthetic datasets and runs a Deep Ensembles on this, but does not explicitly take into account generative uncertainty. We include this baseline to show that typical generative ensembles that result in a single dataset (see Table 1) fail for UQ. We see that the naive approaches lead to poor diversity between different models within the ensemble. Since they cannot capture the generative uncertainty, these approaches provide little insight into how the model may do on real data. On the other hand, the diversity between \(\text{DGE}_{20}\) classifiers is much higher and arguably provides more intuitive explanations. We also see that the naive approaches overestimate confidence in low-density regions--even if it is on a decision boundary--whereas DGE does not. This is unsurprising, since generative uncertainty is also highest in these regions. **Takeaway**: DGE provides uncertainty on the generative level, which the naive approaches cannot. It is thus essential to ensure individual synthetic datasets in the DGE are published separately (cf. concatenated and shuffled). ### Underrepresented Groups The generative process is expected to be most inaccurate for regions with few samples. Since low-density regions can correspond to minority groups in the population, this would be disconcerting for the reliability of our synthetic data. In this section we explore the quality of downstream models on underrepresented groups in the dataset. **Set-up.** We investigate the Covid-19 dataset, because it consists of mostly categorical data with unbalanced features. Let us define "underrepresented groups" in terms of minority categories of individual features--see Appendix A.
We re-run the experiment from 4.1 and evaluate performance on the minority groups, see Figure 6. We plot the performance relative to a \(\mathcal{D}_{r}\)-model, which is trained on \(\mathcal{D}_{r}\) and also evaluated on underrepresented groups. **Results.** Note the distinctly different behaviour between the naive and DGE approaches. The naive approach performs worse than the \(\mathcal{D}_{r}\)-model for most underrepresented groups, even though it performs comparably overall. On the other hand, the DGE approach consistently outperforms the \(\mathcal{D}_{r}\)-model. The latter is explained by interpreting DGE as a data augmentation method, in which the synthetic datasets (in this case \(20\) times the size of the real data) replace the real data. Data augmentation can effectively regularize trained models (Chawla et al., 2002; Zhang et al., 2017; Antoniou et al., 2017), and lead to better performance on underrepresented groups (Bing et al., 2022). **Takeaway**: Closer inspection shows that naive downstream model training leads to particularly poor performance on small subgroups in the population. On the other hand, DGE has a regularization effect on the downstream predictor, consistently outperforming the \(\mathcal{D}_{r}\)-model on minority groups. ## 5 Discussion **DGE.** We have aimed to highlight a gap in our understanding of synthetic data: how to do ML on synthetic data in practice, and the validity of downstream results. We have shown that the standard synthetic data methodology--treating the synthetic data like it is real--leads to poor downstream trained models, evaluation, and uncertainty, with a tendency to perform worst for underrepresented groups. We have shown that DGE, which provides multiple synthetic datasets, is a simple and scalable way of partly avoiding these problems. We hope this work will result in a significant change in how synthetic datasets are published, create more interest in synthetic data's use and limitations, and contribute to the trustworthiness of synthetic data. Table 5 highlights the takeaways for practitioners. **Practical Considerations.** Let us highlight some practical considerations. First, DGE is not a perfect approximation of the true posterior of generative model parameters. Further research is required into systemic errors between synthetic datasets, e.g. if the true model is not well approximated by the generator class. Second, the use of ensembles requires extra compute at the generation, downstream training, and downstream inference stages. In practice, we have seen that even for \(K=5\) we can get significant gains compared to the naive approach. Additionally, cost may be further reduced by sharing parameters across models or using MC drop-out in the generative model. Third, each synthetic dataset is derived from the same real data, hence there is some data leakage between train and test sets. Lastly, if privacy is key, the ensemble approach of DGE requires scaling the privacy budget for each synthetic data generator to account for the multiple generators. Please see Appendix E for a longer discussion. Figure 5: Comparison of predictive versus generative uncertainty. We plot the sample std of different ensembles, where columns denote datasets and rows approaches. The \(\mathcal{D}_{r}\) decision boundary (\(\hat{P}(Y=1|x)=0.5\)) is drawn in dotted white and other decision boundaries in dashed red. In almost all cases, these decision boundaries are significantly different.
Meanwhile, the std of the naive approaches does not reflect this deviation, hence it underestimates the uncertainty (N.B. the different color scales). This is caused by these methods not considering the generative uncertainty. DGE\({}_{20}\) is preferred, as it does reflect generative uncertainty. Figure 6: Accuracy of the downstream model relative to the \(\mathcal{D}_{r}\)-model, evaluated on minority subsets. The naive approach tends to underperform the \(\mathcal{D}_{r}\)-model on minority sets, whereas DGE outperforms the \(\mathcal{D}_{r}\)-model. **Extensions to Other Downstream Tasks.** We have explored using DGE for downstream prediction tasks, but future work could consider other domains (e.g., unsupervised learning, reinforcement learning, statistical analyses). ## Acknowledgments We would like to thank the ICML reviewers and area chairs for their time and feedback, as well as Nabeel Seedat, who reviewed an early draft of the paper. Additionally, we would like to acknowledge the Office of Naval Research UK, who funded this research.
2301.03589
Explainable, Physics Aware, Trustworthy AI Paradigm Shift for Synthetic Aperture Radar
The recognition or understanding of the scenes observed with a SAR system requires a broader range of cues, beyond the spatial context. These encompass but are not limited to: imaging geometry, imaging mode, properties of the Fourier spectrum of the images or the behavior of the polarimetric signatures. In this paper, we propose a change of paradigm for explainability in data science for the case of Synthetic Aperture Radar (SAR) data to ground the explainable AI for SAR. It aims to use explainable data transformations based on well-established models to generate inputs for AI methods, to provide knowledgeable feedback for training process, and to learn or improve high-complexity unknown or un-formalized models from the data. At first, we introduce a representation of the SAR system with physical layers: i) instrument and platform, ii) imaging formation, iii) scattering signatures and objects, that can be integrated with an AI model for hybrid modeling. Successively, some illustrative examples are presented to demonstrate how to achieve hybrid modeling for SAR image understanding. The perspective of trustworthy model and supplementary explanations are discussed later. Finally, we draw the conclusion and we deem the proposed concept has applicability to the entire class of coherent imaging sensors and other computational imaging systems.
Mihai Datcu, Zhongling Huang, Andrei Anghel, Juanping Zhao, Remus Cacoveanu
2023-01-09T09:22:13Z
http://arxiv.org/abs/2301.03589v1
# Explainable, Physics Aware, Trustworthy AI Paradigm Shift for Synthetic Aperture Radar ###### Abstract The recognition or understanding of the scenes observed with a SAR system requires a broader range of cues, beyond the spatial context. These encompass but are not limited to: imaging geometry, imaging mode, properties of the Fourier spectrum of the images, or the behavior of the polarimetric signatures. In this paper, we propose a change of paradigm for explainability in data science for the case of Synthetic Aperture Radar (SAR) data, to ground the explainable AI for SAR. It aims to use explainable data transformations based on well-established models to generate inputs for AI methods, to provide knowledgeable feedback for the training process, and to learn or improve high-complexity unknown or un-formalized models from the data. At first, we introduce a representation of the SAR system with physical layers: i) instrument and platform, ii) imaging formation, iii) scattering signatures and objects, that can be integrated with an AI model for hybrid modeling. Successively, some illustrative examples are presented to demonstrate how to achieve hybrid modeling for SAR image understanding. The perspective of trustworthy models and supplementary explanations are discussed later. Finally, we draw the conclusion, and we deem that the proposed concept has applicability to the entire class of coherent imaging sensors and other computational imaging systems. SAR image understanding, explainable artificial intelligence, deep neural networks, knowledge inspired data science ## I Motivation and Significance The Earth is facing unprecedented climatic, geomorphologic, environmental, and anthropogenic changes, which require global-scale, long-term observation with Earth Observation (EO) sensors. SAR sensors, due to their observation capability during day and night and their independence from atmospheric effects, are the only EO technology to ensure global and continuous observations. Meanwhile, the SAR observations of the Sentinel-1 satellites, in the frame of the European Copernicus program, are worldwide freely and openly accessible. This is immensely enlarging SAR data science and its applications, covering a multitude of areas such as urbanization, agriculture, forestry, geology, tectonics, oceanography, polar surveys, or biomass estimation, only to enumerate a few. The Copernicus Open Access Hub provides more than 457.59 PB of satellite data covering the Earth for more than 570,000 users all around the world. 1 Footnote 1: [https://scihub.copernicus.eu/reportsandstats/](https://scihub.copernicus.eu/reportsandstats/) SAR is a pioneering technology in the field of computational sensing and imaging, whose imaging mechanism is totally different from that of optical sensors. A radar instrument carried by an airborne or spaceborne platform illuminates the scene in a side-looking or forward-looking geometry, which makes it possible to discriminate objects in the range direction. As the platform moves along its track, the SAR sensor is constantly transmitting a sequence of chirp signals and receiving echoes reflected from objects on the ground, as depicted in Fig. 1. By recording all individual acquisitions with a short physical antenna and mathematically combining them into a synthetic image, a much larger synthesized aperture is formed. This provides a high capacity to distinguish objects in azimuth despite a physically small antenna [1]. A high-resolution "image" can then be obtained by applying SAR focusing principles, e.g., matched filtering [2].
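To make the focusing step concrete, below is a minimal numerical sketch of range compression by matched filtering for a single point target. The chirp parameters and target range are illustrative only and do not correspond to any particular mission; reference [2] covers the full processing chain.

```python
import numpy as np

# Illustrative linear-FM chirp (values are not from any specific SAR system)
fs, T, B = 100e6, 10e-6, 50e6                  # sample rate, pulse length, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # transmitted reference pulse

# Received echo: the same chirp delayed by a point target at 600 m range
delay = int(round(2 * 600 / 3e8 * fs))         # two-way travel time in samples
rx = np.zeros(4096, dtype=complex)
rx[delay:delay + len(t)] = chirp

# Matched filtering (range compression), done efficiently in the frequency
# domain: correlate the echo with the conjugated reference chirp
H = np.conj(np.fft.fft(chirp, len(rx)))
compressed = np.fft.ifft(np.fft.fft(rx) * H)
print("peak at sample:", np.abs(compressed).argmax())  # ~ delay (400)
```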
A deluge of SAR sensors has increased the data availability for various SAR applications. The allure of data-driven learning stems from the ability to automatically extract abstract features from large data volumes [3, 4, 5, 6], and therefore many deep learning studies for SAR applications have been developed in recent years [7, 8, 9, 10]. The current popular paradigm predominantly follows the blue part in Fig. 2 (a), where SAR image data is all that is required to operate an intelligent network. Fig. 1: A simple illustration of how SAR images the world (Stripmap mode). The SAR community is facing the big data challenge, but with limited ground truth. In the meanwhile, the knowledge of SAR is equally important; this is also the motivation for the physical layers in this paper. In addition to data, however, the physical model and principles of the SAR sensor should not be neglected. In the upper example of Fig. 2 (b), a bridge over a placid river that is illuminated perpendicular to its primary orientation appears as many brilliant lines, resembling the lower SAR image, in which several bridges are imaged from a different viewing angle. The phenomenon can be explained by multi-path scattering [11, 12], as illustrated in Fig. 2 (c). Apart from the direct scattering from the bridge, a double-bounce reflection between the bridge and the water (or vice versa) occurs at the corner reflector spanned by the smooth vertical bridge facets facing the sensor and the water surface. In addition, triple-bounce reflections, and possibly some five-path scattering, occur between the horizontal plane of the bridge and the water surface. Thus, a SAR image implies the causality between multi-path scattering phenomena and object characteristics. This positions SAR image understanding, and the utmost challenge it poses to data science, as a new and particular paradigm of Artificial Intelligence (AI). So far, several studies have discussed paradigms that attempt to bring scientific knowledge and data science models together, applied to a broad range of research themes such as partial differential equation solving [13] and the Earth sciences [14, 15, 16]. For the SAR community in particular, however, this topic has rarely been systematically analyzed and illustrated. Thus, we aim to prospect the hybrid modeling paradigm for intelligent SAR image understanding, where deep learning is integrated and interacts with SAR physical models and principles, to achieve explainability, physics awareness, and trustworthiness. Explainable AI is a broad concept. A scientific understanding of explainability is the capacity to clarify the results in the context of domain knowledge; the algorithms still remain a black box. A different approach is algorithmic explainability, which is constructed such that the results of the used model can be described algorithmically. To obtain a higher degree of explainability, we aim at the synergy of the two paradigms: _algorithmic and scientific explainability_. Fig. 2: **a** The conventional data-driven paradigm for intelligent SAR image understanding based on deep neural networks, and the proposed paradigm shift integrated and interacting with physical knowledge of SAR. **b** A bridge can be imaged as multiple bright lines, similar to a couple of bridges imaged in the other SAR image, depending on the observation parameters and orientations. This illustrates the burden and utmost difficulty of SAR image understanding. **c** The multipath scattering formation in the SAR image.
Algorithmic explainability lies in the guarantee of transparency, i.e., understanding how the machine learning algorithm works through the participation of SAR physical models and principles. Scientific explainability ensures the physical consistency of the AI output, as well as the learning of trustworthy results with physical meaning. To ground this, we first lay out a representation of SAR physical layers in the context of SAR domain knowledge, as presented in Section II. Further, we describe how to integrate and interact them with popular neural networks to build hybrid and translucent models for SAR applications, using illustrative examples, as demonstrated in Section III. The perspectives of trustworthy modeling and supplementary explanations for the SAR community are discussed in Sections IV and V. The conclusion and perspectives are finally given in Section VI. ## II SAR Physical Layers Other than neural network layers equipped with a number of learnable parameters, SAR physical layers are layers embedded with physical knowledge of SAR: well-established, interpretable, and supported by domain theories. The concept of a "physical layer", as distinct from a "neural network layer", arose in the literature [16] to make models more physically realistic. As motivated in Fig. 1, three SAR physical layers are highlighted for SAR applications in this paper: (i) sensor and platform, referring to antenna characteristics and the moving satellite/aircraft; (ii) imaging system, describing image formation with the focusing process; and (iii) scattering signature, reflecting the physical properties of terrain and objects. ### _Sensor and Platform_ Fig. 3 demonstrates the physical layer of sensor and platform, which captures the physics behind the SAR acquisition principle, such as aperture synthesis with a moving platform and the various characteristics of the antenna. Existing spaceborne EO SAR missions work in a monostatic or quasi-monostatic configuration. The simplest illumination mode of a SAR system is the stripmap mode, in which the antenna pointing direction is constant throughout the acquisition, as shown in Fig. 3**a**. The moving platform leads to a sliding Doppler spectrum that impacts the complex SAR image. Knowing the behaviour of the Doppler centroid in order to create sub-looks is essential for exploiting the look angle diversity of the input data, especially for very high-resolution SAR images. It is well known that in high-resolution SAR images, where signals are acquired over a broad bandwidth and a wide angular aperture, targets are no longer isotropic and non-dispersive. Instead, it is more plausible to infer that the target's backscattering depends on illumination angle and frequency [17]. Sub-aperture processing can be applied to analyze these target scattering variations. Fig. 3**b** gives an example of a synthesized pseudo-color SAR image obtained via sub-aperture processing. The complex-valued SAR image is first transformed to the azimuth spectral domain by a one-dimensional Fourier transform. Then, the full Doppler spectrum is equally split into three intervals, named sub-apertures or sub-looks, each containing 1/3 of the range of azimuth angles. Finally, the three sub-apertures are transformed back to the time domain using an inverse Fourier transform and coded as the R, G, and B channels, respectively. Red, green, and blue targets respond mainly in the first, second, and third sub-looks, respectively, whilst gray targets respond equivalently in the different sub-looks. The pseudo-colored image well demonstrates the particular behavior of some targets. Given precise knowledge of the parameters related to the Doppler variations (e.g., orbit, azimuth steering rate, radiation pattern, incidence angle), the physical layer can generate sub-look data deterministically, and there is no need to design a neural network that must learn to create sub-looks from various types of SAR training data. Fig. 3: Physical layer (i): Sensor and Platform. **a**: The moving platform creates Doppler variations and synthesizes a large virtual aperture; PolSAR transmits and receives diversely polarized waves, and SAR polarimetric characteristics are depicted. **b**: Based on the physics behind the platform and sensor, the physical layer produces SAR-specific representations, such as sub-aperture synthesis images and polarimetric features, with specified physical parameters.
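The sub-look synthesis just described is simple enough to write down directly. The following is a minimal sketch (our own illustration, not the authors' implementation), assuming azimuth along axis 0 of a single-look complex array and ignoring Doppler-centroid compensation and antenna-pattern weighting, which a real processor would apply:

```python
import numpy as np

def sublook_rgb(slc):
    """Split the azimuth (Doppler) spectrum of a single-look complex
    image into three sub-apertures and stack the sub-look intensities
    as R, G, B channels."""
    spec = np.fft.fftshift(np.fft.fft(slc, axis=0), axes=0)
    n = spec.shape[0]
    looks = []
    for k in range(3):                           # three equal Doppler bands
        band = np.zeros_like(spec)
        band[k * n // 3:(k + 1) * n // 3] = spec[k * n // 3:(k + 1) * n // 3]
        sub = np.fft.ifft(np.fft.ifftshift(band, axes=0), axis=0)
        looks.append(np.abs(sub) ** 2)           # sub-look intensity
    rgb = np.stack(looks, axis=-1)
    return rgb / rgb.max()                       # normalized pseudo-color image
```

Gray pixels then correspond to targets whose response is equal in all three bands, while colored pixels reveal angle-dependent scatterers.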
Sensor characteristics, such as polarization, interferometry, and tomography, construct physical layers as well. Fig. 3 **b** presents a Pauli pseudo-RGB image, where the R, G, and B channels are formed with \(|HH-VV|^{2}\), \(2|HV|^{2}\), and \(|HH+VV|^{2}\), respectively, indicating the polarimetric relations. Several physical layers can be stacked to represent the rich physics of SAR sensor and platform. Early in the literature [18], the diversity of the polarimetric features with the azimuthal look angle was exploited; thus, the moving platform and the polarimetric sensor are both characterized. Similarly, stacked physical layers can represent the polarimetric and interferometric properties of PolInSAR data, or any other combination. ### _Imaging System_ The second physical layer we propose delineates the physics behind SAR image formation with an imaging system. Selected exemplars are illustrated in Fig. 4. A pulse-based radar or a frequency-modulated continuous wave (FMCW) radar is usually used in a SAR system, where a range profile is obtained for each transmitted/received waveform, either by range compression in the case of a pulse-based radar or by applying a Fourier transform to the beat signal in the case of an FMCW radar [19]. By coherent processing of the range profiles, the azimuth focusing process outputs a SAR image representing a two-dimensional complex reflectivity map of the illuminated area. SAR processing, taking a simple point target as an example, aims to collect the dispersed signal energy in range and azimuth into a single pixel. Many traditional imaging algorithms are formulated in terms of a Fourier synthesis framework [20]; as such, the Fourier transform carries a specific physical meaning for SAR images. This kind of physical layer assists the AI model in depicting target scattering beyond the "image" domain. Fig. 4 (a) first shows a simple time-frequency analysis of a target with the short-time Fourier transform [21, 22], characterizing the backscattering intensity variations in the 2-D range and azimuth frequency domain. Four kinds of backscattering behaviors observed in SAR were defined in the literature [23], related to the different objects shown in Fig. 4. In the high-resolution case (wide-bandwidth chirp signal and broad angular aperture), the complex amplitude of a target is frequency- and aspect-dependent [17]. Thus, the image formation can be extended to four dimensions (called a hyperimage) with the wavelet transform, providing a concise, physically relevant description of target scattering.
This frequency and angular energy response pattern has proved useful for discriminating different scatterers, offering valuable prior information to the AI model, as depicted in Fig. 4 **b**. ### _Scattering Signatures of Objects_ Thirdly, we introduce the physical layer regarding the scattering signatures of objects, in which the causality between target characteristics and scattering behaviors is involved. For optical images, what you see is what you receive; that is, the objects depicted in the optical image are in accord with human cognition. Targets in SAR images are instead rendered by their scattering characteristics, and they include a wealth of physical information that the human eye cannot immediately identify. Fig. 5 **a** shows examples of two typical SAR targets, a bridge and a building. The scattering phenomenon that shows several parallel lines over the river can be interpreted as single, double, and multiple scattering of the bridge, based on domain knowledge. The building, with scattering signatures of layover, shadow, single and secondary scattering in the high-resolution SAR image, can also appear as only layover and shadow [24], depending on the building orientation and shape. Similar research by Ferro et al. [26] investigated the relationship between double bounce and the orientation of buildings in VHR SAR images. Fig. 4: Physical layer (ii): Image Formation. **a.** Targets are characterized by sliding bandpass filtering in the Fourier domain. **b.** On the basis of the image formation principle and target scattering models, the physical layer generates a rich target description with physical meaning. Fig. 5**b** demonstrates the relations between the scattering mechanisms of the H/\(\alpha\) plane and the semantics of land-cover and land-use classes [25]. Likewise, one can deduce the scattering center position and the specific shape of a distributed target from a SAR image by applying scattering models [27]. The conventional data-driven convolutional neural network can capture the image contents as we "see" them in the SAR image, whereas it is not equipped with the ability to "interpret" the scattering phenomena discussed before. This indicates the knowledge gap between SAR scattering signatures and human vision cognition. A physical layer delivering the semantic understanding behind the SAR scattering signatures permits a more thorough interpretation of the SAR image. As shown in Fig. 5**c**, the physical layer defines the association between the scattering characteristics of a SAR image and the object's qualities, such as shape, structure, or semantics. It can be written as an objective function or a regularization term that constrains the training of neural networks. This will improve the intelligence of the AI model so that it masters some of the causality between scattering signatures and the object's nature. ## III Hybrid Modeling with SAR Physical Layers The integration and interaction of neural network layers and physical layers constructs the hybrid modeling for SAR image interpretation. In view of algorithmic explainability, the explainable physical models and domain knowledge improve the transparency. For scientific explainability, the hybrid modeling ensures the physical meaning of the output of the physical layers, and the prediction can maintain physical consistency. In this section, we demonstrate several hybrid modeling approaches with SAR physical layers to achieve explainability and physics awareness.
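As a concrete example of what a type-(i) physical layer computes, consider the Pauli pseudo-color composite described in Section II-A. A minimal sketch is given below; the display scaling is an illustrative choice of ours, not a prescribed one:

```python
import numpy as np

def pauli_rgb(hh, hv, vv):
    """Pauli pseudo-color composite from complex HH, HV, VV channels:
    R = |HH - VV|^2 (double bounce), G = 2|HV|^2 (volume scattering),
    B = |HH + VV|^2 (surface scattering)."""
    rgb = np.stack([np.abs(hh - vv) ** 2,
                    2 * np.abs(hv) ** 2,
                    np.abs(hh + vv) ** 2], axis=-1)
    rgb = 10 * np.log10(rgb + 1e-12)             # dB scale for display
    lo, hi = np.percentile(rgb, [2, 98])         # simple contrast stretch
    return np.clip((rgb - lo) / (hi - lo), 0, 1)
```

Such a layer is deterministic and fully interpretable, so no network capacity is spent re-learning it.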
### _Insert for Substitution_ The introduced physical layers can be inserted in a deep neural network for substitution, extracting explainable and meaningful features, either as the input of a DNN or fused with DNN features in intermediate layers. A common way is to insert a physical layer at the input to obtain the polarimetric features for PolSAR image classification, including the elements of the coherency matrix, Pauli decomposition features, etc. [31, 32]. Similarly, sub-aperture images are generated as the input for target detection [33]. The other usage of the physical layer is feature fusion, where the features obtained by well-established physical models and by deep neural networks are combined [34, 35]. Our recent work, a deep learning framework named Deep SAR-Net (DSN) [28], addressed both aspects, inserting physical layers at the input and at an intermediate position of the deep model. As shown in Fig. 6, DSN was proposed for classifying SAR images with complex values. Instead of an entirely data-driven method, i.e., the complex-valued convolutional neural network (CV-CNN), the designed DSN encompasses three shallow neural network modules and two physical layers. The first physical layer generates the high-dimensional radar spectrogram based on time-frequency analysis. The second one handles the features of the 2-D projection along the frequency axes [22] to maintain the location constraint, making it possible to fuse them with spatial features from the intensity image. DSN outperformed CV-CNN, especially with limited labeled training data, and had a remarkable performance in discriminating man-made target scenes compared with the traditional CNN. This demonstrates that the Fourier processing of single-look complex SAR images, embedding knowledge such as antenna synthesis, well characterizes the physical properties of SAR targets, and that the usage of physical layers cuts down unnecessary parameters in the neural network layers, improving model performance with limited ground truth. Fig. 5: Physical layer (iii): Scattering Signatures of Objects. **a**. The Golden Gate Bridge revealing multipath scattering characteristics in a Gaofen-3 quad-pol SAR image, and a typical single building representing different scattering regions in a high-resolution (1 m) SAR image [24]. **b**. The scattering mechanisms indicated by the H/\(\alpha\) plane for full-polarized SAR data point out the land-use and land-cover classes [25]. **c**. The physical layer describes the relationship and reasoning between the scattering characteristics seen in the SAR image and the object's features, such as its shape, structure, or even semantics. ### _Compensation for Imperfect Knowledge with Feedback_ Under conditions of unknown or inconclusive physical models, or of incomplete knowledge, it is difficult to extract perfect physical parameters or physical scattering characteristics of SAR via model-based methods--for instance, obtaining polarimetric features from dual-pol, or even single-polarized, SAR images. This is where physical layers interacting with deep neural networks take effect. #### III-B1 Target Character Identification Some studies have analyzed how the energy response pattern of targets varies across the frequency dimensions of SAR images and have discussed non-stationary targets [18, 36]. Spigai et al. [23] pointed out four canonical targets with a rough definition, shown in Fig. 4 a. However, the behavior remains unknown for many complicated scenes and objects. Fig. 7 shows our related work on using physical layers and deep neural networks to compensate for imperfect knowledge. The first is the unsupervised hierarchical deep embedding clustering based on time-frequency analysis (HDEC-TFA) [29], which was proposed to automatically characterize the radar spectrogram (or the sub-band scattering pattern defined in [29]), mainly in urban areas, discovering more varied scattering patterns than the four specific classes defined in [23]. It offered a new perspective for describing the physical properties of single-polarized SAR. Furthermore, we used two stacked physical layers to obtain the polarimetric and time-frequency patterns, analyzed with a deep neural network, in reference [37]. Fig. 6: Our recent work Deep SAR-Net (DSN) [28] for SAR image classification can be regarded as a typical example of inserting physical layers into a deep model. Fig. 7: **a**. Unsupervised HDEC-TFA method [29]. It automatically discovered more radar spectrogram patterns than the four defined in [23] with deep neural networks. **b**. Learning the polarimetric features from single-polarized SAR images, supervised by Entropy-Alpha-Anisotropy generated from full-pol data [30]. **c**. The physical layers in **a** and **b** play the role of input transform (blue) and supervision generation (green) in hybrid modeling. In addition, the physical layer (red) can act as feedback to restrict learning and produce physically consistent outcomes. Fig. 8 demonstrates the result compared with the polarimetric physical model. The SOLEIL synchrotron in France, shown as the round building in the Google Earth remote sensing image, is surrounded by buildings of three different shapes. The HDEC-TFA method can capture the special characteristics of the architectures even in a single HH-channel SAR image, as well as the physical-model-based method GD-Wishart [38] does on quad-pol SAR. Some other examples of man-made targets characterized by the time-frequency model with neural networks are given in [39]. Our experiments in [29] demonstrated that the trained model varies with different imaging conditions, since the sub-band scattering pattern is influenced by several imaging parameters, which should be taken into consideration when transferring the AI model to other situations. #### III-B2 Polarimetric Parameter Extraction By transmitting and receiving waves that are both horizontally and vertically polarized, the full-pol SAR image captures abundant physical characteristics of the imaged objects that can lead to various physical parameters. In contrast, single-pol and dual-pol SAR data are less informative for physical feature extraction. If only one polarization channel is obtained, one cannot derive the other polarization channels in principle. Once the objects are known, i.e., once the characteristics of targets such as geometry, surface roughness, etc., are identified, deep learning can be employed to transfer the knowledge learned from physical models to reconstruct the physical parameters of objects. As shown in Fig. 7 b, Zhao et al. [30] proposed a complex-CNN model to learn physical parameters (entropy \(H\) and \(\alpha\) angle) with transfer learning from single-pol and dual-pol SAR data, supervised by features obtained with a physical layer. Some similar studies include, but are not limited to, [40, 41]. Song et al. [40] addressed the "radar image colorization" issue to reconstruct the polarimetric covariance matrix with a designed deep neural network, where the supervision was also generated with a physical layer. When training a data-driven deep neural network, some physical consistencies may not be guaranteed. The authors pointed out that the reconstructed covariance matrix may not be positive semi-definite [40], and they proposed an algorithm to correct it. In this case, the additional physical layer embedding the prior constraint acts as post-processing to revise the physically inconsistent results of DNNs. Furthermore, this type of physical layer is suggested to provide feedback during training, as demonstrated in Fig. 7 c, the red part. The feedback of the physical layer aims to prevent the model from learning the physical inconsistency. Fig. 8: The SOLEIL synchrotron in France and the surrounding buildings with different shapes are depicted in the Gaofen-3 SAR image. Both the GD-Wishart [38] result on quad-polarization SAR data and the HDEC-TFA [29] result on HH-channel single-polarized SAR can capture the special scattering characteristics of the objects. Fig. 9: The SAR physical layer can be integrated in a self-supervised learning framework to guide the neural network training without ground truth. **a**. The physical layer generates various modalities of the SAR image using well-established physical models, such as sub-aperture images, different polarimetric features, etc. The self-supervised learning can be conducted with a contrastive learning paradigm. **b**. The physical layer produces a physical representation of the image, serving as a guided signal that drives the neural network to learn a similar representation. #### III-B3 SAR Image Generation/Simulation This paradigm can be popularized to other SAR applications. SAR target image generation (or simulation) based on deep generative models (such as the variational auto-encoder [44] and the generative adversarial network [45]) has attracted much attention in recent years. The generated SAR images are expected to be used as data supplements to support target identification. The authenticity and interpretability of current deep SAR image generation are a substantial obstacle that has a significant impact on subsequent tasks [46]. Many recent studies input important physical parameters into the deep generative model or use them as supervision at the output layer, such as depression angle and target orientation, which facilitated more reliable outcomes [47, 48]. We consider these physical layers, as shown in Fig. 7 c, the green and blue parts. Coupling the physical layer as a feedback in a neural network for SAR image generation has yet to be explored. When a generative model produces a pseudo SAR image, a physical layer would be applied to verify whether it is consistent with the knowledge base of SAR, e.g., physical parameters derived from a well-established model. If not, the current generative model would revise the pseudo result to minimize the inconsistency. There are some examples to learn from in the field of fluid simulation [49, 50]. As such, the physical layer is used for constructing a physical-inconsistency feedback that explicitly constrains the generative model to fulfill some quantitative conditions, so as to guarantee authenticity. Referring to some of the latest studies in other fields, a physical model as a feedback or constraint in the loop of deep learning has also been applied to underwater image enhancement [51] and seismic impedance inversion [52].
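The positive semi-definiteness requirement just mentioned is a good example of a physical constraint that can be turned into a training-time feedback rather than a post-processing fix. The sketch below is our own illustration of the idea (not the algorithm of [40]); the loss weighting is assumed, and PyTorch is used for convenience:

```python
import torch

def psd_consistency_penalty(C_pred):
    """Physics feedback for a reconstructed polarimetric covariance
    matrix: a valid covariance matrix must be Hermitian positive
    semi-definite, so negative eigenvalues are penalized.
    C_pred: complex tensor of shape (..., 3, 3)."""
    C_h = 0.5 * (C_pred + C_pred.conj().transpose(-1, -2))  # Hermitian part
    eigvals = torch.linalg.eigvalsh(C_h)                    # real eigenvalues
    return torch.clamp(-eigvals, min=0).sum(dim=-1).mean()

# Inside a training step (names and weight are illustrative):
#   loss = reconstruction_loss + lambda_phys * psd_consistency_penalty(C_pred)
```

Because the penalty is differentiable, the network is pushed toward physically consistent outputs during training instead of having its outputs corrected afterwards.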
### _Self-Supervised Learning Guidance_ Self-supervised learning has attracted much attention in recent years, since it can help reduce the required amount of labeling. One can pre-train a model on unlabeled data and fine-tune it on a small labeled dataset. This offers a great opportunity for the SAR community, where big data volumes are available while ground truth is usually difficult to obtain. There is remarkable potential for SAR physical layers in self-supervised learning. As shown in Fig. 9, two self-supervised learning paradigms are given. The physical layer helps to establish a pretext task for the SAR image. In Fig. 9 **a**, different SAR image representations are generated by the physical layer, for instance sub-aperture images, various polarimetric feature images, etc. Similar to SimCLR [53], which conducts contrastive learning based on data augmentation, or NPID [54], which learns optimal features via instance-level discrimination, a surrogate task can be built to form the self-supervised learning. An illustrative example is in reference [55]. Fig. 9 **b** illustrates a second line of thought, which we refer to as physics-guided learning. Firstly, the physical layer is used for generating meaningful physical representations, like scattering mechanisms (physical layers (i) and (ii) can both achieve this). Meanwhile, the neural network extracts hierarchical spatial features from the SAR image. The crucial point is how to establish a connection between physical properties and image features. We propose to exploit physical layer (iii) to reveal these relationships and thereby design an objective function for self-supervised learning. Our recent work [42, 37, 43] details the paradigm in Fig. 9 **b**. A physics-guided network (PGN) for SAR image feature learning was proposed, as shown in Fig. 10. First, a physical layer is deployed at the beginning, where the physical scattering properties are derived. Based on the crucial assumption that SAR image features and the abstract physical scattering mechanisms should share common attributes at the semantic level, a surrogate task was established via another physical layer that defines a loss function. The inspiration is from reference [56], which indicated that the abstract topic mixture of scattering properties and the high-level image features carry similar semantics. Thus, we built the relation between the image semantics and the SAR scattering characteristics. A novel objective function was designed to instruct self-supervised learning guided by physical scattering mechanisms. The advantages of this kind of learning paradigm lie in two aspects. First, the training process uses all labeled and unlabeled data, so that the learned features generalize well to the test set. Second, the guidance of physical information leads to physics awareness of the features learned by the neural networks, i.e., the DNN features maintain physical consistency. In a word, the prior physical knowledge is embedded in the neural network. The experiments in [43] verified this quantitatively and qualitatively. Additionally, the outputs of the physically interpretable deep model can be further explained, which in turn inspires algorithm improvement. We illustrate this with an example of sea-ice classification in the polar area [42]. The physics-guided learning is driven by physical signals that reflect the scattering properties of the SAR image. The guided physical signals are visualized with t-SNE in Fig. 11, where different colors in (a) represent semantic labels of sea-ice and each color in (b) indicates samples with similar physical scattering properties. One characteristic that can be seen is that young ice and water bodies have extremely similar physical representations, which would impede semantic discrimination. This can explain the physics-guided learning result in [42] that about 23% of the test samples of the water bodies class were predicted as young ice. The explanation will motivate us to improve the algorithm by, for instance, relaxing the physical constraints between the two classes. Similarly, a very recent work [57] was proposed for SAR target recognition, inspired by our work [43]. The authors proposed a CNN under the guidance of the SAR target physical model, the attributed scattering center (ASC), to extract the significant target features, which were successively injected into the classification network for more robust and interpretable results. ## IV Trustworthy Modeling ### _Why Trustworthy Modeling Is Needed_ The results obtained by applying AI techniques in SAR processing can be validated using in situ measurements of known targets. For example, a common approach for calibration/validation of SAR data is to employ an electronic target (transponder) that receives a signal, applies a controllable time delay, and transmits the delayed signal towards the receiver of the bistatic/monostatic system. Such a target can be used to validate results related to deformation measurements (e.g., atmospheric corrections) or polarimetric analysis. Some real-world applications of SAR require the measurement of reliability and uncertainty. One example is sea-ice classification in the untraversed polar regions, where the ice changes rapidly; this makes annotation difficult and reference data scarce. In this case, the predictions in unknown polar areas obtained by the AI model need to be trusted by humans. Strong robustness and a plausible degree of confidence of an ML system's predictions are just as important as its accuracy. Fig. 12**a** indicates that building orientations have a great impact on polarization orientation angles [58] and scattering mechanisms [38]. The zoomed-in region mainly contains ortho buildings, where \(\phi_{1}\) is close to \(0^{\circ}\), and oriented buildings with a larger \(\phi_{2}\). The polarization orientation angles of ortho buildings are obviously smaller than those of oriented buildings. Ortho built-up areas mainly depict double scattering (DS) and mixed scattering (MS), where the double scattering dominates. The oriented buildings exhibit volume scattering (VS). Fig. 12**b** shows the limited robustness of recognition performance as the angle of the test data varies when training with a small range of angles. A trustworthy model is expected to perceive SAR scattering variations with a variety of physical parameters and be perturbation-tolerant. ### _Trustworthy Modeling with Uncertainty Quantification_ The development of Bayesian deep learning [59] has caught much attention in recent years; here, the posterior distribution over parameters is obtained instead of a point estimate. A crucial property of the Bayesian method is its ability to quantify uncertainty, to the benefit of constructing trustworthy models. In the case of Fig. 12**b**, the performance of deep neural networks drops dramatically when testing on SAR targets whose orientation angles differ greatly from the training data.
The model is over-confident about some uncertain data, which cannot be perceived by frequentist methods. A Bayesian deep neural network, instead, is able to calibrate the output score and measure the uncertainty of the prediction. Fig. 10: A physics-guided network was proposed [42, 43], where a novel deep learning paradigm and loss function were designed to associate the SAR scattering characteristics with image semantics. Fig. 11: Visualization of physics-guided signals on test data by t-SNE. (a) Different colors represent semantic labels of sea-ice. (b) The physics-guided signals are grouped into eight clusters, where each color indicates samples with similar physical scattering properties. Some recent studies applied Bayesian deep learning for SAR sea-ice segmentation [60, 61, 62], as well as target discrimination [63]. The generated uncertainty map can serve as a guideline for the experts in annotation and improve trust between users and the model. Some approximation strategies for Bayesian deep neural networks, such as Monte Carlo Dropout [64] and Deep Ensembles [65], are promising for different SAR applications. We give an example of SAR ship detection for demonstration. The limited labeled training data and the interference of complex scattering from the target itself or the shore background strongly restrict the detection performance. The ship detection results on some selected SAR images from the AIR-SARShip-1.0 dataset [66], obtained by the FCOS detection algorithm [67], are shown in the first row of Fig. 13. Compared with the ground truth annotation in the third row, the detection results contain many false alarms. It is crucial to estimate the model uncertainty, which is basically brought about by inadequate training data, to evaluate the reliability of the SAR ship detection model and provide more trustworthy predictions. When we apply the Monte Carlo (MC) Dropout training strategy to approximate Bayesian inference [64], it captures the uncertainty of the existing deep model for SAR ship detection. The results with high uncertainty and very low classification scores are discarded, with only the trustworthy predictions preserved. The results are shown in the second row of Fig. 13, where the false alarms are evidently reduced. In the fourth and fifth SAR images, the localization uncertainty of two large ships, visualized with circles around the corners of the predicted bounding boxes, is relatively high. Intuitively, we can infer that the reason for the weak capability of the trained model in detecting such targets is probably the lack of large ships in the training set. The feedback from the uncertainty estimation should further inspire follow-up studies to improve the algorithm and build trustworthier models.
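The MC Dropout procedure used above is straightforward to reproduce. The sketch below is a simplified, classification-style illustration of ours (a real detector such as FCOS has per-box outputs); the number of passes and the filtering thresholds are assumptions:

```python
import torch

def mc_dropout_predict(model, x, T=30):
    """Monte Carlo Dropout: keep dropout active at test time and run
    T stochastic forward passes; the spread across passes approximates
    the model (epistemic) uncertainty."""
    model.eval()
    for m in model.modules():                 # re-enable dropout layers only
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs.std(dim=0)   # score and uncertainty

# Keeping only trustworthy predictions (thresholds illustrative):
#   mean, std = mc_dropout_predict(classifier_head, features)
#   keep = (mean[:, 1] > 0.5) & (std[:, 1] < 0.1)
```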
## V Supplementary Explanations Beyond the hybrid and trustworthy modeling, extra explanations and other interpretable models are also required to assist in developing more transparent AI models for SAR. Explainable artificial intelligence (XAI) techniques, such as gradient-based, attention-based, and occlusion-based explanation methods, are helpful to demonstrate the effectiveness of integrating physical layers to achieve explainability. Transparent machine learning models, such as linear regression, decision trees, and Bayesian models, are interpretable [68]; the algorithm itself provides explanations. For example, Latent Dirichlet Allocation (LDA) builds a three-level hierarchical Bayesian model to describe the underlying relationship among documents, topics, and words. That is, the document can be explained with a set of topics, where each topic, in turn, is represented by a distribution over words. Karmakar et al. [69] used the LDA model for SAR image data mining to generate the topic compositions and group them into semantic classes, which were fused with domain knowledge obtained by active learning from experts. The transparent model can also be integrated in a deep learning framework to approach explainability. Huang et al. [42, 43] applied the LDA model to generate the physical-attribute representations as the guiding physics signals, rather than directly using the physical scattering characteristic labels to train the physics-guided network. That is because the learned physics-aware features are expected to benefit semantic label prediction, but a semantic gap actually exists between the physical scattering characteristics and the semantic annotation. Consequently, the LDA model enables the guiding signals to gain abstract semantics while remaining explainable in terms of physical scattering properties. The other purpose for approaching explainability lies in the applications of transfer learning. Manual annotation in the SAR domain is difficult, and the deficiency of labeled data basically restricts the development of data-driven methods. Fig. 12: A trustworthy model should perceive the SAR scattering variations with a variety of physical parameters and be perturbation-tolerant. **a** Differently oriented buildings reflect various polarization orientation angles and scattering mechanisms in a PolSAR image. **b** SAR targets vary violently with orientation angles. When training with a small range of angles, limited robustness of recognition performance is observed as the angle of the test data varies. Facing a wide variety of launched SAR platforms with various frequency bands and resolutions, as well as other multi-spectral, hyper-spectral, and optical remote sensing sensors, it is of vital importance to elucidate the transferability of ML models among inhomogeneous data. Arrieta et al. [68] indicated that transferability is one of the goals toward reaching explainability. Although many studies have explored different deep transfer learning methods in the SAR domain [70, 71, 46], the inner transfer mechanisms of deep learning models still need insightful explanation. An insufficient understanding of the model may mislead the user toward inappropriate algorithm design and fatal consequences, i.e., negative transfer. Based on SAR target recognition, we proposed to analyze the transferability of features in DNNs, which contributed to explaining what, where, and how to transfer more effectively for SAR images [72]. The inspiration also motivates follow-up studies, including the SAR-specific pretrained model [73], the application to detection tasks [74], and the interpretability analysis of deep learning models for radar images [75]. ## VI Conclusion and Perspectives In this paper, we prospect an AI paradigm shift for SAR applications that is explainable, physics aware, and trustworthy. To ground this, SAR physical layers embedded with domain knowledge are introduced, which are meant to be integrated and to interact with neural networks for hybrid modeling. Some illustrative examples are provided to demonstrate the general patterns, showing algorithmic and scientific explainability. In addition, we emphasize the importance of, and approaches to, trustworthy modeling with Bayesian deep learning, as well as illustrating some other techniques, such as interpretable machine learning methods, explainability techniques, and model transferability, that would assist in developing more transparent AI models for SAR. In fact, this field of interdisciplinary research is still largely undeveloped. To the best of our knowledge, such approaches have not been formulated in the past years; so far, only some plain attempts have been made. Significant questions and challenges remain, e.g., the feasible representation of SAR physical layers, the optimized form of physical constraints, and hybrid modeling optimization. Currently there are several smart sensing techniques in the SAR community that can be exploited as pre-processing steps for data fed into DNNs, e.g., multi-aperture focusing in bistatic configurations [76], monostatic/bistatic tomography, polarimetric decomposition, and deformation time series. The outputs of these techniques can expose features that probably cannot be directly extracted by a DNN, especially when using a small training data set. The newly introduced AI paradigms can apply to the broad class of coherent imaging systems. A few examples can be enumerated: computer tomography, THz imaging, echography in medicine or industrial applications, sonar or seismic observations in the Earth sciences, or radio-telescope data in astrophysics. ## Acknowledgment This work was supported by the National Natural Science Foundation of China under Grant 62101459, the China Postdoctoral Science Foundation under Grant BX2021248, the Fundamental Research Funds for the Central Universities under Grant G2021KY05104, and a grant of the Romanian Ministry of Education and Research, CNCS - UEFISCDI, project number PN-III-P4-ID-PCE-2020-2120, within PNCDI III. We would like to thank the associate editor and the anonymous reviewers for their great contribution to this article. Fig. 13: The SAR ship detection results on the AIR-SARShip-1.0 dataset [66], obtained by the deep learning detection algorithm FCOS [67]. Many false alarms appear in the detection result, due to the limited training data and the interference of complex scattering. The prediction uncertainty is estimated by MC-Dropout [64], and the uncertain results are discarded to achieve a better performance.
2303.15592
Uncovering Bias in Personal Informatics
Personal informatics (PI) systems, powered by smartphones and wearables, enable people to lead healthier lifestyles by providing meaningful and actionable insights that break down barriers between users and their health information. Today, such systems are used by billions of users for monitoring not only physical activity and sleep but also vital signs and women's and heart health, among others. Despite their widespread usage, the processing of sensitive PI data may suffer from biases, which may entail practical and ethical implications. In this work, we present the first comprehensive empirical and analytical study of bias in PI systems, including biases in raw data and in the entire machine learning life cycle. We use the most detailed framework to date for exploring the different sources of bias and find that biases exist both in the data generation and the model learning and implementation streams. According to our results, the most affected minority groups are users with health issues, such as diabetes, joint issues, and hypertension, and female users, whose data biases are propagated or even amplified by learning models, while intersectional biases can also be observed.
Sofia Yfantidou, Pavlos Sermpezis, Athena Vakali, Ricardo Baeza-Yates
2023-03-27T20:49:42Z
http://arxiv.org/abs/2303.15592v2
# Uncovering Bias in Personal Informatics ###### Abstract. Personal informatics (PI) systems, powered by smartphones and wearables, enable people to lead healthier lifestyles by providing meaningful and actionable insights that break down barriers between users and their health information. Today, such systems are used by billions of users for monitoring not only physical activity and sleep but also vital signs and women's and heart health, among others. Despite their widespread usage, the processing of sensitive PI data may suffer from biases, which may entail practical and ethical implications. In this work, we present the first comprehensive empirical and analytical study of bias in PI systems, including biases in raw data and in the entire machine learning life cycle. We use the most detailed framework to date for exploring the different sources of bias and find that biases exist both in the data generation and the model learning and implementation streams. According to our results, the most affected minority groups are users with health issues, such as diabetes, joint issues, and hypertension, and female users, whose data biases are propagated or even amplified by learning models, while intersectional biases can also be observed. machine learning, bias, fairness, personal informatics, ubiquitous computing, sensing data, digital biomarkers
Physiological signals accompanied by PI usage logs have been used to predict mood, stress, and overall mental health [95]. At the same time, the advanced features for health tracking that are continuously integrated into consumer smartphones and wearables [13, 14, 70, 96] now enable advanced analytics, such as atrial fibrillation identification, fertility prediction, fall and crash detection, and sleep apnea warnings, and are paving the future of mHealth. **Bias in PI.** However, the prevalent PI adoption embeds important challenges due to the questionable transparency and unexplored biases in these systems' algorithms. Bias in machine learning is a source of unfairness that can lead to harmful consequences, such as discrimination [72]. The Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) community defines fairness as a principle that "ensures that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics (e.g., race, sex, etc.)" [16]. Fairness is an inexorably subjective and context-dependent notion and incorporates different metrics for different definitions, some of which are even mutually incompatible [44]. Contrary to the common belief that algorithmic decisions are objective and unbiased by definition, a machine learning model may actually be inherently unfair by learning, preserving, or even amplifying historical biases present in the data [82]. Real-world cases of unfair machine learning models are, unfortunately, abundant. Examples can be drawn from criminal justice [7], hiring practices [30], ad targeting [90], facial recognition [84], healthcare [108], and language models [21]. Despite this growing interest in machine learning biases overall, a focused emphasis on the requirements of unbiased PI systems in mHealth settings is lacking [3]. PI systems are deployed in high-stakes health-related applications, while their input data modality makes them susceptible to propagating or even amplifying bias. Beyond algorithm performance, the existence of bias is a challenging problem in delivering equitable care.
Thus, it is critical to explore biases within these systems to raise awareness regarding the mitigating and regulatory actions required to avert potential negative consequences.

**PI Idiosyncrasies.** Moreover, this need for exploring bias is further highlighted by the fact that the PI domain has significant differences -in terms of bias- from previously well-studied domains, such as facial or speech recognition:

* _The digital divide as a barrier of entry:_ To contribute data to an image or voice dataset, users do not need any prerequisite knowledge or niche device. However, to contribute to a PI dataset, users face significant "entry barriers" in terms of digital capacity or device ownership, creating new-found _representation biases_ in the domain's datasets, as verified by our analysis in Sections 3.1 and 3.2.
* _Emerging technologies accuracy:_ Facial or speech recognition measurement devices, e.g., cameras or voice recorders, are based on mature technologies. As a result, their accuracy remains relatively stable across different devices. On the contrary, emerging PI devices' accuracy varies significantly across manufacturers and even across models, creating unexplored _measurement biases_ and discrepancies between user segments (cf. Section 3.3).
* _Complex nature of data:_ It may be easy to identify biases in terms of skin color and gender (facial recognition) or accent and gender (speech recognition). Yet, identifying biases in digital biomarkers (e.g., step or sleep data) may not be straightforward. Biases in PI data can remain hidden and be further propagated or even amplified in machine learning models (cf. Sections 4.1 and 4.2).

**Summary of contributions.** Motivated by these idiosyncrasies and the gap in the literature, in this paper, we present the first comprehensive study on bias in PI: We adopt the most complete framework to date for understanding sources of harm in the machine learning life cycle, by Suresh and Guttag [94], explore biases in the data generation and model building and implementation streams, and validate them in a real-world, large-scale PI dataset. During this process, we examine the suitability of different fairness metrics for digital biomarkers, initiating a conversation within the community on how to approach biases within ubiquitous mHealth. Specifically, our research questions (RQs) and the respective contributions of our study are as follows:

1. _What does bias mean for PI?_ To quantify bias in relation to the PI domain, we explore diverse fairness definitions and metrics and identify differences from other domains. We delineate each metric's strengths and shortcomings and select the most appropriate metrics for the domain (Section 2).
2. _Are PI data susceptible to biases?_ We examine the largest real-world PI dataset to date to assess whether ubiquitous digital biomarkers incorporate biases. Specifically, we perform the first detailed study on bias in the MyHeart Counts cardiovascular health study dataset [53], containing physical activity, fitness, sleep, and cardiovascular health data for 50K participants across the United States. Our results identify biases across all dimensions of the data generation stream, namely _historical, representation,_ and _measurement_ biases; these findings highlight that users should be cautious when using PI datasets, in general, and the MyHeart Counts data, in particular (Section 3).
3. _Do machine learning models inherit PI data biases?_ We investigate whether biases present in PI data are propagated when applying machine learning models to these data. Specifically, we evaluate long short-term memory (LSTM) sequence models, as a baseline, and personalized models for _aggregation, learning,_ and _deployment_ biases. In line with prior work [94], our findings indicate that data biases are propagated to deep learning models, especially for intersectional user groups. Surprisingly, they are significantly amplified in their personalized counterparts, raising questions regarding the shortcomings of personalization (Section 4).
4. _Can synthetic benchmarks hide the imperfect nature of PI?_ We explore whether "perfect" synthetic benchmark datasets can hide PI data and model "imperfections" and biases during evaluation. Specifically, we compare a random benchmark, representative of our data, with one designed to achieve demographic parity, for evaluation biases. Our findings highlight the importance of establishing PI benchmarks that are representative of the intended target populations to avoid the deployment of models with unidentified biases (Section 4.3).

Finally, we partially apply our analysis on two different PI datasets to showcase the generalizability of our findings and share our code publicly [8].

Suresh and Guttag [94] identify seven sources of harm across the machine learning life cycle, spanning the data generation and the model building and implementation streams (Figure 1):

* _Historical biases_ can occur when the world, as it is or was, produces data that disadvantage certain groups, even under perfect sampling and measurement. For example, real-world inequalities in physical activity across genders and income levels are reflected in mobility data regardless of how carefully they are collected (cf. Section 3.1).
* _Representation biases_ can occur when sampling methods lead to underrepresenting general population segments. For example, in popular image datasets, the majority of the images originate from the United States or Europe, leading to performance degradation when classifying images coming from an underrepresented region [33].
* _Measurement biases_ can occur when choosing, collecting, and calculating features and labels for the prediction problem. For example, in medical applications, oftentimes, diagnosis is used as a proxy for having a health condition; yet, certain gender and racial groups suffer higher rates of misdiagnosis or underdiagnosis [54].
* _Aggregation biases_ can occur when a "one-size-fits-all" treatment, e.g., model, is used for data in which underlying user groups should be treated separately. For example, in natural language processing, training models on generic data will fail to capture the different meanings and context behind street slang [43].
* _Learning biases_ can occur when modeling choices amplify performance disparities across different user segments in the data. For example, optimizing a model for privacy can reduce the influence of data originating from underrepresented groups [18].
* _Evaluation biases_ can occur when the benchmark population is not representative of the real user population. For example, dark-skinned women comprise only a small percentage of popular facial image benchmarks, leading to worse performance of commercial facial analysis tools on intersectional accuracy [22].
* _Deployment biases_ can occur when there exists a mismatch between the problem a model is designed to solve and how it is actually utilized. For example, risk assessment tools in criminal justice are not used in isolation but can be used in "off-label" ways, such as determining the length of a sentence [28].

Figure 1: Sources of harm in the data (top) and model building and implementation (bottom) streams [94]. The training, test, and benchmark sets are common across figures.

In the following section, we introduce the use case through which we explore bias in PI for mHealth.
We then show empirically and analytically how Suresh and Guttag's seven sources of bias translate into the PI domain.

### Exploring Bias through the Largest Digital Biomarkers mHealth Dataset

To examine the existence of the seven sources of bias in the PI machine learning life cycle and provide clear answers to our research questions (RQs), introduced in Section 1, we need to define an indicative -but by no means restrictive- use case to enable our analysis. For this purpose, we utilize the MyHeart Counts dataset [53], the largest collection of digital biomarkers in the mHealth domain to date, enabling us to perform the most comprehensive analysis of bias across diverse user demographics, including gender, ethnicity, age, BMI, and health conditions. Nevertheless, our methodology and outputs can be generalized to any PI dataset other than the prominent use case of MyHeart Counts (see Section 5).

_Data Description_. Until recently, general-purpose, population-scale PI datasets were unavailable, partly due to the high cost of data collection, as well as privacy concerns and data protection regulations. The most popular open datasets consisted of small to medium samples [99, 105] or were domain-constrained to Human-Activity Recognition (HAR) [6] and Sleep Classification (SC) [71]. However, this changed with the publication of data from the MyHeart Counts Cardiovascular Health Study, a collection of real-world physical activity, fitness, sleep, and cardiovascular health data from 50K participants in the United States. Participants completed various surveys and a 6-minute walk test and contributed PI data via an iPhone application built using Apple's ResearchKit framework [15]. They provided informed consent to make this data freely available for future research. Approximately 1 out of 10 participants (\(N=4920\)) shared their basic HealthKit data (step count, distance covered, burned calories, and flights climbed), while fewer users shared their sleep (\(N=626\)) and workout (\(N=881\)) data. We perform our analysis on the basic HealthKit data, which contains the most common data types among scientific datasets, so that our findings are generalizable and our methodology is reproducible. Additionally, we combine these data with survey responses to attain the following user attributes: gender, ethnicity, age, BMI, and health conditions, such as heart condition, hypertension, joint problems, and diabetes.

_Data Preprocessing_. To ensure a sufficient sample size per user group and compatibility with popular bias metrics, we convert non-binary user attributes, such as ethnicity, age, and BMI, to binary, as seen in Table 1. This grouping creates two user groups per protected attribute, namely a majority group (also called "privileged" for the purpose of this analysis) and a minority group (also called "unprivileged" for the purpose of this analysis). Note that the usage of the term "privilege" in this work does not necessarily coincide with real-world "privilege". For example, users with non-healthy BMI are the majority user segment in our dataset, and hence, they are referred to as the "privileged" user group, whereas one could argue that the opposite applies in reality.
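To make this grouping concrete, the following minimal pandas sketch performs the binarization summarized in Table 1 (shown below). The flat survey layout and column names are our assumptions for illustration, not the dataset's actual schema, and N/A responses are assumed to be filtered out beforehand:

```python
import pandas as pd

def binarize_protected_attributes(survey: pd.DataFrame) -> pd.DataFrame:
    """Map raw survey responses to majority (1) / minority (0) groups per
    Table 1. Column names are hypothetical; N/A rows are assumed dropped."""
    out = pd.DataFrame(index=survey.index)
    out["gender"] = (survey["gender"] == "Male").astype(int)         # majority: male
    out["ethnicity"] = (survey["ethnicity"] == "White").astype(int)  # majority: white
    out["age"] = (survey["age"] < 65).astype(int)                    # majority: <65
    bmi = survey["weight_kg"] / survey["height_m"] ** 2
    out["bmi"] = ((bmi < 18.5) | (bmi > 25)).astype(int)             # majority: non-healthy BMI
    for cond in ("heart_condition", "hypertension", "joint_problem", "diabetes"):
        out[cond] = (survey[cond] == "No").astype(int)               # majority: no condition
    return out
```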
| **Attribute** | **Original Groups** | **Majority Group** | **Minority Group** |
| --- | --- | --- | --- |
| Gender | Male, Female, N/A | Male | Female |
| Ethnicity | White, Asian, Black, Hispanic, American Indian, Pacific Islander, Other, N/A | White | Non-white |
| Age | Integer number, N/A | <65 (lower risk of complications) | ≥65 (higher risk of complications) |
| BMI | Real number (height and weight), N/A | <18.5 or >25 (non-healthy) | ≥18.5 and ≤25 (healthy) |
| Heart Condition | Yes, No, N/A | No | Yes |
| Hypertension | Yes, No, N/A | No | Yes |
| Joint Problem | Yes, No, N/A | No | Yes |
| Diabetes | Yes, No, N/A | No | Yes |

Table 1. The available protected attributes in the MyHeart Counts study data. For the purpose of the bias analysis, we convert the non-binary attributes to binary to ensure a sufficient sample size per group and compatibility with popular bias metrics.

_Data Labeling_. As mentioned previously, the MyHeart Counts dataset is general-purpose, meaning that it does not introduce any new learning tasks or prediction labels. To this end, we select the _next-day physical activity prediction from historical data_ use case (Friedman et al., 2017; Krizhevsky et al., 2014) for model training. In other words, based on the user's past activity, we try to predict how many steps they will perform the next day (see Table 2). Such a task may enable, for instance, the provision of personalized step goals by PI systems, which have proven to be more effective in inciting positive health behavior change compared to static, fixed goals (Krizhevsky et al., 2014). The reasons behind this choice lie not only in the benefits of physical activity for physical and mental health (Krizhevsky et al., 2014) but also in the availability of basic digital behavioral biomarkers, such as steps. Contrary to raw sensor data, which are harder to collect at scale through consumer PI systems, basic digital behavioral biomarkers are easy to collect and commonplace in the literature, enabling the reproducibility of our findings. At the same time, steps are the largest available sensed modality in the MyHeart Counts dataset, allowing us to take advantage of a larger portion of the data for the purpose of this analysis. Finally, it is important to note that our findings can be generalized to other PI tasks, e.g., mood and stress prediction (Krizhevsky et al., 2014), or health monitoring (Krizhevsky et al., 2014).
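For concreteness, a minimal sketch of this labeling step, assuming each user's steps arrive as an hourly-indexed pandas Series (our assumption about the preprocessed format, not a documented property of the dataset); it emits one (features, label) row per day, as in Table 2:

```python
import pandas as pd

def make_samples(hourly_steps: pd.Series) -> pd.DataFrame:
    """Features: the 48 hourly step counts preceding a day; label: that day's
    total steps. `hourly_steps` is assumed indexed by a DatetimeIndex."""
    daily_totals = hourly_steps.resample("D").sum()
    rows = []
    for day in daily_totals.index[2:]:  # skip days without 48h of history
        history = hourly_steps.loc[day - pd.Timedelta(hours=48):
                                   day - pd.Timedelta(hours=1)]
        if len(history) == 48:          # keep only complete windows
            rows.append(list(history) + [daily_totals.loc[day]])
    cols = [f"steps_t-{h}h" for h in range(48, 0, -1)] + ["next_day_steps"]
    return pd.DataFrame(rows, columns=cols)
```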
### Quantifying Bias through Fairness Metrics

In this section, we discuss fairness metrics that quantify bias in machine learning from the perspective of PI, providing an answer to RQ1, namely: _What does bias mean for PI?_ As mentioned earlier, fairness is a social construct that defies simple definition (Krizhevsky et al., 2014). Quantitative fields view fairness as a mathematical problem of "equal or equitable allocation, representation, or error rates, for a particular task or problem" (Krizhevsky et al., 2014). There is a variety of fairness definitions and metrics (see Appendix A). However, not all of them are relevant to the PI domain.

Contrary to popular bias quantification tasks, such as recidivism prediction or loan repayment prediction, in our use case -and related PI tasks- there is no clear positive outcome for the user. In other words, in recidivism prediction, being marked as low-risk for committing a new crime is indisputably positive for the individual. On the contrary, a high activity goal -even though recommended- might not be realistic and thus advantageous for all individuals. Specifically, according to the Goal-Setting Theory by Latham and Locke (Latham and Locke, 2017), if an individual does not believe they can achieve their goal, they are unlikely to do so. Thus, users' goals should be close to their current abilities to hold sufficient motivational power. To this end, we initially look into definitions based on predicted and actual outcomes, namely the False Omission Rate (FOR), False Negative Rate (FNR), and False Positive Rate (FPR) ratios, and the Error Rate Ratio (ERR), which focus on erroneous predictions rather than solely positive outcomes (for definitions, formulas, and interpretation, see Appendix B). However, we quickly notice that such metrics are prone to data biases and imbalances, as shown in the example below.

Assume you have an imbalanced dataset of 3000 women -2000 low active and 1000 highly active- and 8000 men -3500 low active and 4500 highly active-. Imagine a model that misclassifies 100% of highly active women as low active, i.e., \(FN=1000\). Then, \(ER_{\text{women}}=\frac{FP+FN}{P+N}=\frac{1000}{3000}=33\%\). For men to have the same error rate, a model needs to misclassify 2640 highly active men (\(FN=2640\)), as \(ER_{\text{men}}=\frac{2640}{8000}=33\%\). So, even though we have misclassified 100% of the highly active part of the minority group, by misclassifying only 59% (\(\frac{2640}{4500}\)) of the highly active part of the majority group, we can achieve, on paper, parity with an ERR of 1.0 (the optimal value).

As discussed in Section 3, our data suffer from various biases and imbalances, and hence error-centric metrics would not offer an objective comparison.

| user_id | timestamp | steps at t-48h | ... | steps at t-1h | next day's steps |
| --- | --- | --- | --- | --- | --- |
| 1 | 23-11-2022 | 1040 | ... | 300 | 8500 |

Table 2. An example of input data for the physical activity prediction use case. The step counts per hour for the past 48 hours are the features, and the total number of the next day's steps is the label. The user ID and timestamps are not used in the learning.

Hence, for the purpose of this work, we utilize the widely used Disparate Impact Ratio (DIR), which is the ratio of base or selection rates between unprivileged and privileged groups, assuming equal ability across demographics:

\[\text{Disparate Impact Ratio}=\frac{\Pr(y^{+}\mid G0)}{\Pr(y^{+}\mid G1)}\]

where \(y^{+}\) is the actual or predicted positive outcome label (base or selection rate, respectively), \(G0\) is the minority (protected) group, and \(G1\) is the majority group. Values less than 1 indicate that the majority group has a higher proportion of predicted positive outcomes than the minority group. A value of 1 indicates demographic parity. Values greater than 1 indicate that the minority group has a higher proportion of predicted positive outcomes than the majority one.
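Both the ERR pitfall and our metric of choice can be checked numerically; a minimal sketch reproducing the worked example above (the helper names are ours):

```python
def error_rate(fp: int, fn: int, positives: int, negatives: int) -> float:
    return (fp + fn) / (positives + negatives)

# 3000 women (1000 highly active), 8000 men (4500 highly active).
er_women = error_rate(fp=0, fn=1000, positives=1000, negatives=2000)  # all active women missed
er_men = error_rate(fp=0, fn=2640, positives=4500, negatives=3500)    # ~59% of active men missed
print(er_women, er_men, er_women / er_men)  # 0.333, 0.33 -> ERR ~ 1.0 despite the disparity

def dir_score(pos_minority: int, n_minority: int,
              pos_majority: int, n_majority: int) -> float:
    """DIR = Pr(y+ | G0) / Pr(y+ | G1); 1.0 indicates demographic parity."""
    return (pos_minority / n_minority) / (pos_majority / n_majority)

# The same model selects 0/3000 women vs. 1860/8000 men as highly active:
print(dir_score(0, 3000, 1860, 8000))  # 0.0 -> maximal disparity, which ERR hid
```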
For example, a DIR of 0.8 for a dataset with men/women as the majority/minority groups means that for every man receiving a high activity goal, only 0.8 women do so. According to the AIF360 toolkit (https://aif360.mybluemix.net/), accepted values are within [0.8, 1.25], but such ranges are not universally accepted and might be adjusted on a task-by-task basis [29]. Having established our metric of choice, we move forward to our analysis of bias in the data generation (Section 3) and the model building and implementation (Section 4) streams.

## 3. Exploring Bias in Personal Informatics Data Generation

Bias in the data generation stream can take the form of historical, representation, and measurement biases, as seen in Figure 1a. In this section, we explore all three sources, providing an answer to RQ2: _Are PI data susceptible to biases?_

### Historical Bias

While historical biases cannot be measured directly in the specific dataset, there is evidence that PI is susceptible to several pre-existing or present biases. For completeness, we state the main findings of the related literature below.

_Physical Activity Inequalities_. In PI, physical activity data, such as step counts, are among the most common digital behavioral biomarkers. Similarly, in the MyHeart Counts dataset, they constitute the majority of the extracted HealthKit data. Specifically, the dataset includes 4920 users of step tracking compared to 626 users of sleep tracking, in line with previous research supporting that many users report not wearing their watch while sleeping [55]. However, inequalities in physical activity are well-reported [4, 50, 78]. Althoff et al. [4] use smartphone mobility data from over 68 million activity days by more than 700K individuals across 111 countries to quantify activity inequality. Their findings reveal variability in physical activity worldwide (measured in average step counts), where reduced activity in females explains a large portion of the observed activity inequality. Similarly, Guthold et al. [50] report that physical inactivity is twice as prevalent in high-income countries compared to low-income countries, and they confirm lower activity levels in women than in men. Overall, the World Health Organization reports that "girls, women, older adults, people of low socioeconomic position, people with disabilities and chronic diseases, marginalized populations, indigenous people and the inhabitants of rural communities often have less access to safe, accessible, affordable and appropriate spaces and places in which to be physically active" [78]. Such inequalities, present in the real world, can undoubtedly creep into the behavioral data we build our models on.

_The Digital Divide_. Similarly, as the world rapidly digitalizes, it threatens to exclude those that remain offline. Almost half the world's population, the majority of them women or citizens of developing countries, are still disconnected [74]. Even in the connected world, male internet users outnumber their female counterparts across regions. This "digital divide" encompasses even more discrepancies, such as the digital infrastructure quality and connectivity speed in rural or remote areas and the skills required to navigate technology [26]. Thus, it is evident that data collected from any technological system, including PI, do not capture the entirety of the world population due to pre-existing inequalities in digital access and literacy.

_BYOD Study Design Biases_.
On top of that, PI technologies are attracting attention as novel tools for data collection in clinical research, resulting in newfound demographic imbalances. Studies adopting a bring-your-own-device (BYOD) design, such as MyHeart Counts, are gaining traction because they are more user-friendly (participants use technologies they are already familiar with), achieve better participant compliance, potentially reduce the bias of introducing new technologies, and accelerate data collection from larger cohorts [27, 31]. However, the BYOD design may not support unbiased data collection from the target population where such technologies are intended to be deployed. In their work, Cho et al. [27] identify significant demographic disparities regarding race (50-85% white cohorts) in BYOD studies. Their findings align with the reported demographic divide existent in the composition of wearable users. Even though the gap is narrowing, a report by Ericsson ConsumerLab [38] documents that the majority of existing users of wearables are fit adults between 25-34 and that, whilst females are more likely to own activity trackers, 63% of smartwatch owners are male. Hence, the technology used and the available participant cohorts in PI, especially for studies with a BYOD design, such as the one under inspection, subject datasets to the same bias that has been exposed in the activity inequality and digital divide literature.

### Representation Bias

We discuss representation biases across three dimensions: misrepresented, underrepresented, and unevenly sampled populations.

_Misrepresented Populations_. Representation bias can emerge when the sample population does not reflect the general population (bias in rows). To evaluate for such biases in the MyHeart Counts dataset, we compare the ratios of majority and minority user segments, as defined in Table 1, with the real-world ratios extracted from United States population censuses, as the MyHeart Counts recruitment was spread across the country. Specifically, we utilize data from the United States Census Bureau (gender, race, and age [23] distributions), the Centers for Disease Control and Prevention (BMI [45, 46], joint issues [97], hypertension [42], and diabetes [77] distributions), and the American Heart Association (heart condition [98] distribution) to extract the real distributions. Figure 2 showcases the results of this comparison in a radar plot. For example, while in the general United States population, we have approximately 1 female per 1 male (ratio of 1.0 in pink), in the MyHeart Counts HealthKit data, we have 0.2 females per 1 male, highlighting the strong underrepresentation of women in the dataset. The same applies to race, age, and hypertension segments, where the minority classes in the dataset (non-white users, users younger than 45, and users with hypertension, respectively) do not reflect real-world ratios. An interesting finding is that, while in the United States, there exist approximately 0.3 underweight, overweight, or obese people for every person with normal weight, in the dataset, this ratio is doubled, in line with the research on BYOD design biases discussed above. Hence, potentially due to historical biases and study design choices, our analysis of the MyHeart Counts data (Figure 2) provides evidence that PI datasets might not represent the real target population.

_Underrepresented Populations_. PI datasets can still include underrepresented groups (bias in rows) even if sampled perfectly.
Figure 4 shows significant imbalances, measured in the number of samples in the dataset, between minority and majority user segments across almost all protected attributes. We notice that, even for representative sampling, e.g., users with joint or heart problems, the minority group is still significantly underrepresented in the data. Thus, the model will likely be less robust for those few users with these conditions because it has fewer data to learn from. Overall, we see that the MyHeart Counts HealthKit data are skewed towards _white, fit males_, which needs to be considered in the preprocessing and model-building phases for a fairer machine learning life cycle. Note here that we cannot achieve realistic and equal representation unless the population is equally distributed. Ideally, a PI dataset should be representative of the target population but also large enough to consist of sufficient minority samples. In practice, this is challenging to achieve due to the effort and cost required to build large-scale PI datasets.

_Unevenly Sampled Populations_. In the MyHeart Counts data, diabetes patients, users with joint issues, and, to a smaller extent, women, racial minorities, and overweight and obese users systematically perform lower step counts compared to their majority segment counterparts. On the contrary, users of different age groups, with or without hypertension or heart issues, do not differ significantly in terms of step counts in the data.

### Measurement Bias

In terms of measurement bias, we focus on the input modalities and their accuracy and discrepancies during data collection, i.e., we discuss how, in the MyHeart Counts data, the measurement method and accuracy vary across groups.

_Device Differences_. In the MyHeart Counts HealthKit dataset, data originate from different sources. Specifically, 33% comes from an iPhone, 11% comes from an Apple Watch, and 56% comes from multiple third parties. iPhones detect and calculate step counts through integrated sensors, such as an accelerometer, gyroscope, GPS, and, in some models, a magnetometer. These sensor data are then analyzed by the motion coprocessor unit, namely a low-power unit that reads the sensors' output and makes the data available to applications via Apple's CoreMotion programming interface (Garpin et al., 2017). Specifically, it communicates with the CMMotionActivityManager (Garpin et al., 2017), which is responsible for classifying whether the user is walking, running, in a vehicle, or stationary for periods of time. However, this process cannot be fully replicated in Apple Watches due to inherent differences in placement (pocket versus wrist), fit, and usage habits. For instance, phones are known to underestimate user step counts due to non-carrying time in free-living conditions (Bartos et al., 2017; Sohn et al., 2017). On the contrary, Apple Watches have been tested to be more accurate for measuring daily step counts for healthy adults (Garpin et al., 2017). Moreover, in the MyHeart Counts HealthKit data, there is also a statistically significant difference (\(p<0.05\)) in Apple Watch ownership across segments, based on gender (46% of male participants have at least one watch entry compared to 28% of non-male participants), heart condition (38% of participants with a heart condition compared to 26% without), and ethnicity (41% of non-white participants compared to 36% of white participants).
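The ownership comparisons above rely on standard tests of proportions; the text does not state which test was used, so the sketch below illustrates one common choice, a chi-square test of independence with SciPy. The counts are illustrative placeholders consistent with the reported percentages, not the exact dataset values:

```python
from scipy.stats import chi2_contingency

# Illustrative contingency table: Apple Watch ownership by gender.
#               owns watch   no watch
table = [[460, 540],   # male participants     (~46% ownership)
         [280, 720]]   # non-male participants (~28% ownership)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.2g}")  # p < 0.05 -> significant difference
```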
_Model Differences_. To make things worse, accuracy differences have been reported across consecutive generations of iPhone devices (Sohn et al., 2017). Incremental hardware changes may increase the quantity, modality, and quality of data available for the device to calculate the CMMotionActivityManager variables, which may improve the accuracy of activity recognition. For instance, the iPhone 5S introduced the M7 coprocessor; iPhones 6 and 6 Plus contain an M8 coprocessor; and the 6S, 6S Plus, and SE have an M9 coprocessor, while prior models do not incorporate such a unit. The M8 introduced the ability to differentiate between different activities (Bartos et al., 2017), and the M9 introduced "always-on" capabilities (Garpin et al., 2017). Additionally, newer versions of iOS may provide revised algorithms that improve recognition accuracy. In the MyHeart Counts HealthKit data, we encounter various iPhone models, starting from the iPhone 4S (no coprocessor) and reaching the iPhone 6S Plus (M9 coprocessor). We analyze whether differences in demographics correlate with differences in phone ownership, and we identify statistically significant differences (\(p<0.05\)) based on gender and BMI. Specifically, females and people with normal BMI tend to own older and cheaper phones with fewer capabilities (see Figure 5).

_General Input Modality Differences_. Finally, most of the MyHeart Counts data comes from third parties, such as alternative wearables that communicate with the Apple Health app or fitness and well-being apps downloaded from the App Store. This is not uncommon in the PI domain, given the abundance and heterogeneity of available data sources. In our use case, we see, beyond the Apple Watch, Garmin, Polar, and Basis Peak wearable products, as well as various apps. With regard to third-party usage across segments, we identify statistically significant differences based on gender (91% of male participants have at least one third-party entry compared to 85% of non-male participants) and diabetes condition (97% of participants with diabetes have at least one third-party entry compared to 90% without). However, different input devices or apps are proven to have different accuracies, likely creating measurement discrepancies between different users (Sutton et al., 2019).

**Summary of biases in data generation:**

* Pre-existing _historical biases_ are also present in digital biomarkers extracted from PI systems, due to well-documented phenomena such as the global inequality in physical activity and the digital divide, leading to data generation that is not representative of the general population. This is also the case in our MyHeart Counts use case, where female, non-white, underweight, overweight or obese, young, and hypertensive users are undersampled in the data.
* Even within well-sampled user groups, _representation biases_ in the form of data imbalances, either in terms of user attributes or measured behaviors, are still prevalent due to realistic differences across user segments. Specifically, in our PI use case, we see significant underrepresentation of minority groups across all protected attributes and measured behavioral differences -not necessarily realistic- for users with diabetes, joint issues, non-healthy BMI, non-white users, and females.
* PI is susceptible to _measurement biases_, due to the heterogeneity in input modalities (smartphone versus smartwatch), performance and hardware differences across generations of devices, and usage of multiple third-party apps of unknown accuracy. Females are especially affected by such biases in our dataset, as they tend to own older devices with fewer capabilities and make greater use of multiple fitness-related third-party apps.
Given the awareness of certain historical, representation, and measurement biases in the data, practitioners can make informed decisions concerning appropriate preprocessing actions to alleviate potential negative effects. Such actions may include oversampling minority or undersampling majority user segments for misrepresented or underrepresented populations, choosing the appropriate sampling strategy to balance unevenly sampled populations, or accounting for measurement differences across different devices or models.

## 4. Exploring Bias in Personal Informatics Model Building and Implementation

Bias in the model building and implementation stream can take the form of aggregation, learning, evaluation, and deployment biases, as seen in Figure 1b. In this section, we discuss all four sources, providing an answer to RQ3, namely: _Do machine learning models inherit PI data biases? Do they mitigate, propagate, or maybe even amplify them?_

Figure 5. Differences in the price of participants' phones as of September 2016 based on gender (left) and BMI (right). Females and people with BMI within the normal range tend to own older and cheaper phones with fewer capabilities.

### Aggregation Bias

We evaluate aggregation bias by plotting the DIR (based on selection rate, i.e., the rate of high-activity-goal predictions) for different user segments' predictions based on heart conditions, hypertension, joint issues, diabetes, race, BMI, gender, and age. Figure 6 shows the DIR scores for the segments, comparing data and baseline deep learning models. Specifically, we utilize two baseline models to capture the notions of "fairness through awareness" (Srivastava et al., 2017) and "fairness through unawareness" (Srivastava et al., 2017). In fairness through awareness, fairness is captured by the principle that similar individuals should have similar classification outcomes. In our use case, the similarity is defined based on user demographics in the absence of other features. In practical terms, the aware model is trained on a feature set that includes protected attributes per user. On the other hand, fairness through unawareness is satisfied if no sensitive attributes are explicitly used in the learning process (Krishnan et al., 2017); namely, the unaware model is trained with features excluding protected attributes.

_Models' Description_. Our baseline models are sourced from prior work in the field of intelligent physical activity prediction, where Bampakis et al. (2019), utilizing the MyHeart Counts dataset, benchmarked and evaluated six distinct learning paradigms, from traditional machine learning models to advanced deep learning architectures. Their best model, a Long Short-Term Memory (LSTM) recurrent neural network, achieved a Mean Absolute Error (MAE) of 1087 steps, beating previous state-of-the-art approaches by 67% on the task of physical activity prediction.
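Tying the two variants to concrete inputs: in the toy sketch below, the only difference between aware and unaware models is whether the binarized protected attributes are appended to the 48 hourly step features. The array layout and function name are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def build_features(steps_48h: np.ndarray, protected: np.ndarray, aware: bool) -> np.ndarray:
    """Unaware model: 48 hourly step counts only (D_x = 48).
    Aware model: steps plus the 8 binary protected attributes (D_x = 56)."""
    assert steps_48h.shape == (48,) and protected.shape == (8,)
    return np.concatenate([steps_48h, protected]) if aware else steps_48h
```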
We consider the following setting: we are given a time-series dataset \(S=\{S^{G,0},S^{G,1}\}\) of users segmented into two groups, \(G0\) and \(G1\), based on a protected attribute \(G\) (e.g., gender, age, etc.). The user data within each group are denoted as \(S^{G,g}=\{s_{1}^{G,g},\ldots,s_{K}^{G,g}\}\), where \(g\in\{0,1\}\) and \(K\) is the number of users per group, conditioned on the protected attribute \(G\). Furthermore, the data of each user are stored as \(s_{i}=\{X_{i},y_{i}\}\), where the input time series (step count values) of users \(i=1,\ldots,K\) are stored in \(X_{i}\in\mathbb{R}^{D_{x}\times 1}\), where \(D_{x}=48\) (unaware model) is the length (in time steps) of a sample daily activity in the data, or \(D_{x}=56\) (aware model) is the length of a sample daily activity in the data plus the protected attribute features. Formally, our deep neural network architecture receives as input the users' daily activity samples (\(X\)) and passes them through LSTM layers with parameters \(\theta_{l}=\{W_{l},b_{l}\}\), weight matrix and bias, respectively, for each layer \(l\), to produce the output \(\hat{y}\). The optimization of the network parameters for the LSTM layers is obtained by minimizing the binary cross-entropy loss \(\alpha_{c}\), defined as:

\[\Omega^{*}=\operatorname*{arg\,min}_{\Omega=\{\theta_{1},\ldots,\theta_{3}\}}\alpha_{c}(\hat{y},y)=\operatorname*{arg\,min}_{\Omega=\{\theta_{1},\ldots,\theta_{3}\}}-\frac{1}{N}\sum_{i=1}^{N}\Bigl{(}y_{i}\log(\hat{y}_{i})+(1-y_{i})\log(1-\hat{y}_{i})\Bigr{)}\]

where \(N\) represents the number of training samples _from both datasets_ \(\{S^{G,0},S^{G,1}\}\). We implement the proposed architectures in PyTorch Lightning (Pytorch, 2017). Training is performed using the standard back-propagation algorithm and the Adam optimizer with default parameters (Krizhevsky et al., 2017). To avoid overfitting in the deep models, we applied dropout with a varying portion of dropped nodes.
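A minimal PyTorch sketch of this baseline, with three stacked LSTM layers (\(\theta_{1},\ldots,\theta_{3}\)) and a final linear layer (\(\theta_{4}\)); the hidden size and dropout rate are illustrative assumptions, and the original implementation uses PyTorch Lightning with tuned hyperparameters:

```python
import torch
import torch.nn as nn

class BaselineLSTM(nn.Module):
    """Binary next-day activity classifier over D_x-step input sequences."""
    def __init__(self, hidden_size: int = 64, dropout: float = 0.2):
        super().__init__()
        # theta_1..theta_3: three stacked LSTM layers
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=3, batch_first=True, dropout=dropout)
        # theta_4: final linear layer
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, D_x, 1); use the last hidden state for the prediction
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1, :])).squeeze(-1)

model = BaselineLSTM()
optimizer = torch.optim.Adam(model.parameters())  # Adam with default parameters
loss_fn = nn.BCELoss()                            # the binary cross-entropy above
# per batch (x, y drawn from both S^{G,0} and S^{G,1}):
#   loss = loss_fn(model(x), y); loss.backward(); optimizer.step()
```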
_Single Attribute Biases_. Our findings concerning machine learning model biases, measured via DIR, as shown in Figure 6, highlight the following:

1. Aware learning models are not foolproof against data biases in most cases (joint issues, diabetes, gender), and even amplify them for certain protected attributes (hypertension).
2. Even excluding protected attributes from the training process of unaware models does not guarantee unbiased results, in line with prior work (Krizhevsky et al., 2017). Specifically, fairness through unawareness is also ineffective due to the presence of proxy features, namely attributes that work like proxies for protected attributes. Through such features, bias propagates from the data to models: for example, a person's walking behavior (measured in step counts) is a good predictor of a person's gender, BMI, and age, which can thus be inferred, despite being hidden during training (Krishnan et al., 2017).
3. Overall, diabetes patients have the largest bias gap compared to their non-diabetic counterparts, partially attributed to their highly biased training data to start with. Yet, users with hypertension have the largest difference between data and model biases, since models trained on seemingly unbiased data introduce bias during the learning process.

_Intersectional Biases_. We also examine intersectional biases, as shown in Figure 7; namely, we quantify the biases of the unaware model conditioned not only on a single protected attribute but also on protected attribute combinations. Specifically, we consider two attributes at a time, and two different combination strategies: _minority-minority vs. rest_ (e.g., diabetic women) and _majority-majority vs. rest_ (e.g., non-diabetic men). Our results, which we present indicatively keeping the diabetes attribute fixed, highlight the widening intersectional biases for people who belong to more than one minority (in pink) across almost all attributes (with the exception of BMI, where people with non-healthy BMI are the majority group, despite usually being considered unprivileged in practice). The largest gap appears in people with more than one health condition, such as diabetic heart patients and diabetic patients aged 65+. At the same time, people who do not belong to any minority group (in purple) benefit across all attributes. The trends in aggregation bias indicate that PI models do not tackle diverse user segments equally well, and reflect or even amplify representation biases existing in the data, especially when it comes to intersectional biases.

Figure 6. A comparison of DIR between data, the baseline model with protected attributes in the feature set (aware), and the baseline model without protected attributes in the feature set (unaware). We see that the "one-size-fits-all" models propagate or, in some cases, amplify existing representation biases.

Figure 7. A comparison of DIR given the unaware baseline model between user groups defined by a single protected attribute, e.g., gender, versus intersectional user groups defined by two attributes, e.g., gender and diabetes. Intersectional groups are either drawn from the minority or the majority classes for each attribute. The "one-size-fits-all" models' amplified biases are even more prevalent in intersectional cases.

### Learning Bias

In the PI literature, there has been a move toward personalization, straying from the "one-size-fits-all" mentality and its shortcomings, as discussed above. Contrary to generic models, personalized models are fine-tuned given the data of a single user or user segment. Accounting for such interindividual variability has been proven to dramatically improve prediction performance in various tasks within the PI domain, such as pain detection, engagement estimation, and stress prediction from ubiquitous devices data [69, 89, 95]. Given the increasing popularity of the personalization paradigm, in this study, we investigate whether personalization as a modeling choice can amplify performance disparities across different user segments in the data, given the existence of representation bias.

_Model Description_. We base our approach on the work of Rudovic et al. [87] and the CultureNet package [86] for building generalized and culturalized deep models to estimate engagement levels from face images of children with Autism Spectrum Condition. Specifically, we utilize our deep LSTM model trained on data from all users, freeze the network parameters \(\{\theta_{1},\ldots,\theta_{3}\}\) tuned on both minority and majority user groups, as described in Section 4.1, and then fine-tune the last layer (\(\theta_{4}\)), i.e., a linear fully-connected layer, to each user group separately, based on the MyHeart Counts protected attributes (heart condition, hypertension, joint issues, diabetes, race, BMI, gender, age). Figure 8 delineates the personalization process.
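A sketch of this freeze-and-fine-tune step, building on the hypothetical `BaselineLSTM` sketched in Section 4.1 (the training loop is condensed; `group_loader` is assumed to yield batches from a single group \(S^{G,c}\)):

```python
import copy
import torch

def personalize(base_model, group_loader):
    """Clone the trained baseline, freeze the LSTM layers (theta_1..theta_3),
    and fine-tune only the last linear layer (theta_4) on one group's data."""
    model = copy.deepcopy(base_model)
    for p in model.lstm.parameters():
        p.requires_grad = False                          # freeze shared layers
    optimizer = torch.optim.Adam(model.head.parameters())
    loss_fn = torch.nn.BCELoss()
    model.train()
    for x, y in group_loader:                            # samples from S^{G,c} only
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model

# one personalized model per group, repeated for each protected attribute G:
# model_g0 = personalize(base_model, loader_minority)
# model_g1 = personalize(base_model, loader_majority)
```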
Formally, the learning during the fine-tuning process is attained through the last layer in the network, one for the minority and one for the majority user group. Before further optimization, the group-specific layers are initialized as \(\theta_{4}^{G,0}\leftarrow\theta_{4}\) and \(\theta_{4}^{G,1}\leftarrow\theta_{4}\), and then fine-tuned using the data from \(G0\) (\(S^{G,0}\)) and \(G1\) (\(S^{G,1}\)), respectively, _for each protected attribute_ \(G\), as:

\[\left(\theta_{4}^{G,c}\right)^{*}=\operatorname*{arg\,min}_{\theta_{4}}-\frac{1}{|S^{G,c}|}\sum_{i\in S^{G,c}}\left(y_{i}\log(\hat{y}_{i})+(1-y_{i})\log(1-\hat{y}_{i})\right),\quad c\in\{0,1\}\ \text{and}\]
\[G\in\{\text{gender, ethnicity, age, BMI, heart condition, hypertension, joint problem, diabetes}\}\]

The final network weights, \(\theta_{l}=\{W_{l},b_{l}\}\), are then used to perform the group-specific inference of next-day physical activity level from past behavior, per protected attribute.

_Single Attribute Biases_. While we could not identify significant performance benefits, either for the privileged or the unprivileged group, by utilizing personalization in our use case, we encountered significant bias shortcomings of the approach. Specifically, across all protected attributes (with a borderline exception of race), we see that personalized models are more biased compared to either the aware or unaware models, or both. An extreme case appears in users with diabetes, where the personalized model "learns" that this user segment is less active than their non-diabetic counterparts in the dataset and thus provides them only with low activity goals, regardless of individual differences in physical activity levels. The intuition behind this behavior is that a personalized model is fine-tuned to a specific user segment, e.g., users with diabetes. If this segment suffers from representation bias in the dataset, which is true in many cases in PI, then personalized models amplify this bias through the fine-tuning process, as is evident in Figure 6. Our findings highlight that a common modeling choice in PI, such as personalization, can negatively affect biases and calls for bias-aware personalization approaches to reap the benefits of user tailoring without leading to biased results.

Figure 8: Our personalized deep learning architecture inspired by CultureNet [87]. The last layer is indicatively fine-tuned based on gender for female users.

### Evaluation Bias

_Benchmark Selection_. In machine learning, models are optimized on their training data, but their quality is often evaluated based on benchmarks, such as ImageNet [32] in the computer vision community and MovieLens [51] in the recommender systems community. However, the ubiquitous computing community still suffers from a lack of benchmarks, or benchmarks that are limited to traditional tasks, such as human activity recognition [6] and sleep classification [25, 111]. To make things worse, oftentimes, benchmarks within the community are not representative of the target population. For example, within the fall detection domain, datasets usually comprise imitated falls performed by younger people, while the systems are deployed on older people [93]. Yet, a misrepresentative benchmark encourages the development and deployment of models that perform well only on the data subset represented by the benchmark. To illustrate our point, given the lack of established benchmarks for our use case, we devise two distinct test sets for comparison purposes: our original (random) test set, \(T1\), and a sampled subset of \(T1\), \(T0\), with demographic parity at base rate (DIR = 1.0).
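A minimal sketch of how such a parity subset can be drawn from \(T1\); the paper does not specify its sampling procedure, so the downsampling strategy below is one of several reasonable choices, assuming a DataFrame with a binary group column and a binary label:

```python
import pandas as pd

def parity_subset(t1: pd.DataFrame, group_col: str, label_col: str,
                  seed: int = 0) -> pd.DataFrame:
    """Subsample T1 into a benchmark T0 whose two groups share the same base
    rate of positive labels, i.e., DIR = 1.0 at base rate."""
    groups = [t1[t1[group_col] == g] for g in (0, 1)]
    target = min(g[label_col].mean() for g in groups)  # match the lower base rate
    parts = []
    for g in groups:
        pos, neg = g[g[label_col] == 1], g[g[label_col] == 0]
        # choose n_pos so that n_pos / (n_pos + len(neg)) == target
        n_pos = round(target * len(neg) / (1 - target))
        parts.append(pd.concat([pos.sample(n=min(n_pos, len(pos)),
                                           random_state=seed), neg]))
    return pd.concat(parts)
```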
We then evaluate our models, namely the baseline aware and unaware LSTMs and the personalized LSTMs, on these two test sets. Figure 9 presents the results of our experimentation, where it is clear that \(T0\), imitating a "perfect", fair world, consistently shows better performance concerning DIR compared to \(T1\). Better performance is defined as smaller deviations from the optimal DIR value of 1.0. Essentially, an ideal-world benchmark, such as \(T0\), "hides" the imperfections of our trained models, which have been proven to propagate or even amplify biases based on \(T1\).

_Evaluation Metric Selection_. On a different note, evaluation bias can also emerge from the choice of metric used to quantify the models' performance. For instance, group fairness hybrid metrics, such as error rates, are prone to imbalances, as discussed earlier, and can hide disparities in other types of bias metrics, such as WAE metrics (see Appendix B). Similarly, aggregate measures, such as accuracy, can hide subgroup under-performance or conceal shortcomings in more important metrics for certain use cases, such as the false positive or false negative rate [94].

Fig. 9. A comparison of DIR between different test sets across models. We see that "perfect" test sets in terms of data bias (continuous lines) tend to hide imperfections in the trained models compared to the original test sets (dashed lines).

### Deployment Bias

_Changing Deployment Scenarios_. We see at least two sources of deployment biases in PI. The first is related to the fact that the most active research areas within PI are Human-Activity Recognition and Sleep Classification. From this lens, FPs and FNs (Type I and Type II errors, respectively) in these scenarios are not critical, and models have been developed to maximize TPs. This dominant but limited view promotes deployment bias in novel use cases with the emergence of health-related intelligence embedded into PI systems. For example, given the novel ECG sensor data and AFib detection functionality, Type II errors should be minimized to avoid loss of life. It is thus critical to reassess the conceptualization of PI systems' evaluation practices and datasets and tailor them to their context.

_Development in Isolation_. Second, learning models for PI systems are built and evaluated as if they were fully autonomous, while in reality, they operate in a complex socio-ethical system moderated by institutions and human decision-makers, also known as the "framing trap" [88]. Users may share their mHealth data with physicians for interpretation and disease management. Despite good performance in isolation, such models may lead to harmful consequences because of human biases, such as confirmation bias. Specifically, physicians are more likely to believe AI that supports current practices and opinions [79]. At the same time, research shows that physicians' perceptions about black male patients' physical activity behavior were significant predictors of their recommendations for coronary artery bypass graft surgery, independent of clinical factors, appropriateness, payer, and physician characteristics [100]. Such complicated interconnections highlight how evaluating a system in isolation creates unrealistic notions of its benefits and harms.
**Summary of biases in model building and implementation:**

* Digital biomarkers' representation biases are propagated or even amplified by machine (deep) learning models, regardless of the inclusion of protected attributes in the feature set, due to the existence of proxy variables in PI data, e.g., steps and calories, that can be used by the model to infer hidden protected attributes. Such _aggregation biases_ are also prevalent in our use case for users with joint issues, diabetes, hypertension, and female users.
* Common learning choices in PI, such as personalization, can introduce _learning biases_ if trained on biased data. In extreme cases, as highlighted in our use case for diabetic users, they can even introduce maximum bias, i.e., \(\text{DIR}=0\), while performing worse -in terms of bias- across all attributes.
* Our empirical results illustrate that model performance is highly susceptible to the representativeness of the PI benchmark used and highlight how _evaluation biases_ can affect ubiquitous models in the evaluation phase.
* The application of machine learning in PI is not free of _deployment biases_, which can emerge from outdated evaluation practices stemming from PI systems' early applications or the false assumption that PI systems operate autonomously.

Given the awareness of certain aggregation and learning biases in the data, PI practitioners can make informed decisions concerning the machine learning paradigms or models to utilize or the necessary in-processing bias mitigation steps to apply. Additionally, aware of the evaluation biases present in PI data, they might choose to experiment with more benchmark datasets, evaluate their suitability for their target population a priori, and select appropriate accuracy and fairness metrics to quantify performance across different user segments. Finally, they may follow a user-in-the-loop concept during the development phase, acknowledging the dependencies between PI systems and their users.

## 5. Generalizability

This section aims to (i) demonstrate the straightforward applicability of our methodology and our open-source code [8] to other datasets and (ii) reveal initial insights about the generality of our findings and future steps. While our analysis was conducted on the MyHeart Counts dataset, most of our findings can be generalized to other scenarios in PI and mHealth. To showcase this, we apply part of our experiments on two distinct datasets:

* **LifeSnaps:** LifeSnaps is a newly-released, multi-modal, time- and space-distributed dataset containing 71M rows of anthropological data, collected unobtrusively over the course of more than four months from 71 participants. Based on data availability, we consider three protected attributes in LifeSnaps, namely gender, age, and BMI. Also, given the lack of official benchmark tasks, namely tasks built specifically on this dataset that are selected to be representative of relevant machine learning workloads and to evaluate competing models, we consider the "next-day physical activity prediction" task for model training, same as with the MyHeart Counts dataset.
* **MIMIC-III:** MIMIC-III is an established, large-scale clinical dataset consisting of information concerning more than 38K patients admitted to intensive care units (ICU) at a large tertiary care hospital. Based on data availability, in MIMIC-III, we consider six protected attributes, namely gender, ethnicity, language, insurance, religion, and age. Contrary to LifeSnaps or MyHeart Counts, there exists a public benchmark suite that includes four different clinical prediction tasks for MIMIC-III [52]. For this analysis, we utilize the "in-hospital mortality" task as a binary classification equivalent to the "next-day physical activity prediction" task.
In exploring biases, we identified both commonalities and differences across PI datasets. Regarding the data generation stream, _representation biases seem to be the norm in PI datasets_, naturally leading to _learning and aggregation biases_ in the model building and implementation stream and highlighting the need for increased awareness among researchers and practitioners in the field. Having said that, the identified biases are distinct in each dataset, emerging mostly from their recruitment methodology and the availability of protected attributes.

Fig. 10: LifeSnaps data representation biases

Fig. 11: MIMIC-III data representation biases

_Bias in Rows Commonalities_. All three datasets suffer from some type of "bias in rows", as seen in Figures 10a and 11a. Specifically, both LifeSnaps and MIMIC-III suffer from misrepresented populations. In LifeSnaps (Figure 10a), younger people are overrepresented due to university-based recruitment, while in MIMIC-III (Figure 11a), older people are overrepresented due to ICU-based recruitment. Additionally, while gender and ethnicity representation is improved compared to MyHeart Counts, white males are still overrepresented in all three datasets. MIMIC-III, similarly to MyHeart Counts, suffers from underrepresented populations, such as uninsured, non-white, non-English-speaking, or non-Christian users (Figure 11b). These biases are, in turn, propagated to the baseline learning models, in line with prior work.

## 6. Related Work

In the healthcare domain, prior work on MIMIC-III finds that biases are present in terms of gender and insurance type for mortality prediction, and insurance policy for psychiatric 30-day readmission. Within the same scope, Zhang et al. (2012) train deep embedding models on medical notes from the MIMIC-III database (Zhang et al., 2013) and find that classifiers trained from their embeddings exhibit statistically significant differences in performance, often favoring the majority group regarding gender, language, ethnicity, and insurance status. Yet, despite the emerging research on fairness in healthcare, its proximity to PI, and the widespread adoption of PI technologies, biases in PI have been barely explored. An initial effort at capturing biases in digital biomarkers is reported by Paviglianiti and Pasero (2018). Their Vital-ECG, a wearable smart device that collects electrocardiogram and plethysmogram signals, is embedded with machine learning algorithms to monitor arterial blood pressure and is found to underestimate the risk of disease in female patients. While this is a first step in uncovering biases in PI, it is far from a complete study of bias in PI. As highlighted by research in other domains, bias has multiple facets that may affect system fairness. To this end, our work aims to raise awareness and set up a systematic approach for a comprehensive analysis of data and machine learning model biases/fairness in PI systems.

## 7. Discussion & Conclusions

This paper presents the first-of-its-kind, in-depth study of bias in PI by analyzing the most extensive digital biomarkers data to date. We provide empirical and analytical evidence of sources of bias at every stage of the PI machine learning development pipeline, from data ingestion to model deployment.
In response to _RQ1_, we recognize the limitations of hybrid group fairness metrics in overcoming data imbalances and conclude that there is, as of now, no optimal metric capturing the idiosyncrasies of PI. Additionally, in response to _RQ2_ and _RQ3_, we show that bias exists across all stages of the machine learning life cycle, both in the data generation and the model building and implementation streams. Different user minorities are affected by diverse types of bias, but users with diabetes, joint issues, or hypertension and female users are the most adversely impacted in our MyHeart Counts use case, due to representation, aggregation, and learning biases. Our findings echo concerns similar to those raised in the evaluation of healthcare technologies (Bahdan et al., 2017). While some of our findings are specific to the investigated use case, they can, for the most part, be extended to PI tasks more broadly. Below, we present limitations of our work that create new opportunities for future research and provide recommendations for future work on studying and mitigating bias in PI.

### Limitations

_Alternative PI Use Cases._ Our work presents the first study of bias in PI research and development and does not study and compare bias in commercial PI systems, such as consumer smartphones and wearables, which we position as a future work direction. This is due to the prevalence of commercial black-box models -which can be attributed to competitive advantage in an emerging market- and closed data because of ethical and privacy considerations. Yet, such restrictions have limited our use case, which may not seem as critical as AFib detection, for instance, but provides a strong indication of how PI data and models are susceptible to bias and is large enough to ensure the generalizability of our findings. Hence, our findings should be interpreted with these limitations in mind and not be seen as a generic evaluation of bias across all PI systems.

_In-the-wild Data Quantity versus Data Quality._ In our search for large-scale data, we had to partly sacrifice data quality (e.g., missing values, noise, duplicate measurements), as often happens with in-the-wild datasets of the scale of MyHeart Counts. Nevertheless, we engaged in thorough benchmarking of preprocessing methods to ensure the best possible quality for our training data, as reported in prior work (Zhou et al., 2017). Due to the small sample sizes for certain user groups conditioned on a protected attribute, e.g., Pacific Islanders or American Indians for the ethnicity attribute, we had to binarize all protected attributes to avoid immense imbalances between majority and minority groups. Nevertheless, we recognize that some minority groups might be treated more unfairly than others by the data and algorithms, a fact not captured in the current configuration. On the same note, gender was treated as a binary concept in the MyHeart Counts dataset, and recognizing diverse gender identities was outside of our control for the purpose of this study.

### Future Work Directions

#### 7.2.1. Inclusive Training and Evaluation Datasets for Real-life Scenarios

Appropriate PI datasets for fueling future bias research in the domain are still lacking. Due to the sensitivity of the data at hand, many datasets are proprietary with restrictive Institutional Review Board (IRB) agreements.
Out of the open PI datasets, most are small-scale [110], due to the high effort and equipment cost required for building larger datasets, or are collected in the lab, failing to represent the target population. To this end, any future work publishing open, large-scale, in-the-wild PI data sourced from diverse populations in terms of geographic location, gender, age, and health conditions would significantly contribute to the advancement of the domain. Also, given the prevalence of small-scale datasets, future work should focus on quantifying biases in small digital biomarkers data, as, realistically, most institutions will never acquire big data [17]. Additionally, due to the recentness of the domain and the closed-source data and algorithms, there is a lack of established benchmarks, especially regarding emerging PI tasks, such as fertility prediction, AFib, or fall detection. To this end, similarly to the work of Harutyunyan et al. [52], which published benchmarks for electronic health records tasks, future work should create inclusive and representative benchmarks for tasks within the PI domain.

#### 7.2.2. Fairness Metrics Capturing PI Idiosyncrasies

Digital biomarkers are essentially sequential time-series data, inherently different from images and audio, where fairness research is most advanced. Hence, there is work to be done in quantifying bias and identifying idiosyncrasies in sequential physiological and behavioral data. For instance, many PI tasks are formulated as regression problems, but regression-specific fairness metrics are limited in the literature [49] (a minimal sketch of such a metric is given below). Beyond that, future work should explore diverse definitions of bias and their suitability for heterogeneous PI tasks. For example, how is demographic disparity defined in AFib detection, where false negatives can be fatal, versus human-activity recognition, where false positives deteriorate the user experience? Also, which metrics are most appropriate in tasks with no clear positive outcome, such as fertility prediction, given the shortcomings of error-based metrics as discussed in Section 2.3? The latter research gap provides opportunities for future work in redefining error-based fairness metrics that are more robust to representation biases in the data, as is the case in digital biomarkers.

#### 7.2.3. Benchmarking Bias Mitigation Approaches in PI

While the focus of this work is uncovering the susceptibility of digital biomarkers to data and model biases, there is plenty of work to be done in benchmarking preprocessing, in-processing, and post-processing bias mitigation approaches, or in developing new ones that capture the idiosyncrasies of digital biomarkers and their respective PI tasks. PI tasks are undoubtedly different from learning-to-rank or classification scenarios, not only because of the nature of their data but also, oftentimes, because of their formulation as regression problems. Yet, fair regression is considerably overlooked compared to fair classification [2], and most fairness libraries (AIF360, FairLearn) feature only a few implementations of bias mitigation algorithms for regression tasks. Additionally, PI literature uses distinct machine learning paradigms, such as self-supervised learning, multi-task learning, and personalized learning, among others, whose consequences for algorithmic bias are yet to be explored.
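To make the regression-metric gap above concrete, the following is a minimal, illustrative sketch of an error-based group fairness measure for a regression task. The function name and the synthetic data are our own illustration and are not part of the MyHeart Counts pipeline or of any fairness library's API:

```python
import numpy as np

def group_mae_gap(y_true, y_pred, group):
    """Absolute-error gap between two demographic groups.

    A simple regression analogue of error-based group fairness:
    compare the mean absolute error for group == 1 vs group == 0.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    err = np.abs(y_true - y_pred)
    return err[group == 1].mean() - err[group == 0].mean()

# Toy usage: a model that is systematically worse for the minority group.
rng = np.random.default_rng(0)
y = rng.normal(10_000, 2_000, size=1_000)   # e.g. daily step counts
g = rng.integers(0, 2, size=1_000)          # binarized protected attribute
noise = np.where(g == 1, 1_500, 500)        # larger error for group 1
y_hat = y + rng.normal(0, noise)
print(f"MAE gap (group 1 - group 0): {group_mae_gap(y, y_hat, g):.1f} steps")
```

Comparing group-conditional errors rather than group-conditional predictions sidesteps the need for a clear "positive outcome", but it inherits the sensitivity to representation biases discussed in Section 2.3.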
Finally, due to privacy considerations for sensitive digital biomarkers, PI data are often not accompanied by protected attributes for the population they describe, making it cumbersome to perform a fairness evaluation. To this end, future work should investigate the space of "fairness in unawareness", or, in other words, how one can quantify and mitigate biases in the absence of protected attributes.

###### Acknowledgements.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 813162. The content of this paper reflects only the authors' view and the Agency and the Commission are not responsible for any use that may be made of the information it contains. Results presented in this work have been produced using the Aristotle University of Thessaloniki Compute Infrastructure and Resources. The authors would like to acknowledge the support provided by the Scientific Computing Office throughout the progress of this research work.
2306.07748
Two-Loop Electron Factor Contribution to Lamb Shift in Muonium and Positronium
We calculate hard spin-independent contributions to energy levels in muonium and positronium which are due to radiatively corrected electron factor insertion in two-photon exchange diagrams. Calculation of these corrections is motivated by the new round of precise measurements of spin-independent transition frequencies in muonium and positronium.
Michael I. Eides, Valery A. Shelyuto
2023-06-13T13:07:05Z
http://arxiv.org/abs/2306.07748v3
# Two-Loop Electron Factor Contribution to Lamb Shift in Muonium and Positronium

###### Abstract

We calculate hard spin-independent contributions to energy levels in muonium and positronium which are due to radiatively corrected electron factor insertion in two-photon exchange diagrams. Calculation of these corrections is motivated by the new round of precise measurements of spin-independent transition frequencies in muonium and positronium.

For many years experimental and theoretical research on energy levels in muonium and positronium concentrated on hyperfine structure, see reviews in [1; 2; 3; 4; 5]. Now a new generation of experiments on measuring spin-independent transitions (\(1S-2S\), \(2S-2P\), etc.) in muonium and positronium (see, e.g., [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]) is either going on or planned. Inspired by these new developments, we recently started a program of calculating hard three-loop spin-independent corrections to energy levels in muonium and positronium [18; 19]. There are numerous gauge invariant sets of diagrams generating such corrections. We have already calculated contributions of three gauge invariant sets of diagrams in Fig. 1 [18; 19]. Below we calculate contributions to the Lamb shift in muonium and positronium generated by one more set of diagrams in Fig. 2 (plus diagrams with crossed exchanged photons, which are not shown explicitly). In the case of muonium, the nonrecoil contribution of these diagrams was calculated a long time ago [20], so here we calculate only the radiative-recoil contribution.

For any system of two electromagnetically interacting leptons with unequal (or equal) masses, the hard spin-independent energy shift to the bound state energy level generated by the diagrams with two-photon exchanges is described by the integral [21] \[\Delta E=-\frac{(Z\alpha)^{5}}{\pi n^{3}}m_{r}^{3}\int\frac{d^{4}k}{i\pi^{2}k^{4}}\frac{1}{4}Tr\Big{[}(1+\gamma_{0})L_{\mu\nu}\Big{]}\frac{1}{4}Tr\Big{[}(1+\gamma_{0})H_{\mu\nu}\Big{]}\delta_{l0}, \tag{1}\] where \(m\) and \(M\) are the masses of the leptons, \(L_{\mu\nu}\) and \(H_{\mu\nu}\) are the light and heavy fermion factors, respectively, \(m_{r}=mM/(m+M)\) is the reduced mass, \(Z=1\) is the charge of the heavy fermion in terms of the positron charge, and \(n\) and \(l\) are the principal quantum number and the orbital momentum, respectively. The expression in Eq. (1) is exact in the mass ratio, and is valid also in the case of \(m=M\) (positronium).

The radiatively corrected electron factor is a sum of three terms \[L_{\mu\nu}=L_{\mu\nu}^{\Sigma}+2L_{\mu\nu}^{\Lambda}+L_{\mu\nu}^{\Xi}, \tag{2}\] arising from the two-loop self-energy, vertex, and spanning photon insertions in the electron line and corresponding to the diagrams in Fig. 3. Respectively, the first trace in Eq. (1) can be written as \[\frac{1}{4}Tr\Big{[}(1+\gamma_{0})L_{\mu\nu}\Big{]}\equiv\frac{\alpha^{2}}{\pi^{2}m}{\cal L}_{\mu\nu}\left(\frac{k}{m}\right)=\frac{\alpha^{2}}{\pi^{2}m}\left[{\cal L}_{\mu\nu}^{\Sigma}\left(\frac{k}{m}\right)+2{\cal L}_{\mu\nu}^{\Lambda}\left(\frac{k}{m}\right)+{\cal L}_{\mu\nu}^{\Xi}\left(\frac{k}{m}\right)\right]. \tag{3}\] A photon line with the insertion of a one-loop polarization operator in Fig. 3 has a natural interpretation as a massive photon propagator with the mass squared \(\lambda^{2}=4m^{2}/(1-v^{2})\).
The diagrams with the polarization insertions are obtained from this massive propagator by integrating over \(v\) with the weight \((\alpha/\pi)v^{2}(1-v^{2}/3)/(1-v^{2})\). All entries in the two-loop fermion factor except the two-loop anomalous magnetic moment and the two-loop slope of the electric form factor decrease at least as \(k^{2}\) at \(k^{2}\to 0\). As a result, the slowly decreasing terms with the anomalous magnetic moment and slope of the electric form factor produce infrared-divergent contributions in the integral in Eq. (1) for the diagrams in Fig. 2. This linear infrared divergence indicates the existence of a contribution to the Lamb shift of the previous order in \(Z\alpha\) that is already well known. To get rid of this spurious divergence, we subtract the terms with the two-loop anomalous magnetic moment and slope of the electric form factor from the two-loop electron factor.

The heavy line factor in Eq. (1) has the form \[H_{\mu\nu}=\gamma_{\mu}\frac{\hat{P}+\hat{k}+M}{k^{2}+2Mk_{0}+i0}\gamma_{\nu}+\gamma_{\nu}\frac{\hat{P}-\hat{k}+M}{k^{2}-2Mk_{0}+i0}\gamma_{\mu}, \tag{4}\] where \(P=(M,\mathbf{0})\) is the momentum of the particle with mass \(M\). In the case of \(m\ll M\) the heavy trace reduces to \[\begin{split}\frac{1}{4}Tr\Big{[}(1+\gamma_{0})H_{\mu\nu}\Big{]}&\rightarrow-\frac{1}{M}\bigg{[}k^{2}g_{\mu 0}g_{\nu 0}\wp\Big{(}\frac{1}{k_{0}^{2}}\Big{)}-\big{(}g_{\mu 0}k_{\nu}+g_{\nu 0}k_{\mu}\big{)}\frac{1}{k_{0}}+g_{\mu\nu}\bigg{]}\\ &\equiv-\frac{1}{M}\mathcal{H}_{\mu\nu}(k)_{rec},\end{split} \tag{5}\] where \(\mathcal{H}_{\mu\nu}(k)_{rec}\) is a dimensionless function, and \(\wp\Big{(}\frac{1}{k_{0}^{2}}\Big{)}\) is the principal value integral, see [21; 22] for the definition and properties. The linear in mass ratio radiative-recoil contribution is obtained from Eq. (1) by the substitution in Eq. (5) [21] \[\Delta E_{rec}=\frac{\alpha^{2}(Z\alpha)^{5}}{\pi^{3}n^{3}}\frac{m_{r}^{3}}{Mm}\int\frac{d^{4}k}{i\pi^{2}k^{4}}\mathcal{L}_{\mu\nu}\left(\frac{k}{m}\right)\mathcal{H}_{\mu\nu}(k)_{rec}. \tag{6}\] This expression will be used for the calculation of the radiative-recoil contribution of the diagrams in Fig. 2 in muonium.

We have shown in [19] that there exists a simple relationship between the integrand for the linear in mass ratio radiative-recoil corrections in Eq. (6) and the integrand for the total (recoil and nonrecoil) spin-independent contribution in the case of equal masses \(m=M\). Namely, it is sufficient to let \(m=M\) and make the substitution \(\mathcal{H}_{\mu\nu}(k)_{rec}\rightarrow\mathcal{H}_{\mu\nu}(k)_{tot}\), where \[\mathcal{H}_{\mu\nu}(k)_{tot}=\mathcal{H}_{\mu\nu}(k)_{rec}\frac{k_{0}^{2}}{k_{0}^{2}-\frac{k^{4}}{4m^{2}}}=\frac{k^{2}g_{\mu 0}g_{\nu 0}-(g_{\mu 0}k_{\nu}+g_{\nu 0}k_{\mu})k_{0}+g_{\mu\nu}k_{0}^{2}}{k_{0}^{2}-\frac{k^{4}}{4m^{2}}}. \tag{7}\] Then the total contribution to the spin-independent energy shift of order \(\alpha^{7}m\) generated by the diagrams in Fig. 2 in the case of equal masses is given by the integral \[\Delta E_{tot}=2\frac{\alpha^{2}(Z\alpha)^{5}}{\pi^{3}n^{3}}\frac{m_{r}^{3}}{Mm}\int\frac{d^{4}k}{i\pi^{2}k^{4}}\mathcal{L}_{\mu\nu}\left(\frac{k}{m}\right)\mathcal{H}_{\mu\nu}(k)_{tot}, \tag{8}\] where an extra factor 2 reflects the possibility to make radiative insertions in both fermion lines. We calculated the energy shifts in the Feynman gauge for the radiative photons.
The linear infrared divergences, which, as was explained above, indicate the presence of the contributions of the previous order in \(Z\alpha\), were omitted, and the spurious logarithmic infrared divergences cancelled in the sum of diagrams in Fig. 2. Using Eq. (6) we obtain the radiative-recoil correction in muonium \[\Delta E^{(Mu)}=\left(J_{\Sigma}^{(Mu)}+2J_{\Lambda}^{(Mu)}+J_{\Xi}^{(Mu)}\right)\frac{\alpha^{2}(Z\alpha)^{5}}{\pi^{3}n^{3}}\frac{m}{M}\left(\frac{m_{r}}{m}\right)^{3}m\delta_{l0}. \tag{9}\] Calculations are similar to the ones in [18; 19], and the infrared finite contributions of the diagrams in Fig. 2 are as follows \[J_{\Sigma}^{(Mu)}=0.10602(3),\quad 2J_{\Lambda}^{(Mu)}=-0.07644(2),\quad J_{\Xi}^{(Mu)}=0.07373(3). \tag{10}\] The total radiative-recoil contribution to the Lamb shift in muonium from the diagrams in Fig. 2 is \[\Delta E^{(Mu)}=0.10332(3)\frac{\alpha^{2}(Z\alpha)^{5}}{\pi^{3}n^{3}}\frac{m}{M}\left(\frac{m_{r}}{m}\right)^{3}m\delta_{l0}. \tag{11}\] To calculate the spin-independent contribution to the energy shift in positronium we use the expression in Eq. (8) \[\Delta E^{(Ps)}=\left(J_{\Sigma}^{(Ps)}+2J_{\Lambda}^{(Ps)}+J_{\Xi}^{(Ps)}\right)\frac{\alpha^{7}}{\pi^{3}n^{3}}\frac{m}{4}\delta_{l0}, \tag{12}\] where the infrared finite contributions of separate diagrams in the Feynman gauge are \[J_{\Sigma}^{(Ps)}=-0.11001(2),\quad 2J_{\Lambda}^{(Ps)}=-0.07969(2),\quad J_{\Xi}^{(Ps)}=-0.32816(1). \tag{13}\] Finally, the contribution to the Lamb shift in positronium from the diagrams in Fig. 2 is \[\Delta E^{(Ps)}=-0.12947(3)\frac{\alpha^{7}m}{\pi^{3}n^{3}}\delta_{l0}. \tag{14}\] Combining the results in Eq. (11) with our earlier results for muonium [18], we obtain the total radiative-recoil contribution of the diagrams in Fig. 1 and Fig. 2 \[\Delta E^{(Mu)}=-11.3275(2)\frac{\alpha^{2}(Z\alpha)^{5}}{\pi^{3}n^{3}}\frac{m}{M}\left(\frac{m_{r}}{m}\right)^{3}m\delta_{l0}. \tag{15}\] The contribution to the Lamb shift in positronium generated by the diagrams in Fig. 1 [18; 19] and Fig. 2 is \[\Delta E^{(Ps)}=0.8057(2)\frac{\alpha^{7}m}{\pi^{3}n^{3}}\delta_{l0}. \tag{16}\] The contributions in Eq. (15) and Eq. (16) are too small to play a significant role for the results of the ongoing experiments; they are at the level of a few tenths of kHz and a few kHz, respectively. However, we expect that these corrections will become phenomenologically relevant in the future with further improvements of the experimental accuracy. There are other gauge-invariant sets of three-loop diagrams which arise as radiative corrections to the two-photon exchange diagrams, see, e.g., [22]. Hard spin-dependent corrections generated by these diagrams have already been calculated, see, e.g., the review in [23] and references therein. The respective spin-independent corrections remain unknown at this time, and we hope to calculate them in the near future.

###### Acknowledgements.

The work of M. I. Eides was supported by NSF grant PHY-2011161.
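As a quick arithmetic cross-check of the quoted coefficients, the short sketch below (our own, using only numbers appearing in Eqs. (10)-(14)) confirms that the partial contributions sum to the quoted totals within the stated rounding:

```python
# Cross-check of the coefficients quoted in Eqs. (10)-(14).
J_muonium = (0.10602, -0.07644, 0.07373)        # J_Sigma, 2*J_Lambda, J_Xi
print(sum(J_muonium))                           # 0.10331 ~ quoted 0.10332(3)

J_positronium = (-0.11001, -0.07969, -0.32816)  # J_Sigma, 2*J_Lambda, J_Xi
print(sum(J_positronium) / 4)                   # Eq. (12) carries a factor m/4:
                                                # -0.129465 ~ quoted -0.12947(3)
```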
2304.00886
Gromov's tori are optimal
We give an optimal bound on normal curvatures of an immersed n-torus in a Euclidean ball of large dimension.
Anton Petrunin
2023-04-03T11:11:15Z
http://arxiv.org/abs/2304.00886v2
# Gromov's tori are optimal

###### Abstract

We give an optimal bound on normal curvatures of an immersed \(n\)-torus in a Euclidean ball of large dimension.

## 1 Introduction

Let us denote by \(\mathbb{B}^{q}\) the unit ball in \(\mathbb{R}^{q}\) centered at the origin. Further, \(\mathbb{T}^{n}\) will denote the \(n\)-dimensional torus, the smooth manifold diffeomorphic to the product of \(n\) circles. This note is inspired by examples of embeddings \(\mathbb{T}^{n}\hookrightarrow\mathbb{B}^{q}\) for large \(q\) with constant normal curvature \(K_{n}=\sqrt{3\cdot n/(n+2)}\). In other words, any geodesic in the torus has constant curvature \(K_{n}\) as a curve in \(\mathbb{R}^{q}\). These examples were found by Michael Gromov among geodesic subtori in Clifford tori [5, 2.A], [4, 1.1.A]. In particular, Gromov's tori have a flat induced metric. (Recall that the Clifford torus is a product of \(m\) circles of radius \(1/\sqrt{m}\) in \(\mathbb{R}^{2\cdot m}\); its normal curvatures lie in the range \([1,\sqrt{m}]\); a numerical sanity check is sketched below.)

Gromov's examples lead to the following surprising facts: _any closed smooth manifold \(L\) admits a smooth embedding into \(\mathbb{B}^{q}\) for large \(q\) with normal curvatures less than \(\sqrt{3}\); moreover, the induced Riemannian metric on \(L\) can be chosen to be proportional to any given metric \(g\);_ see [5, 1.D] and [4, 1.1.C]. The next theorem implies that Gromov's tori have the best upper bound on normal curvatures; in particular, the \(\sqrt{3}\)-bound is optimal.

**1.1**.: **Theorem.** _Suppose \(\mathbb{T}^{n}\) is smoothly immersed in \(\mathbb{B}^{q}\). Then its maximal normal curvature is at least_ \[\sqrt{3\cdot\frac{n}{n+2}}.\]

To make the statement more exact, we need one more piece of notation. Assume that \(L\) is a smooth \(n\)-dimensional manifold immersed in \(\mathbb{R}^{q}\); we will always assume that \(L\) is equipped with the induced Riemannian metric. Let us denote by \(\mathrm{T}_{x}\) and \(\mathrm{N}_{x}\) the tangent and normal spaces of \(L\) at \(x\). Recall that the _second fundamental form_ \(\mathrm{I\!I}\) at \(x\) is a symmetric quadratic form on \(\mathrm{T}_{x}\) with values in \(\mathrm{N}_{x}\). It is uniquely defined by the identity \(\mathrm{I\!I}(\mathrm{v},\mathrm{v})\equiv\gamma_{\mathrm{v}}^{\prime\prime}(0)\), where \(\mathrm{v}\in\mathrm{T}_{x}\) and \(\gamma_{\mathrm{v}}\) is an \(L\)-geodesic that starts at \(x\) with initial velocity vector \(\mathrm{v}\). Given \(x\in L\), denote by \(Ж(x)\) the average value of \(|\mathrm{I\!I}(\mathrm{u},\mathrm{u})|^{2}\) for \(\mathrm{u}\in\mathrm{T}_{x}\) such that \(|\mathrm{u}|=1\). Since \(K(\mathrm{u})=|\mathrm{I\!I}(\mathrm{u},\mathrm{u})|\) is the normal curvature in the direction \(\mathrm{u}\), we have that \(Ж(x)\) is the average value of \(K^{2}(\mathrm{u})\). (The Cyrillic zhe \(Ж\) is used since it resembles squared \(K\).)
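Since only the introduction is excerpted here, a small numerical sketch can still verify the two facts quoted above: the normal curvatures of the Clifford torus range over \([1,\sqrt{m}]\), and Gromov's constant \(K_{n}=\sqrt{3n/(n+2)}\) increases to \(\sqrt{3}\). The parametrization is standard; the code itself is our illustration, not part of the paper:

```python
import numpy as np

def clifford_normal_curvature(u, t=0.3):
    """|gamma''(t)| for a unit-speed geodesic on the Clifford torus in R^(2m).

    The torus is (1/sqrt(m)) (cos th_1, sin th_1, ..., cos th_m, sin th_m);
    a unit-speed geodesic has th_i(t) = sqrt(m) * u_i * t with sum u_i^2 = 1,
    so gamma''(t) has blocks -sqrt(m) * u_i^2 * (cos th_i, sin th_i).
    """
    u = np.asarray(u, dtype=float)
    m = len(u)
    th = np.sqrt(m) * u * t
    acc = np.concatenate([-np.sqrt(m) * u**2 * np.cos(th),
                          -np.sqrt(m) * u**2 * np.sin(th)])
    return np.linalg.norm(acc)   # equals sqrt(m * sum(u_i^4))

m = 5
balanced = np.full(m, 1 / np.sqrt(m))         # equal speed in every circle factor
coordinate = np.eye(m)[0]                     # all speed in a single circle factor
print(clifford_normal_curvature(balanced))    # 1.0: lower end of [1, sqrt(m)]
print(clifford_normal_curvature(coordinate))  # ~2.236 = sqrt(5): upper end

for n in (1, 2, 10, 1000):
    print(n, np.sqrt(3 * n / (n + 2)))        # K_n: 1.0, ..., approaching sqrt(3)
```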
2303.14854
No Time for Time from No-Time
Programs in quantum gravity often claim that time emerges from fundamentally timeless physics. In the semiclassical time program time arises only after approximations are taken. Here we ask what justifies taking these approximations and show that time seems to sneak in when answering this question. This raises the worry that the approach is either unjustified or circular in deriving time from no-time.
Eugene Y. S. Chua, Craig Callender
2023-03-26T23:36:12Z
http://arxiv.org/abs/2303.14854v1
# No Time for Time from No-Time

## Abstract

Programs in quantum gravity often claim that time emerges from fundamentally timeless physics. In the semiclassical time program time arises only after approximations are taken. Here we ask what justifies taking these approximations and show that time seems to sneak in when answering this question. This raises the worry that the approach is either unjustified or circular in deriving time from no-time.

## Acknowledgements

We thank Maaneli Derakhshani, Valia Allori, the Southern California Philosophy of Physics Group, and participants of the Workshop in Celebration of David Albert's Birthday for their comments/feedback.

## 1 Introduction

Programs in quantum gravity often produce supposedly fundamentally timeless formalisms. Because we observe change, it's important that they recover time from no-time somehow. One popular idea suggests time emerges from fundamentally timeless physics just as perceived color arises from the fundamentally uncolored world of basic physics. In canonical quantum gravity's semiclassical time program, the idea is that time emerges from fundamentally timeless physics after taking semiclassical approximations. Nothing fundamentally plays the "time role" throughout any solution, but time emerges in approximately classical sectors of some solutions.

Comparisons with perceived color suggest an obvious worry: circularity. Physically, color only emerges from uncolored matter diachronically. Color arises from observers like us interacting with matter across temporal intervals. Replace color with time and the threat is obvious: if time emerges from no-time but emergence requires time, then we can't really say we've derived time from no-time. Time emerges if we blur our vision, but if blurring takes time then time never disappeared. Here, we raise this concern in a sharp way for the semiclassical time program. Focusing specifically on the approximations necessary to derive time from no-time, we'll show that time implicitly sneaks back in via the physical justifications behind these approximations. This leaves the program either unjustified in applying the approximations, because we are applying them to timeless solutions, or succeeding only on pain of circularity.

## 2 The Problem of Time and Emergence of Semiclassical Time

Quantum gravity seeks to reconcile our best theory of gravity, general relativity, with our best theory of matter, quantum theory. Different strategies exist, but we focus on the oldest canonical approach, quantum geometrodynamics, and its recovery of semiclassical time. We chose this program because it has been rigorously developed. We expect, however, that many lessons will generalise.

Canonical approaches employ a quantised Hamiltonian formalism. One therefore casts general relativity into its Hamiltonian "3+1" form, decomposing spacetime into leaves of spacelike hypersurfaces. The Hamiltonian framework demands canonical variables and conjugate momenta. For gravity, the basic variable is the three-dimensional spatial metric characterizing spacelike hypersurfaces.
Its conjugate momentum is defined in terms of the trace of the spatial three-metric's extrinsic curvature. In classical mechanics the Hamiltonian governs the spatial configuration of particles through time; in classical Hamiltonian general relativity, the Hamiltonian governs the spatial geometry itself through time. Once put in this form, we quantise. The counterpart of the quantum state is a functional operating in a configuration space of spatial three-metrics. To quantise, we turn the variables into operators. Trouble arises because general relativity is a constrained Hamiltonian system. One of the constraints is due to general relativity's time reparameterization freedom: we can foliate spacetime in many different ways. This constraint, the Hamiltonian constraint, demands that the Hamiltonian vanishes. Making the Hamiltonian an operator and imposing the constraint yields:

\[\hat{H}\Psi(h_{ab}(x),\phi)=0 \tag{1}\]

i.e., the famous Wheeler-DeWitt (**WD**) equation, where \(\hat{H}\) is the Hamiltonian operator for both gravity and matter, and \(\Psi\) is the **WD** wave-functional depending on the spatial three-geometries encoded by the spatial metric \(h_{ab}(x)\) and whatever matter fields we include, e.g., \(\phi\), a massive scalar field.

The semiclassical time program's core idea is that time emerges if \(h_{ab}(x)\) is semiclassical. If not, i.e., if \(h_{ab}(x)\) is quantum, then the concept of time won't find any realizer. This idea was expressed by DeWitt (1967) but developed by Banks (1985) in the canonical approach.1 The **WD** wave-functional is, at the fundamental level, utterly timeless. Nonetheless it describes patterns of correlations, just like a checkered shirt at an instant contains a spatial pattern of correlations amongst stripes and colors. In the semiclassical interpretation, the idea is that at a certain level of approximation, a pattern of correlations "looks" temporal, just as a checkered shirt can look solidly colored if one zooms out far enough. By "looks temporal" we mean that a parameter plays the time role. While defining the time role could become quite messy and philosophical, this program adopts a very minimal sufficient condition that seems plausible, namely, that something plays the time role if it behaves as "\(t\)" does in the ordinary time-dependent Schrodinger equation (**TDSE**). In other words, if the matter fields vary with some parameter the same way they do with "\(t\)" in the **TDSE**, that warrants calling that parameter time.

Herein lies the key achievement of the semiclassical time program: given suitable approximations, they show that the non-temporal gravitational fields \(h_{ab}\) can play the time role in a functional Schrodinger equation for the matter fields \(\phi\). If one approximates from the **WD** equation appropriately, it looks like matter is evolving with respect to time (_a la_ Schrodinger equation) against a classical gravitational curved spacetime background (described by the semiclassical Einstein-Hamilton-Jacobi equation).

Let's turn to the actual derivation of time and the functional Schrodinger equation. Here we loosely follow a presentation by Derakhshani (2018). The derivation has two crucial steps. One, it uses the Born-Oppenheimer approximation (**BO**) to motivate factorizing the wave-functional of the universe. Two, it employs the **WKB** approximation on the gravity term in this product. One can think of the first move as separating out a sub-system from the total system.
The second move shows that when that sub-system behaves approximately classically, it can function as a clock for the rest of the system. Suppose we have a wave-functional that satisfies the **WD** equation and other necessary constraints. This describes a static wave in a high-dimensional configuration space. How do we get time? To begin, notice that we don't expect quantum gravitational effects except near the Planck scale. Since \(h_{ab}\) depends on the extremely large Planck mass \(m_{p}\), the idea of separating scales via (**BO**) is natural. Hence we can separate the "heavy" part of the wave-function, \(\chi(h_{ab})\), from the "light" part, \(\psi(\phi,h_{ab})\):

\[\Psi\approx\chi(h_{ab})\psi(\phi,h_{ab}) \tag{2}\]

The idea is to use the \(h_{ab}\) degrees of freedom as a clock for the light part \(\phi\). We now apply a **WKB** approximation, substituting the ansatz \(Ae^{iS}\) for a wave-function. We do that for the first factor, the heavy subsystem, turning the wavefunction into

\[\Psi\approx A(h_{ab})e^{im_{p}^{2}S(h_{ab})}\psi(\phi,h_{ab}) \tag{3}\]

Next, expand \(S(h_{ab})\) as a power series in \(m_{p}^{2}\):

\[S=m_{p}^{2}S_{0}+S_{1}+m_{p}^{-2}S_{2}... \tag{4}\]

Then, as usual in **WKB**, we plug \(S_{0}\) and \(S_{1}\) terms back into the wave equation and solve. In the ordinary quantum mechanical case, the \(0^{th}\) order term returns a Hamilton-Jacobi equation and the \(1^{st}\) order term returns a continuity equation. Essentially the same happens here. Notably, solving to leading order \(m_{p}^{2}\), we derive a semiclassical gravitational Hamilton-Jacobi equation. Take a solution of these equations. Based on experience with geometric optics and quantum theory, we know it defines in superspace a vector field whose integral curves can be parametrized by a time. With this in mind, we define

\[\dot{h}_{ab}=2NG_{abcd}\frac{\delta S}{\delta h_{cd}}+D_{a}N_{b}+D_{b}N_{a} \tag{5}\]

where \(G_{abcd}\) is the DeWitt metric, \(N\) is the lapse function, \(D_{a}\) and \(D_{b}\) are the spatial derivatives, and \(N_{a}\) and \(N_{b}\) are shift vectors. One now takes the matter wavefunction \(\psi(\phi,t;h_{ab})\) and uses (5) to define a time derivative for it

\[\frac{\partial\psi(\phi,t)}{\partial t}=\int\limits_{\Sigma}\dot{h}_{ab}({\bf x},t)\frac{\partial}{\partial h_{ab}}\psi(\phi,h_{ab})d^{3}x \tag{6}\]

Time emerges in terms of this directional derivative. Call this **WKB time**. The final step uses **WKB** again, keeps only the lowest order terms, and requires a lot of massaging. Skipping these details, we can show that \(\psi(\phi,h_{ab})\) satisfies a functional Schrodinger equation

\[i\frac{\partial}{\partial t}\psi(\phi,t;h_{ab})=\hat{H}^{m}(\phi;h_{ab})\psi(\phi,t;h_{ab}) \tag{7}\]

where \(\hat{H}^{m}\) is a Hamiltonian-type term for the matter fields and \(\psi\) is evaluated at a solution \(h_{ab}\), which is itself a solution of the classical Einstein equations. Such a compressed derivation may be confusing to unfamiliar readers. The important take-away here is that the \(t\) we used to parametrize the approximately classical general relativistic solutions (corresponding to the first "heavy" term in our factorization (2)) is used in a solution as a clock for the matter fields in the **WKB** regime. The "\(t\)" in (7) is the same as that in (6).
We won't delve into the rest of the theory; however, note that we can also derive a continuity equation that allows us to use the normal Born rule for predictions from the theory, and furthermore, using perturbation theory (by considering higher-order terms we have so far ignored), one can derive non-classical predictions. In sum, the semiclassical program provides an elegant derivation of time from no-time. After a series of seemingly reasonable assumptions, a parameter that looks and acts like time emerges. And if we agree that something that looks and acts like time _is_ time, then time emerges.

## 3 Justifying the Approximations

We jumped from one equation to another by expanding to leading order, focusing on lowest order, assuming the wave-functional approximately factorises, and so on. What justifies these steps? Approximations require physical justification. At the level of pure math, one can "derive" virtually any equation from any other if allowed to assume anything. It makes no sense to say that one equation or quantity is "close" to another absent a metric. We need justification, and it is in this physical justification that we fear time sneaks in.

To elaborate, we can treat classical pendulums as approximately undamped harmonic oscillators. For small angles, \(\sin\theta\approx\theta\), allowing us to derive equations of motion for pendulums which are identical to those of harmonic oscillators. A harmonic oscillator, we might say, "emerges" from the pendulum in the small angle limit. But relative to some measurement standard, at some point an initial displacement angle becomes too big and the approximation fails; that is, we notice deviations from the derived equation of motion. Angles aren't intrinsically big or small. They are big or small relative to a standard. Typically that standard refers to the observational or measurement capacities of an observer. The approximation's validity hangs partly on an error analysis of our measurement technique. Coarse measurements allow the approximation to be good for greater values of \(\theta\) than finer measurements (a numerical illustration of this point follows at the end of this section).

This example suggests a subtle problem for the semiclassical time program and even the present analysis. We have no observers yet in canonical quantum gravity. We are working in a partially interpreted theory, one lacking a solution to the infamous measurement problem. Absent observers, we cannot perform the above error analysis. When are (say) off-diagonal terms in matrices "small" and justifiably ignored? The answer: when they're irrelevant to the observer (measurement/analysis/etc.). However, to introduce an observer in order to have a standard for judging smallness, we effectively already need time. Observation is a temporal process. So we can only justify approximations by already introducing time, making the derivation circular. Sans an observer, we can't say what "looks" like a small difference that would warrant an approximation. We'll return to this point, but for now we keep things simple by noting that the approximations used to derive semiclassical time are always warranted in the rest of physics by appeal to an implicit time metric. Without the time metric, the approximations seem physically unwarranted. We do not, and cannot, show that there is _no standard possible_ warranting these approximations. What we can do is raise the worry and challenge advocates of semiclassical time to justify the approximations without appealing to a prior time standard.
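To make the pendulum point concrete, here is a small numerical illustration (our own sketch, in units with \(g/L=1\)): the small-angle approximation is judged against the true dynamics over a time interval, and the deviation grows with both the initial angle and the length of that interval.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, y):
    # Full (undamped) pendulum dynamics in units with g/L = 1.
    theta, omega = y
    return [omega, -np.sin(theta)]

t = np.linspace(0, 20, 2001)
for theta0 in (0.1, 1.0):
    sol = solve_ivp(pendulum, (0, 20), [theta0, 0.0], t_eval=t, rtol=1e-9)
    sho = theta0 * np.cos(t)   # small-angle (harmonic oscillator) prediction
    dev = np.max(np.abs(sol.y[0] - sho))
    print(f"theta0 = {theta0}: max deviation over 20 time units = {dev:.4f} rad")
```

The 0.1 rad run stays close to the harmonic prediction for the whole interval, while the 1.0 rad run drifts visibly out of phase; whether either counts as "small" depends on the measurement standard and on how long we watch, which is exactly the temporal standard at issue.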
We'll see that some apparently innocent assumptions are, in fact, not. Although the semiclassical time program has an estimated twenty assumptions (Anderson 2007), we'll concentrate on three: the Born-Oppenheimer approximation, the **WKB** approximation, and decoherence.

### The Born-Oppenheimer Approximation

The **BO** approximation in the semiclassical time program splits the universe into two kinds of subsystems, the gravitational field \(h\) and quantum matter fields \(\phi\). The justification for this split ultimately appeals to a difference in masses: masses associated with \(h\) are 'heavy' in comparison to the masses associated with \(\phi\).2 Therefore it seems plausible that \(h\) is largely insensitive to \(\phi\). By contrast, \(\phi\), being small and light, is sensitive to the big and heavy \(h\). We therefore assume that the wave-functional \(\Psi\) for the entire system (the universe) can be approximately factorized into two wave-functions \(\chi(h)\) and \(\psi(\phi,h)\), with \(\chi\) associated with the heavier \(h\), and \(\psi\) associated with both the lighter \(\phi\) _and_ \(h\), as per (2).

Footnote 2: See e.g. Banks (1985, 337–338), Kiefer (2004, 165).

On its face the rationale doesn't sneak time in. Some masses are larger than others, and we expand accordingly. That's it. Let's probe deeper. **BO** is motivated by appealing to the "very different scales" (Kiefer 2004, 164) that the gravitational fields and matter fields have. This appeals to a metric that measures how big the effects of one subsystem are on the other. Why does having different size masses warrant different scales and factorizing the wavefunction? Differences in the values of other properties (say, charge) don't always demand or legitimize such an approximation. What is special about mass?

To help answer this question let's look at standard uses of **BO** outside quantum gravity. Unfortunately we'll find that mass and size scale differences between systems are only relevant for **BO** because they are proxies for _timescale differences_ in the dynamics of the relevant subsystems. In its most popular application, molecular and atomic physics, **BO** is used to factorize an atom or molecule's wave-function into the product of two subsystems. Here, the heavier subsystem is the nuclei, and the lighter subsystem is the electrons surrounding the nuclei (Griffiths 2005). Again, the heavier system is assumed to be effectively independent of the lighter system, while the lighter system rapidly adapts itself to changes in the heavier system. Usually, we pretend that the nuclear wave-function is not changing at all _in time_, and then calculate the electronic wave-function associated with that nuclear wave-function. We then find a more realistic nuclear wave-function by letting it vary 'slowly' or 'sluggishly', calculating the possible ranges of electronic wave-functions and hence the mean potentials in which the nuclei can move.

More generally, **BO** applies in cases where heavier subsystems are known to change slowly _in time_ with respect to lighter subsystems. That is why mass matters. Heavier subsystems have significantly different _characteristic dynamical timescales_ (timescales over which "the parameters of the system change appreciably") with respect to lighter subsystems, and can be said to be _adiabatic_ with respect to the lighter subsystem.
The change in the lighter subsystem happens on such a short timescale that there isn't enough time for the heavier subsystem to react in that relevant timescale, and so it is effectively independent of lighter subsystems in that period of time. **BO** is thoroughly laden with temporal notions. Returning to the semiclassical time program, a problem arises. Because **BO** is so widely used, and because it initially seems to be about mass (not time!), it may be imported into derivations without considering whether its use in new applications is warranted. Did that happen here? We cannot say, but we leave this section with a dilemma: either the mass scales relevant here are proxies for time scales or not. If they are, we face circularity; if they are not, we have no clear means of assessing whether **BO** is even applicable here. In short, this seems to be a case of needing time to get time, but of course, in canonical quantum gravity we have no time for that.

### The WKB Approximation

The **WKB** approximation is a staple of every quantum mechanics course. Often presented as a piece of pure math, **WKB** seems like a mere approximation method in the theory of partial differential equations, an unlikely place to find a hidden time preference. But of course, we still need physical justifications for why this math applies to a given physical situation. For that we need physics. Frequently, we use **WKB** when working with stationary states of energy \(E>V\). Immediately, we note that the time dependence is therefore hidden. If a system begins in an energy eigenstate, then time evolution simply multiplies the state by a time-dependent phase factor that doesn't affect the probabilities for measurement. Perhaps we shouldn't think this way: here, the time-independent equation is fundamental and the time-dependent one is non-fundamental, contrary to ordinary quantum theory. Still, we believe the approximation presumes the existence of time. We see this most clearly with the textbook **WKB** derivation.

Begin with the one-dimensional time-independent Schrodinger equation (**TISE**) describing a system in a background potential \(V(x)\):

\[\frac{d^{2}\psi}{dx^{2}}+\frac{2m}{\hbar^{2}}(E-V(x))\psi=0 \tag{8}\]

or:

\[\frac{d^{2}\psi}{dx^{2}}+\frac{p(x)^{2}}{\hbar^{2}}\psi=0 \tag{9}\]

where we use the classical momentum identity:

\[p(x)=\sqrt{2m(E-V(x))} \tag{10}\]

If \(V(x)\) is constant, the system behaves like a free particle with \(\psi(x)\sim e^{ipx/\hbar}\). If \(V(x)\) varies _slowly_, we expect that the system behaves _approximately_ like a free particle. Motivated by this, we find solutions to the **TISE** such that

\[\psi(x)=A(x)e^{iS(x)/\hbar} \tag{11}\]

Plugging this back into the **TISE**, we get two equations (for the real and imaginary parts, respectively):

\[\hbar^{2}\frac{d^{2}A}{dx^{2}}=A\left(\left(\frac{dS}{dx}\right)^{2}-p(x)^{2}\right) \tag{12}\]

\[2\frac{dA}{dx}\frac{dS}{dx}+A\frac{d^{2}S}{dx^{2}}=0 \tag{13}\]

Everything so far is exact. However, note that (12) _generally does not have analytic solutions_. What then? The solution, and a crucial step in **WKB**, is to _assume_ that \(A\) varies _so slowly_ with respect to \(x\) that \(\frac{d^{2}A}{dx^{2}}\approx 0\). This step allows us to solve (12) and (13) for \(A\) and \(S\). Combining these results, we get the well-known **WKB** approximation to the wave-function

\[\psi(x)\approx\frac{C}{\sqrt{p(x)}}\exp\left(\pm\frac{i}{\hbar}\int dx\ p(x)\right) \tag{14}\]

where \(C\) is some real constant dependent on \(A\) and \(S\).
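As a sanity check on the split into (12) and (13), the short symbolic computation below (our own sketch, not from the paper) substitutes the ansatz (11) into the TISE (9) and separates the real and imaginary parts; it reproduces \(\hbar^{2}A''=A(S'^{2}-p^{2})\) and \(2A'S'+AS''=0\).

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
A, S, p = [sp.Function(name, real=True)(x) for name in ('A', 'S', 'p')]

# Ansatz (11): psi = A(x) * exp(i S(x) / hbar).
psi = A * sp.exp(sp.I * S / hbar)

# TISE (9): psi'' + (p^2 / hbar^2) psi = 0; strip the common phase factor.
expr = sp.expand((sp.diff(psi, x, 2) + p**2 / hbar**2 * psi)
                 / sp.exp(sp.I * S / hbar))

real_part = expr.subs(sp.I, 0)                    # terms carrying no factor of i
imag_part = sp.expand((expr - real_part) / sp.I)  # coefficient of i

print(sp.Eq(hbar**2 * real_part, 0))  # hbar^2 A'' - A S'^2 + A p^2 = 0  -> Eq. (12)
print(sp.Eq(hbar * imag_part, 0))     # 2 A' S' + A S'' = 0              -> Eq. (13)
```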
Arbitrary superpositions of these **WKB** wave-functions (14) are approximate solutions of the Schrodinger equation. They are also exact solutions of the classical Hamilton-Jacobi equation, from which one obtains the time parameter used in the semiclassical time program. Under what conditions are we allowed to neglect \(\frac{d^{2}A}{dx^{2}}\)? This is where the physics enters. The answer is well-known: \(V\) must vary slowly with \(x\) and \((E-V)\) can't be too small. When \(V\) is constant, and the system behaves like a free particle, \(A\) is constant. When \(V\) is 'close to constant', i.e. varying slowly, so too is \(A\). On its face the condition of \(V\) "slowly varying" does not conceal any time-dependence, since it concerns slowness with respect to the spatial \(x\), not the temporal \(t\). What motivates **WKB** is that when the potential is not too spatially sharp one tends to not see much interference, so this important assumption is about spatial smoothness, not temporal variation.

Still, time is present. There are many ways to see this. An obvious one is to consider the use of the classical momentum identity (10). In quantum mechanics, we know that the momentum operator depends only on spatial variables and not time:

\[\hat{p}=-i\hbar\nabla \tag{15}\]

However, the classical momentum _does_ depend on time, since:

\[p=m\frac{dx}{dt} \tag{16}\]

Despite working in quantum mechanics, we used the classical momentum identity without explanation. This lets us adopt the energy condition, \(E-V\) not being too small (and \(E\neq V\)), to physically justify neglecting \(\frac{d^{2}A}{dx^{2}}\). But _why_ are we considering \(E>V\)? The time dependence of (10) lets us see why. Combining (10) and (16) and separating variables yields:

\[\int dt=\int dx\frac{m}{\sqrt{2m(E-V(x))}} \tag{17}\]

Now we see _why_ \(E>V\) is the relevant condition for **WKB**. For any fixed potential \(V(x)\), the integral on the right hand side is small when \(E-V(x)\) is large. As a result, the total time \(\Delta t=\int dt\) spent by a system in that constant potential is very small. On the contrary, if \(E-V(x)\) is not too large (but \(E\neq V\)), then the total time spent under a fixed \(V(x)\) is relatively longer. The _longer_ a particle generally spends _time_ moving in each given fixed potential, the _slower_ we can say the potential is varying spatially. The latter fact lets us derive **WKB**; but notice how the temporal metric is involved in the physical justification.

One might worry that this imports 'classical bias' about particles into quantum mechanics, but we see essentially the same point from a wave perspective. Note that if the potential spatially varies slowly with respect to the particle's de Broglie wavelength, then its wave-function approximates that of a free particle, i.e., a plane wave. That means the system will propagate freely with a constant velocity \(v\) for a time \(T\). As Allori and Zanghi (2009, 24) note, that time (the time for which we can pretend that \(V\) is effectively constant) satisfies the following relation:

\[T\sim\frac{L}{v} \tag{18}\]

where \(L\) is the scale of variation of the potential. This provides a clear physical picture of what it means to apply **WKB**. If \(L\) is long and \(v\) is low, then the particle is moving slowly through an effectively unchanging \(V\), allowing **WKB** to hold for long times. Conversely, if \(L\) is short and \(v\) high, then the particle rapidly moves (in time!)
through the potential; in these cases we can no longer assume that \(V\) is effectively constant for the system, and **WKB** will not hold for long times. This clearly parallels the classical case discussed earlier. Since \(\lambda=\frac{\hbar}{p}=\frac{\hbar}{mv}\), we can write (18) as:

\[T\sim\frac{Lm\lambda}{\hbar} \tag{19}\]

The time-dependence, evident when talking about velocities/momenta, becomes masked when we replace velocities with notions of wavelengths and spatial variations. Yet the time-dependence is plainly there. From (18) and (19) we can see that if \(L\) is large, the **WKB** approximation will be good for long \(T\), and if \(L\) is small, then only for short \(T\). In standard cases **WKB** is thus justified via a background time metric. In the case of semiclassical time, however, there is no such background time metric, so we again face our challenge to justify the assumption without invoking time.

### Decoherence

The discerning reader might have noticed two sleights of hand in deriving the functional **TDSE** and 't'. First, in using **BO**, we effectively assumed that \(\Psi\) was an eigenstate (3) of the **WD** equation. Since the **WD** equation is linear, general solutions involve a superposition of states. Second, a similar assumption was made in choosing the approximate **WKB** wave-function for the gravitational fields \(\chi(h_{ab})\) in (3). Again, due to linearity, arbitrary superpositions of states are also solutions. These assumptions are absolutely vital for deriving a functional **TDSE**.3 Using an arbitrary superposition of states in the **BO** and **WKB** approximations, the above procedures do _not_ recover a semiclassical time.

Footnote 3: See Kuchar 1992.

The most popular response to these observations appeals to _decoherence_ (Kiefer 2004, 317). The idea is that if the initial state of the universe is in an arbitrary superposition of states, then decoherence will drive the wavefunction into a superposition of effectively non-interacting components, each one of which is suitable for the semiclassical time recovery. In an Everett-type interpretation of quantum mechanics, for instance, we could recover a time in each decohered branch or world. Our worry is especially clear here because decoherence is normally understood as a dynamical process. It presumes temporal evolution by the Schrodinger equation. Decoherence at once requires time and is required for time. Indeed, one finds tension in Kiefer's own account. On the one hand, he writes that "A prerequisite [of decoherence in the semiclassical time program] is the validity of the semiclassical approximation... This brings an approximate time parameter t into play." (Kiefer 2004, 311) But later he writes that "Since [decoherence] is a prerequisite for the derivation of the Schrodinger equation, one might even say that time (the WKB time parameter in the Schrodinger equation) arises from symmetry breaking [i.e., decoherence]... Strictly speaking, the very concept of time makes sense only after decoherence has occurred." (Kiefer 2004, 318) Obviously, the two claims cannot be true at once, and again, we face our dilemma.

## 4 Discussion

Our investigations into three approximations integral to the semiclassical time approach have unearthed a general worry: we seemingly need to put time _in_, somewhere and somehow, in order to get time _out_ of the timeless formalism. This worry hasn't been noticed before, we suspect, because time is not blatantly assumed in derivations.
It appears implicitly via the justifications for the assumptions, not explicitly in the math. Note that we haven't shown the impossibility of answering our challenge. If we could make sense of an atemporal observer, perhaps we could find a measurement standard that makes the terms ignored in **BO**, decoherence and **WKB** small in some relevant sense. Absent such observers, we observe that there is very little to work with in canonical quantum gravity to help us.

This point becomes clearer by comparing our objection to a similar one leveled against decision-theoretic attempts to derive Born's rule in Everettian quantum mechanics. As is well-known, the Everettian interpretation faces a problem in making sense of quantum mechanical probabilities. Its law consists only of a linear deterministic wave equation. Therefore it produces only trivial probabilities (0, 1) for any outcome. Born's Rule, our guide to experiment, seems unexplained. In response, some Everettians have turned to decision theory, trying to prove that rational Everettian agents will set their preferences in accordance with Born's Rule. Controversy ensues about whether the assumptions used in the proofs are really requirements of rationality. But another line of criticism will immediately sound familiar. Baker (2007), Kent (2010) and Zurek (2005) point out that Everettians use decoherence to say that different "worlds" approximately emerge from the wave-function. What does "approximately" mean here? Well, it seems to mean that a branching structure is likely to happen: the probability of an error is small according to the Born measure (mod-squared amplitude). Yet the decision theoretic proofs begin with a branching structure. That begs the question, the critics say, for we've assumed that mod-squared amplitude is a probability in our demonstration that mod-squared amplitude is a probability.

Structurally this objection is similar to ours. Can any replies there be transferred to our case? The only Everettian response we found is Wallace (2012, 253-4). Wallace argues that the branching structure "really is robustly present" even prior to interpreting mod-squared amplitude as probability. What standard makes it present? His answer: Hilbert space norm. This is an objective physical measure. If branching emerges approximately with respect to Hilbert norm, then the probability measure is not needed as an assumption in deriving Born's rule. One could justifiably ask whether Hilbert space norm is enough to answer the objection. Small differences in Hilbert space norm may not be small differences for an observer, or vice versa. From color science we know that similar-looking colors (with small phenomenological distance) might be produced by physically dissimilar properties. Hilbert space norm might not be enough to fully answer the charge.

However that debate goes, we lack anything like Hilbert space norm in the present case. The space of spatial three-metrics has a geometry given by the DeWitt metric. But this metric won't say how far quantum states are from one another. What we need, comparable to the Hilbert norm, is an invariant positive-definite inner product on **WD**'s solution space. Here we're right back to time! "Invariant" means the inner product is independent of time. Constructing an invariant positive-definite inner product on **WD**'s solution space is the notorious "Hilbert space problem" (Kuchar 1992). While the Schrodinger equation provides a conserved inner product "for free", **WD** doesn't.
The most natural way to solve the Hilbert space problem is to identify a time variable and construct a norm from that; but in this context that won't help. Again, we don't want to say that there is no way to warrant the approximations. But we have argued that the most natural warrant appears temporal. We see no reason to think the introduction of observers will change that verdict.

## 5 Conclusion

We started with the idea that the world was fundamentally timeless: semiclassical time arises from certain regimes looking temporal when we blur our vision. That metaphor turns out to be not quite right, as it neglects that we've imported a mathematical construct, the Hamilton-Jacobi structure, onto the basic physics. Only within that structure does time seemingly emerge. Instead of blurry vision making a pattern of correlations in the wave-functions look temporal, what's happened is that we're being offered "time glasses." We are told we're justified in using these glasses (this mathematical construct), and when we look through them, they turn the pattern temporal. Are we justified in wearing "time glasses"? It seems the only reason to wear them is when one already has time.
2304.01540
Gonosomal algebras and associated discrete-time dynamical systems
In this paper we study the discrete-time dynamical systems associated with gonosomal algebras, used as an algebraic model in the inheritance of sex-linked genes. We show that the class of gonosomal algebras is disjoint from the other classes of non-associative algebras usually studied (Lie, alternative, Jordan, power-associative). To each gonosomal algebra, with the mapping $x\mapsto\frac{1}{2}x^{2}$, an evolution operator $W$ is associated that gives the state of the offspring population at the birth stage; from $W$ we then define the operator $V$ which gives the frequency distribution of genetic types. We study the discrete-time dynamical systems generated by these two operators; in particular, we show that the various stability notions of the equilibrium points are preserved in passing from $W$ to $V$. Moreover, for the evolution operators associated with genetic disorders in the case of a diallelic gonosomal lethal gene, we give a complete analysis of the fixed and limit points of the dynamical systems.
U. A. Rozikov, S. K. Shoyimardonov, R. Varro
2023-04-04T05:33:46Z
http://arxiv.org/abs/2304.01540v1
# Gonosomal algebras and associated discrete-time dynamical systems

###### Abstract

In this paper we study the discrete-time dynamical systems associated with gonosomal algebras, used as an algebraic model in the inheritance of sex-linked genes. We show that the class of gonosomal algebras is disjoint from the other classes of non-associative algebras usually studied (Lie, alternative, Jordan, power-associative). To each gonosomal algebra, with the mapping \(x\mapsto\frac{1}{2}x^{2}\), an evolution operator \(W\) is associated that gives the state of the offspring population at the birth stage; from \(W\) we then define the operator \(V\) which gives the frequency distribution of genetic types. We study the discrete-time dynamical systems generated by these two operators; in particular, we show that the various stability notions of the equilibrium points are preserved in passing from \(W\) to \(V\). Moreover, for the evolution operators associated with genetic disorders in the case of a diallelic gonosomal lethal gene, we give a complete analysis of the fixed and limit points of the dynamical systems.

**Mathematics Subject Classifications (2010).** 17D92; 17D99.

**Key words.** Bisexual population, Gonosomal algebra, Quadratic operator, Gonosomal operator, equilibrium point, limit point.

## 1. Introduction

In most bisexual species, sex determination systems are based on sex chromosomes, also called gonosomes (or heterochromosomes, idiochromosomes, heterosomes, allosomes). Gonosomes, unlike autosomes, are not homologous; they are often of different sizes, and in all cases they have two distinct regions:

- the pseudoautosomal region corresponds to homologous regions on the two gonosome types; it carries genes present on the two types of sex chromosomes that are transmitted in the same manner as autosomal genes;

- the differential region carries genes that are present only on one type of gonosome and have no counterpart on the other type; we say that these genes are sex-linked or gonosomal.

The chromosomal dimorphism in gonosomes induces an asymmetry in the transmission of gonosomal genes: for example, for a diallelic gene, three genotypes are observed in one sex and only two in the other, and when an allele is recessive it is always expressed in one sex and in one third of cases in the other. Therefore the inheritance of gonosomal genes is very different from that of autosomal genes.

Population genetics studies the evolution (dynamics) of frequency distributions of genetic types (alleles, genotypes, gene collections, etc.) in successive generations under the action of evolutionary forces.
This study is based on the definition and application of an evolution operator to describe the next generation state knowing that of the previous generation.
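To illustrate the pair of operators described in the abstract, here is a minimal sketch, assuming a finite-dimensional commutative algebra given by structure constants: \(W(x)=\frac{1}{2}x^{2}\) evaluates the square in the algebra, and \(V\) normalizes \(W(x)\) to a frequency distribution. The structure constants below are toy values chosen for illustration, not those of an actual gonosomal algebra.

```python
import numpy as np

def make_W(c):
    """W(x) = (1/2) * x*x, where the commutative product is given by
    structure constants c[i, j, k]: (e_i e_j)_k = c[i, j, k]."""
    def W(x):
        return 0.5 * np.einsum('i,j,ijk->k', x, x, c)
    return W

def make_V(W):
    """V(x): the state W(x) normalized to a frequency distribution."""
    def V(x):
        y = W(x)
        return y / y.sum()
    return V

# Toy 2-type example (illustrative structure constants only).
c = np.zeros((2, 2, 2))
c[0, 0] = [2.0, 0.0]             # e0 * e0 = 2 e0
c[1, 1] = [0.0, 2.0]             # e1 * e1 = 2 e1
c[0, 1] = c[1, 0] = [0.5, 0.5]   # e0 * e1 = (e0 + e1)/2

V = make_V(make_W(c))
x = np.array([0.7, 0.3])
for _ in range(20):
    x = V(x)
print(x)   # state after 20 applications of V; the fixed and limit points of
           # such trajectories are the objects the paper studies
```

Iterating \(V\) rather than \(W\) keeps the state on the simplex of frequencies, which is why stability results relating the equilibria of the two operators, as announced in the abstract, are useful.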
equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given 
system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given 
system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of a system of equations of a system of equations of a given system of equations of a given system of equations of a system of equations of a system of 
equations of a system of equations of a system of equations of a system of equations of a system of equations of a given system of equations of a system of equations of a given system of equations of a system of equations of equations of a given system of equations of a system of equations of a system of equations of equations of a given system of equations of a system of equations of equations of a given system of equations of a system of equations of a system of equations of a system of equations of a system of equations of equations of a given system of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of a system of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of equations of a system of equations of 
If \(N\left(t+1\right)\neq 0\), the frequency of type \(e_{k}\) (resp. \(\widetilde{e}_{r}\)) in generation \(t+1\) is given by:

\[x_{k}^{\left(t+1\right)} = \frac{\sum_{i,j=1}^{n,\nu}\gamma_{ijk}x_{i}^{\left(t\right)}y_{j}^{\left(t\right)}}{\left(\sum_{i=1}^{n}x_{i}^{\left(t\right)}\right)\left(\sum_{j=1}^{\nu}y_{j}^{\left(t\right)}\right)} \tag{2.3}\]

\[\Big{(}\text{resp. }y_{r}^{\left(t+1\right)} = \frac{\sum_{i,j=1}^{n,\nu}\widetilde{\gamma}_{ijr}x_{i}^{\left(t\right)}y_{j}^{\left(t\right)}}{\left(\sum_{i=1}^{n}x_{i}^{\left(t\right)}\right)\left(\sum_{j=1}^{\nu}y_{j}^{\left(t\right)}\right)}\Big{)}. \tag{2.4}\]

Consider the \((n+\nu-1)\)-dimensional simplex

\[S^{n+\nu-1}=\left\{\left(x_{1},\ldots,x_{n};y_{1},\ldots,y_{\nu}\right)\in\mathbb{R}^{n+\nu}:x_{i}\geq 0,\,y_{j}\geq 0,\,\sum_{i=1}^{n}x_{i}+\sum_{j=1}^{\nu}y_{j}=1\right\}.\]

Then equations (2.3)--(2.4) define a discrete-time dynamical system generated by the evolution operator \(W:S^{n+\nu-1}\to S^{n+\nu-1}\) given by (see [11])

\[W:\left\{\begin{aligned} x_{k}^{\prime}&=\frac{\sum_{i,j=1}^{n,\nu}\gamma_{ijk}x_{i}y_{j}}{\left(\sum_{i=1}^{n}x_{i}\right)\left(\sum_{j=1}^{\nu}y_{j}\right)}\\ y_{r}^{\prime}&=\frac{\sum_{i,j=1}^{n,\nu}\widetilde{\gamma}_{ijr}x_{i}y_{j}}{\left(\sum_{i=1}^{n}x_{i}\right)\left(\sum_{j=1}^{\nu}y_{j}\right)}.\end{aligned}\right. \tag{2.5}\]

## 3. Definition and basic properties of gonosomal algebras

There are several algebraic models for studying the inheritance of gonosomal genes. The first was proposed by Etherington [3] for a gonosomal diallelic gene in the \(XY\)-system; it was extended to the diallelic case with mutation in [4] and to the multiallelic case in [5, 14, 15]. The second model is due to Gonshor [6], who introduced the concept of sex-linked duplication. In [7] the authors introduced a more general definition: the evolution algebras of a bisexual population (_EABP_). In [12] we showed that several genetic situations are not representable by _EABP_, which leads to the following definition.

**Definition 1**.: _Given a commutative field \(K\) with characteristic \(\neq 2\), a \(K\)-algebra \(A\) is gonosomal of type \((n,\nu)\) if it admits a basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{j}\right)_{1\leq j\leq\nu}\) such that for all \(1\leq i,j\leq n\) and \(1\leq p,q\leq\nu\) we have:_

\[e_{i}e_{j} = 0,\]
\[\widetilde{e}_{p}\widetilde{e}_{q} = 0,\]
\[e_{i}\widetilde{e}_{p}\;=\;\widetilde{e}_{p}e_{i} = \sum_{k=1}^{n}\gamma_{ipk}e_{k}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\widetilde{e}_{r},\]

_where \(\sum_{k=1}^{n}\gamma_{ipk}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}=1\).
The basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{j}\right)_{1\leq j\leq\nu}\) is called a gonosomal basis of \(A\)._

**Remark 1**.: _For now, we do not need to assume that the structure constants \(\gamma_{ipk}\), \(\widetilde{\gamma}_{ipr}\) are non-negative._

It was shown in [12] that gonosomal algebras can represent algebraically all sex determination systems (\(XY\), \(WZ\), \(X0\), \(Z0\) and \(WXY\)) and a wide variety of genetic phenomena related to sex, such as: temperature-dependent sex determination, sequential hermaphroditism, androgenesis, parthenogenesis, gynogenesis, bacterial conjugation, cytoplasmic inheritance, sex-linked lethal genes, multiple sex chromosome systems, heredity in the \(WXY\)-system, heredity in the \(WZ\)-system with male feminization, the \(XY\)-system with fertile \(XY\)-females, \(X\)-linked sex-ratio distorters, kleptogenesis, genetic processes (mutation, recombination, transposition) influenced by sex, heredity in ciliates, genomic imprinting, \(X\)-inactivation, sex determination by gonosome elimination, sexual reproduction in triploids, polygenic sex determination, and cytoplasmic heredity.

A gonosomal basis of a gonosomal algebra need not be unique, as the following proposition shows.

**Proposition 1**.: _Let \(A\) be a gonosomal algebra with gonosomal basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{p}\right)_{1\leq p\leq\nu}\). Then any basis \(\left(a_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{a}_{p}\right)_{1\leq p\leq\nu}\) with_

\[a_{i}=\sum_{j=1}^{n}\alpha_{ji}e_{j}\text{ and }\widetilde{a}_{p}=\sum_{q=1}^{\nu}\widetilde{\alpha}_{qp}\widetilde{e}_{q},\]

_where \(\sum_{j=1}^{n}\alpha_{ji}=\sum_{q=1}^{\nu}\widetilde{\alpha}_{qp}=1\) for all \(1\leq i\leq n,1\leq p\leq\nu\), is a gonosomal basis of \(A\)._

Proof.: Let \(\left(a_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{a}_{p}\right)_{1\leq p\leq\nu}\) be a basis of the stated form. It is immediate that \(a_{i}a_{j}=\widetilde{a}_{p}\widetilde{a}_{q}=0\). Next, an easy calculation gives

\[a_{i}\widetilde{a}_{p} = \sum_{k=1}^{n}\bigl{(}\sum_{j,q=1}^{n,\nu}\alpha_{ji}\widetilde{\alpha}_{qp}\gamma_{jqk}\bigr{)}e_{k}+\sum_{r=1}^{\nu}\bigl{(}\sum_{j,q=1}^{n,\nu}\alpha_{ji}\widetilde{\alpha}_{qp}\widetilde{\gamma}_{jqr}\bigr{)}\widetilde{e}_{r}\]

where

\[\sum_{k=1}^{n}\bigl{(}\sum_{j,q=1}^{n,\nu}\alpha_{ji}\widetilde{\alpha}_{qp}\gamma_{jqk}\bigr{)}+\sum_{r=1}^{\nu}\bigl{(}\sum_{j,q=1}^{n,\nu}\alpha_{ji}\widetilde{\alpha}_{qp}\widetilde{\gamma}_{jqr}\bigr{)} = \sum_{j,q=1}^{n,\nu}\alpha_{ji}\widetilde{\alpha}_{qp}\bigl{(}\sum_{k=1}^{n}\gamma_{jqk}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{jqr}\bigr{)} = \bigl{(}\sum_{j=1}^{n}\alpha_{ji}\bigr{)}\bigl{(}\sum_{q=1}^{\nu}\widetilde{\alpha}_{qp}\bigr{)}=1,\]

which establishes that the basis \(\left(a_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{a}_{p}\right)_{1\leq p\leq\nu}\) is gonosomal.
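For illustration, here is a minimal example of ours (it is not taken from [12]): for \(n=\nu=1\), the \(\mathbb{R}\)-algebra with basis \(\{e_{1},\widetilde{e}_{1}\}\) and products

\[e_{1}^{2}=\widetilde{e}_{1}^{\,2}=0,\qquad e_{1}\widetilde{e}_{1}=\widetilde{e}_{1}e_{1}=\tfrac{1}{2}e_{1}+\tfrac{1}{2}\widetilde{e}_{1}\]

is gonosomal of type \((1,1)\): here \(\gamma_{111}=\widetilde{\gamma}_{111}=\frac{1}{2}\), so that \(\gamma_{111}+\widetilde{\gamma}_{111}=1\). Genetically it describes a single genetic type in each sex, with offspring produced in a \(1:1\) sex ratio; we return to this example in Section 4.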
**Proposition 2**.: _Any gonosomal algebra of type \((n,\nu)\) is isomorphic to a gonosomal algebra of type \((\nu,n)\)._

Proof.: Let \(A\) be a gonosomal algebra with basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{p}\right)_{1\leq p\leq\nu}\) verifying \(e_{i}\widetilde{e}_{p}=\sum_{k=1}^{n}\gamma_{ipk}e_{k}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\widetilde{e}_{r}\). We consider the algebra \(A^{o}\) with basis \(\left(a_{i}\right)_{1\leq i\leq\nu}\cup\left(\widetilde{a}_{p}\right)_{1\leq p\leq n}\) defined by \(a_{i}\widetilde{a}_{p}=\sum_{k=1}^{\nu}\widetilde{\gamma}_{pik}a_{k}+\sum_{r=1}^{n}\gamma_{pir}\widetilde{a}_{r}\); then the mapping \(\varphi:A\to A^{o}\) defined by \(e_{i}\mapsto\widetilde{a}_{i}\) and \(\widetilde{e}_{p}\mapsto a_{p}\) is an algebra-isomorphism.

**Proposition 3**.: _Let \(A\) be a gonosomal algebra of type \((n,\nu)\); if \(A^{\prime}\) is an algebra isomorphic to \(A\) then \(A^{\prime}\) is gonosomal of type \((n,\nu)\) or \((\nu,n)\)._

Proof.: Let \(A\) be a gonosomal algebra with basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{p}\right)_{1\leq p\leq\nu}\) and \(\varphi:A\to A^{\prime}\) an algebra-isomorphism. Putting \(a_{i}=\varphi\left(e_{i}\right)\) and \(b_{p}=\varphi\left(\widetilde{e}_{p}\right)\), we get \(a_{i}a_{j}=\varphi\left(e_{i}e_{j}\right)=0\), \(b_{p}b_{q}=\varphi\left(\widetilde{e}_{p}\widetilde{e}_{q}\right)=0\) and \(a_{i}b_{p}=\sum_{k=1}^{n}\gamma_{ipk}a_{k}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}b_{r}\); therefore the algebra \(A^{\prime}\) is gonosomal for the basis \(\left(a_{i}\right)_{1\leq i\leq n}\cup\left(b_{p}\right)_{1\leq p\leq\nu}\), and Proposition 2 shows that it may also be viewed as being of type \((\nu,n)\).

In the literature (cf. [13]) an algebra is referred to as a nonassociative algebra in order to emphasize that the associativity relation \(x\left(yz\right)=\left(xy\right)z\) (\(\star\)) is not assumed to hold; if relation (\(\star\)) is not satisfied in an algebra, we say that this algebra is not associative. The best-known classes of nonassociative algebras are:

* Lie algebras, for which \(xy+yx=0\) and \(\left(xy\right)z+\left(yz\right)x+\left(zx\right)y=0\) (Jacobi identity).
* Flexible algebras, for which \(x\left(yx\right)=\left(xy\right)x\).
* Alternative algebras, for which \(x^{2}y=x\left(xy\right)\) and \(yx^{2}=\left(yx\right)x\).
* Jordan algebras, for which \(xy=yx\) and \(x^{2}\left(xy\right)=x\left(x^{2}y\right)\) (Jordan identity).
* Power associative algebras, in which the subalgebra generated by any element \(x\) is associative; this is equivalent to defining \(x^{1}=x\), \(x^{i+1}=xx^{i}\) and requiring \(x^{i+j}=x^{i}x^{j}\) for \(i,j=1,2,\dots\) and any \(x\).

It is known that

* commutative algebras are flexible;
* associative algebras are flexible, alternative, power associative and verify the Jordan identity;
* commutative alternative algebras are Jordan algebras;
* Jordan algebras are power associative.

In [12] an example of a gonosomal algebra is given which is neither associative, nor Lie, nor alternative, nor power associative, nor Jordan. In what follows we sharpen this by showing that gonosomal algebras constitute a new class, disjoint from the other classes of nonassociative algebras.

**Theorem 1**.: _No gonosomal algebra is associative, Lie, alternative, Jordan, or power associative._

Proof.: Let \(A\) be a gonosomal algebra with basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{j}\right)_{1\leq j\leq\nu}\).
For any \(1\leq i,j\leq n\) and \(1\leq p,q\leq\nu\) we have:

\[e_{i}\left(e_{j}\widetilde{e}_{p}\right) = \sum_{k=1}^{n}\Bigl{(}\sum_{r=1}^{\nu}\gamma_{irk}\widetilde{\gamma}_{jpr}\Bigr{)}e_{k}+\sum_{s=1}^{\nu}\Bigl{(}\sum_{r=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{jpr}\Bigr{)}\widetilde{e}_{s} \tag{3.1}\]

\[\left(e_{i}\widetilde{e}_{p}\right)\widetilde{e}_{q} = \sum_{k=1}^{n}\Bigl{(}\sum_{l=1}^{n}\gamma_{ipl}\gamma_{lqk}\Bigr{)}e_{k}+\sum_{r=1}^{\nu}\Bigl{(}\sum_{l=1}^{n}\gamma_{ipl}\widetilde{\gamma}_{lqr}\Bigr{)}\widetilde{e}_{r}. \tag{3.2}\]

Assuming that \(A\) is associative, from \(e_{i}\left(e_{j}\widetilde{e}_{p}\right)=\left(e_{i}e_{j}\right)\widetilde{e}_{p}=0\) and (3.1) we infer that

\[\sum_{r=1}^{\nu}\gamma_{irk}\widetilde{\gamma}_{jpr}=\sum_{r=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{jpr}=0,\quad\left(1\leq i,j,k\leq n,1\leq p,s\leq\nu\right)\]

but we have

\[\sum_{k,r=1}^{n,\nu}\gamma_{irk}\widetilde{\gamma}_{jpr}+\sum_{s,r=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{jpr}=\sum_{r=1}^{\nu}\Bigl{(}\sum_{k=1}^{n}\gamma_{irk}+\sum_{s=1}^{\nu}\widetilde{\gamma}_{irs}\Bigr{)}\widetilde{\gamma}_{jpr}=\sum_{r=1}^{\nu}\widetilde{\gamma}_{jpr}\]

and thus

\[\sum_{r=1}^{\nu}\widetilde{\gamma}_{jpr}=0,\quad(1\leq j\leq n,1\leq p\leq\nu)\,. \tag{3.3}\]

Similarly, with \((e_{i}\widetilde{e}_{p})\,\widetilde{e}_{q}=e_{i}\,(\widetilde{e}_{p}\widetilde{e}_{q})=0\) and (3.2) we get

\[\sum_{l=1}^{n}\gamma_{ipl}\gamma_{lqk}=\sum_{l=1}^{n}\gamma_{ipl}\widetilde{\gamma}_{lqr}=0,\quad(1\leq i,k\leq n,1\leq p,q,r\leq\nu)\,,\]

from which it follows that

\[\sum_{k,l=1}^{n}\gamma_{ipl}\gamma_{lqk}+\sum_{l,r=1}^{n,\nu}\gamma_{ipl}\widetilde{\gamma}_{lqr} = \sum_{l=1}^{n}\gamma_{ipl}\Bigl{(}\sum_{k=1}^{n}\gamma_{lqk}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{lqr}\Bigr{)}=\sum_{l=1}^{n}\gamma_{ipl}\]

thus

\[\sum_{l=1}^{n}\gamma_{ipl}=0\quad(1\leq i\leq n,1\leq p\leq\nu)\,. \tag{3.4}\]

From relations (3.3) and (3.4) we get \(\sum_{l=1}^{n}\gamma_{ipl}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}=0\) for all \(1\leq i\leq n,1\leq p\leq\nu\), hence a contradiction.

Algebra \(A\) is not a Lie algebra: since \(A\) is commutative, if it were also anticommutative we would have \(xy=0\) for any \(x,y\in A\), in other words \(A\) would be a zero-algebra, which is impossible since \(e_{i}\widetilde{e}_{p}\neq 0\) (the coordinates of \(e_{i}\widetilde{e}_{p}\) sum to \(1\)).

If \(A\) is a power associative algebra it satisfies \(x^{2}x^{2}=x^{4}\) for all \(x\in A\).
Let \(x=e_{i}+\widetilde{e}_{p}\) where \(1\leq i\leq n,1\leq p\leq\nu\); we have:

\[x^{2}=2\sum_{k=1}^{n}\gamma_{ipk}e_{k}+2\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\widetilde{e}_{r}.\]

It follows that

\[x^{2}x^{2}=8\sum_{l=1}^{n}\Bigl{(}\sum_{k,r=1}^{n,\nu}\gamma_{ipk}\widetilde{\gamma}_{ipr}\gamma_{krl}\Bigr{)}e_{l}+8\sum_{s=1}^{\nu}\Bigl{(}\sum_{k,r=1}^{n,\nu}\gamma_{ipk}\widetilde{\gamma}_{ipr}\widetilde{\gamma}_{krs}\Bigr{)}\widetilde{e}_{s},\]

while

\[x^{3}=2\sum_{j=1}^{n}\Theta_{j}e_{j}+2\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\widetilde{e}_{u}\]

where we set

\[\Theta_{j}=\sum_{k=1}^{n}\gamma_{ipk}\gamma_{kpj}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\gamma_{irj}\ \ \mbox{and}\ \ \widetilde{\Theta}_{u}=\sum_{k=1}^{n}\gamma_{ipk}\widetilde{\gamma}_{kpu}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\widetilde{\gamma}_{iru} \tag{3.5}\]

and finally we get

\[x^{4}=2\sum_{l=1}^{n}\Bigl{(}\sum_{j=1}^{n}\Theta_{j}\gamma_{jpl}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\gamma_{iul}\Bigr{)}e_{l}+2\sum_{s=1}^{\nu}\Bigl{(}\sum_{j=1}^{n}\Theta_{j}\widetilde{\gamma}_{jps}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\widetilde{\gamma}_{ius}\Bigr{)}\widetilde{e}_{s}.\]

With the above, the relation \(x^{2}x^{2}=x^{4}\) implies

\[4\sum_{k,r=1}^{n,\nu}\gamma_{ipk}\widetilde{\gamma}_{ipr}\gamma_{krl} = \sum_{j=1}^{n}\Theta_{j}\gamma_{jpl}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\gamma_{iul}\]
\[4\sum_{k,r=1}^{n,\nu}\gamma_{ipk}\widetilde{\gamma}_{ipr}\widetilde{\gamma}_{krs} = \sum_{j=1}^{n}\Theta_{j}\widetilde{\gamma}_{jps}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\widetilde{\gamma}_{ius},\]

from which it follows that

\[4\sum_{k,r=1}^{n,\nu}\gamma_{ipk}\widetilde{\gamma}_{ipr} = 4\sum_{k,r=1}^{n,\nu}\gamma_{ipk}\widetilde{\gamma}_{ipr}\Big{(}\sum_{l=1}^{n}\gamma_{krl}+\sum_{s=1}^{\nu}\widetilde{\gamma}_{krs}\Big{)} = \sum_{l=1}^{n}\Bigl{(}\sum_{j=1}^{n}\Theta_{j}\gamma_{jpl}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\gamma_{iul}\Bigr{)}+\sum_{s=1}^{\nu}\Bigl{(}\sum_{j=1}^{n}\Theta_{j}\widetilde{\gamma}_{jps}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\widetilde{\gamma}_{ius}\Bigr{)} = \sum_{j=1}^{n}\Theta_{j}\Bigl{(}\sum_{l=1}^{n}\gamma_{jpl}+\sum_{s=1}^{\nu}\widetilde{\gamma}_{jps}\Bigr{)}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}\Bigl{(}\sum_{l=1}^{n}\gamma_{iul}+\sum_{s=1}^{\nu}\widetilde{\gamma}_{ius}\Bigr{)} = \sum_{j=1}^{n}\Theta_{j}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u}.\]

But from (3.5) we have:

\[\sum_{j=1}^{n}\Theta_{j}+\sum_{u=1}^{\nu}\widetilde{\Theta}_{u} = \sum_{k=1}^{n}\gamma_{ipk}\Bigl{(}\sum_{j=1}^{n}\gamma_{kpj}+\sum_{u=1}^{\nu}\widetilde{\gamma}_{kpu}\Bigr{)}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\Bigl{(}\sum_{j=1}^{n}\gamma_{irj}+\sum_{u=1}^{\nu}\widetilde{\gamma}_{iru}\Bigr{)} = \sum_{k=1}^{n}\gamma_{ipk}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\;=\;1,\]

thus \(\Bigl{(}\sum_{k=1}^{n}\gamma_{ipk}\Bigr{)}\Bigl{(}\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\Bigr{)}=\frac{1}{4}\) and with \(\sum_{k=1}^{n}\gamma_{ipk}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}=1\) we get

\[\sum_{k=1}^{n}\gamma_{ipk}=\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}=\tfrac{1}{2},\quad\left(1\leq i\leq n,1\leq p\leq\nu\right). \tag{3.6}\]

By linearization of \(x^{2}x^{2}=x^{4}\) we get \(4x^{2}\left(xy\right)=x^{3}y+x\left(x^{2}y\right)+2x\left(x\left(xy\right)\right)\) (cf. [13], p. 129); taking \(x=e_{i}\) and \(y=\widetilde{e}_{p}\), so that \(x^{2}=0\), we deduce that \(e_{i}\left(e_{i}\left(e_{i}\widetilde{e}_{p}\right)\right)=0\).
Using (3.1) we get

\[e_{i}\left(e_{i}\left(e_{i}\widetilde{e}_{p}\right)\right) = \sum_{k=1}^{n}\Bigl{(}\sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr}\gamma_{isk}\Bigr{)}e_{k}+\sum_{t=1}^{\nu}\Bigl{(}\sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr}\widetilde{\gamma}_{ist}\Bigr{)}\widetilde{e}_{t},\]

and it follows that

\[\sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr}\gamma_{isk} = \sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr}\widetilde{\gamma}_{ist}\;=\;0,\quad\left(1\leq i,k\leq n,1\leq p,t\leq\nu\right);\]

therefore for all \(1\leq i\leq n,1\leq p\leq\nu\) we have

\[\sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr} = \sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr}\Bigl{(}\sum_{k=1}^{n}\gamma_{isk}+\sum_{t=1}^{\nu}\widetilde{\gamma}_{ist}\Bigr{)}\ =\ 0.\]

But from (3.6) we have:

\[\sum_{r,s=1}^{\nu}\widetilde{\gamma}_{irs}\widetilde{\gamma}_{ipr}=\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}\sum_{s=1}^{\nu}\widetilde{\gamma}_{irs}=\tfrac{1}{2}\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}=\tfrac{1}{4},\]

and so the assumption that \(A\) is power associative leads to a contradiction.

**Proposition 4**.: _Gonosomal algebras do not satisfy the Jacobi identity._

Proof.: Let \(A\) be a gonosomal algebra with basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{j}\right)_{1\leq j\leq\nu}\) verifying the Jacobi identity. Applying the Jacobi identity to the triples \(\left(e_{i},e_{i},\widetilde{e}_{p}\right)\) and \(\left(\widetilde{e}_{p},\widetilde{e}_{p},e_{i}\right)\) we get \(2e_{i}\left(e_{i}\widetilde{e}_{p}\right)=0\) and \(2\widetilde{e}_{p}\left(\widetilde{e}_{p}e_{i}\right)=0\); as in the proof that a gonosomal algebra is not associative, these relations give \(\sum_{r=1}^{\nu}\widetilde{\gamma}_{ipr}=\sum_{k=1}^{n}\gamma_{ipk}=0\), hence a contradiction.

## 4. From gonosomal algebras to normalized gonosomal evolution operators

Now we use Definition 1 with \(K=\mathbb{R}\). In this section we associate two evolution operators with each gonosomal \(\mathbb{R}\)-algebra. Starting from a gonosomal \(\mathbb{R}\)-algebra \(A\), we define the mapping

\[\begin{array}{ccc}W:&A&\rightarrow&A\\ &z&\mapsto&\tfrac{1}{2}z^{2}.\end{array} \tag{4.1}\]

In particular, if \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{j}\right)_{1\leq j\leq\nu}\) is a gonosomal basis of \(A\), for

\[z^{(t)}=W^{t}\left(z\right)=\sum_{i=1}^{n}x_{i}^{\left(t\right)}e_{i}+\sum_{p=1}^{\nu}y_{p}^{\left(t\right)}\widetilde{e}_{p}\]

we find:

\[z^{(t+1)}=W\bigl{(}z^{(t)}\bigr{)} = \sum_{k=1}^{n}\sum_{i,p=1}^{n,\nu}\gamma_{ipk}x_{i}^{\left(t\right)}y_{p}^{\left(t\right)}e_{k}+\sum_{r=1}^{\nu}\sum_{i,p=1}^{n,\nu}\widetilde{\gamma}_{ipr}x_{i}^{\left(t\right)}y_{p}^{\left(t\right)}\widetilde{e}_{r}. \tag{4.2}\]

We notice that the components of the operator \(W\) correspond to the proportions obtained in (2.1). Note also in passing the difference between the gonosomal operator and the evolution operator associated with an autosomal genetic type, which is defined by \(z\mapsto z^{2}\) (cf. [8], p. 15 and [16], p. 7). For a given \(z=\left(x,y\right)\in\mathbb{R}^{n}\times\mathbb{R}^{\nu}\) the dynamical system generated by \(W\) is the sequence \(z\), \(W\left(z\right)\), \(W^{2}\left(z\right)\), \(W^{3}\left(z\right)\), ....
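To make the iteration of \(W\) concrete, the following is a small numerical sketch of ours (it assumes NumPy, and the helper name `make_W` is our own, not part of the model): it implements \(z\mapsto\frac{1}{2}z^{2}\) in the coordinates of (4.2) and iterates it on the type-\((1,1)\) example of Section 3.

```python
import numpy as np

def make_W(gamma, gamma_t):
    """Gonosomal evolution operator z -> (1/2) z^2 of (4.1)-(4.2).

    gamma   : array of shape (n, nu, n),  entries gamma_{ipk}
    gamma_t : array of shape (n, nu, nu), entries tilde(gamma)_{ipr}
    A state z is split into its female part x and its male part y.
    """
    def W(x, y):
        xy = np.outer(x, y)  # the products x_i * y_p
        # x_k' = sum_{i,p} gamma_{ipk} x_i y_p, and similarly for y_r'
        return (np.einsum('ip,ipk->k', xy, gamma),
                np.einsum('ip,ipr->r', xy, gamma_t))
    return W

# The type-(1,1) example of Section 3: e1*e1~ = (1/2) e1 + (1/2) e1~.
W = make_W(np.array([[[0.5]]]), np.array([[[0.5]]]))

# e1 + e1~ is an idempotent, so 2 e1 + 2 e1~, i.e. (x, y) = (2, 2),
# is a fixed point of W (cf. Proposition 5 below).
x, y = np.array([2.0]), np.array([2.0])
for t in range(3):
    x, y = W(x, y)
print(x, y)  # [2.] [2.]
```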
Recall that the quadratic evolution operator \(W\), called the gonosomal evolution operator, is given in coordinate form by:

\[W:\mathbb{R}^{n+\nu}\rightarrow\mathbb{R}^{n+\nu},\left(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu}\right)\mapsto\left(x_{1}^{\prime},\ldots,x_{n}^{\prime},y_{1}^{\prime},\ldots,y_{\nu}^{\prime}\right)\]

\[W:\left\{\begin{aligned} x_{k}^{\prime}&=\sum_{i,j=1}^{n,\nu}\gamma_{ijk}x_{i}y_{j},\quad k=1,\dots,n\\ y_{r}^{\prime}&=\sum_{i,j=1}^{n,\nu}\widetilde{\gamma}_{ijr}x_{i}y_{j},\quad r=1,\dots,\nu,\end{aligned}\right. \tag{4.3}\]

where

\[\sum_{k=1}^{n}\gamma_{ijk}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ijr}=1,\quad 1\leq i\leq n,1\leq j\leq\nu. \tag{4.4}\]

Conversely, it is clear that any operator of the form (4.3) verifying (4.4) is associated to a gonosomal algebra.

An element \(z^{*}\in\mathbb{R}^{n+\nu}\) is an equilibrium point of the dynamical system (4.3) if for all \(t\geq 1\) we have \(W^{t}\left(z^{*}\right)=z^{*}\). It follows from the equivalence \(W^{t}\left(z^{*}\right)=z^{*},\forall t\geq 1\Leftrightarrow W\left(z^{*}\right)=z^{*}\) that \(z^{*}\) is an equilibrium point if and only if \(z^{*}\) is a fixed point of \(W\). From the definition of \(W\) we immediately deduce the following result.

**Proposition 5**.: _There is a one-to-one correspondence between the idempotents of the gonosomal algebra \(A\) and the fixed points of the gonosomal operator \(W\) associated with \(A\)._

Proof.: Indeed, if \(e\in A\) is an idempotent, we have \(W\left(2e\right)=2e\), i.e. \(2e\) is a fixed point of \(W\). And if \(z^{*}\in\mathbb{R}^{n+\nu}\) is a fixed point of \(W\), we get \(\left(\frac{1}{2}z^{*}\right)^{2}=\frac{1}{4}\left(z^{*}\right)^{2}=\frac{1}{2}W\left(z^{*}\right)=\frac{1}{2}z^{*}\); thus the element \(\frac{1}{2}z^{*}\) is an idempotent of \(A\).

Using the definition given by (4.1) we get the following result:

**Proposition 6**.: _Let \(\varphi:A_{1}\to A_{2}\) be an isomorphism between two gonosomal algebras \(A_{1}\) and \(A_{2}\); then the gonosomal operators \(W_{1}:A_{1}\to A_{1}\) and \(W_{2}:A_{2}\to A_{2}\) verify \(\varphi\circ W_{1}=W_{2}\circ\varphi\)._

Proof.: Indeed, for all \(x\in A_{1}\) we have \(\varphi\circ W_{1}\left(x\right)=\varphi\left(\frac{1}{2}x^{2}\right)=\frac{1}{2}\varphi\left(x\right)^{2}=W_{2}\circ\varphi\left(x\right)\).

This result suggests the following equivalence relation between gonosomal operators:

**Definition 2**.: _Two gonosomal operators \(W_{1}:A_{1}\to A_{1}\) and \(W_{2}:A_{2}\to A_{2}\) are conjugate if and only if there exists an algebra-isomorphism \(\varphi:A_{1}\to A_{2}\) such that \(\varphi\circ W_{1}=W_{2}\circ\varphi\)._

The trajectory of a point \(z^{(0)}\in\mathbb{R}^{n+\nu}\) under the gonosomal operator \(W\) is the sequence of iterates \(\left(z^{(t)}\right)_{t\geq 0}\) defined by \(z^{(t)}=W^{t}\big{(}z^{(0)}\big{)}\), where each point \(z^{(t)}\) corresponds to the state of the population at generation \(t\). If the trajectory of an initial point \(z^{(0)}\) converges, there is a point \(z^{(\infty)}\) such that \(z^{(\infty)}=\lim_{t\to\infty}z^{(t)}\), and by continuity of the operator \(W\) the limit point \(z^{(\infty)}\) is a fixed point of \(W\).

**Proposition 7**.: _If \(W_{1}\), \(W_{2}\) are two conjugate gonosomal operators, there is a one-to-one correspondence between the fixed points and the limit points of these two operators._

Proof.: This is a well-known fact; see, for example, [2]. Here we give a brief proof.
Let \(\varphi:A_{1}\to A_{2}\) be the algebra-isomorphism connecting \(W_{1}\) to \(W_{2}\). If \(z_{1}^{*}\) is a fixed point of \(W_{1}\), from \(\varphi\left(z_{1}^{*}\right)=\varphi\circ W_{1}\left(z_{1}^{*}\right)=W_{2}\circ\varphi\left(z_{1}^{*}\right)\) we get that \(\varphi\left(z_{1}^{*}\right)\) is a fixed point of \(W_{2}\). And if \(z_{1}^{(\infty)}\), \(z_{2}^{(\infty)}\) are limit points for \(W_{1}\) and \(W_{2}\) respectively, by continuity of \(\varphi\) we easily get \(\varphi\big{(}z_{1}^{(\infty)}\big{)}=\Big{(}\varphi\big{(}z_{1}^{(0)}\big{)}\Big{)}^{(\infty)}\) and \(\varphi^{-1}\big{(}z_{2}^{(\infty)}\big{)}=\Big{(}\varphi^{-1}\big{(}z_{2}^{(0)}\big{)}\Big{)}^{(\infty)}\).

To every gonosomal algebra \(A\) is canonically attached the linear form:

\[\varpi:A\to\mathbb{R},\quad\varpi\left(e_{i}\right)=\varpi\left(\widetilde{e}_{j}\right)=1. \tag{4.5}\]

Applying \(\varpi\) to (4.2) we find

\[\varpi\big{(}z^{(t+1)}\big{)}=\sum_{i=1}^{n}x_{i}^{(t+1)}+\sum_{j=1}^{\nu}y_{j}^{(t+1)}=\big{(}\sum_{i=1}^{n}x_{i}^{(t)}\big{)}\big{(}\sum_{j=1}^{\nu}y_{j}^{(t)}\big{)} \tag{4.6}\]

which corresponds to the relation (2.2). On the fixed points of \(W\) with non-negative components we have:

**Proposition 8**.: _If \(z^{*}\in\mathbb{R}_{+}^{n+\nu}\), \(z^{*}\neq 0\) is a fixed point of \(W\) then \(\varpi\left(z^{*}\right)\geq 4\)._

Proof.: Let \(z^{*}=\left(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu}\right)\) be a fixed point of \(W\), with \(x_{k},y_{r}\geq 0\). Applying \(\varpi\) to \(W\left(z^{*}\right)=z^{*}\) gives \(\left(\sum_{k}x_{k}\right)\left(\sum_{r}y_{r}\right)=\sum_{k}x_{k}+\sum_{r}y_{r}=\varpi\left(z^{*}\right)\), so that \(\sum_{k}x_{k}\) and \(\sum_{r}y_{r}\) are positive real roots of the polynomial \(X^{2}-\varpi\left(z^{*}\right)X+\varpi\left(z^{*}\right)\). Its discriminant \(\varpi\left(z^{*}\right)\left(\varpi\left(z^{*}\right)-4\right)\) must therefore be non-negative, and since \(\varpi\left(z^{*}\right)>0\) this forces \(\varpi\left(z^{*}\right)\geq 4\).

For applications in genetics we restrict to the simplex of \(\mathbb{R}^{n+\nu}\):

\[S^{\,n+\nu-1}=\left\{\left(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu}\right)\in\mathbb{R}^{n+\nu}:x_{i}\geq 0,y_{j}\geq 0,\sum_{i=1}^{n}x_{i}+\sum_{j=1}^{\nu}y_{j}=1\right\};\]

this simplex is associated with frequency distributions of the genetic types \(e_{i}\) and \(\widetilde{e}_{j}\). But the gonosomal operator \(W\) does not preserve the simplex \(S^{\,n+\nu-1}\); indeed:

**Proposition 9**.: _Let \(A\) be a gonosomal \(\mathbb{R}\)-algebra of type \(\left(n,\nu\right)\); we have:_

_a) \(W\left(\mathbb{R}_{+}^{n+\nu}\right)\subset\mathbb{R}_{+}^{n+\nu}\) if and only if \(\gamma_{ijk}\geq 0\) and \(\widetilde{\gamma}_{ijr}\geq 0\) for all \(1\leq i,k\leq n\) and \(1\leq j,r\leq\nu\)._

_b) \(\varpi\circ W\left(z\right)\leq\frac{1}{4}\) for all \(z\in S^{\,n+\nu-1}\)._

Proof.: For _a)_ the sufficient condition is immediate. For the necessary condition it suffices to note that \(W\left(e_{i}+\widetilde{e}_{j}\right)=\sum_{k=1}^{n}\gamma_{ijk}e_{k}+\sum_{r=1}^{\nu}\widetilde{\gamma}_{ijr}\widetilde{e}_{r}\) for every \(1\leq i\leq n\) and \(1\leq j\leq\nu\). Result _b)_ follows from (4.6) and the well-known inequality \(4ab\leq\left(a+b\right)^{2}\).

This leads to the following definition.
**Definition 3**.: _We say that an \(\mathbb{R}\)-algebra \(A\) is a gonosomal stochastic algebra of type \(\left(n,\nu\right)\) if it satisfies Definition 1 with \(K=\mathbb{R}\) and \(\gamma_{ipk}\geq 0\), \(\widetilde{\gamma}_{ipr}\geq 0\) for all \(1\leq i,k\leq n\) and \(1\leq p,r\leq\nu\)._

In a gonosomal stochastic algebra with basis \(\left(e_{i}\right)_{1\leq i\leq n}\cup\left(\widetilde{e}_{p}\right)_{1\leq p\leq\nu}\), the elements of \(\left(e_{i}\right)_{1\leq i\leq n}\) (resp. \(\left(\widetilde{e}_{p}\right)_{1\leq p\leq\nu}\)) represent the genetic types observed in females (resp. in males), and the structure constants \(\gamma_{ipk}\) (resp. \(\widetilde{\gamma}_{ipr}\)) are the inheritance coefficients, that is to say the probability that a female (resp. male) offspring is of type \(e_{k}\) (resp. \(\widetilde{e}_{r}\)) when the parental pair is a female of type \(e_{i}\) and a male of type \(\widetilde{e}_{p}\).

**Proposition 10**.: _Let \(A\) be a gonosomal stochastic algebra of type \((n,\nu)\) and \(z\in\mathbb{R}_{+}^{n+\nu}\)._

_a) If \(\varpi\left(z\right)=0\) then \(z=0\)._

_For all \(t\geq 1\) we denote \(z^{(t)}=W^{t}\big{(}z\big{)}\); then we have:_

_b) If \(\varpi\big{(}z\big{)}\leq 4\), the sequence \(\big{(}\varpi\big{(}z^{(t)}\big{)}\big{)}_{t\geq 0}\) is decreasing._

_c) For \(t\geq 0\),_

\[\Big{(}\min_{i,j}\Bigl{\{}\sqrt{\gamma_{ij}\widetilde{\gamma}_{ij}}\Bigr{\}}\Big{)}^{2\big{(}2^{t}-1\big{)}}\,\big{(}\varpi\big{(}z\big{)}\big{)}^{2^{t}}\leq\varpi\big{(}z^{(t)}\big{)}\leq\Bigl{(}\max_{i,j,p,q}\{\gamma_{ij}\widetilde{\gamma}_{pq}\}\Bigr{)}^{2^{t}-1}\,\big{(}\varpi\big{(}z\big{)}\big{)}^{2^{t}}\,,\]

_where we put \(\gamma_{ij}=\sum_{k=1}^{n}\gamma_{ijk}\) and \(\widetilde{\gamma}_{pq}=\sum_{r=1}^{\nu}\widetilde{\gamma}_{pqr}\) for all \(1\leq i,p\leq n\) and \(1\leq j,q\leq\nu\)._

Proof.: _a)_ Immediate. In what follows, for all \(t\geq 0\) we write \(z^{(t)}=\Big{(}x_{1}^{\,(t)},\ldots,x_{n}^{\,(t)},y_{1}^{\,(t)},\ldots,y_{\nu}^{\,(t)}\Big{)}\), where \(z^{(0)}=z\).

_b)_ Using relations (4.3) one shows by induction that \(z^{(t)}\in\mathbb{R}_{+}^{n+\nu}\) for every \(t\geq 0\). From

\[4\Bigl{(}\sum_{k=1}^{n}x_{k}^{\,(t-1)}\Bigr{)}\Bigl{(}\sum_{r=1}^{\nu}y_{r}^{\,(t-1)}\Bigr{)}\leq\Bigl{(}\sum_{k=1}^{n}x_{k}^{\,(t-1)}+\sum_{r=1}^{\nu}y_{r}^{\,(t-1)}\Bigr{)}^{2}\]

we deduce that for all \(t\geq 1\):

\[4\varpi\big{(}z^{(t)}\big{)}\leq\Bigl{(}\varpi\big{(}z^{(t-1)}\big{)}\Bigr{)}^{2}.\qquad(*)\]

From \(0\leq\varpi\left(z\right)\leq 4\) we infer that \(\Big{(}\varpi\big{(}z\big{)}\Big{)}^{2}\leq 4\varpi\big{(}z\big{)}\), and with \((*)\) it follows that \(\varpi\big{(}z^{(1)}\big{)}\leq\varpi\big{(}z\big{)}\leq 4\); the result is then obtained from \((*)\) by induction.
_c)_ Indeed, from (4.6) we have:

\[\varpi\big{(}z^{(t)}\big{)} = \Big{(}\sum_{k=1}^{n}x_{k}^{\,(t-1)}\Big{)}\Bigl{(}\sum_{r=1}^{\nu}y_{r}^{\,(t-1)}\Bigr{)},\]

which with relations (4.3) is written

\[\varpi\big{(}z^{(t)}\big{)} = \Big{(}\sum_{i,j=1}^{n,\nu}\gamma_{ij}\,x_{i}^{\,(t-2)}y_{j}^{\,(t-2)}\Big{)}\Big{(}\sum_{p,q=1}^{n,\nu}\widetilde{\gamma}_{pq}\,x_{p}^{\,(t-2)}y_{q}^{\,(t-2)}\Big{)} = \sum_{i,p=1}^{n}\sum_{j,q=1}^{\nu}\gamma_{ij}\widetilde{\gamma}_{pq}\,x_{i}^{\,(t-2)}x_{p}^{\,(t-2)}y_{j}^{\,(t-2)}y_{q}^{\,(t-2)}; \tag{4.7}\]

consequently

\[\varpi\big{(}z^{(t)}\big{)}\leq\max_{i,j,p,q}\{\gamma_{ij}\widetilde{\gamma}_{pq}\}\,\Bigl{(}\sum_{i=1}^{n}x_{i}^{\,(t-2)}\Bigr{)}^{2}\Bigl{(}\sum_{j=1}^{\nu}y_{j}^{\,(t-2)}\Bigr{)}^{2}, \tag{4.8}\]

but from (4.6) we have \(\Big{(}\sum_{k=1}^{n}x_{k}^{(t-2)}\Big{)}\Big{(}\sum_{r=1}^{\nu}y_{r}^{(t-2)}\Big{)}=\varpi\left(z^{(t-1)}\right)\) and thus

\[\varpi\big{(}z^{(t)}\big{)} \leq \max_{i,j,p,q}\left\{\gamma_{ij}\widetilde{\gamma}_{pq}\right\}\left(\varpi\big{(}z^{(t-1)}\big{)}\right)^{2};\]

we deduce by induction: \(\varpi\big{(}z^{(t)}\big{)}\leq\Big{(}\max_{i,j,p,q}\left\{\gamma_{ij}\widetilde{\gamma}_{pq}\right\}\Big{)}^{2^{t}-1}\left(\varpi\big{(}z\big{)}\right)^{2^{t}}\).

By exchanging the roles of \((i,j)\) and \((p,q)\) in (4.7) we obtain:

\[\varpi\big{(}z^{(t)}\big{)} = \sum_{i,p=1}^{n}\sum_{j,q=1}^{\nu}\gamma_{pq}\widetilde{\gamma}_{ij}\;x_{i}^{\;(t-2)}x_{p}^{\;(t-2)}y_{j}^{\;(t-2)}y_{q}^{\;(t-2)},\]

hence

\[\varpi\big{(}z^{(t)}\big{)} = \sum_{i,p=1}^{n}\sum_{j,q=1}^{\nu}\tfrac{1}{2}\left(\gamma_{ij}\widetilde{\gamma}_{pq}+\gamma_{pq}\widetilde{\gamma}_{ij}\right)\;x_{i}^{\;(t-2)}x_{p}^{\;(t-2)}y_{j}^{\;(t-2)}y_{q}^{\;(t-2)};\]

but from \(a+b\geq 2\sqrt{ab}\) it follows that

\[\varpi\big{(}z^{(t)}\big{)} \geq \sum_{i,p=1}^{n}\sum_{j,q=1}^{\nu}\sqrt{\gamma_{ij}\gamma_{pq}\widetilde{\gamma}_{ij}\widetilde{\gamma}_{pq}}\;x_{i}^{\;(t-2)}x_{p}^{\;(t-2)}y_{j}^{\;(t-2)}y_{q}^{\;(t-2)} = \Big{(}\sum_{i,j=1}^{n,\nu}\sqrt{\gamma_{ij}\widetilde{\gamma}_{ij}}\;x_{i}^{\;(t-2)}y_{j}^{\;(t-2)}\Big{)}^{2} \geq \Big{(}\min_{i,j}\Big{\{}\sqrt{\gamma_{ij}\widetilde{\gamma}_{ij}}\Big{\}}\Big{)}^{2}\Big{(}\sum_{i=1}^{n}x_{i}^{\;(t-2)}\Big{)}^{2}\Big{(}\sum_{j=1}^{\nu}y_{j}^{\;(t-2)}\Big{)}^{2};\]

consequently

\[\Big{(}\min_{i,j}\Big{\{}\sqrt{\gamma_{ij}\widetilde{\gamma}_{ij}}\Big{\}}\Big{)}^{2}\Big{(}\varpi\big{(}z^{(t-1)}\big{)}\Big{)}^{2} \leq \varpi\big{(}z^{(t)}\big{)},\]

and we deduce by induction that \(\Big{(}\min_{i,j}\Big{\{}\sqrt{\gamma_{ij}\widetilde{\gamma}_{ij}}\Big{\}}\Big{)}^{2\left(2^{t}-1\right)}\left(\varpi\big{(}z\big{)}\right)^{2^{t}}\leq\varpi\big{(}z^{(t)}\big{)}\).

From (4.8), using (4.6) and \(ab\leq\frac{1}{4}\left(a+b\right)^{2}\), it also follows that

\[\varpi\big{(}z^{(t)}\big{)}\leq\max_{i,j,p,q}\big{\{}\tfrac{1}{16}\gamma_{ij}\widetilde{\gamma}_{pq}\big{\}}\left(\varpi\left(z^{(t-2)}\right)\right)^{4},\]

thus by induction

\[\varpi\big{(}z^{(t)}\big{)}\leq\Big{(}\max_{i,j,p,q}\big{\{}\tfrac{1}{16}\gamma_{ij}\widetilde{\gamma}_{pq}\big{\}}\Big{)}^{\frac{1}{3}\big{(}4^{\lfloor t/2\rfloor}-1\big{)}}\Big{(}\varpi\left(z^{\big{(}t-2\lfloor\frac{t}{2}\rfloor\big{)}}\right)\Big{)}^{4^{\lfloor t/2\rfloor}};\]

we immediately deduce the result when \(t\) is even, and when \(t\) is odd it suffices to note that \(\varpi\big{(}z^{(1)}\big{)}=\left(\sum_{k}x_{k}\right)\left(\sum_{r}y_{r}\right)\leq\frac{1}{4}\left(\varpi\big{(}z\big{)}\right)^{2}\).
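The monotonicity stated in Proposition 10 b) is easy to observe numerically. Below is a small sketch of ours (it assumes NumPy; the randomly drawn structure constants are illustrative placeholders) which builds a random gonosomal stochastic algebra of type \((2,2)\), starts from a state with \(\varpi(z)=4\), and checks that \(\varpi\big{(}z^{(t)}\big{)}\) never increases along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu = 2, 2

# Random non-negative structure constants with, for each pair (i, p),
# sum_k gamma_{ipk} + sum_r tilde(gamma)_{ipr} = 1 (Definition 3).
raw = rng.random((n, nu, n + nu))
raw /= raw.sum(axis=2, keepdims=True)
gamma, gamma_t = raw[:, :, :n], raw[:, :, n:]

# Initial state with varpi(z) = 4, the threshold in Proposition 10 b).
x, y = rng.random(n), rng.random(nu)
s = x.sum() + y.sum()
x, y = 4.0 * x / s, 4.0 * y / s

w_prev = x.sum() + y.sum()
for t in range(8):
    xy = np.outer(x, y)
    x = np.einsum('ip,ipk->k', xy, gamma)    # x_k' of (4.3)
    y = np.einsum('ip,ipr->r', xy, gamma_t)  # y_r' of (4.3)
    w = x.sum() + y.sum()
    assert w <= w_prev + 1e-12               # Proposition 10 b)
    w_prev = w
print(w_prev)  # decays rapidly once varpi(z^(t)) falls strictly below 4
```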
Denote

\[\mathcal{O}^{\;n,\nu}=\left\{(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu})\in\mathbb{R}^{n+\nu}:x_{1}=\cdots=x_{n}=0\text{ or }y_{1}=\cdots=y_{\nu}=0\right\}.\]

It is easy to see that for \(z\in\mathbb{R}_{+}^{n+\nu}\) we have:

\[\varpi\circ W\left(z\right)=\Bigl{(}\sum_{i=1}^{n}x_{i}\Bigr{)}\Bigl{(}\sum_{j=1}^{\nu}y_{j}\Bigr{)}=0\;\Leftrightarrow\;z\in\mathcal{O}\,^{n,\nu}.\]

Therefore if we denote

\[S\,^{n,\nu}=S\,^{n+\nu-1}\setminus\mathcal{O}\,^{n,\nu},\]

then the operator

\[V:S\,^{n,\nu}\to S\,^{n,\nu},\quad z\mapsto\frac{1}{\varpi\circ W\left(z\right)}W\left(z\right)\]

is well defined; it is called the normalized gonosomal operator of \(W\). Using the relations (4.3) we can express the operator \(V\) in coordinate form by:

\[V:\begin{cases}x_{k}^{\prime}=\dfrac{\sum_{i,j=1}^{n,\nu}\gamma_{ijk}x_{i}y_{j}}{\left(\sum_{i=1}^{n}x_{i}\right)\left(\sum_{j=1}^{\nu}y_{j}\right)},\quad k=1,\ldots,n\\ y_{r}^{\prime}=\dfrac{\sum_{i,j=1}^{n,\nu}\widetilde{\gamma}_{ijr}x_{i}y_{j}}{\left(\sum_{i=1}^{n}x_{i}\right)\left(\sum_{j=1}^{\nu}y_{j}\right)},\quad r=1,\ldots,\nu.\end{cases} \tag{4.9}\]

We can notice that the coordinates of the operator \(V\) correspond to the frequency distributions of genetic types obtained in (2.3).

**Proposition 11**.: _Let \(A\) be a gonosomal stochastic algebra of type \(\left(n,\nu\right)\). For all \(z\in S\,^{n,\nu}\) and \(t\geq 1\) we define \(z^{\left(t\right)}=V^{t}\bigl{(}z\bigr{)}=\Bigl{(}x_{1}^{\left(t\right)},\ldots,x_{n}^{\left(t\right)},y_{1}^{\left(t\right)},\ldots,y_{\nu}^{\left(t\right)}\Bigr{)}\); then we have_

\[\min_{i,j}\left\{\gamma_{ijk}\right\}\leq x_{k}^{\left(t\right)}\leq\max_{i,j}\left\{\gamma_{ijk}\right\}\quad\text{and}\quad\min_{i,j}\left\{\widetilde{\gamma}_{ijr}\right\}\leq y_{r}^{\left(t\right)}\leq\max_{i,j}\left\{\widetilde{\gamma}_{ijr}\right\}.\]

Proof.: It is easy to see that for each \(1\leq k\leq n\) and \(1\leq r\leq\nu\) the following inequalities hold:

\[\min_{i,j}\left\{\gamma_{ijk}\right\}\Bigl{(}\sum_{i,j}x_{i}^{\left(t-1\right)}y_{j}^{\left(t-1\right)}\Bigr{)} \leq\sum_{i,j}\gamma_{ijk}x_{i}^{\left(t-1\right)}y_{j}^{\left(t-1\right)}\leq\max_{i,j}\left\{\gamma_{ijk}\right\}\Bigl{(}\sum_{i,j}x_{i}^{\left(t-1\right)}y_{j}^{\left(t-1\right)}\Bigr{)}\]
\[\min_{i,j}\left\{\widetilde{\gamma}_{ijr}\right\}\Bigl{(}\sum_{i,j}x_{i}^{\left(t-1\right)}y_{j}^{\left(t-1\right)}\Bigr{)} \leq\sum_{i,j}\widetilde{\gamma}_{ijr}x_{i}^{\left(t-1\right)}y_{j}^{\left(t-1\right)}\leq\max_{i,j}\left\{\widetilde{\gamma}_{ijr}\right\}\Bigl{(}\sum_{i,j}x_{i}^{\left(t-1\right)}y_{j}^{\left(t-1\right)}\Bigr{)};\]

therefore the result follows using relations (4.9).

We now study the action of an algebra-isomorphism on normalized gonosomal operators.

**Proposition 12**.: _If \(A_{1}\) and \(A_{2}\) are gonosomal stochastic algebras, \(\varpi_{1}\) and \(\varpi_{2}\) the linear forms defined on \(A_{1}\) and \(A_{2}\) as in (4.5), and if \(\varphi:A_{1}\to A_{2}\) is an algebra-isomorphism such that \(\varpi_{2}\circ\varphi=\varpi_{1}\), then we have \(V_{2}=\varphi\circ V_{1}\circ\varphi^{-1}\)._

Proof.: According to Proposition 6 we have \(\varphi\circ W_{1}=W_{2}\circ\varphi\). It is easy to show that for \(z\in\mathbb{R}^{n+\nu}\) we get: \(\varpi_{1}\circ W_{1}\left(z\right)=0\;\Leftrightarrow\;\varpi_{2}\circ W_{2}\left(\varphi(z)\right)=0\).
And for all \(z\in S\,^{n,\nu}\) we get:

\[V_{2}\circ\varphi\left(z\right) = \frac{1}{\varpi_{2}\circ W_{2}\circ\varphi\left(z\right)}W_{2}\circ\varphi\left(z\right)=\frac{1}{\varpi_{2}\circ\varphi\circ W_{1}\left(z\right)}\varphi\circ W_{1}\left(z\right) = \frac{1}{\varpi_{1}\circ W_{1}\left(z\right)}\varphi\circ W_{1}\left(z\right)=\varphi\circ V_{1}\left(z\right).\]

**Proposition 13**.: _In a gonosomal stochastic algebra of type \(\left(n,\nu\right)\):_

_a) If there is \(t_{0}\geq 1\) such that \(W^{t_{0}}\!\left(z\right)=0\) then \(W^{t}\!\left(z\right)=0\) for all \(t\geq t_{0}\)._

_b) If there is \(t\geq 0\) such that \(W^{t}\left(z\right)\in\mathcal{O}\,^{n,\nu}\) then \(W^{t+1}\left(z\right)=0\)._

_c) For \(z\in\mathbb{R}_{+}^{n+\nu}\) and \(t\geq 0\) we have \(W^{t}\left(z\right)\in\mathcal{O}\,^{n,\nu}\ \Leftrightarrow\ \varpi\circ W^{t+1}\left(z\right)=0\)._

_d) For \(z\in\mathbb{R}_{+}^{n+\nu}\), \(z\neq 0\), if \(W^{t}\left(z\right)=0\) then there is \(0\leq t_{0}<t\) such that \(W^{t_{0}}\left(z\right)\neq 0\) and \(W^{t_{0}}\left(z\right)\in\mathcal{O}\,^{n,\nu}\)._

_e) For all \(z\in S\,^{n,\nu}\) and \(t\geq 0\) such that \(\varpi\circ W^{t}\left(z\right)\neq 0\) we have:_

\[V^{t}\left(z\right)=\frac{1}{\varpi\circ W^{t}\left(z\right)}W^{t}\left(z\right).\]

Proof.: _a)_ With \(z^{\left(t\right)}=\left(x_{1}^{\left(t\right)},\ldots,x_{n}^{\left(t\right)},y_{1}^{\left(t\right)},\ldots,y_{\nu}^{\left(t\right)}\right)\), from \(W^{t_{0}}\!\left(z\right)=0\) we have \(x_{i}^{\left(t_{0}\right)}=0\) and \(y_{j}^{\left(t_{0}\right)}=0\), which implies, according to (4.3), \(x_{i}^{\left(t_{0}+1\right)}=0\) and \(y_{j}^{\left(t_{0}+1\right)}=0\); the result follows by induction.

_b)_ For \(W^{t}\left(z\right)=\left(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu}\right)\), if \(x_{k}=0\) for all \(1\leq k\leq n\) or \(y_{r}=0\) for all \(1\leq r\leq\nu\), then from relations (4.3) we get \(x_{k}^{\prime}=0\) and \(y_{r}^{\prime}=0\), and thus \(W^{t+1}\left(z\right)=0\).

_c)_ Necessity follows from _b)_. For the sufficiency, it is enough to see that \(W^{t}\left(z\right)=\left(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu}\right)\) implies \(\varpi\circ W^{t+1}\left(z\right)=\left(\sum_{k=1}^{n}x_{k}\right)\left(\sum_{r=1}^{\nu}y_{r}\right)\); therefore if \(\varpi\circ W^{t+1}\left(z\right)=0\) then \(\sum_{k=1}^{n}x_{k}=0\) or \(\sum_{r=1}^{\nu}y_{r}=0\), and as \(x_{k}\geq 0\), \(y_{r}\geq 0\) for all \(k\) and \(r\) we have \(W^{t}\left(z\right)\in\mathcal{O}\,^{n,\nu}\).

_d)_ Let \(z\neq 0\) and \(t>0\) with \(W^{t}\left(z\right)=0\). Let \(t_{0}\geq 0\) be the smallest integer such that \(W^{t_{0}+1}\left(z\right)=0\); thus \(t_{0}+1\leq t\) and, by minimality, \(W^{t_{0}}\left(z\right)\neq 0\). From \(\varpi\circ W^{t_{0}+1}\left(z\right)=0\) and _c)_ we deduce that \(W^{t_{0}}\left(z\right)\in\mathcal{O}\,^{n,\nu}\).

_e)_ By induction on \(t\geq 0\). For \(t\geq 1\), suppose that \(\varpi\circ W^{t+1}\left(z\right)\neq 0\) and that \(V^{t}\left(z\right)=\frac{1}{\varpi\circ W^{t}\left(z\right)}W^{t}\left(z\right)\); then, since \(W\) is quadratic, we have \(W\!\left(V^{t}\!\left(z\right)\right)=\left(\frac{1}{\varpi\circ W^{t}\left(z\right)}\right)^{2}W^{t+1}\left(z\right)\ \left(\ast\right)\), from which it follows that \(\varpi\circ W\!\left(V^{t}\!\left(z\right)\right)=\left(\frac{1}{\varpi\circ W^{t}\left(z\right)}\right)^{2}\varpi\circ W^{t+1}\left(z\right)\neq 0\ \left(\ast\ast\right)\).
By definition of the operator \(V\) we get

\[V^{t+1}\left(z\right)=V\left(V^{t}\left(z\right)\right)=\frac{1}{\varpi\circ W\left(V^{t}\left(z\right)\right)}W\left(V^{t}\left(z\right)\right),\]

which, together with \(\left(\ast\right)\) and \(\left(\ast\ast\right)\), gives the relation at order \(t+1\).

**Remark 2**.: _From a genetic point of view, result a) means that in a bisexual population, when a sex-linked gonosomal gene disappears it does not reappear. Results b) and c) mean that all individuals of one sex disappear if and only if a gonosomal gene disappears._

There is a relation between the fixed points of the operator \(V\) and some fixed points of \(W\); for this we introduce the following definition: a fixed point \(z=\left(x_{1},\ldots,x_{n},y_{1},\ldots,y_{\nu}\right)\) of the gonosomal operator \(W\) is non-negative and normalizable if it satisfies the conditions \(x_{i},y_{j}\geq 0\) and \(\sum_{i=1}^{n}x_{i}+\sum_{j=1}^{\nu}y_{j}>0\). It has been shown in [11] that:

**Proposition 14**.: _The map \(z^{\ast}\mapsto\frac{1}{\varpi\left(z^{\ast}\right)}z^{\ast}\) is a one-to-one correspondence between the set of non-negative and normalizable fixed points of \(W\) and the set of fixed points of the operator \(V\)._

The various stability notions of the equilibrium points are preserved by passing from \(W\) to the operator \(V\).

**Theorem 2**.: _Let \(z^{*}\) be a non-negative and normalizable fixed point of \(W\)._

_a) If \(z^{*}\) is attractive then \(\frac{1}{\varpi\left(z^{*}\right)}z^{*}\) is an attractive equilibrium point of \(V\)._

_b) If \(z^{*}\) is stable (resp. uniformly stable) then \(\frac{1}{\varpi\left(z^{*}\right)}z^{*}\) is a stable (resp. uniformly stable) equilibrium point of \(V\)._

_c) If \(z^{*}\) is asymptotically stable then the fixed point \(\frac{1}{\varpi\left(z^{*}\right)}z^{*}\) of \(V\) is asymptotically stable._

_d) If \(z^{*}\) is exponentially stable then the fixed point \(\frac{1}{\varpi\left(z^{*}\right)}z^{*}\) of \(V\) is exponentially stable._

Proof.: _a)_ If \(z^{*}\) is an attractive point of \(W\), then there is \(\rho>0\) such that for all \(z\in\mathbb{R}^{n+\nu}\) verifying \(\left\|z-z^{*}\right\|<\rho\) we have \(\lim_{t\rightarrow\infty}W^{t}\left(z\right)=z^{*}\). As \(z^{*}\neq 0\) we get \(\varpi\left(z^{*}\right)\neq 0\), and by continuity of \(\varpi\) we have \(\lim_{t\rightarrow\infty}\varpi\circ W^{t}\left(z\right)=\varpi\left(z^{*}\right)\). Next, for all \(z\) such that \(\lim_{t\rightarrow\infty}W^{t}\left(z\right)=z^{*}\) we get \(W^{t}\left(z\right)\neq 0\) for every \(t\geq 0\); otherwise, according to Proposition 13 _a)_, we would have \(\lim_{t\rightarrow\infty}W^{t}\left(z\right)=0\). We deduce that, in particular, if \(z\in S^{\,n,\nu}\) then \(\varpi\circ W^{t}\left(z\right)\neq 0\). Finally, for any \(z\in S^{\,n,\nu}\) such that \(\left\|z-z^{*}\right\|<\rho\) we get \(\lim_{t\rightarrow\infty}V^{t}\left(z\right)=\lim_{t\rightarrow\infty}\frac{1}{\varpi\circ W^{t}\left(z\right)}W^{t}\left(z\right)=\frac{1}{\varpi\left(z^{*}\right)}z^{*}\).

In the following, \(\mathbb{R}^{n+\nu}\) is equipped with the norm \(\left\|\left(x_{1},\ldots,x_{n+\nu}\right)\right\|=\sum_{i=1}^{n+\nu}\left|x_{i}\right|\); note that for this norm we have \(\left\|z\right\|=\varpi\left(z\right)\) if \(z\in\mathbb{R}_{+}^{n+\nu}\).
_b)_ By definition, the equilibrium point \(z^{*}\) is stable for \(W\) if for all \(t_{0}\geq 0\) and \(\epsilon>0\) there exists \(\delta>0\) such that the condition \(\left\|z-z^{*}\right\|<\delta\) implies \(\left\|W^{t}\left(z\right)-z^{*}\right\|<\epsilon\;\left(t\geq t_{0}\right)\); and \(z^{*}\) is uniformly stable if the existence of \(\delta>0\) does not depend on \(t_{0}\). From Proposition 8 we deduce that \(\varpi\left(z^{*}\right)-2\geq 2\); in what follows we take \(0<\epsilon<\varpi\left(z^{*}\right)-2\). For all \(z\in S^{\,n,\nu}\) we get

\[\left\|V^{t}\left(z\right)-V\left(z^{*}\right)\right\|\leq\left\|\frac{1}{\varpi\circ W^{t}\left(z\right)}W^{t}\left(z\right)-\frac{1}{\varpi\circ W^{t}\left(z\right)}z^{*}\right\|+\left\|\frac{1}{\varpi\circ W^{t}\left(z\right)}z^{*}-\frac{1}{\varpi\left(z^{*}\right)}z^{*}\right\|,\]

that is,

\[\left\|V^{t}\left(z\right)-V\left(z^{*}\right)\right\|\leq\frac{1}{\varpi\circ W^{t}\left(z\right)}\left\|W^{t}\left(z\right)-z^{*}\right\|+\left|\frac{1}{\varpi\circ W^{t}\left(z\right)}-\frac{1}{\varpi\left(z^{*}\right)}\right|\left\|z^{*}\right\|. \tag{4.10}\]

If we write \(W^{t}\left(z\right)=\left(x_{i}^{\left(t\right)}\right)_{1\leq i\leq n+\nu}\) and \(z^{*}=\left(x_{i}^{\ast}\right)_{1\leq i\leq n+\nu}\) we notice that

\[\left|\varpi\circ W^{t}\left(z\right)-\varpi\left(z^{*}\right)\right|\leq\sum_{i=1}^{n+\nu}\lvert x_{i}^{\left(t\right)}-x_{i}^{\ast}\rvert=\left\|W^{t}\left(z\right)-z^{*}\right\|;\]

we deduce that for all \(z\in S^{\,n,\nu}\) such that \(\left\|z-z^{*}\right\|<\delta\) we have \(0<\varpi\left(z^{*}\right)-\epsilon\leq\varpi\circ W^{t}\left(z\right)\). With this and \(\left\|z^{*}\right\|=\varpi\left(z^{*}\right)\), inequality (4.10) becomes

\[\left\|V^{t}\left(z\right)-V\left(z^{*}\right)\right\|\leq\frac{2\epsilon}{\varpi\left(z^{*}\right)-\epsilon}<\epsilon,\]

which proves the result.

_c)_ If \(z^{*}\) is asymptotically stable for \(W\), then by definition \(z^{*}\) is attractive and stable for \(W\); from _a)_ and _b)_ it follows that \(\frac{1}{\varpi\left(z^{*}\right)}z^{*}\) is attractive and stable for \(V\), hence asymptotically stable for \(V\).

_d)_ By definition, the equilibrium point \(z^{*}\) of \(W\) is exponentially stable if for all \(t_{0}\geq 0\) there exist \(\delta>0\), \(M>0\) and \(\eta\in]0,1[\) such that for \(z\in\mathbb{R}^{n+\nu}\):

\[\left\|z-z^{*}\right\|\leq\delta\Rightarrow\left\|W^{t}\left(z\right)-z^{*}\right\|\leq M\eta^{t}\left\|z-z^{*}\right\|,\text{ for all }t\geq t_{0}.\]

Analogously to what was done in _b)_, for all \(z\in S^{\,n,\nu}\) we have the inequality:

\[\left\|V^{t}\left(z\right)-V\left(z^{*}\right)\right\|\leq\tfrac{1}{\varpi\circ W^{t}\left(z\right)}\left\|W^{t}\left(z\right)-z^{*}\right\|+\left|\tfrac{1}{\varpi\circ W^{t}\left(z\right)}-\tfrac{1}{\varpi\left(z^{*}\right)}\right|\left\|z^{*}\right\|. \tag{4.11}\]

As in _b)_ we get \(\left|\varpi\circ W^{t}\left(z\right)-\varpi\left(z^{*}\right)\right|\leq\left\|W^{t}\left(z\right)-z^{*}\right\|\); we deduce that for all \(z\in S^{\,n,\nu}\) verifying \(\left\|z-z^{*}\right\|\leq\delta\) we get \(\varpi\left(z^{*}\right)-M\eta^{t}\left\|z-z^{*}\right\|\leq\varpi\circ W^{t}\left(z\right)\).
But \(\eta\in\left]0,1\right[\), thus there exists \(t_{1}\geq t_{0}\) such that \(4-M\eta^{t}\left\|z-z^{*}\right\|\geq 2\) for \(t\geq t_{1}\); since we saw in Proposition 8 that \(\varpi\left(z^{*}\right)\geq 4\), for all \(z\in S^{\,n,\nu}\) such that \(\left\|z-z^{*}\right\|\leq\delta\) and for every \(t\geq t_{1}\) we have \[2\leq\varpi\left(z^{*}\right)-M\eta^{t}\left\|z-z^{*}\right\|\leq\varpi\circ W ^{t}\left(z\right).\] With this and \(\left\|z^{*}\right\|=\varpi\left(z^{*}\right)\), inequality (4.11) becomes \[\left\|V^{t}\left(z\right)-V\left(z^{*}\right)\right\|\leq\frac{2M\eta^{t} \left\|z-z^{*}\right\|}{\varpi\left(z^{*}\right)-M\eta^{t}\left\|z-z^{*} \right\|}\leq M\eta^{t}\left\|z-z^{*}\right\|,\text{ for all }t\geq t_{1},\] which proves that \(\frac{1}{\varpi\left(z^{*}\right)}z^{*}\) is an exponentially stable point for \(V\).

## 5. Dynamical systems of diallelic gonosomal lethal genetic disorders

A genetic disease is a disease caused by a mutation of a gene; it is gonosomal (resp. autosomal) if the locus of the mutated gene is gonosomal (resp. autosomal or pseudo-autosomal). A genetic disease is said to be dominant or recessive if the mutant allele is dominant or recessive. In the gonosomal case, dominance plays a role only in individuals of the homogametic sex, that is, those carrying two similar gonosomes; individuals of the heterogametic sex carrying the mutant allele will be sick whether the allele is dominant or recessive. Finally, an allele is lethal if it causes the death of a carrier when this allele is dominant and the death of a homozygous individual when this allele is recessive. In what follows we consider a gonosomal diallelic genetic disease with one lethal allele in the \(XY\) sex determination system; according to the dominant or recessive nature of the lethal allele there are six types of gonosomal algebras corresponding to the cases given in the table below:

[Table lost in extraction: the six cases of a diallelic gonosomal lethal gene, classified by the dominant or recessive character of the lethal allele in females and in males.]
Asymptotic behavior of trajectories in the case (♀ lethal dominant, ♂ lethal)

**Proposition 15**.: _The fixed points of the gonosomal operator \(W\) associated with the gonosomal stochastic algebra \(\mathbb{R}\left\langle e,\widetilde{e}\right\rangle\): \(e\widetilde{e}=\gamma e+\left(1-\gamma\right)\widetilde{e}\) are \(\left(0,0\right)\) if \(\gamma=0\) or \(\gamma=1\), and \(\left(0,0\right)\) and \(\left(\frac{1}{1-\gamma},\frac{1}{\gamma}\right)\) if \(\gamma\neq 0,1\)._

Proof.: For \(z\in\mathbb{R}\left\langle e,\widetilde{e}\right\rangle\), \(z=xe+y\widetilde{e}\), the relation \(z=\frac{1}{2}z^{2}\) is equivalent to \[\begin{cases}x&=\ \gamma xy\\ y&=\ (1-\gamma)\,xy\end{cases}\] that is, \[\begin{cases}(1-\gamma y)\,x&=\ 0\\ (1-(1-\gamma)\,x)\,y&=\ 0.\end{cases}\] If \(\gamma=0\) or \(\gamma=1\) we get immediately \((x,y)=(0,0)\). If \(\gamma\neq 0,1\) it is clear that if \(x=0\) then \(y=0\), and if \(x\neq 0\) we deduce from the first equation \(y=\frac{1}{\gamma}\); with this, the second equation gives \(x=\frac{1}{1-\gamma}\).

**Proposition 16**.: _Concerning the operators \(W\), \(V\) associated with the gonosomal stochastic algebra \(\mathbb{R}\left\langle e,\widetilde{e}\right\rangle\): \(e\widetilde{e}=\gamma e+\left(1-\gamma\right)\widetilde{e}\), \(\left(0<\gamma<1\right)\), we have for any initial point \(z^{(0)}=\left(x^{(0)},y^{(0)}\right)\in\mathbb{R}^{2}\):_

\[\lim_{t\to\infty}W^{t}\left(z^{(0)}\right) = \begin{cases}(0,0)&\text{if }\left|x^{(0)}y^{(0)}\right|< \frac{1}{\gamma(1-\gamma)}\\ \left(\frac{1}{1-\gamma},\frac{1}{\gamma}\right)&\text{if }\left|x^{(0)}y^{(0)} \right|=\frac{1}{\gamma(1-\gamma)}\\ +\infty&\text{if }\left|x^{(0)}y^{(0)}\right|>\frac{1}{\gamma(1-\gamma)} \end{cases}\] \[V^{t}\left(z^{(0)}\right) = \left(\gamma,1-\gamma\right),\quad\left(\forall t\geq 1\right).\]

Proof.: Let \(z^{(t)}=W^{t}\big{(}z^{(0)}\big{)}=\big{(}x^{(t)},y^{(t)}\big{)}\). We get \[\begin{cases}x^{(t+1)}&=\ \gamma x^{(t)}y^{(t)}\\ y^{(t+1)}&=\left(1-\gamma\right)x^{(t)}y^{(t)}\end{cases}\] and from this we prove easily that for any \(t\geq 1\) \[x^{(t)}=\frac{1}{1-\gamma}\left[\gamma\left(1-\gamma\right)x^{(0)}y^{(0)} \right]^{2^{t-1}}\text{ and }y^{(t)}=\frac{1}{\gamma}\left[\gamma\left(1-\gamma \right)x^{(0)}y^{(0)}\right]^{2^{t-1}},\] hence \(\varpi\circ W^{t}\left(z^{(0)}\right)=\frac{1}{\gamma(1-\gamma)}\left[\gamma \left(1-\gamma\right)x^{(0)}y^{(0)}\right]^{2^{t-1}}\), and we use result _e)_ of Proposition 13.

**Remark 3**.: _In Theorem 2 the converses of the results are not true in general; indeed, in the result above the fixed point \(\left(\frac{1}{1-\gamma},\frac{1}{\gamma}\right)\) is not stable for \(W\) while its normalization \(\left(\gamma,1-\gamma\right)\) is stable for \(V\)._

**Application**: We consider a diallelic gonosomal gene whose lethal allele is dominant in females and lethal in males. We denote by \(0\leq\mu\leq 1\) the mutation rate of the normal allele to the lethal one in females and by \(0\leq\eta\leq 1\) the analogous rate in males.
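Before continuing the application, a quick numerical illustration of the trichotomy in Proposition 16 (a sketch only; the value of \(\gamma\) and the initial points are arbitrary choices, not from the paper):

```python
import numpy as np

# Sketch of the three regimes of Proposition 16 (illustrative values only).
g = 0.3
c = 1 / (g * (1 - g))          # critical value of |x0 * y0|

def W(z):
    x, y = z
    return np.array([g * x * y, (1 - g) * x * y])

for x0y0, expected in [(0.5 * c, "(0, 0)"),
                       (c, "(1/(1-g), 1/g)"),
                       (2.0 * c, "+infinity")]:
    z = np.array([1.0, x0y0])  # initial point with x0 * y0 = x0y0
    for _ in range(8):
        z = W(z)
    print(f"x0*y0 = {x0y0 / c:.1f}*c  ->  {z}   expected: {expected}")
```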
We assume that in each individual a mutation affects only one gonosome \(X\) at a time; it follows that in gametogenesis we have \(XX\succ(1-\mu)\,X+\mu X^{*}\), \(XY\succ\frac{1-\eta}{2}X+\frac{\eta}{2}X^{*}+\frac{1}{2}Y\), and thus after reproduction \(XX\times XY\succ\frac{1-\eta}{2-\eta}XX+\frac{1}{2-\eta}XY\). According to Proposition 16, in each generation the frequency distribution of the surviving genotypes is stationary, equal to \(\left(\frac{1-\eta}{2-\eta},\frac{1}{2-\eta}\right)\); we notice that it does not depend on the rate \(\mu\) and that the frequency in females is lower than in males.

Asymptotic behavior of trajectories in the case (♀ lethal recessive, ♂ lethal)

In this case, genotypes \(X^{*}X^{*}\) and \(X^{*}Y\) are lethal, thus we observe only the \(XX\), \(XX^{*}\) and \(XY\) types. Let \(A\) be the gonosomal algebra of type \((2,1)\) with basis \((e_{1},e_{2},e)\) defined by \(e_{1}e=\gamma_{1}e_{1}+\gamma_{2}e_{2}+\gamma e\) and \(e_{2}e=\delta_{1}e_{1}+\delta_{2}e_{2}+\delta e\), where \(\gamma_{i},\delta_{i}\geq 0\) and \(\gamma=1-\gamma_{1}-\gamma_{2},\delta=1-\delta_{1}-\delta_{2}\) with \(\gamma,\delta\geq 0\). Let \(W\) be the gonosomal operator associated with the gonosomal algebra defined above. For \(z^{(0)}=\big{(}x_{1}^{(0)},x_{2}^{(0)},y^{(0)}\big{)}\) consider \(z^{(t)}=W^{t}\big{(}z^{(0)}\big{)}\) where \[W:\begin{cases}x_{1}^{\prime}&=\ \big{(}\gamma_{1}x_{1}+\delta_{1}x_{2}\big{)}y \\ x_{2}^{\prime}&=\ \big{(}\gamma_{2}x_{1}+\delta_{2}x_{2}\big{)}y\\ y^{\prime}&=\ \big{(}\gamma x_{1}+\delta x_{2}\big{)}y.\end{cases} \tag{5.1}\]

**Proposition 17**.: _Let \(Fix(W)\) be the set of fixed points of \(W\). In addition to the point \((0,0,0)\), the operator \(W\) has the following fixed points:_

_1) If \(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}=0\),_ \[Fix(W)\ =\ \begin{cases}\left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}} \right),&if\ \ \gamma_{1}\neq 0,\gamma_{1}\neq 1,\delta_{2}=0,\gamma_{2}=0\\ \left(0,\frac{1}{1-\delta_{2}},\frac{1}{\delta_{2}}\right),&if\ \ \gamma_{1}=0, \delta_{2}\neq 0,\delta_{2}\neq 1,\delta_{1}=0\\ \left(\frac{\gamma_{1}}{(\gamma_{1}+\gamma_{2})(1-\gamma_{1}-\delta_{2})}, \frac{\gamma_{2}}{(\gamma_{1}+\gamma_{2})(1-\gamma_{1}-\delta_{2})},\frac{1}{ \gamma_{1}+\delta_{2}}\right),&if\ \ \gamma_{1}\delta_{2}\neq 0,\gamma_{1}+\delta_{2}\neq 1, \gamma_{2}\delta_{1}\neq 0.\end{cases}\]

_2) If \(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}\neq 0\),_ \[Fix(W)\ =\ \begin{cases}\left(\frac{\lambda}{1-\gamma_{1}},\frac{1-\lambda}{1- \gamma_{1}},\frac{1}{\gamma_{1}}\right),\lambda\in\mathbb{R},&if\ \ \gamma_{1}=\delta_{2},\delta_{1}=0,\gamma_{2}=0\\ \left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right),\left(0,\frac{1}{1 -\delta_{2}},\frac{1}{\delta_{2}}\right),&if\ \ \gamma_{1}\neq\delta_{2},\delta_{1}=0,\gamma_{2}=0\\ \left(\frac{\gamma_{1}-\delta_{2}}{(1-\gamma_{1})(\gamma_{1}+\gamma_{2}- \delta_{2})},\frac{\gamma_{2}}{(1-\gamma_{1})(\gamma_{1}+\gamma_{2}-\delta_{2} )},\frac{1}{\gamma_{1}}\right),&if\ \ \delta_{1}=0,\gamma_{2}\neq 0\\ \left(0,\frac{1}{1-\delta_{2}},\frac{1}{\delta_{2}}\right),&if\ \ \delta_{1}=0, \gamma_{2}\neq 0\\ \left(\frac{\delta_{1}}{(1-\delta_{2})(\delta_{1}+\delta_{2}-\gamma_{1})}, \frac{\delta_{2}-\gamma_{1}}{(1-\delta_{2})(\delta_{1}+\delta_{2}-\gamma_{1})}, \frac{1}{\delta_{2}}\right),&if\ \ \delta_{1}\neq 0,\gamma_{2}=0\\ \left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right),&if\ \ \delta_{1}\neq 0, \gamma_{2}=0\\ \left(\frac{\delta_{1}y_{i}}{(\gamma\delta_{1}-\delta\gamma_{1})y_{i}+\delta},
\frac{1-\gamma_{1}y_{i}}{(\gamma\delta_{1}-\delta\gamma_{1})y_{i}+\delta},y_{ i}\right),(i=1,2)&if\ \ \delta_{1}\neq 0,\gamma_{2}\neq 0.\end{cases}\] _where \(y_{1}\) and \(y_{2}\) are the roots of \(\left(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}\right)y^{2}-\left(\gamma_{1}+ \delta_{2}\right)y+1=0\)._

Proof.: Let us find the fixed points of \(W\); for that we must solve the system of equations: \[\begin{cases}x_{1}&=\ \big{(}\gamma_{1}x_{1}+\delta_{1}x_{2}\big{)}y\\ x_{2}&=\ \big{(}\gamma_{2}x_{1}+\delta_{2}x_{2}\big{)}y\\ y&=\ \big{(}\gamma x_{1}+\delta x_{2}\big{)}y\end{cases} \tag{5.2}\] If \(y=0\) we get the fixed point \((0,0,0)\). If \(y\neq 0\) we write the system (5.2) in the form: \[\begin{cases}\left(\gamma_{1}y-1\right)x_{1}+\left(\delta_{1}y\right)x_{2}&=\;0 \\ \left(\gamma_{2}y\right)x_{1}+\left(\delta_{2}y-1\right)x_{2}&=\;0\\ \gamma x_{1}+\delta x_{2}&=\;1\end{cases} \tag{5.3}\] Since \((x_{1},x_{2})\neq(0,0)\) by the third equation, the determinant of the first two equations is necessarily zero, thus \[\left(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}\right)y^{2}-\left(\gamma_{1}+ \delta_{2}\right)y+1=0. \tag{5.4}\] We consider two cases depending on the degree of the equation (5.4).

**Case-1**. If \(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}=0\), from (5.4) we have \(\gamma_{1}+\delta_{2}\neq 0\), otherwise we have the unique fixed point \((0,0,0).\) Hence \(y=\frac{1}{\gamma_{1}+\delta_{2}}\); then from the first and second equations of (5.3) we get \[\begin{cases}\gamma_{2}x_{1}-\gamma_{1}x_{2}&=0\\ \delta_{2}x_{1}-\delta_{1}x_{2}&=0\end{cases} \tag{5.5}\] Using this we get \(\gamma x_{1}=\left(1-\gamma_{1}\right)x_{1}-\gamma_{1}x_{2}\) and \(\delta x_{2}=\left(1-\delta_{2}\right)x_{2}-\delta_{2}x_{1}\), hence \(\gamma x_{1}+\delta x_{2}=\left(1-\gamma_{1}-\delta_{2}\right)\left(x_{1}+x_{ 2}\right)=1\); consequently, if \(\gamma_{1}+\delta_{2}\neq 1\) then \(x_{1}+x_{2}=\frac{1}{1-\gamma_{1}-\delta_{2}}\). Of course, if \(\gamma_{1}+\delta_{2}=1\) then \(\gamma x_{1}+\delta x_{2}=0\) and the system (5.3) does not have any solution except \((0,0,0).\) So we consider the following subcases under the condition \(\gamma_{1}+\delta_{2}\neq 0,1.\)

Case 1.1. If \(\gamma_{1}\neq 0,\gamma_{1}\neq 1,\delta_{2}=0,\gamma_{2}=0\), then from (5.5), taking into account \(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}=0\), we obtain the fixed point \(\left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right);\)

Case 1.2. If \(\gamma_{1}=0,\delta_{2}\neq 0,\delta_{2}\neq 1,\delta_{1}=0\), then, as in the previous case, we obtain the fixed point \(\left(0,\frac{1}{1-\delta_{2}},\frac{1}{\delta_{2}}\right);\)

Case 1.3.
If \(\gamma_{1}\neq 0,\delta_{2}\neq 0,\gamma_{1}+\delta_{2}\neq 1,\gamma_{2}\neq 0, \delta_{1}\neq 0\), then from the first equation of (5.5) we get \(x_{2}=\frac{\gamma_{2}x_{1}}{\gamma_{1}}\), and then using \(x_{1}+x_{2}=\frac{1}{1-\gamma_{1}-\delta_{2}}\) one has \(x_{1}=\frac{\gamma_{1}}{(\gamma_{1}+\gamma_{2})(1-\gamma_{1}-\delta_{2})}\), so \(x_{2}=\frac{\gamma_{2}}{(\gamma_{1}+\gamma_{2})(1-\gamma_{1}-\delta_{2})}.\) Note that since \(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}=0\), i.e. \(\frac{\gamma_{2}}{\gamma_{1}}=\frac{\delta_{2}}{\delta_{1}}\), we can write this fixed point in the equivalent form \(x_{1}=\frac{\delta_{1}}{(\delta_{1}+\delta_{2})(1-\gamma_{1}-\delta_{2})}\) and \(x_{2}=\frac{\delta_{2}}{(\delta_{1}+\delta_{2})(1-\gamma_{1}-\delta_{2})}.\) Note that for the other subcases the system (5.3) has only the trivial solution \((0,0,0).\)

**Case-2.** If \(\gamma_{1}\delta_{2}-\gamma_{2}\delta_{1}\neq 0\), the discriminant of (5.4) is \(\Delta=\left(\gamma_{1}+\delta_{2}\right)^{2}-4\left(\gamma_{1}\delta_{2}- \gamma_{2}\delta_{1}\right)\), that is, \(\Delta=\left(\gamma_{1}-\delta_{2}\right)^{2}+4\gamma_{2}\delta_{1}\geq 0\). Let \(y_{1}\), \(y_{2}\) be the roots of (5.4). If \(\delta_{1}=0\) or \(\gamma_{2}=0\) we have \(\gamma_{1}\delta_{2}\neq 0\) and the roots are \(y_{1}=\frac{1}{\gamma_{1}}\) and \(y_{2}=\frac{1}{\delta_{2}}.\)

Case 2.1. If \(\delta_{1}=\gamma_{2}=0\) and \(\gamma_{1}=\delta_{2}\neq 1\), then \(\gamma=\delta=1-\gamma_{1}\) and (5.3) reduces to \(x_{1}+x_{2}=\frac{1}{1-\gamma_{1}}\), which yields the fixed point \(\left(\frac{\lambda}{1-\gamma_{1}},\frac{1-\lambda}{1-\gamma_{1}},\frac{1}{ \gamma_{1}}\right)\) for any \(\lambda\in\mathbb{R}.\)

Case 2.2. If \(\delta_{1}=\gamma_{2}=0\) and \(\gamma_{1}\neq\delta_{2}\), by using (5.3) we get for the root \(y_{1}\) the solution \(\left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right)\) with \(\gamma_{1}\neq 1\), and for \(y_{2}\) the fixed point \(\left(0,\frac{1}{1-\delta_{2}},\frac{1}{\delta_{2}}\right)\) with \(\delta_{2}\neq 1\).

Case 2.3. If \(\delta_{1}=0\), \(\gamma_{2}\neq 0\) and \(\gamma_{1}=\delta_{2}\neq 1\), then from (5.3) we get \(\left(0,\frac{1}{1-\gamma_{1}},\frac{1}{\gamma_{1}}\right)\).

Case 2.4. If \(\delta_{1}=0\), \(\gamma_{2}\neq 0\) and \(\gamma_{1}\neq\delta_{2}\), for the root \(y_{1}=\frac{1}{\gamma_{1}}\) the system (5.3) is written \[\begin{cases}\gamma_{2}x_{1}+\left(\delta_{2}-\gamma_{1}\right)x_{2}&=\;0\\ \left(1-\gamma_{1}-\gamma_{2}\right)x_{1}+\left(1-\delta_{2}\right)x_{2}&=\;1 \end{cases}\] and this yields the fixed point \(\left(\frac{\gamma_{1}-\delta_{2}}{(1-\gamma_{1})(\gamma_{1}+\gamma_{2}-\delta_{2} )},\frac{\gamma_{2}}{(1-\gamma_{1})(\gamma_{1}+\gamma_{2}-\delta_{2})},\frac{1 }{\gamma_{1}}\right)\) with \(\gamma_{1}\neq 1\), \(\gamma_{1}+\gamma_{2}-\delta_{2}\neq 0\); for \(y_{2}\) we get by (5.3): \(\left(0,\frac{1}{1-\delta_{2}},\frac{1}{\delta_{2}}\right)\) with \(\delta_{2}\neq 1\).

Case 2.5. If \(\delta_{1}\neq 0\), \(\gamma_{2}=0\) and \(\gamma_{1}=\delta_{2}\neq 1\) we get from (5.3) the solution \(\left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right)\).

Case 2.6.
If \(\delta_{1}\neq 0\), \(\gamma_{2}=0\) and \(\gamma_{1}\neq\delta_{2}\), for the root \(y_{1}\) we get \(\left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right)\) with \(\gamma_{1}\neq 1\), and for \(y_{2}\) the system (5.3) becomes \[\begin{cases}\left(\gamma_{1}-\delta_{2}\right)x_{1}+\delta_{1}x_{2}&=\ 0\\ \left(1-\gamma_{1}\right)x_{1}+\left(1-\delta_{1}-\delta_{2}\right)x_{2}&=\ 1 \end{cases}\] which yields the fixed point \(\left(\frac{\delta_{1}}{(1-\delta_{2})(\delta_{1}+\delta_{2}-\gamma_{1})}, \frac{\delta_{2}-\gamma_{1}}{(1-\delta_{2})(\delta_{1}+\delta_{2}-\gamma_{1}) },\frac{1}{\delta_{2}}\right)\) with \(\delta_{2}\neq 1\) and \(\delta_{1}+\delta_{2}-\gamma_{1}\neq 0\).

Case 2.7. If \(\delta_{1}\neq 0\), \(\gamma_{2}\neq 0\) we have \(\Delta>0\); to each root \(y_{i}\) of (5.4) corresponds the fixed point \(\left(\frac{\delta_{1}y_{i}}{(\gamma\delta_{1}-\delta\gamma_{1})y_{i}+\delta}, \frac{1-\gamma_{1}y_{i}}{(\gamma\delta_{1}-\delta\gamma_{1})y_{i}+\delta},y_{i}\right)\).

In the following we consider the dynamical system \(\left(z^{(t)}\right)_{t\geq 0}\) generated by \(W\) for a given initial point \(z^{(0)}=\left(x_{1}^{(0)},x_{2}^{(0)},y^{(0)}\right)\); we have \(z^{(t)}=W^{t}\left(z^{(0)}\right)\) with \(z^{(t)}=\left(x_{1}^{(t)},x_{2}^{(t)},y^{(t)}\right)\). It is clear that if there is \(t_{0}\geq 0\) such that \(y^{(t_{0})}=0\), then by (5.1) we have \(W^{t}\left(z\right)=0\) for all \(t>t_{0}\). From now on we assume that \(y^{(t)}\neq 0\) for all \(t\geq 0\). To study the trajectories \(\left(z^{(t)}\right)\) we consider two cases depending on whether the set \(\mathcal{E}_{z^{(0)}}=\left\{t\in\mathbb{N}:x_{2}^{(t)}=0\right\}\) is infinite or finite.

**Lemma 1**.: _Let \(W\) be the gonosomal operator defined by (5.1) and assume \(y^{(t)}\neq 0\) for all \(t\geq 0\)._

_a) If \(\gamma_{2}=0\), then the following are equivalent:_

_(i) \(\mathcal{E}_{z^{(0)}}\) is infinite; (ii) \(\mathbb{N}^{*}\subset\mathcal{E}_{z^{(0)}}\); (iii) \(x_{2}^{(1)}=0\)._

_b) If \(\gamma_{2}\neq 0\), then the following are equivalent:_

_(i) \(\mathcal{E}_{z^{(0)}}\) is infinite; (ii) \(\mathcal{E}_{z^{(0)}}=2\mathbb{N}\) or \(\mathbb{N}\setminus 2\mathbb{N}\); (iii) \(\begin{cases}x_{1}^{(0)}=0,&x_{2}^{(1)}=x_{2}^{(3)}=0,\\ \text{or}\\ x_{1}^{(1)}=0,&x_{2}^{(0)}=x_{2}^{(2)}=0.\end{cases}\)_

Proof.: \(a)\) If we suppose \(\gamma_{2}=0\), from (5.1) we get: \(x_{2}^{(t+1)}=\delta_{2}x_{2}^{(t)}y^{(t)}\ \ (*)\).

\((i)\Rightarrow(iii)\) Let \(t_{0}\) be the smallest element of \(\mathcal{E}_{z^{(0)}}\); if \(t_{0}=0\) we deduce from \((*)\) that \(x_{2}^{(t)}=0\) for all \(t\geq 0\). If \(t_{0}\geq 1\), from \(0=x_{2}^{(t_{0})}=\delta_{2}x_{2}^{(t_{0}-1)}y^{(t_{0}-1)}\), \(y^{(t_{0}-1)}\neq 0\) and the minimality of \(t_{0}\) we get \(\delta_{2}=0\), and this implies \(x_{2}^{(t)}=0\) for all \(t\geq 1\).

\((iii)\Rightarrow(ii)\) If \(x_{2}^{(1)}=0\) it is clear from \((*)\) that \(x_{2}^{(t)}=0\) for all \(t\geq 1\). \((ii)\Rightarrow(i)\) is trivial.

\(b)\) Suppose now \(\gamma_{2}\neq 0\).
\((i)\Rightarrow(ii)\) Let \(t_{0}\) be the smallest element of \(\mathcal{E}_{z^{(0)}}\); from (5.1) we have \[x_{1}^{(t_{0}+1)}=\gamma_{1}x_{1}^{(t_{0})}y^{(t_{0})},\quad x_{2}^{(t_{0}+1)}= \gamma_{2}x_{1}^{(t_{0})}y^{(t_{0})},\quad y^{(t_{0}+1)}=\gamma x_{1}^{(t_{0})}y ^{(t_{0})},\] and for any \(m\geq 1\) there exist \(a_{m},b_{m},c_{m}\geq 0\) such that \[\begin{cases}x_{1}^{(t_{0}+m+1)}=\gamma^{2^{m-1}}a_{m}\left(x_{1}^{(t_{0})}y^{(t_ {0})}\right)^{2^{m-1}}\\ x_{2}^{(t_{0}+m+1)}=\gamma_{2}\gamma^{2^{m-1}}b_{m}\left(x_{1}^{(t_{0})}y^{(t_ {0})}\right)^{2^{m-1}}\\ y^{(t_{0}+m+1)}=\gamma^{2^{m-1}}c_{m}\left(x_{1}^{(t_{0})}y^{(t_{0})}\right)^{2 ^{m-1}}\end{cases} \tag{5.6}\] with \(a_{1}=\gamma_{1}\), \(b_{1}=1\), \(c_{1}=\gamma\) and \[a_{m+1}=c_{m}\left(\gamma_{1}a_{m}+\delta_{1}\gamma_{2}b_{m}\right),\quad b_{ m+1}=c_{m}\left(a_{m}+\delta_{2}b_{m}\right),\quad c_{m+1}=c_{m}\left(\gamma a _{m}+\delta\gamma_{2}b_{m}\right).\] From \(y^{(t)}\neq 0\) for all \(t\geq 0\) and the third equation of (5.6) we deduce \(\gamma\neq 0\), \(x_{1}^{(t_{0})}\neq 0\) and \(c_{m}\neq 0\) for \(m\geq 1\). As \(\mathcal{E}_{z^{(0)}}\) is infinite, there exists \(m_{0}\geq 3\) such that \(x_{2}^{(t_{0}+m_{0}+1)}=0\), thus we have \(b_{m_{0}}=0\); from the relation giving \(b_{m_{0}}\) it follows that \(a_{m_{0}-1}=\delta_{2}b_{m_{0}-1}=0\quad(*)\), and then \(c_{m_{0}}=\delta\gamma_{2}c_{m_{0}-1}b_{m_{0}-1}\); as \(c_{m_{0}}\neq 0\) we get \(\delta\gamma_{2}b_{m_{0}-1}\neq 0\), and with \((*)\) we get \(\delta_{2}=0\). From \(0=a_{m_{0}-1}=c_{m_{0}-2}\left(\gamma_{1}a_{m_{0}-2}+\delta_{1}\gamma_{2}b_{m _{0}-2}\right)\) we deduce \(\gamma_{1}a_{m_{0}-2}=\delta_{1}\gamma_{2}b_{m_{0}-2}=0\). If we suppose \(\gamma_{1}\neq 0\) then we get \(a_{m_{0}-2}=0\), which leads recursively to the contradiction \(a_{1}=0\). Thus we have \(\gamma_{1}=0\) and from (5.1) we get \[\begin{cases}x_{1}^{(t+1)}&=\ \delta_{1}x_{2}^{(t)}y^{(t)}\\ x_{2}^{(t+1)}&=\ \gamma_{2}x_{1}^{(t)}y^{(t)}\\ y^{(t+1)}&=\ \left(\left(1-\gamma_{2}\right)x_{1}^{(t)}+\left(1-\delta_{1} \right)x_{2}^{(t)}\right)y^{(t)}.\end{cases}\] We must have \(\delta_{1}\neq 0\), otherwise we would have \(x_{1}^{(t)}=0\) for all \(t\geq 1\), hence \(x_{2}^{(t)}=0\) for each \(t\geq 2\) and \(y^{(t)}=0\) for every \(t\geq 3\). Suppose \(t_{0}\geq 2\); from \(x_{2}^{(t_{0})}=0\) we get \(\gamma_{2}x_{1}^{(t_{0}-1)}y^{(t_{0}-1)}=0\), then \(0=x_{1}^{(t_{0}-1)}=\delta_{1}x_{2}^{(t_{0}-2)}y^{(t_{0}-2)}\), hence \(x_{2}^{(t_{0}-2)}=0\), which contradicts the minimality of \(t_{0}\). Therefore \(t_{0}\leq 1\). For \(t_{0}=1\) we get \(0=x_{2}^{(1)}=\gamma_{2}x_{1}^{(0)}y^{(0)}\), hence \(x_{1}^{(0)}=0\); then we get \(x_{2}^{(0)}\neq 0\), otherwise \(y^{(1)}=0\); next \(x_{1}^{(2)}=\delta_{1}x_{2}^{(1)}y^{(1)}=0\), hence \(x_{2}^{(3)}=\gamma_{2}x_{1}^{(2)}y^{(2)}=0\). In the case \(t_{0}=0\), we have \(x_{2}^{(0)}=0\), hence \(x_{1}^{(1)}=0\), then \(x_{2}^{(2)}=0\).

\((iii)\Rightarrow(ii)\) If \(x_{1}^{(0)}=x_{2}^{(1)}=x_{2}^{(3)}=0\), we have \(0=x_{2}^{(1)}=\delta_{2}x_{2}^{(0)}y^{(0)}\); since \(x_{2}^{(0)}\neq 0\) (otherwise \(y^{(1)}=0\)) we get \(\delta_{2}=0\). From this we deduce \(x_{2}^{(2)}=\delta_{1}\gamma_{2}x_{2}^{(0)}y^{(0)}y^{(1)}\) and \(0=x_{2}^{(3)}=\gamma_{1}\delta_{1}\gamma_{2}x_{2}^{(0)}y^{(0)}y^{(1)}y^{(2)}\), hence \(\gamma_{1}\delta_{1}=0\); assuming \(\delta_{1}=0\) we get \(x_{1}^{(1)}=0\) and the contradiction \(y^{(2)}=0\), thus we have \(\delta_{1}\neq 0\) and \(\gamma_{1}=0\).
Finally we have \(x_{2}^{(2t+1)}=\delta_{1}\gamma_{2}x_{2}^{(2t-1)}y^{(2t-1)}y^{(2t)}\) for all \(t\geq 1\), and from \(x_{2}^{(1)}=0\) we get \(\mathcal{E}_{z^{(0)}}=\mathbb{N}\setminus 2\mathbb{N}\). If \(x_{1}^{(1)}=x_{2}^{(0)}=x_{2}^{(2)}=0\), we have \(0=x_{1}^{(1)}=\gamma_{1}x_{1}^{(0)}y^{(0)}\); since \(x_{1}^{(0)}\neq 0\) (otherwise \(y^{(1)}=0\)) we get \(\gamma_{1}=0\). From \(0=x_{2}^{(2)}=\delta_{2}x_{2}^{(1)}y^{(1)}\) and \(x_{2}^{(1)}\neq 0\) we get \(\delta_{2}=0\). Then for all \(t\geq 0\) we have \(x_{2}^{(2t+2)}=\gamma_{2}x_{1}^{(2t+1)}y^{(2t+1)}=\delta_{1}\gamma_{2}x_{2}^{(2t )}y^{(2t)}y^{(2t+1)}\); with this and \(x_{2}^{(0)}=0\) we get \(\mathcal{E}_{z^{(0)}}=2\mathbb{N}\).

**Theorem 3**.: _Given any initial point \(z^{(0)}\in\mathbb{R}^{3}\) such that \(\mathcal{E}_{z^{(0)}}\) is infinite, for the gonosomal operator (5.1) we get:_

_a) if \(\gamma_{2}=0\), then_ \[\lim_{t\to\infty}W^{t}\left(z^{(0)}\right) = \begin{cases}(0,0,0)&\text{if }\left|x_{1}^{(1)}y^{(1)}\right|< \frac{1}{\gamma_{1}(1-\gamma_{1})}\\ \left(\frac{1}{1-\gamma_{1}},0,\frac{1}{\gamma_{1}}\right)&\text{if }\left|x_{1}^{(1)}y^{(1)} \right|=\frac{1}{\gamma_{1}(1-\gamma_{1})}\\ +\infty&\text{if }\left|x_{1}^{(1)}y^{(1)}\right|>\frac{1}{\gamma_{1}(1-\gamma_{1})}. \end{cases}\] \[V^{t+2}\left(z^{(0)}\right) = \left(\gamma_{1},0,1-\gamma_{1}\right),\quad\left(\forall t\geq 0 \right).\]

_b) if \(\gamma_{2}\neq 0\) and_

_case 1: if \(x_{1}^{(0)}=0\), then_ \[\lim_{t\to\infty}W^{t}\left(z^{(0)}\right) =\begin{cases}(0,0,0)&\text{if }\left|x_{2}^{(0)}y^{(0)}\right|< \frac{1}{\sqrt[3]{\gamma_{2}\delta_{1}^{2}\gamma\delta^{2}}}\\ +\infty&\text{if }\left|x_{2}^{(0)}y^{(0)}\right|>\frac{1}{\sqrt[3]{ \gamma_{2}\delta_{1}^{2}\gamma\delta^{2}}}.\end{cases}\]

_if \(\left|x_{2}^{(0)}y^{(0)}\right|=\frac{1}{\sqrt[3]{\gamma_{2}\delta_{1}^{2} \gamma\delta^{2}}}\) then \(\forall t\geq 0\)_ \[W^{2t+1}\left(z^{(0)}\right) =\left(\frac{\delta_{1}}{\sqrt[3]{\gamma_{2}\delta_{1}^{2}\gamma \delta^{2}}},0,\frac{\delta}{\sqrt[3]{\gamma_{2}\delta_{1}^{2}\gamma\delta^{ 2}}}\right)\] \[W^{2t+2}\left(z^{(0)}\right) =\left(0,\frac{\gamma_{2}\delta_{1}\delta}{\sqrt[3]{\left(\gamma_{2} \delta_{1}^{2}\gamma\delta^{2}\right)^{2}}},\frac{\gamma\delta_{1}\delta}{\sqrt[3]{ \left(\gamma_{2}\delta_{1}^{2}\gamma\delta^{2}\right)^{2}}}\right)\] _and for any such \(z^{(0)}\) and \(\forall t\geq 0\)_ \[V^{2t+1}\left(z^{(0)}\right) =\left(\delta_{1},0,1-\delta_{1}\right),\] \[V^{2t+2}\left(z^{(0)}\right) =\left(0,\gamma_{2},1-\gamma_{2}\right).\]

_case 2: if \(x_{2}^{(0)}=0\),_ \[\lim_{t\to\infty}W^{t}\left(z^{(0)}\right) =\begin{cases}(0,0,0)&\text{if }\left|x_{1}^{(0)}y^{(0)}\right|< \frac{1}{\sqrt[3]{\gamma_{2}^{2}\delta_{1}\gamma^{2}\delta}}\\ +\infty&\text{if }\left|x_{1}^{(0)}y^{(0)}\right|>\frac{1}{\sqrt[3]{ \gamma_{2}^{2}\delta_{1}\gamma^{2}\delta}}.\end{cases}\]

_if \(\left|x_{1}^{(0)}y^{(0)}\right|=\frac{1}{\sqrt[3]{\gamma_{2}^{2}\delta_{1} \gamma^{2}\delta}}\) then \(\forall t\geq 0\)_ \[W^{2t+1}\left(z^{(0)}\right) =\left(0,\frac{\gamma_{2}}{\sqrt[3]{\gamma_{2}^{2}\delta_{1} \gamma^{2}\delta}},\frac{\gamma}{\sqrt[3]{\gamma_{2}^{2}\delta_{1}\gamma^{2} \delta}}\right)\] \[W^{2t+2}\left(z^{(0)}\right) =\left(\frac{\delta_{1}\gamma_{2}\gamma}{\sqrt[3]{\left(\gamma_{2}^{2} \delta_{1}\gamma^{2}\delta\right)^{2}}},0,\frac{\delta\gamma_{2}\gamma}{\sqrt[3]{\left(\gamma_{2}^{2 }\delta_{1}\gamma^{2}\delta\right)^{2}}}\right)\] _and for any such \(z^{(0)}\) and \(\forall t\geq 0\) we have_ \[V^{2t+1}\left(z^{(0)}\right) = \left(0,\gamma_{2},1-\gamma_{2}\right),\] \[V^{2t+2}\left(z^{(0)}\right) = \left(\delta_{1},0,1-\delta_{1}\right).\]
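Before the proof, the period-two behaviour of case b) is easy to check numerically. The following sketch (illustrative rates, not taken from the paper; recall that in case b) one has \(\gamma_{1}=\delta_{2}=0\), \(\gamma=1-\gamma_{2}\), \(\delta=1-\delta_{1}\)) iterates (5.1) from a stationary initial point of case 1:

```python
import numpy as np

# Sketch of Theorem 3 b), case 1: iterate (5.1) with gamma_1 = delta_2 = 0
# from x1(0) = 0 and x2(0)*y(0) at the stationary value. The rates
# gamma_2 and delta_1 below are illustrative choices.
g2, d1 = 0.5, 0.25
g, d = 1 - g2, 1 - d1
D = g2 * d1**2 * g * d**2            # gamma_2 * delta_1^2 * gamma * delta^2

def W(z):
    x1, x2, y = z
    return np.array([d1 * x2 * y, g2 * x1 * y, (g * x1 + d * x2) * y])

z = np.array([0.0, 1.0, D ** (-1 / 3)])   # so that x2(0)*y(0) = D^(-1/3)
for t in range(1, 9):
    z = W(z)
    v = z / z.sum()                       # = V^t(z0)
    target = [d1, 0, 1 - d1] if t % 2 else [0, g2, 1 - g2]
    assert np.allclose(v, target)
print("V alternates between (delta_1,0,1-delta_1) and (0,gamma_2,1-gamma_2)")
```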
Proof.: \(a)\) According to Lemma 1 we have \(x_{2}^{(t)}=0\) for all \(t\geq 1\); with this and \(\gamma_{2}=0\), (5.1) becomes, for all \(t\geq 1\), \[\begin{cases}x_{1}^{(t+1)}&=\ \gamma_{1}x_{1}^{(t)}y^{(t)}\\ y^{(t+1)}&=\ (1-\gamma_{1})\,x_{1}^{(t)}y^{(t)}.\end{cases} \tag{5.7}\] We have \(\gamma_{1}\neq 0,1\), otherwise we would have \(y^{(t)}=0\) for \(t\geq 3\). From (5.7) we get \[\begin{cases}x_{1}^{(t+2)}&=\ \gamma_{1}^{2^{t}}\left(1-\gamma_{1}\right)^{2 ^{t}-1}\left(x_{1}^{(1)}y^{(1)}\right)^{2^{t}}\\ y^{(t+2)}&=\ \gamma_{1}^{2^{t}-1}\left(1-\gamma_{1}\right)^{2^{t}}\left(x_{1}^{(1)}y^{( 1)}\right)^{2^{t}},\quad t\geq 0.\end{cases}\] Since \(0<\gamma_{1}<1\) we have \(\lim_{t\to\infty}\gamma_{1}^{2^{t}}\left(1-\gamma_{1}\right)^{2^{t}}=0\), and with \(\varpi\circ W^{t+2}\left(z^{(0)}\right)=\gamma_{1}^{2^{t}-1}\left(1-\gamma_{1} \right)^{2^{t}-1}\left(x_{1}^{(1)}y^{(1)}\right)^{2^{t}}\) we get the results of the theorem.

\(b)\) We saw in the proof of Lemma 1 that in this case we have for all \(t\geq 0\): \[\begin{cases}x_{1}^{(t+1)}&=\ \delta_{1}x_{2}^{(t)}y^{(t)}\\ x_{2}^{(t+1)}&=\ \gamma_{2}x_{1}^{(t)}y^{(t)}\\ y^{(t+1)}&=\ \big{(}\gamma x_{1}^{(t)}+\delta x_{2}^{(t)}\big{)}y^{(t)},\end{cases}\] where \(\gamma=1-\gamma_{2}\) and \(\delta=1-\delta_{1}\).

**Case 1**: \(x_{1}^{(0)}=0\). Then it is clear that \(x_{2}^{(1)}=0\). We have \(x_{2}^{(0)}\neq 0\), otherwise with \(x_{1}^{(0)}=0\) we would get \(y^{(1)}=0\); therefore \(x_{1}^{(1)}=\delta_{1}x_{2}^{(0)}y^{(0)}\neq 0\). We show that \(x_{1}^{(2t)}=0\) and \(x_{2}^{(2t+1)}=0\) for all \(t\geq 0\). Then for all \(t\geq 0\) we get: \[\begin{cases}x_{1}^{(2t+1)}&=\ \delta_{1}x_{2}^{(2t)}y^{(2t)}\\ x_{2}^{(2t+2)}&=\ \gamma_{2}x_{1}^{(2t+1)}y^{(2t+1)}\\ y^{(2t+1)}&=\ \delta x_{2}^{(2t)}y^{(2t)}\\ y^{(2t+2)}&=\ \gamma x_{1}^{(2t+1)}y^{(2t+1)}.\end{cases}\] It follows that \[\begin{cases}x_{1}^{(2t+1)}&=\ \delta_{1}\left[\gamma_{2}\delta_{1}^{2}\gamma \delta^{2}\right]^{\left(4^{t}-1\right)/3}\left(x_{2}^{(0)}y^{(0)}\right)^{4^{t}} \\ x_{2}^{(2t+2)}&=\ \gamma_{2}\delta_{1}\delta\left[\gamma_{2}^{2}\delta_{1}^{4} \gamma^{2}\delta^{4}\right]^{\left(4^{t}-1\right)/3}\left(x_{2}^{(0)}y^{(0)} \right)^{2\times 4^{t}}\\ y^{(2t+1)}&=\ \delta\left[\gamma_{2}\delta_{1}^{2}\gamma\delta^{2}\right]^{ \left(4^{t}-1\right)/3}\left(x_{2}^{(0)}y^{(0)}\right)^{4^{t}}\\ y^{(2t+2)}&=\ \gamma\delta_{1}\delta\left[\gamma_{2}^{2}\delta_{1}^{4} \gamma^{2}\delta^{4}\right]^{\left(4^{t}-1\right)/3}\left(x_{2}^{(0)}y^{(0)} \right)^{2\times 4^{t}}.\end{cases}\] Since \(y^{(t)}\neq 0\) we get \(\gamma_{2}\delta_{1}\gamma\delta\neq 0\), and we can rewrite the last system as: \[\begin{cases}x_{1}^{(2t+1)}&=\frac{\delta_{1}}{\sqrt[3]{\gamma_{2}\delta_{1}^{ 2}\gamma\delta^{2}}}\left(x_{2}^{(0)}y^{(0)}\sqrt[3]{\gamma_{2}\delta_{1}^{2} \gamma\delta^{2}}\right)^{4^{t}}\\ x_{2}^{(2t+2)}&=\frac{\gamma_{2}\delta_{1}\delta}{\sqrt[3]{(\gamma_{2}\delta_{1 }^{2}\gamma\delta^{2})^{2}}}\left(x_{2}^{(0)}y^{(0)}\sqrt[3]{\gamma_{2}\delta_{ 1}^{2}\gamma\delta^{2}}\right)^{2\times 4^{t}}\\ y^{(2t+1)}&=\frac{\delta}{\sqrt[3]{\gamma_{2}\delta_{1}^{2}\gamma\delta^{2}}} \left(x_{2}^{(0)}y^{(0)}\sqrt[3]{\gamma_{2}\delta_{1}^{2}\gamma\delta^{2}} \right)^{4^{t}}\\ y^{(2t+2)}&=\frac{\gamma\delta_{1}\delta}{\sqrt[3]{(\gamma_{2}\delta_{1}^{2} \gamma\delta^{2})^{2}}}\left(x_{2}^{(0)}y^{(0)}\sqrt[3]{\gamma_{2}\delta_{1}^{ 2}\gamma\delta^{2}}\right)^{2\times 4^{t}}.\end{cases}\] Using \(0<\gamma_{2}\delta_{1}\gamma\delta<1\) we get the results of the theorem.
From \[\varpi\circ W^{2t+1}\left(z^{(0)}\right) = \left[\gamma_{2}\delta_{1}^{2}\gamma\delta^{2}\right]^{\left(4^{t }-1\right)/3}\left(x_{2}^{(0)}y^{(0)}\right)^{4^{t}}\] \[\varpi\circ W^{2t+2}\left(z^{(0)}\right) = \delta_{1}\delta\left[\gamma_{2}^{2}\delta_{1}^{4}\gamma^{2} \delta^{4}\right]^{\left(4^{t}-1\right)/3}\left(x_{2}^{(0)}y^{(0)}\right)^{2 \times 4^{t}},\] we deduce the values of \(V^{2t+1}\left(z^{(0)}\right)\) and \(V^{2t+2}\left(z^{(0)}\right)\).

**Case 2**: \(x_{2}^{(0)}=0\). Then we get \(x_{1}^{(1)}=0\). We have \(x_{1}^{(0)}\neq 0\), otherwise with \(x_{2}^{(0)}=0\) we would get \(y^{(1)}=0\); therefore \(x_{2}^{(1)}=\gamma_{2}x_{1}^{(0)}y^{(0)}\neq 0\). Then for all \(t\geq 0\) we get \(x_{1}^{(2t+1)}=0\) and \(x_{2}^{(2t)}=0\) and \[\begin{cases}x_{1}^{(2t+2)}&=\ \delta_{1}x_{2}^{(2t+1)}y^{(2t+1)}\\ x_{2}^{(2t+1)}&=\ \gamma_{2}x_{1}^{(2t)}y^{(2t)}\\ y^{(2t+2)}&=\ \delta x_{2}^{(2t+1)}y^{(2t+1)}\\ y^{(2t+1)}&=\ \gamma x_{1}^{(2t)}y^{(2t)}.\end{cases}\] The results are derived from the previous case by exchanging the roles of \(x_{1}^{(t)}\) and \(x_{2}^{(t)}\), and at the same time \(\gamma_{2}\) with \(\delta_{1}\) and \(\gamma\) with \(\delta\).

**Theorem 4**.: _Given any initial point \(z^{(0)}\in\mathbb{R}^{3}\) such that \(\mathcal{E}_{z^{(0)}}\) is finite, for the gonosomal operator (5.1) we get:_

_(a) if \(\gamma_{1}=\delta_{2}<1\) and \(\gamma_{2}\delta_{1}=0\),_ \[\lim_{t\to\infty}W^{t}\left(z^{(0)}\right) = (0,0,0)\] _and for any \(z^{(0)}\in S^{\,2}\),_ \[\lim_{t\rightarrow+\infty}V^{t}\left(z^{(0)}\right) = \begin{cases}\left(\gamma_{1},0,\gamma\right)&\text{if }\gamma_{2}=0, \delta_{1}\neq 0\\ \left(\frac{\gamma_{1}x_{1}^{(t_{0})}}{x_{1}^{(t_{0})}+x_{2}^{(t_{0})}}, \frac{\delta_{2}x_{2}^{(t_{0})}}{x_{1}^{(t_{0})}+x_{2}^{(t_{0})}},\frac{ \gamma x_{1}^{(t_{0})}+\delta x_{2}^{(t_{0})}}{x_{1}^{(t_{0})}+x_{2}^{(t_{0})} }\right)&\text{if }\gamma_{2}=\delta_{1}=0,\\ \left(0,\delta_{2},\delta\right)&\text{if }\gamma_{2}\neq 0,\delta_{1}=0. \end{cases}\] _where \(t_{0}=\max\left(\mathcal{E}_{z^{(0)}}\right)+1\)._

_(b) if \(\gamma_{1}\neq\delta_{2}\) or \(\gamma_{2}\delta_{1}\neq 0\),_ \[\lim_{t\rightarrow\infty}W^{t}\left(z^{(0)}\right) = (0,0,0)\] _and for any \(z^{(0)}\in S^{\,2}\),_ \[\lim_{t\rightarrow+\infty}V^{t}(z^{(0)}) =\left(\frac{\gamma_{1}+\delta_{1}u(\lambda_{i})}{U(\lambda_{i})},\frac{u(\lambda_{i})(\gamma_{1}+\delta_{1}u(\lambda_{i}))}{U(\lambda_{i})}, \frac{\gamma+\delta u(\lambda_{i})}{U(\lambda_{i})}\right)\] _where \(i=1\) if \(|\lambda_{1}|<|\lambda_{2}|\) and \(i=2\) if \(|\lambda_{1}|>|\lambda_{2}|\), and_ \[\begin{cases}U(\lambda_{i})=\delta_{1}u(\lambda_{i})^{2}+(\delta+\delta_{1}+ \gamma_{1})u(\lambda_{i})+\gamma+\gamma_{1},\\ u(\lambda_{i})=\frac{\gamma_{2}x_{1}^{(t_{0})}+(\delta_{2}-\lambda_{i})x_{2}^{ (t_{0})}}{(\gamma_{1}-\lambda_{i})x_{1}^{(t_{0})}+\delta_{1}x_{2}^{(t_{0})}}, \\ \lambda_{1}=\frac{\gamma_{1}+\delta_{2}-\sqrt{(\gamma_{1}-\delta_{2})^{2}+4 \gamma_{2}\delta_{1}}}{2},\ \ \lambda_{2}=\frac{\gamma_{1}+\delta_{2}+\sqrt{(\gamma_{1}-\delta_{2})^{2}+4 \gamma_{2}\delta_{1}}}{2}.\end{cases}\]

Proof.: Assume now that the set \(\mathcal{E}_{z^{(0)}}\) is finite, and let \(t_{0}=\max\left(\mathcal{E}_{z^{(0)}}\right)+1\). We have \(x_{2}^{(t)}\neq 0\) for all \(t\geq t_{0}\); because \(y^{(t)}\neq 0\) for all \(t\geq 0\), it follows from the second equation of (5.1) that \(\gamma_{2}x_{1}^{(t)}+\delta_{2}x_{2}^{(t)}\neq 0\) for all \(t\geq t_{0}\).
From (5.1) we get: \[\frac{x_{1}^{(t+1)}}{x_{2}^{(t+1)}}=\frac{\gamma_{1}x_{1}^{(t)}+\delta_{1}x_{ 2}^{(t)}}{\gamma_{2}x_{1}^{(t)}+\delta_{2}x_{2}^{(t)}},\ \ \ \forall t\geq t_{0};\] taking \(w^{(t)}=\frac{x_{1}^{(t)}}{x_{2}^{(t)}}\) for \(t\geq t_{0}\), this is written as \(w^{(t+1)}=f\left(w^{(t)}\right)\), where \(f\left(x\right)=\frac{\gamma_{1}x+\delta_{1}}{\gamma_{2}x+\delta_{2}}\). Let \(M=\left(\begin{array}{cc}\gamma_{1}&\delta_{1}\\ \gamma_{2}&\delta_{2}\end{array}\right)\); if \(M^{t}=\left(\begin{array}{cc}a_{t}&b_{t}\\ c_{t}&d_{t}\end{array}\right)\) we verify that \(f^{t}\left(x\right)=\frac{a_{t}x+b_{t}}{c_{t}x+d_{t}}\) for all \(t\geq 0\). The characteristic polynomial of \(M\) is \(\chi_{M}\left(X\right)=X^{2}-\left(\gamma_{1}+\delta_{2}\right)X+\left(\gamma_ {1}\delta_{2}-\gamma_{2}\delta_{1}\right)\); its discriminant is \(\Delta=\left(\gamma_{1}-\delta_{2}\right)^{2}+4\gamma_{2}\delta_{1}\geq 0\). We have \(\Delta=0\) if and only if \(\gamma_{1}=\delta_{2}\) and \(\gamma_{2}\delta_{1}=0\).

(_a_) The case \(\Delta=0\). Let \(\lambda=\gamma_{1}\) be the double root of \(\chi_{M}\); we have \(\gamma_{1}<1\), indeed if \(\gamma_{1}=1\) then \(\gamma=\gamma_{2}=0\) and \(\delta=\delta_{1}=0\), thus \(\gamma=\delta=0\), which leads to the contradiction \(y^{(t_{0}+1)}=0\). Modulo \(\chi_{M}\) we have for all \(t\geq 0\): \(X^{t}\equiv t\lambda^{t-1}X-\left(t-1\right)\lambda^{t}\), hence \(M^{t}=t\lambda^{t-1}M-\left(t-1\right)\lambda^{t}I_{2}\); it follows that for any \(m\geq 1\) we get for \(z^{(t_{0})}\in\mathbb{R}^{3}\): \[x_{1}^{(t_{0}+m)} = \lambda^{m-1}\left[\lambda x_{1}^{(t_{0})}+m\,\delta_{ 1}x_{2}^{(t_{0})}\right]\prod_{k=0}^{m-1}y^{(t_{0}+k)}\] \[x_{2}^{(t_{0}+m)} = \lambda^{m-1}\left[m\,\gamma_{2}x_{1}^{(t_{0})}+ \lambda x_{2}^{(t_{0})}\right]\prod_{k=0}^{m-1}y^{(t_{0}+k)}\] then \[y^{(t_{0}+m)} =y^{(t_{0})} \prod_{k=0}^{m-1}\left(\gamma x_{1}^{(t_{0}+k)}+\delta x_{2}^{(t _{0}+k)}\right).\] With \(\lambda<1\), we get \(\lim_{t\to+\infty}x_{1}^{(t)}=0\) and \(\lim_{t\to+\infty}x_{2}^{(t)}=0\). Concerning \(y^{(t)}\), it is clear that there exists a positive integer \(k_{0}\) such that \(\gamma x_{1}^{(t)}+\delta x_{2}^{(t)}<1\) for all \(t>k_{0}.\) Finally we get \(\lim_{t\to+\infty}y^{(t)}=0.\) For the study of the operator \(V\), let \(z^{(0)}\in S^{\,2}\); we consider two cases.
Case 1: If \(x_{1}^{(t_{0}+m)}\neq 0\) for all \(m\geq 1\), then we get \[\frac{x_{2}^{(t_{0}+m)}}{x_{1}^{(t_{0}+m)}}=\frac{m\,\gamma_{2}x_{1}^ {(t_{0})}+\lambda x_{2}^{(t_{0})}}{\lambda x_{1}^{(t_{0})}+m\,\delta_{ 1}x_{2}^{(t_{0})}}.\] Thus we have \[\lim_{m\to+\infty}\frac{x_{2}^{(t_{0}+m)}}{x_{1}^{(t_{0}+m)}}=\begin{cases}0& \text{if }\gamma_{2}=0,\delta_{1}\neq 0,\\ \frac{x_{2}^{(t_{0})}}{x_{1}^{(t_{0})}}&\text{if }\gamma_{2}=\delta_{1}=0,\\ +\infty&\text{if }\gamma_{2}\neq 0,\delta_{1}=0,\end{cases}\] as well as \[\lim_{t\to+\infty}\frac{y^{(t)}}{x_{1}^{(t)}}=\begin{cases}\frac{\gamma}{ \gamma_{1}}&\text{if }\gamma_{2}=0,\delta_{1}\neq 0\\ \frac{\gamma x_{1}^{(t_{0})}+\delta x_{2}^{(t_{0})}}{\gamma_{1}x_{1}^{(t_{0})} }&\text{if }\gamma_{2}=\delta_{1}=0,\\ +\infty&\text{if }\gamma_{2}\neq 0,\delta_{1}=0.\end{cases}\] and \[\lim_{t\to+\infty}\frac{y^{(t)}}{x_{2}^{(t)}}=\begin{cases}+\infty&\text{if } \gamma_{2}=0,\delta_{1}\neq 0\\ \frac{\gamma x_{1}^{(t_{0})}+\delta x_{2}^{(t_{0})}}{\delta_{2}x_{2}^{(t_{0})} }&\text{if }\gamma_{2}=\delta_{1}=0,\\ \frac{\delta}{\delta_{2}}&\text{if }\gamma_{2}\neq 0,\delta_{1}=0.\end{cases}\] Using these limits and \[\frac{x_{1}^{(t_{0}+m)}}{\varpi\left(z^{(t_{0}+m)}\right)}=\frac{1}{1+\frac{x_{2}^ {(t_{0}+m)}}{x_{1}^{(t_{0}+m)}}+\frac{y^{(t_{0}+m)}}{x_{1}^{(t_{0}+m)}}}, \quad\frac{x_{2}^{(t_{0}+m)}}{\varpi\left(z^{(t_{0}+m)}\right)}=\frac{1}{1+\frac{x_{1}^ {(t_{0}+m)}}{x_{2}^{(t_{0}+m)}}+\frac{y^{(t_{0}+m)}}{x_{2}^{(t_{0}+m)}}},\] \[\frac{y^{(t_{0}+m)}}{\varpi\left(z^{(t_{0}+m)}\right)}=\frac{1}{1+\frac{x_{1}^{(t _{0}+m)}}{y^{(t_{0}+m)}}+\frac{x_{2}^{(t_{0}+m)}}{y^{(t_{0}+m)}}},\] we get \[\lim_{m\to+\infty}\frac{x_{1}^{(t_{0}+m)}}{\varpi\left(z^{(t_{0}+m)}\right)} =\begin{cases}\gamma_{1}&\text{if }\gamma_{2}=0,\delta_{1}\neq 0,\\ \frac{\gamma_{1}x_{1}^{(t_{0})}}{x_{1}^{(t_{0})}+x_{2}^{(t_{0})}}&\text{if } \gamma_{2}=\delta_{1}=0,\\ 0&\text{if }\gamma_{2}\neq 0,\delta_{1}=0,\end{cases}\] \[\lim_{m\to+\infty}\frac{x_{2}^{(t_{0}+m)}}{\varpi\left(z^{(t_{0}+m)} \right)}=\begin{cases}0&\text{if }\gamma_{2}=0,\delta_{1}\neq 0,\\ \frac{\delta_{2}x_{2}^{(t_{0})}}{x_{1}^{(t_{0})}+x_{2}^{(t_{0})}}&\text{if } \gamma_{2}=\delta_{1}=0,\\ \delta_{2}&\text{if }\gamma_{2}\neq 0,\delta_{1}=0,\end{cases}\] and \[\lim_{m\to+\infty}\frac{y^{(t_{0}+m)}}{\varpi\left(z^{(t_{0}+m)}\right)}=\begin{cases} \gamma&\text{if }\gamma_{2}=0,\delta_{1}\neq 0,\\ \frac{\gamma x_{1}^{(t_{0})}+\delta x_{2}^{(t_{0})}}{x_{1}^{(t_{0})}+x_{2}^{(t _{0})}}&\text{if }\gamma_{2}=\delta_{1}=0,\\ \delta&\text{if }\gamma_{2}\neq 0,\delta_{1}=0.\end{cases}\]

Case 2: If there is \(m_{0}\geq 1\) such that \(x_{1}^{(t_{0}+m_{0})}=0\), then from \(z^{(0)}\in S^{\,2}\) and the formula for \(x_{1}^{(t_{0}+m)}\) we get \(x_{1}^{(t_{0})}=0\) and \(\delta_{1}=0\); thus \(x_{1}^{(t_{0}+m)}=0\) for every \(m\geq 1\), and we easily get \(\lim_{t\to+\infty}V^{t}\left(z^{(0)}\right)=(0,\delta_{2},\delta)\).

(_b_) The case \(\Delta>0\). Let \(\lambda_{1}<\lambda_{2}\) be the roots of \(\chi_{M}\).
Modulo \(\chi_{M}\) we have for all \(t\geq 0\): \[X^{t}\equiv\frac{\lambda_{2}^{t}-\lambda_{1}^{t}}{\lambda_{2}-\lambda_{1}}X- \lambda_{1}\lambda_{2}\frac{\lambda_{2}^{t-1}-\lambda_{1}^{t-1}}{\lambda_{2}- \lambda_{1}},\] and with \(\theta_{t}=\frac{\lambda_{2}^{t}-\lambda_{1}^{t}}{\lambda_{2}-\lambda_{1}}\) we have \(M^{t}=\theta_{t}M-\lambda_{1}\lambda_{2}\theta_{t-1}I_{2}\), and thus for all \(m\geq 1\): \[x_{1}^{(t_{0}+m)} = \left[\left(\gamma_{1}\theta_{m}-\lambda_{1}\lambda_{2} \theta_{m-1}\right)x_{1}^{(t_{0})}+\delta_{1}\theta_{m}x_{2}^{(t_{ 0})}\right]\prod_{k=0}^{m-1}y^{(t_{0}+k)}\] \[x_{2}^{(t_{0}+m)} = \left[\gamma_{2}\theta_{m}x_{1}^{(t_{0})}+\left(\delta_{2} \theta_{m}-\lambda_{1}\lambda_{2}\theta_{m-1}\right)x_{2}^{(t_{ 0})}\right]\prod_{k=0}^{m-1}y^{(t_{0}+k)},\] hence \[y^{(t_{0}+m)} =y^{(t_{0})} \prod_{k=0}^{m-1}\left(\gamma x_{1}^{(t_{0}+k)}+\delta x_{2}^{(t _{0}+k)}\right).\] Let us prove that \(|\lambda_{1}|<1\) and \(|\lambda_{2}|<1.\) Since \(\gamma_{2}<1-\gamma_{1},\delta_{1}<1-\delta_{2}\) we get \(0<\Delta=(\gamma_{1}-\delta_{2})^{2}+4\gamma_{2}\delta_{1}<(\gamma_{1}-\delta _{2})^{2}+4(1-\gamma_{1})(1-\delta_{2})=(\gamma_{1}+\delta_{2}-2)^{2}.\) From this we obtain \(\lambda_{2}=\frac{\gamma_{1}+\delta_{2}+\sqrt{\Delta}}{2}<1\) and \(\lambda_{1}=\frac{\gamma_{1}+\delta_{2}-\sqrt{\Delta}}{2}>\gamma_{1}+\delta_{ 2}-1>-1.\) So \(|\lambda_{1}|<1,\ |\lambda_{2}|<1\), and from this one has \(\theta_{t}\to 0\) as \(t\to+\infty.\) Thus we get \(\lim_{t\to+\infty}x_{1}^{(t)}=\lim_{t\to+\infty}x_{2}^{(t)}=0\) and, as in the previous case, \(\lim_{t\to+\infty}y^{(t)}=0.\) To study the operator \(V\) for \(z^{(0)}\in S^{\,2}\), by considering two cases as in (_a_) we obtain the proof of the theorem.

**Application**. _Dosage compensation and X inactivation in mammals._ In the XY-sex determination system, the female has two X chromosomes and the male only one. The X chromosome carries many genes involved in the functioning of cells, so in the absence of regulation a female would produce twice as many proteins coded by these genes as a male, which would cause dysfunctions in these cells. In the early stages of female embryo formation, a mechanism called _dosage compensation_ (or _lyonization_) inactivates one of the two X chromosomes. The \(X\) inactivation is controlled by a short region on the \(X\) chromosome called the _X-inactivation center_ (_Xic_); the _Xic_ is active on the inactivated \(X\) chromosome. The _Xic_ site is necessary and sufficient to cause the \(X\) inactivation: the presence in a female embryo of a non-functional _Xic_ site is lethal. If we denote by \(X^{*}\) a gonosome \(X\) carrying a non-functional _Xic_ site, there are only three genotypes \(XY\), \(X^{*}Y\), \(XX\), thus the associated gonosomal algebra is of type \((1,2)\). In the definition of the gonosomal operator \(W\), the variables \(x_{1}^{(t)}\), \(x_{2}^{(t)}\), \(y^{(t)}\) are respectively associated to the genotypes \(XY\), \(X^{*}Y\), \(XX\). Using Propositions 2 and 6, Definition 2 and Proposition 7, the results obtained in this section apply to this situation.

Asymptotic behavior of trajectories in the case (♀ lethal recessive, ♂ non-lethal)

In this case only the genotype \(X^{*}X^{*}\) is lethal, thus we observe only the types \(XX\), \(XX^{*}\), \(X^{*}Y\) and \(XY\). The general case of the dynamical system associated with this situation is complex; for this reason we will study a simpler case motivated by the following example.
In humans, hemophilia is a genetic disease caused by mutation of a gene encoding coagulation factors and located on the \(X\) gonosome. It is a gonosomal recessive lethal disease, meaning that there are no women homozygous for the mutation; heterozygous women do not have hemophilia but are carriers, and only men are affected. As many as one-third of hemophiliacs have no affected family members, reflecting a high mutation rate ('de novo' mutations). We denote by \(\mu\) (resp. \(\eta\)), where \(0\leq\mu,\eta\leq 1\), the mutation rate from \(X\) to \(X^{*}\) in maternal (resp. paternal) gametes. Assuming that during oogenesis and spermatogenesis a mutation, when it occurs in a cell, affects only one gonosome \(X\), and considering that a mutated gene does not revert to the wild type, after gametogenesis we observe the following rates: \[XX\rightarrow(1-\mu)\,X+\mu X^{*},\qquad XY\rightarrow\tfrac{1-\eta}{2}X+ \tfrac{\eta}{2}X^{*}+\tfrac{1}{2}Y,\] \[XX^{*}\rightarrow\tfrac{1-\mu}{2}X+\tfrac{1+\mu}{2}X^{*},\qquad X^{*}Y \rightarrow\tfrac{1}{2}X^{*}+\tfrac{1}{2}Y.\] Therefore, after breeding, the genotype frequency distribution is given in the following Punnett square: \[XX\times XY \rightarrow \tfrac{(1-\mu)(1-\eta)}{2-\mu\eta}XX,\quad\tfrac{\mu+\eta-2\mu \eta}{2-\mu\eta}XX^{*},\qquad\tfrac{1-\mu}{2-\mu\eta}XY,\qquad\tfrac{\mu}{2- \mu\eta}X^{*}Y\] \[XX\times X^{*}Y \rightarrow \tfrac{1-\mu}{2-\mu}XX^{*},\qquad\tfrac{1-\mu}{2-\mu}XY,\qquad \tfrac{\mu}{2-\mu}X^{*}Y\] \[XX^{*}\times XY \rightarrow \tfrac{(1-\mu)(1-\eta)}{4-(1+\mu)\eta}XX,\quad\tfrac{1+\mu-2\mu \eta}{4-(1+\mu)\eta}XX^{*},\quad\tfrac{1-\mu}{4-(1+\mu)\eta}XY,\quad\tfrac{1+ \mu}{4-(1+\mu)\eta}X^{*}Y\] \[XX^{*}\times X^{*}Y \rightarrow \tfrac{1-\mu}{3-\mu}XX^{*},\qquad\tfrac{1-\mu}{3-\mu}XY,\qquad \tfrac{1+\mu}{3-\mu}X^{*}Y\] The algebra associated with this situation is the gonosomal \(\mathbb{R}\)-algebra of type \(\left(2,2\right)\), with basis \(\left(e_{1},e_{2}\right)\cup\left(\widetilde{e}_{1},\widetilde{e}_{2}\right)\) and commutative multiplication table: \[\begin{array}{rcl}e_{1}\widetilde{e}_{1}&=&\frac{\left(1-\mu\right)\left(1- \eta\right)}{2-\mu\eta}e_{1}+\frac{\mu+\eta-2\mu\eta}{2-\mu\eta}e_{2}+\frac{1 -\mu}{2-\mu\eta}\widetilde{e}_{1}+\frac{\mu}{2-\mu\eta}\widetilde{e}_{2}\\ e_{1}\widetilde{e}_{2}&=&\frac{1-\mu}{2-\mu}e_{2}+\frac{1-\mu}{2-\mu} \widetilde{e}_{1}+\frac{\mu}{2-\mu}\widetilde{e}_{2}\\ e_{2}\widetilde{e}_{1}&=&\frac{\left(1-\mu\right)\left(1-\eta\right)}{4- \left(1+\mu\right)\eta}e_{1}+\frac{1+\mu-2\mu\eta}{4-\left(1+\mu\right)\eta}e_{ 2}+\frac{1-\mu}{4-\left(1+\mu\right)\eta}\widetilde{e}_{1}+\frac{1+\mu}{4- \left(1+\mu\right)\eta}\widetilde{e}_{2}\\ e_{2}\widetilde{e}_{2}&=&\frac{1-\mu}{3-\mu}e_{2}+\frac{1-\mu}{3-\mu} \widetilde{e}_{1}+\frac{1+\mu}{3-\mu}\widetilde{e}_{2}\end{array}\] where the products not mentioned are zero.
From (4.3) the dynamical system associated with this algebra is: \[W_{\mu,\eta}:\left\{\begin{array}{lclcl}x_{1}^{\prime}&=&\frac{\left(1-\mu \right)\left(1-\eta\right)}{2-\mu\eta}x_{1}y_{1}&&+\frac{\left(1-\mu\right) \left(1-\eta\right)}{4-\left(1+\mu\right)\eta}x_{2}y_{1}\\ x_{2}^{\prime}&=&\frac{\mu+\eta-2\mu\eta}{2-\mu\eta}x_{1}y_{1}&+\frac{1-\mu}{2 -\mu}x_{1}y_{2}&+\frac{1+\mu-2\mu\eta}{4-\left(1+\mu\right)\eta}x_{2}y_{1}&+ \frac{1-\mu}{3-\mu}x_{2}y_{2}\\ y_{1}^{\prime}&=&\frac{1-\mu}{2-\mu\eta}x_{1}y_{1}&+\frac{1-\mu}{2-\mu}x_{1}y_{2} &+\frac{1-\mu}{4-\left(1+\mu\right)\eta}x_{2}y_{1}&+\frac{1-\mu}{3-\mu}x_{2}y _{2}\\ y_{2}^{\prime}&=&\frac{\mu}{2-\mu\eta}x_{1}y_{1}&+\frac{\mu}{2-\mu}x_{1}y_{2}&+ \frac{1+\mu}{4-\left(1+\mu\right)\eta}x_{2}y_{1}&+\frac{1+\mu}{3-\mu}x_{2}y_{2 }\end{array}\right. \tag{5.8}\]

**Proposition 18**.: _The only fixed point of the operators \(W_{1,1}\) and \(W_{1,\eta}\) is \(\left(0,0,0,0\right)\); the fixed points of \(W_{\mu,1}\), \(\mu\neq 1\), are \(\left(0,0,0,0\right)\) and \(\left(0,\frac{3-\mu}{2},\frac{3-\mu}{2},\frac{\left(1+\mu\right)\left(3-\mu \right)}{2\left(1-\mu\right)}\right)\)._

Proof.: Let \(z=\left(x_{1},x_{2},y_{1},y_{2}\right)\) and consider the equation \(z=W_{\mu,\eta}\left(z\right)\).

a) If \(\mu=\eta=1\) we get immediately in (5.8): \(x_{1}=x_{2}=y_{1}=0\) and thus \(y_{2}=0\).

b) If \(\mu=1\) and \(\eta\neq 1\), in (5.8) with \(\mu=1\) we get \(x_{1}=y_{1}=0\); it follows that \(x_{2}=y_{2}=0\).

c) If \(\mu\neq 1\) and \(\eta=1\), the fixed points \(\left(x_{1},x_{2},y_{1},y_{2}\right)\) of the operator \(W_{\mu,1}\) satisfy \[\left\{\begin{array}{lcl}x_{1}&=&0\\ x_{2}&=&\frac{1-\mu}{3-\mu}x_{2}\left(y_{1}+y_{2}\right)\\ y_{1}&=&\frac{1-\mu}{3-\mu}x_{2}\left(y_{1}+y_{2}\right)\\ y_{2}&=&\frac{1+\mu}{3-\mu}x_{2}\left(y_{1}+y_{2}\right).\end{array}\right. \tag{5.9}\] If \(y_{1}+y_{2}=0\) we have \(x_{1}=x_{2}=y_{1}=y_{2}=0\). Assume now \(y_{1}+y_{2}\neq 0\); by summing the last two equations of (5.9) we get \(y_{1}+y_{2}=\frac{2}{3-\mu}x_{2}\left(y_{1}+y_{2}\right)\), thus \(x_{2}=\frac{3-\mu}{2}\); then \(y_{1}=\frac{1-\mu}{2}\left(y_{1}+y_{2}\right)\) and \(y_{2}=\frac{1+\mu}{2}\left(y_{1}+y_{2}\right)\), hence \(y_{1}=\frac{1-\mu}{1+\mu}y_{2}\). Moreover, the equations giving \(x_{2}\) and \(y_{1}\) in (5.9) are identical, so \(y_{1}=x_{2}=\frac{3-\mu}{2}\), and therefore \(y_{2}=\frac{1+\mu}{1-\mu}y_{1}=\frac{\left(1+\mu\right)\left(3-\mu\right)}{2 \left(1-\mu\right)}\). Finally the fixed points of \(W_{\mu,1}\) are: \(\left(0,0,0,0\right)\) and \(\left(0,\frac{3-\mu}{2},\frac{3-\mu}{2},\frac{\left(1+\mu\right)\left(3-\mu \right)}{2\left(1-\mu\right)}\right)\).
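The nontrivial fixed point of \(W_{\mu,1}\) can be checked directly. A minimal sketch (illustrative value of \(\mu\); the code is not part of the paper):

```python
import numpy as np

# Sketch: the operator W_{mu,1} (system (5.8) with eta = 1) and the
# nontrivial fixed point of Proposition 18. mu is an illustrative value.
mu = 0.2

def W_mu1(z):
    x1, x2, y1, y2 = z
    s = (1 - mu) / (2 - mu) * x1 + (1 - mu) / (3 - mu) * x2
    u = mu / (2 - mu) * x1 + (1 + mu) / (3 - mu) * x2
    return np.array([0.0, s * (y1 + y2), s * (y1 + y2), u * (y1 + y2)])

z_star = np.array([0.0, (3 - mu) / 2, (3 - mu) / 2,
                   (1 + mu) * (3 - mu) / (2 * (1 - mu))])
assert np.allclose(W_mu1(z_star), z_star)

# Its normalization is the constant value of V_{mu,1}^n in Proposition 19.
assert np.allclose(z_star / z_star.sum(),
                   [0, (1 - mu) / (3 - mu), (1 - mu) / (3 - mu),
                    (1 + mu) / (3 - mu)])
```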
**Proposition 19**.: _For all \(z=\left(x_{1},x_{2},y_{1},y_{2}\right)\in\mathbb{R}^{4}\) and \(0\leq\mu,\eta\leq 1\) we have:_

_a) \(W_{1,1}^{n}\left(z\right)=0\) for every \(n\geq 2\)._

_b) \(W_{1,\eta}^{n}\left(z\right)=0\) for each \(n\geq 3\)._

_c) \(\lim_{n\rightarrow\infty}W_{\mu,1}^{n}\left(z\right)=\begin{cases}0&\text{ if }\left|\frac{x_{1}}{2-\mu}+\frac{x_{2}}{3-\mu}\right|\cdot\left|y_{1}+y_{2}\right| \leq\frac{1}{\left(1-\mu\right)^{2}}\\ +\infty&\text{ if }\left|\frac{x_{1}}{2-\mu}+\frac{x_{2}}{3-\mu}\right|\cdot\left|y_{1}+y_{2 }\right|>\frac{1}{\left(1-\mu\right)^{2}}.\end{cases}\)_

_And for the normalized gonosomal operator \(V_{\mu,1}\) defined by \(W_{\mu,1}\) we have:_ \[V_{\mu,1}^{n}\left(z\right)=\left(0,\frac{1-\mu}{3-\mu},\frac{1-\mu}{3-\mu}, \frac{1+\mu}{3-\mu}\right),\quad\forall n\geq 1.\]

Proof.: \(a)\) If \(\mu=\eta=1\), the system (5.8) becomes: \[\begin{cases}x_{1}^{\prime}&=\ x_{2}^{\prime}=y_{1}^{\prime}=0\\ y_{2}^{\prime}&=\ (x_{1}+x_{2})\left(y_{1}+y_{2}\right)\end{cases}\] in other words, there are no more females from the first generation on, and the population dies out at the second generation.

\(b)\) If \(\mu=1\) and \(\eta\neq 1\), the system (5.8) is written: \[\left\{\begin{array}{lll}x_{1}^{\prime}&=&0\\ x_{2}^{\prime}&=&\frac{1-\eta}{2-\eta}\left(x_{1}+x_{2}\right)y_{1}\\ y_{1}^{\prime}&=&0\\ y_{2}^{\prime}&=&\frac{1}{2-\eta}x_{1}y_{1}&+x_{1}y_{2}&+\frac{1}{2-\eta}x_{2} y_{1}&+x_{2}y_{2}\end{array}\right.\] For \(z=\left(x_{1},x_{2},y_{1},y_{2}\right)\) we find \(z^{\left(1\right)}=\left(0,x_{2}^{(1)},0,y_{2}^{(1)}\right)\), hence \(z^{\left(2\right)}=\left(0,0,0,x_{2}^{(1)}y_{2}^{(1)}\right)\) and thus \(z^{\left(3\right)}=\left(0,0,0,0\right)\): the population dies out at the third generation.

\(c)\) With \(\mu\neq 1\) and \(\eta=1\), the system (5.8) becomes: \[\left\{\begin{array}{lll}x_{1}^{\prime}&=&0\\ x_{2}^{\prime}&=&\left(\frac{1-\mu}{2-\mu}x_{1}+\frac{1-\mu}{3-\mu}x_{2} \right)\left(y_{1}+y_{2}\right)\\ y_{1}^{\prime}&=&\left(\frac{1-\mu}{2-\mu}x_{1}+\frac{1-\mu}{3-\mu}x_{2} \right)\left(y_{1}+y_{2}\right)\\ y_{2}^{\prime}&=&\left(\frac{\mu}{2-\mu}x_{1}+\frac{1+\mu}{3-\mu}x_{2} \right)\left(y_{1}+y_{2}\right).\end{array}\right.\] If for \(z=\left(x_{1},x_{2},y_{1},y_{2}\right)\in S^{2,2}\) and \(n\geq 0\) we put \(W_{\mu,1}^{n}\left(z\right)=\left(x_{1}^{\left(n\right)},x_{2}^{\left(n\right) },y_{1}^{\left(n\right)},y_{2}^{\left(n\right)}\right)\), we show that \[x_{1}^{\left(n+1\right)} = 0\] \[x_{2}^{\left(n+1\right)} = 2^{2^{n}-1}\frac{\left(1-\mu\right)^{2^{n+1}-1}}{\left(3-\mu \right)^{2^{n}-1}}\left(\frac{x_{1}}{2-\mu}+\frac{x_{2}}{3-\mu}\right)^{2^{n} }\left(y_{1}+y_{2}\right)^{2^{n}}\] \[y_{1}^{\left(n+1\right)} = x_{2}^{\left(n+1\right)} \tag{5.10}\] \[y_{2}^{\left(n+1\right)} = 2^{2^{n}-1}\left(1+\mu\right)\frac{\left(1-\mu\right)^{2^{n+1}-2 }}{\left(3-\mu\right)^{2^{n}-1}}\left(\frac{x_{1}}{2-\mu}+\frac{x_{2}}{3-\mu} \right)^{2^{n}}\left(y_{1}+y_{2}\right)^{2^{n}}.\] We have \(\frac{2}{3}<\frac{2}{3-\mu}<1\), \(x_{2}^{\left(n+1\right)}=\frac{1}{1-\mu}\left(\frac{2}{3-\mu}\right)^{2^{n}-1 }\left[\left(1-\mu\right)^{2}\left(\frac{x_{1}}{2-\mu}+\frac{x_{2}}{3-\mu} \right)\left(y_{1}+y_{2}\right)\right]^{2^{n}}\), \(y_{1}^{\left(n+1\right)}=x_{2}^{\left(n+1\right)}\) and \(y_{2}^{\left(n+1\right)}=\frac{1+\mu}{1-\mu}x_{2}^{\left(n+1\right)}\), from which we deduce the limit values of \(W_{\mu,1}^{n}\).
From (5.10) we get \(\varpi\circ W_{\mu,1}^{n}\left(z\right)=2^{2^{n}-1}\frac{(1-\mu)^{2^{n+1}-2}}{ \left(3-\mu\right)^{2^{n}-2}}\left(\frac{x_{1}}{2-\mu}+\frac{x_{2}}{3-\mu} \right)^{2^{n}}\left(y_{1}+y_{2}\right)^{2^{n}}\) for all \(n\geq 1\), and by normalizing the terms given by (5.10) we get the components of \(V_{\mu,1}^{n}\): \(\left(0,\frac{1-\mu}{3-\mu},\frac{1-\mu}{3-\mu},\frac{1+\mu}{3-\mu}\right)\) for all \(n\geq 1\).

Now, in what follows, we assume that \(\mu,\eta\neq 1\).

**Proposition 20**.: _For any \(z=(x_{1},x_{2},y_{1},y_{2})\in S^{2,2}\) and \(0\leq\mu,\eta\leq 1\) the trajectory \(\{z^{(n)}\}\) tends to the fixed point \(0\) exponentially fast._

Proof.: It is clear that \(x_{1}^{(n)}\geq 0,x_{2}^{(n)}\geq 0,y_{1}^{(n)}\geq 0,y_{2}^{(n)}\geq 0\) for any \(n\geq 1\). We choose the function \(F(z)=(x_{1}+x_{2})(y_{1}+y_{2})\) and show that \(F(z)\) is a Lyapunov function for (5.8). Consider \[F(z^{\prime})=(x_{1}^{\prime}+x_{2}^{\prime})(y_{1}^{\prime}+y_{2}^{\prime})= (x_{1}^{\prime}+x_{2}^{\prime}+y_{1}^{\prime}+y_{2}^{\prime})(y_{1}^{\prime}+y _{2}^{\prime})-(y_{1}^{\prime}+y_{2}^{\prime})^{2}.\] Using b) of Proposition 9 we get \(y_{1}^{\prime}+y_{2}^{\prime}\leq\frac{1}{4}\), and from (4.6) we obtain \[F(z^{\prime})=(x_{1}+x_{2})(y_{1}+y_{2})(y_{1}^{\prime}+y_{2}^{\prime})-(y_{1 }^{\prime}+y_{2}^{\prime})^{2}=(y_{1}^{\prime}+y_{2}^{\prime})F(z)-(y_{1}^{ \prime}+y_{2}^{\prime})^{2}\leq F(z).\] Thus the sequence \(F(z^{(n)})\) is decreasing and bounded from below by \(0\), so it has a limit, i.e. it is a Lyapunov function. In addition, from b) of Proposition 9, \[F(z^{\prime})=(x_{1}^{\prime}+x_{2}^{\prime})(y_{1}^{\prime}+y_{2}^{\prime}) \leq\left(\frac{1}{4}\right)^{2};\] on the other hand, \(F(z^{\prime})=x_{1}^{(2)}+x_{2}^{(2)}+y_{1}^{(2)}+y_{2}^{(2)}\leq\left(\frac{ 1}{4}\right)^{2}\), and from this we get \(x_{1}^{(2)}+x_{2}^{(2)}\leq\left(\frac{1}{4}\right)^{2},y_{1}^{(2)}+y_{2}^{(2) }\leq\left(\frac{1}{4}\right)^{2}.\) Thus \(F(z^{(2)})\leq\left(\frac{1}{4}\right)^{2^{2}}\), and so on. Hence one has \(F(z^{(n)})\leq\left(\frac{1}{4}\right)^{2^{n}}\) for any \(n\geq 1\), and this guarantees that \(F(z^{(n)})\) converges to \(0\). In addition, from \(F(z^{(n)})=(x_{1}^{(n)}+x_{2}^{(n)})(y_{1}^{(n)}+y_{2}^{(n)})=x_{1}^{(n+1)}+x_{ 2}^{(n+1)}+y_{1}^{(n+1)}+y_{2}^{(n+1)}\) we obtain \[0\leq x_{1}^{(n+1)}\leq F(z^{(n)}),\,0\leq x_{2}^{(n+1)}\leq F(z^{(n)}),\,0 \leq y_{1}^{(n+1)}\leq F(z^{(n)}),\,0\leq y_{2}^{(n+1)}\leq F(z^{(n)}),\] which completes the proof of the proposition.
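The Lyapunov bound of Proposition 20 is easy to observe numerically. The sketch below (illustrative rates \(\mu,\eta\); not from the paper) iterates (5.8) from a point of \(S^{2,2}\) and checks \(F(z^{(n)})\leq\left(\frac{1}{4}\right)^{2^{n}}\):

```python
import numpy as np

# Sketch of Proposition 20: F(z) = (x1 + x2)(y1 + y2) is a Lyapunov
# function for (5.8), with F(z^(n)) <= (1/4)^(2^n). Rates are illustrative.
mu, eta = 0.3, 0.4

def W(z):
    x1, x2, y1, y2 = z
    return np.array([
        (1 - mu) * (1 - eta) / (2 - mu * eta) * x1 * y1
        + (1 - mu) * (1 - eta) / (4 - (1 + mu) * eta) * x2 * y1,
        (mu + eta - 2 * mu * eta) / (2 - mu * eta) * x1 * y1
        + (1 - mu) / (2 - mu) * x1 * y2
        + (1 + mu - 2 * mu * eta) / (4 - (1 + mu) * eta) * x2 * y1
        + (1 - mu) / (3 - mu) * x2 * y2,
        (1 - mu) / (2 - mu * eta) * x1 * y1 + (1 - mu) / (2 - mu) * x1 * y2
        + (1 - mu) / (4 - (1 + mu) * eta) * x2 * y1
        + (1 - mu) / (3 - mu) * x2 * y2,
        mu / (2 - mu * eta) * x1 * y1 + mu / (2 - mu) * x1 * y2
        + (1 + mu) / (4 - (1 + mu) * eta) * x2 * y1
        + (1 + mu) / (3 - mu) * x2 * y2,
    ])

def F(z):
    return (z[0] + z[1]) * (z[2] + z[3])

z = np.array([0.25, 0.25, 0.25, 0.25])   # a point of S^{2,2}
for n in range(1, 6):
    z = W(z)
    assert F(z) <= (1 / 4) ** (2 ** n) + 1e-12
    print(n, F(z))                       # decreases doubly exponentially
```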
2307.01010
GECAM Observations of the Galactic Magnetar SGR J1935+2154 during the 2021 and 2022 Burst Active Episodes. I. Burst Catalog
A magnetar is a neutron star with an ultrahigh magnetic field ($\sim 10^{14}-10^{15}$ G) that usually manifests as a soft gamma-ray repeater (SGR) or an anomalous X-ray pulsar (AXP). SGR J1935+2154 is not only one of the most active magnetars detected so far, but also the only magnetar confirmed as a source of fast radio bursts (FRBs). The Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) is dedicated to monitoring gamma-ray transients all over the sky, including SGR bursts. Here we report the GECAM observations of the burst activity of SGR J1935+2154 from January 2021 to December 2022, which result in a unique and valuable data set for this important magnetar. With a targeted search of GECAM data, 164 bursts from SGR J1935+2154 are detected by GECAM-B and 97 bursts by GECAM-C, including the X-ray burst associated with a fast radio burst (FRB 20221014). We find that both the burst duration and the waiting time between two successive bursts follow lognormal distributions. The period of burst activity is $134\pm20$ days; thus the burst activity can be generally divided into 4 active episodes over these two years. Interestingly, the hardness ratio of the X-ray bursts tends to become softer and more concentrated over these two years, especially during the active episode with FRBs detected.
Sheng-Lun Xie, Ce Cai, Yun-Wei Yu, Shao-Lin Xiong, Lin Lin, Yi Zhao, Shuang-Nan Zhang, Li-Ming Song, Ping Wang, Xiao-Bo Li, Wang-Chen Xue, Peng Zhang, Chao Zheng, Yan-Qiu Zhang, Jia-Cong Liu, Chen-Wei Wang, Wen-Jun Tan, Yue Wang, Zheng-Hang Yu, Pei-Yi Feng, Jin-Peng Zhang, Shuo Xiao, Hai-Sheng Zhao, Wen-Long Zhang, Yan-Ting Zhang, Yue Huang, Xiao-Yun Zhao, Xiang Ma, Shi-Jie Zheng, Xin-Qiao Li, Xiang-Yang Wen, Ke Gong, Zheng-Hua An, Da-Li Zhang, Sheng Yang, Xiao-Jing Liu, Fan Zhang
2023-07-03T13:43:23Z
http://arxiv.org/abs/2307.01010v3
# The 2021 X-ray outburst of magnetar SGR J1935\(+\)2154 observed by GECAM - I. Spectral properties ###### Abstract Over a period of multiple active episodes between January 2021 and January 2022, the magnetar SGR J1935\(+\)2154 emitted a total of 82 bursts observed by GECAM-B. Temporal and spectral analyses reveal that the bursts have an average duration of \(\sim\)145 ms and a fluence ranging from \(1.2\times 10^{-8}\) erg \(\cdot\) cm\({}^{-2}\) to \(3.7\times 10^{-5}\) erg \(\cdot\) cm\({}^{-2}\) (30 - 200 keV). The spectral properties of these bursts are similar to those of earlier active episodes. Specifically, we find that the emission area of the Double Black Body (BB2) model shows a Log-Linear correlation with its temperature, and there is only a weak relation between fluence and \(E_{\rm peak}\) (or \(\alpha\)) in the Cut-Off Power Law (CPL) model. However, we note that the temperature distributions of the BB2/BB models in GECAM-B samples are different from those in GBM-GECAM samples, due to differences in the energy range used for fitting. To understand this difference, we propose a Multi-Temperature Black Body (MBB) model, assuming that the BB temperatures follow a power law distribution. Our analysis yields a minimum temperature of \(kT_{\rm min}\sim 5\) keV for the MBB model, consistent between GECAM-B and GBM-GECAM. This indicates that both samples originated from similar magnetar bursts. We also find that the spectra of magnetar bursts tend to be soft, indicating that magnetar bursts may be composed of multiple low-temperature BB components, with the majority of the BB temperatures concentrated around the minimum temperature. magnetars - soft gamma-ray repeaters: general - methods: data analysis - techniques ## 1 Introduction SGRs are believed to originate from highly magnetized neutron stars, namely magnetars (Duncan & Thompson, 1992; van Kerkwijk et al., 1995; Kouveliotou et al., 1998; Banas et al., 1997; Kaspi & Beloborodov, 2017). Magnetars are characterized by slow rotation periods (\(P\sim 2-12\) s), rapid spin-down (\(\dot{P}\sim 10^{-13}-10^{-11}\) s \(\cdot\) s\({}^{-1}\)) and relatively young ages (typically about 1000 yr). So far, 30 magnetars have been detected and 24 of them have been confirmed (Olausen & Kaspi, 2014). Most magnetars show enhanced persistent radiation and simultaneously emit bursts/flares in the X-/gamma-ray band during an outburst. Based on their luminosity and duration, SGR bursts can be divided into three classes (Woods & Thompson, 2006): the short-duration burst, which consists of single or multiple pulses, is the most typical magnetar burst, with durations in the range \(0.01\sim 1\) s and fluences around \(10^{-10}\sim 10^{-4}\) erg \(\cdot\) cm\({}^{-2}\); the intermediate burst is a brighter magnetar burst, with a duration longer than that of the short-duration burst (\(>1\) s) and a peak luminosity around \(10^{41}\sim 10^{43}\) erg \(\cdot\) s\({}^{-1}\); and the giant flare, the rarest and most energetic burst, is characterized by a significantly higher luminosity than a typical magnetar burst and a special pulse profile with a hard initial spike and a rapidly decaying tail.
Magnetar SGR J1935+2154 was first discovered and located in the Milky Way Galaxy by the Swift Burst Alert Telescope (BAT) in July 2014 (Stamatikos et al., 2014). Follow-up observations carried out between July 2014 and March 2015 with Chandra and XMM-Newton allowed the measurement of its spin period and spin-down rate, found to be \(P\sim 3.24\) s and \(\dot{P}\sim 1.43\times 10^{-11}\) s \(\cdot\) s\({}^{-1}\), respectively. This indicates a dipole magnetic field of \(B\sim 2.2\times 10^{14}\) G (Israel et al., 2016). It has experienced multiple active windows from 2014 to 2021 (Younes et al., 2017; Lin et al., 2020, 2020; Rehan and Ibrahim, 2023). April 2020 was recognized as a month of intense bursting activity for SGR J1935+2154, during which a burst forest was observed. These bursts included the X-ray counterpart (Li et al., 2021; Mereghetti et al., 2020; Tavani et al., 2020; Ridnaia et al., 2021) associated with a fast radio burst, FRB 200428 (Bochenek et al., 2020; CHIME/FRB Collaboration et al., 2020). Additionally, 10 candidate bursts had also been found before 2014 (Xie et al., 2022), observed by the _Fermi_ Gamma-ray Space Telescope (_Fermi_/GBM, Meegan et al., 2009). Xie et al. (2022) also found numerous bursts from SGR J1935+2154 observed by the Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM, Li et al., 2021; Xiao et al., 2022; Zhang et al., 2023). In this paper, we carry out a spectral analysis of magnetar SGR J1935+2154 using GECAM-B observation data from January 2021 to January 2022. In Section 2 we briefly introduce GECAM-B and report the temporal analysis of SGR J1935+2154 over this one-year time span. In Section 3 we report the spectral properties of SGR J1935+2154. Finally, the summary is given in Section 4. In Paper II (in prep.), we assess the localization method for magnetar bursts using the spectral fitting results. ## 2 Temporal Analysis Launched in December 2020, GECAM has been operating in low Earth orbit (600 km altitude and 29\({}^{\circ}\) inclination angle, Chen et al., 2020). GECAM consists of twin microsatellites, namely GECAM-A and GECAM-B, and each comprises 25 gamma-ray detectors (GRDs, Lv et al., 2018; An et al., 2021) and 8 charged particle detectors (CPDs, Zhang et al., 2021; Xu et al., 2021). GECAM-B serves as a wide field of view (FOV) gamma-ray monitor with high time resolution (\(\mu\)s) and large effective area (up to thousands of cm\({}^{2}\)). Cai et al. (2021) developed a pipeline to perform a ground search of the GECAM-B daily observation data for GRBs using the traditional signal-to-noise ratio (SNR) method. With this pipeline, GECAM-B observed a total of 82 bursts from SGR J1935+2154 in 2021, as reported in Xie et al. (2022). Xie et al. (2022) also found that GECAM-B has visibility to SGR J1935+2154 for approximately half of each day, with a periodic active window of around 127 days. In this paper, we conduct temporal and spectral analyses of this burst history. The burst history is depicted in Fig 1. ### Burst Duration To characterize the SGR's temporal properties, we use the Bayesian Block method (Scargle et al., 2013), which identifies regions of the highest statistical significance, to calculate the duration of each burst. The Bayesian Block method divides the event data into multiple blocks, each with a constant count rate. This is a good approach to characterize the variability of the GECAM EVT data by finding the optimal segmentation boundaries.
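As an aside, this segmentation step is available in astropy; the following minimal sketch (with synthetic event times rather than GECAM EVT data) illustrates the idea:

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
# Synthetic event list: uniform background over 10 s plus a short burst at t ~ 5 s
background = rng.uniform(0.0, 10.0, size=500)
burst = rng.uniform(5.0, 5.15, size=200)
t = np.sort(np.concatenate([background, burst]))

# Optimal segmentation into constant-rate blocks; p0 is the false-positive probability
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)  # short blocks bracketing t ~ 5 s mark the burst interval
```

The returned block edges play the role of the vertical dashed lines in Fig 3 below.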
In this paper, the GRD detectors used in the Bayesian Block analysis are those with an angle to the source of less than 60\({}^{\circ}\). To measure the duration of each burst, we use the sliced event data in a 10 s time window around the burst, which includes both the pre-burst and post-burst time intervals, in the energy range of 30-200 keV. The false-positive probability (p0) is set to 0.01 (corresponding to 3\(\sigma\)). We treat blocks with a duration longer than 6 s as background and consider blocks with a duration less than the spin period (3.24 s) of SGR J1935+2154 as part of the burst region. An example of the Bayesian Block analysis is shown in Fig 3. The burst duration will be used in the time-integrated spectral analysis (see Section 3). The burst duration distribution is presented in the right panel of Fig 2, and is fitted by a Log-Gaussian function, which yields a central value of 119.2 ms. The durations of all bursts are listed in Appendix A. ### Burst Hardness Ratio The hardness ratio is the ratio of the net counts of the source in different energy bands. The net counts are estimated as \[C=S-B \tag{1}\] where \(S\) and \(B\) are the total counts and the expected background counts of the source. The background counts are estimated using the total counts before and after the source time interval. We compute the hardness ratio of each burst for GECAM-B (H3/H2: 50-200 keV/30-50 keV), and the results are shown in Fig 4. The hardness ratios of these bursts range from 0.2 to 1.5, with a median value of 0.56. The left panel of Fig 4 shows the evolution of the hardness ratio, which exhibits no significant trend. We also do not find a correlation between the duration and the hardness ratio (see the right panel of Fig 4). Figure 1: The burst history of SGR J1935+2154 from 2021 January to 2022 January. Figure 2: The left panel presents two complementary cumulative distributions of energy fluences of SGR J1935+2154 bursts over one year. The dashed lines represent the best fit of the distribution using a broken power law. The right panel shows the distribution of burst duration. The curve is a Log-Gaussian function fit to the histogram and its central value is represented by vertical lines. Figure 3: Example of the Bayesian Block analysis. The vertical green dashed lines are the Bayesian block edges. Figure 4: The left panel represents the evolution of the hardness ratio of GECAM-B (H3/H2: 50-200 keV/30-50 keV). The right panel indicates the duration vs. hardness ratio of each burst. ## 3 Spectral Properties A total of 82 bursts were observed by GECAM-B, of which 39 bursts were also observed by _Fermi_/GBM. Therefore, the dataset utilized in this study comprises 82 GECAM-B bursts and 39 joint spectra of both instruments (hereafter, GBM-GECAM). The GRD detectors used in the spectral fitting are those with an angle to the source of less than 60\({}^{\circ}\). The background spectra are accumulated from the data events during the pre- and post-burst time intervals (i.e., from T0 - 10 s to T0 - 5 s and from T0 + 5 s to T0 + 10 s, where T0 is the trigger time of the burst). For weak bursts, the GRPPHA command1 is used to group the observed data (e.g., GROUP MIN RCNTS) to ensure the validity of the fit statistics. Footnote 1: [https://heasarc.gsfc.nasa.gov/ftools/](https://heasarc.gsfc.nasa.gov/ftools/) Then, we perform a time-integrated spectral analysis using the _XSPEC_ (Arnaud, 1996)2 software and the Poisson data with Gaussian background statistics (PGSTAT).
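As a side note, the net-counts bookkeeping of Eq 1 and the H3/H2 hardness ratio defined above reduce to a one-line computation; a toy sketch (the count values are invented for illustration, not GECAM measurements):

```python
def hardness_ratio(s_h3, s_h2, b_h3, b_h2):
    """H3/H2 hardness ratio from net counts C = S - B (Eq 1),
    with S the total counts and B the expected background counts."""
    return (s_h3 - b_h3) / (s_h2 - b_h2)

# Toy example: total and background counts in the 50-200 keV and 30-50 keV bands
print(hardness_ratio(480.0, 900.0, 200.0, 400.0))  # -> 0.56
```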
These burst samples are fitted with 6 models: a single Black Body (BB), a single Power Law (PL), an Optically-Thin Thermal Bremsstrahlung (OTTB), a single Black Body plus a single Power Law (BBPL), the Double Black Body (BB2), and an exponentially Cut-Off Power Law (CPL), over the energy range of 30-200 keV for GECAM-B and 8-200 keV for GBM-GECAM. Finally, we use the Bayesian Information Criterion (BIC, Schwarz, 1978; Liddle, 2007) to determine the best-fit model among BB2, BB, CPL, OTTB, BBPL and PL. Footnote 2: [https://heasarc.gsfc.nasa.gov/xanadu/xspec/](https://heasarc.gsfc.nasa.gov/xanadu/xspec/) One should note that the low-energy threshold of GECAM-B changed dynamically over the course of this year. However, for consistency and the principle of controlling variables, the energy range for GECAM-B samples is set to 30-200 keV in all analyses conducted in this paper and in the localization research of Paper II. All results are presented in Appendix A. The equations of these models are as follows. The exponentially Cut-Off Power Law (CPL) model: \[F(E)=K(\frac{E}{E_{\rm piv}})^{\alpha}e^{-(2+\alpha)E/E_{\rm peak}} \tag{2}\] where \(K\) is the amplitude in photon/s/cm\({}^{2}\)/keV, \(E_{\rm peak}\) is the \(\nu F_{\nu}\) peak in keV, \(\alpha\) is the power-law index, and \(E_{\rm piv}\) is the pivot energy in keV; we set \(E_{\rm piv}=20\) keV in this paper. The Black Body (BB) model: \[F(E)=\frac{1.0344\times 10^{-3}\times KE^{2}}{e^{E/kT}-1} \tag{3}\] where \(K\) is \(R_{\rm km}^{2}/D_{10}^{2}\), with \(R_{\rm km}\) the radius of the radiative area of the source in km and \(D_{10}\) the distance to the source in units of 10 kpc, and \(kT\) is the temperature in keV. We set the distance to SGR J1935+2154 to 9 kpc in this paper. The Optically-Thin Thermal Bremsstrahlung (OTTB) model: \[F(E)=K(\frac{E_{\rm piv}}{E})e^{(E_{\rm piv}-E)/kT} \tag{4}\] where \(K\) is the amplitude in photon/s/cm\({}^{2}\)/keV and \(E_{\rm piv}\) is the pivot energy in keV; we set \(E_{\rm piv}=20\) keV in this paper. The Power Law (PL) model: \[F(E)=KE^{\alpha} \tag{5}\] where \(K\) and \(\alpha\) are the same as the parameters of the CPL model. ### Burst Fluence The fluence is derived from the product of the burst duration and the flux, and the flux is calculated using the best-fit model in the energy range of 30-200 keV for GECAM-B samples. The results are listed in Appendix A. The left panel of Fig 2 shows the complementary cumulative distribution of energy fluences, which can be fitted by a broken power law. The break point is \((4.69\pm 0.17)\times 10^{-8}\) erg \(\cdot\) cm\({}^{-2}\) for GECAM-B samples. The slope of the lower fluences is \(-0.036\pm 0.004\). The slope of the higher fluences is \(-0.65\pm 0.02\). The slope of the higher fluences is consistent with previous studies (Collazzi et al., 2015; Cheng et al., 1996; Lin et al., 2020, 2020) and the Gutenberg-Richter law (\(N(E)\propto E^{-5/3\sim-2}\)), which describes the power-law-like frequency distribution of earthquakes. This similarity implies that the majority of magnetar bursts, akin to earthquakes, possibly originate from cracks in the solid crust of the magnetar (Duncan & Thompson, 1992). ### The Double Black Body Model The Double Black Body (BB2) model is the sum of two BBs (Eq 3). Of all burst samples, 41 GECAM-B bursts and 19 GBM-GECAM bursts can be well fitted with the BB2 model. Fig 5 shows the distributions of the low and high temperatures (\(kT_{\rm low}\), \(kT_{\rm high}\)) of BB2.
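For reference, the photon models in Eqs (2)-(5) are simple closed forms; the following Python sketch mirrors the definitions above (the parameter values at the end are purely illustrative):

```python
import numpy as np

E_PIV = 20.0  # pivot energy in keV, as set in the text

def cpl(E, K, alpha, E_peak):
    """Exponentially cut-off power law, Eq (2)."""
    return K * (E / E_PIV) ** alpha * np.exp(-(2 + alpha) * E / E_peak)

def bb(E, K, kT):
    """Black body, Eq (3); K = R_km^2 / D_10^2."""
    return 1.0344e-3 * K * E**2 / (np.exp(E / kT) - 1.0)

def ottb(E, K, kT):
    """Optically-thin thermal bremsstrahlung, Eq (4)."""
    return K * (E_PIV / E) * np.exp((E_PIV - E) / kT)

def pl(E, K, alpha):
    """Power law, Eq (5)."""
    return K * E**alpha

E = np.geomspace(30.0, 200.0, 100)         # GECAM-B fitting range in keV
bb2 = bb(E, 1.0, 9.0) + bb(E, 0.01, 50.0)  # BB2 model: sum of two BBs
```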
Both \(kT_{\rm low}\) and \(kT_{\rm high}\) can be well fitted with Gaussian functions (see Table 2 for central values and sigmas). The BB2 temperature distribution of the GBM-GECAM samples is similar to that of SGR J1935+2154 in previous reports (Lin et al., 2020, 2020; Rehan and Ibrahim, 2023). Fig 6 shows that \(kT_{\rm low}\) and \(kT_{\rm high}\) exhibit a strong Log-Linear correlation with the emission areas (\(R_{\rm low}\), \(R_{\rm high}\)); the results are listed in Table 1. In addition, the emission area dependence spanning both the low and high BB temperatures, namely \(R_{\rm both}^{2}\propto(kT_{\rm both})^{-4.23\pm 0.15}\) for GECAM-B samples and \(R_{\rm both}^{2}\propto(kT_{\rm both})^{-3.60\pm 0.13}\) for GBM-GECAM samples, is very similar to the one corresponding to a single BB obeying the Stefan-Boltzmann law: \(R^{2}\propto(kT)^{-4}\). This \(R^{2}-kT\) correlation for the BB2 model is also similar to that observed for the collection of SGR J1550-5418 bursts analyzed in previous studies (Lin et al., 2012; van der Horst et al., 2012). Because of the limited sample size, there are some differences between the correlation results of GECAM-B and GBM-GECAM. The correlation (\(R_{\rm low}^{2}\propto(kT_{\rm low})^{\alpha}\)) of GECAM-B bursts exhibits a steeper trend, owing to the higher low-energy edge of the fitted range (30-200 keV). This is also the reason why the BB/BB2 temperature distribution of GBM-GECAM is different from that of GECAM-B, and will be discussed in Subsection 3.5. ### The Cut-Off Power Law model As for the CPL model (Eq 2), 5 GECAM-B bursts and 24 GBM-GECAM bursts can be well fitted. If the low-energy edge of GECAM-B bursts is greater than \(\sim\)30 keV, it becomes difficult to effectively constrain the CPL model. As a result, only a limited number of bursts can be successfully fitted using this model. The CPL model is not recommended for fitting SGR burst data observed by GECAM-B, especially as the low-energy edge may increase in future operations, or for other instruments with a similar energy range. The distributions of \(\alpha\) and \(E_{\rm peak}\) are presented in the upper two panels of Fig 7. The GBM-GECAM distributions can be fitted with Gaussian functions (see Table 2 for central values and sigmas). Most of the \(E_{\rm peak}\) values range from approximately 20 to 60 keV. The distributions of the two parameters are similar to those in previous studies. However, the correlation between \(\alpha\) and the fluence, as well as the relation between \(E_{\rm peak}\) and the fluence, is only weak, as shown in the lower two panels of Fig 7. ### The Other Models Fig 8 shows the histogram distributions of the parameters of the BBPL (the sum of Eq 3 and Eq 5), OTTB (Eq 4), BB (Eq 3), and PL (Eq 5) models. The distributions can be fitted with Gaussian functions (see Table 2 for central values and sigmas). The \(kT\) of the BBPL is similar to that of the BB, and the photon index (\(\alpha\)) of the BBPL is similar to that of the PL. The \(kT\) of the OTTB is similar to the \(E_{\rm peak}\) of the CPL, since the \(kT\) of the OTTB is equivalent to the \(E_{\rm peak}\) of a CPL with \(\alpha=-1\). The BB/OTTB models in GECAM-B and GBM-GECAM have different \(kT\) values, which can be attributed to differences in the energy range used for fitting. ### The Multi-Temperature Black Body The \(kT\) distribution of the BB2/BB models as observed by GECAM-B differs from that of the GBM-GECAM samples due to differences in the energy range used for fitting.
Thus we assume that the SGR bursts consist of multiple Black Bodies (BBs), and that the temperatures of these BBs follow a power law distribution, \[f(kT)=kT^{-\beta} \tag{6}\] We then integrate Eq (6) from \(kT_{\rm min}\) to infinity. The spectrum is defined as \[N(E)=K\int_{kT_{\rm min}}^{\infty}f(kT)\frac{E^{2}}{\exp(\frac{E}{kT})-1}d(kT) \tag{7}\] where \(K\), \(\beta\) and \(kT_{\rm min}\) are the normalization coefficient, the power-law temperature index, and the minimum temperature, respectively. The fit results are shown in Fig 9 and listed in Appendix A. The \(\beta\) and \(kT_{\rm min}\) distributions of GECAM-B and GBM-GECAM are similar to each other, indicating that they originate from similar thermal radiation even though the \(kT\) distributions of the BB2/BB models differ. Figure 5: The temperature distributions of the BB2 model of GECAM-B and GBM-GECAM. Therefore, the MBB model is recommended for analyzing the BB temperature of magnetar bursts, even when using different instruments with similar detection energy ranges. As shown in the right panel of Fig 10, the \(kT\) of the BB model exhibits a Log-Linear correlation with the \(\beta\) of the MBB model. We fit the correlation (\(\beta\propto kT\)) with a simple power law function using both GECAM-B and GBM-GECAM samples, and obtain the slope: \(-0.25\pm 3\mathrm{E}{-16}\). This indicates that a higher BB-model temperature corresponds to a wider temperature distribution, ranging from \(kT_{\mathrm{min}}\) up to an even higher temperature, as we expected. The left panel of Fig 10 shows the correlation between \(\beta\) and \(kT_{\mathrm{min}}\). The correlation (\(\beta\propto kT_{\mathrm{min}}\)) is also fitted by a power law function, which gives the slope: \(0.11\pm 1.3\mathrm{E}{-14}\). However, the correlations between \(\beta\) and burst fluence/duration, as well as between \(kT_{\mathrm{min}}\) and burst fluence/duration, are weak, as shown in Fig 11. Interestingly, the temperature values (\(kT_{\mathrm{min}}\sim 5\) keV) of the MBB model are similar to those of the Multicolor Blackbody model (Hou et al., 2018), which describes a superposition of a series of Black Bodies with different temperatures for GRB 081221. Compared to GRB 081221, the steep slope (\(\beta\sim 5.5\)) of the temperature distribution indicates a narrow temperature distribution. This is because magnetar bursts are generally softer than typical GRBs. ### Time-Resolved Spectral Analysis GECAM-C, a gamma-ray monitor akin to GECAM-B, was launched in July 2022 aboard the SATech-01 satellite (Zhang et al., 2023).
To examine the observation capabilities of GECAM-C for magnetars, we also conduct a spectral analysis of a burst detected by _Fermi_/GBM, GECAM-B, and GECAM-C, as shown in Fig 3. We chose this burst for analysis because the detection energy ranges of the three instruments are close to one another and the burst has a significant fluence. However, one should note that this burst only serves to exhibit the spectral analysis of GECAM-C and is not contained in the above statistical analysis (2021-01 to 2022-01). We generate the spectral dataset based on the Bayesian block edges and correct the time delays of the three instruments in order to perform joint spectral fitting, as shown in the left panel of Fig 12. The right panel of Fig 12 shows the time-integrated spectral fit with the CPL model (\(E_{\rm peak}=29.85\pm 0.34\) keV, \(\alpha=-0.12\pm 0.08\)). We conduct a separate spectral analysis using GECAM-C for localization research, and the results show that the localization is close to the true position. For more detailed spectral fitting and localization results, please refer to Paper II.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Correlation & PL Index (\(\alpha\)) & Coefficient (\(\rho\)) & p-value & Instrument \\
\hline
\(R_{\rm low}^{2}\propto(kT_{\rm low})^{\alpha}\) & \(-10.92\pm 0.10\) & \(-0.63\) & 1.03E-05 & GECAM-B \\
\(R_{\rm high}^{2}\propto(kT_{\rm high})^{\alpha}\) & \(-7.01\pm 0.39\) & \(-0.92\) & 1.08E-17 & GECAM-B \\
\(R_{\rm both}^{2}\propto(kT_{\rm both})^{\alpha}\) & \(-4.23\pm 0.15\) & \(-0.94\) & 3.33E-40 & GECAM-B \\
\(R_{\rm low}^{2}\propto(kT_{\rm low})^{\alpha}\) & \(-2.26\pm 0.80\) & \(-0.66\) & 2.28E-03 & GBM-GECAM \\
\(R_{\rm high}^{2}\propto(kT_{\rm high})^{\alpha}\) & \(-4.22\pm 0.84\) & \(-0.91\) & 8.56E-08 & GBM-GECAM \\
\(R_{\rm both}^{2}\propto(kT_{\rm both})^{\alpha}\) & \(-3.60\pm 0.13\) & \(-0.91\) & 8.56E-08 & GBM-GECAM \\
\hline
\end{tabular}
\end{table} Table 1: The measured Spearman’s rank correlation coefficients.
Figure 6: The left panel shows the emission areas (\(R^{2}\)) as a function of the low and high temperatures of the BB2 model for GECAM-B. The right panel shows the emission areas (\(R^{2}\)) as a function of the low and high temperatures of the BB2 model for GBM-GECAM. Figure 7: The upper two panels present the distributions of the CPL model parameters (\(\alpha\) and \(E_{\rm peak}\)) for GECAM-B and GBM-GECAM. The curves are Gaussian fits to the histograms and their central values are represented by vertical lines. The lower two panels show the relation of \(\alpha\) and \(E_{\rm peak}\) to the fluence. Figure 8: The distributions of the BBPL, OTTB, BB, and PL model parameters for GECAM-B and GBM-GECAM. Figure 9: The left and right panels are the \(\beta\) and \(kT_{\rm min}\) distributions of the MBB model, respectively. Figure 10: The left panel presents \(kT_{\rm min}\) vs. \(\beta\) for each burst. The right panel presents the temperature (\(kT\)) of the BB model vs. \(\beta\) of the MBB model for each burst. Figure 11: The upper two panels present the correlation between \(\beta\) and burst duration/flux. The lower two panels present the correlation between \(kT_{\rm min}\) and burst duration/flux. Figure 12: The left panel is the \(kT/E_{\rm peak}/\alpha\) evolution as a function of time. The right panel is the time-integrated spectral fit with the CPL model.
\begin{table}
\begin{tabular}{l l c c c}
\hline \hline
Model & Parameter & \(\mu\) & \(\sigma\) & Instrument \\
\hline
BB2 & \(kT_{\rm low}\) (keV) & 9.16 & 0.88 & GECAM-B \\
BB2 & \(kT_{\rm high}\) (keV) & 49.56 & 23.17 & GECAM-B \\
BB2 & \(kT_{\rm low}\) (keV) & 5.72 & 2.21 & GBM-GECAM \\
BB2 & \(kT_{\rm high}\) (keV) & 19.30 & 4.70 & GBM-GECAM \\
CPL & \(\alpha\) & -1.21 & 0.33 & GBM-GECAM \\
CPL & \(E_{\rm peak}\) (keV) & 34.26 & 8.59 & GBM-GECAM \\
BBPL & kT (keV) & 9.18 & 1.30 & GECAM-B \\
BBPL & \(\alpha\) & -1.20 & 0.37 & GECAM-B \\
BBPL & kT (keV) & 8.82 & 0.52 & GBM-GECAM \\
BBPL & \(\alpha\) & -2.16 & 0.13 & GBM-GECAM \\
BB & kT (keV) & 8.74 & 0.58 & GBM-GECAM \\
BB & kT (keV) & 18.15 & 8.77 & GECAM-B \\
OTTB & kT (keV) & 35.83 & 10.22 & GBM-GECAM \\
OTTB & kT (keV) & 54.02 & 11.91 & GECAM-B \\
PL & \(\alpha\) & -2.14 & 0.24 & GBM-GECAM \\
PL & \(\alpha\) & -2.06 & 0.63 & GECAM-B \\
MBB & \(\beta\) & 5.65 & 0.44 & GBM-GECAM \\
MBB & \(\beta\) & 5.11 & 0.46 & GECAM-B \\
MBB & \(kT_{\rm min}\) (keV) & 4.47 & 1.55 & GBM-GECAM \\
MBB & \(kT_{\rm min}\) (keV) & 5.16 & 1.83 & GECAM-B \\
\hline
\end{tabular}
\end{table} Table 2: Gaussian fit parameters (central value \(\mu\) and width \(\sigma\)) of the spectral parameter distributions.
## 4 Summary In this paper, we carry out temporal and spectral analyses based on GECAM-B EVT data from January 2021 through January 2022, which is also helpful for studying the localization (see Paper II). Fig 1 exhibits multiple active burst episodes of SGR J1935+2154 over this year. The left panel of Fig 2 shows that the cumulative distribution of the fluence can be well fitted by a broken power law. The break point is \((4.69\pm 0.17)\times 10^{-8}\) erg\(\cdot\)cm\({}^{-2}\) for GECAM-B samples. The slope of the lower fluences is \(-0.036\pm 0.004\). The slope of the higher fluences is \(-0.65\pm 0.02\). The burst durations follow a Log-Gaussian distribution with a central value of 119.2 ms, as shown in the right panel of Fig 2. The burst durations mentioned above are computed by the Bayesian Block method. The hardness ratio is computed in different energy ranges (H3/H2: 50-200 keV/30-50 keV for GECAM-B) and shown in Fig 4. The hardness ratio ranges from 0.2 to 1.5, with a median value of 0.56. The fluence mentioned above is derived from the product of the burst duration and the flux. The flux is calculated with the best-fit model, which is assessed by the spectral fitting. We carry out a time-integrated spectral analysis using the PGSTAT statistics with the BB2, CPL, BBPL, OTTB, BB, and PL models. The distributions of each model parameter (\(kT\), \(E_{\rm peak}\) or \(\alpha\)) follow a Gaussian distribution whose central value is listed in Table 2. As for the BB2 model, the emission area exhibits a Log-Linear correlation with each corresponding temperature (see Fig 6). As for the CPL model, we find no significant correlation between the fluence and \(E_{\rm peak}\), nor between the fluence and the photon index (\(\alpha\)). The CPL model is not recommended for fitting SGR burst data observed by GECAM-B, especially as the low-energy edge may increase in future operations, or for other instruments with a low-energy threshold higher than 30 keV. To investigate the thermal radiation emitted during a magnetar burst, we assume that the temperatures of the Black Bodies (BBs) follow a power law distribution, as described in Eq 6. Based on this assumption, the spectrum is given by Eq 7.
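As a concrete reading of this construction, Eq 7 can be evaluated by direct numerical integration; the sketch below uses \(\beta\) and \(kT_{\rm min}\) values close to the fitted central values in Table 2, with an arbitrary normalization:

```python
import numpy as np
from scipy.integrate import quad

def mbb(E, K, beta, kT_min):
    """Multi-Temperature Black Body spectrum, Eq (7):
    N(E) = K * int_{kT_min}^{inf} kT^(-beta) * E^2 / (exp(E/kT) - 1) d(kT)."""
    integrand = lambda kT: kT ** (-beta) * E**2 / np.expm1(E / kT)
    value, _ = quad(integrand, kT_min, np.inf)
    return K * value

E_grid = np.geomspace(8.0, 200.0, 50)               # keV
spectrum = [mbb(E, 1.0, 5.5, 5.0) for E in E_grid]  # beta ~ 5.5, kT_min ~ 5 keV
```

For \(\beta>2\) the integrand behaves as \(kT^{1-\beta}\) at large \(kT\), so the integral converges, and the steep fitted slopes indeed concentrate the BB temperatures near \(kT_{\rm min}\).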
We perform a fit of the spectrum using all available datasets, and find that a total of 82 GECAM-B bursts and 39 GBM-GECAM bursts could be well fitted using this MBB model. As shown in Fig 9, the concentration of \(kT_{\rm min}\) values in our analysis indicates that magnetar burst spectra tend to be soft and may be composed of multiple BB components. The steep slope (\(\beta\)) of the temperature distribution further suggests that the majority of the BB temperatures are concentrated around 5 keV. Additionally, the \(\beta\) and \(kT_{\rm min}\) distributions of GECAM-B and GBM-GECAM are similar to each other, which indicates that the MBB model is recommended for analyzing the BB temperature of magnetar bursts, even when using different instruments with similar detection energy ranges. This finding provides important insights into the thermal properties of magnetars, and can help inform future studies of these fascinating objects. This work is supported by the National Key R&D Program of China (2021YFA0718500), the National Natural Science Foundation of China (Grant No. 11833003, 12273042) and the National SKA program of China (2020SKA0120300). The GECAM (Huairou-1) mission is supported by the Strategic Priority Research Program on Space Science (Grant No. XDA15360000, XDA15360102, XDA15360300) of the Chinese Academy of Sciences. ## Appendix A Temporal and spectral properties of SGR J1935+2154 Table 4: Fit results of the BB2 and BB models for each burst (columns: ID; BB2: kT1 (keV), norm1, kT2 (keV), norm2, PGSTAT/DOF; BB: kT (keV), norm, PGSTAT/DOF).
Table 5: Fit results of the CPL and OTTB models for each burst (columns: ID; CPL: photon index, \(E_{\rm peak}\) (keV), norm, PGSTAT/DOF; OTTB: kT (keV), norm, PGSTAT/DOF). Table 6: Fit results of the BBPL and PL models for each burst (columns: ID; BBPL: kT (keV), norm1, photon index, norm2, PGSTAT/DOF; PL: photon index, norm, PGSTAT/DOF). Fit results of the MBB model for each burst (columns: ID, \(kT_{\rm min}\) (keV), \(\alpha\), norm, PGSTAT).
2305.10764
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
We introduce OpenShape, a method for learning multi-modal joint representations of text, image, and point clouds. We adopt the commonly used multi-modal contrastive learning framework for representation alignment, but with a specific focus on scaling up 3D representations to enable open-world 3D shape understanding. To achieve this, we scale up training data by ensembling multiple 3D datasets and propose several strategies to automatically filter and enrich noisy text descriptions. We also explore and compare strategies for scaling 3D backbone networks and introduce a novel hard negative mining module for more efficient training. We evaluate OpenShape on zero-shot 3D classification benchmarks and demonstrate its superior capabilities for open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than 10% for existing methods. OpenShape also achieves an accuracy of 85.3% on ModelNet40, outperforming previous zero-shot baseline methods by 20% and performing on par with some fully-supervised methods. Furthermore, we show that our learned embeddings encode a wide range of visual and semantic concepts (e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D and image-3D interactions. Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
Minghua Liu, Ruoxi Shi, Kaiming Kuang, Yinhao Zhu, Xuanlin Li, Shizhong Han, Hong Cai, Fatih Porikli, Hao Su
2023-05-18T07:07:19Z
http://arxiv.org/abs/2305.10764v2
# OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding ###### Abstract We introduce OpenShape, a method for learning multi-modal joint representations of text, image, and point clouds. We adopt the commonly used multi-modal contrastive learning framework for representation alignment, but with a specific focus on scaling up 3D representations to enable open-world 3D shape understanding. To achieve this, we scale up training data by ensembling multiple 3D datasets and propose several strategies to automatically filter and enrich noisy text descriptions. We also explore and compare strategies for scaling 3D backbone networks and introduce a novel hard negative mining module for more efficient training. We evaluate OpenShape on zero-shot 3D classification benchmarks and demonstrate its superior capabilities for open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy of \(46.8\%\) on the 1,156-category Objaverse-LVIS benchmark, compared to less than \(10\%\) for existing methods. OpenShape also achieves an accuracy of \(85.3\%\) on ModelNet40, outperforming previous zero-shot baseline methods by \(20\%\) and performing on par with some fully-supervised methods. Furthermore, we show that our learned embeddings encode a wide range of visual and semantic concepts (e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D and image-3D interactions. Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation. ## 1 Introduction 3D shape understanding has recently garnered a surge of interest driven by the growing demands in real-world applications, such as augmented/virtual reality, autonomous driving, and robotics. Despite significant advancements in 3D recognition and analysis, existing data-driven approaches are still greatly limited by the scale of 3D training datasets and tend to exhibit poor generalization when facing unseen shape categories, hindering the deployment of existing models in real-world applications. Note that 3D shapes and 2D images can be easily linked through rendering, and the dataset scale issue of 2D images has been remarkably addressed, as shown in recent works such as CLIP [53]. Therefore, many recent studies aim to utilize pre-trained 2D image-language models [53; 57] to assist 3D tasks, such as 3D generation [22; 26; 43; 61; 33; 7] and 3D scene-level segmentation [18; 27; 14; 76; 39; 47]. Regarding 3D shape-level understanding, a straightforward idea is to project 3D data to the 2D domain through rendering and use CLIP to analyze the 2D images, thereby enabling zero-shot 3D shape classification [82; 84]. However, these methods suffer from occlusion and information loss during projection, and unnecessary latency due to point cloud rendering and multiple CLIP inferences. To overcome the limitations caused by projection, it is necessary to train a 3D-native model by distilling knowledge from pretrained 2D models. However, training a 3D-native model requires a set of 3D shapes, and the amount of knowledge that can be distilled is determined by the size of the 3D dataset. For example, ULIP [75] aims to learn a joint representation space between language, 2D images, and 3D shapes, but uses a small-scale 3D dataset ShapeNetCore [8] for knowledge distillation. 
Specifically, ULIP fixes the 2D CLIP text and image encoders and trains a dedicated 3D-native point cloud encoder to extract 3D shape representations. The 3D encoder strives to align the 3D shape embedding space with the CLIP image and language embedding spaces by utilizing contrastive learning across all three modalities. However, since ULIP is only trained on 52K shapes of 55 object categories, it still struggles with out-of-distribution shape categories and fails to demonstrate an impressive open-world understanding of 3D shapes. In this work, we propose a novel method called OpenShape, which follows a similar paradigm as ULIP but aims to achieve a more generalized and scalable joint representation space encompassing language, 2D images, and 3D shapes. Our focus mainly lies on scaling up representation learning and addressing corresponding challenges. In OpenShape, we emphasize four key factors during the training process: (a) data scale: we significantly increase the scale of 3D training data by combining four public 3D shape datasets, resulting in 876k 3D shapes covering much more diverse categories; (b) text quality: the 3D shapes from our main dataset, Objaverse [12], are dominated by inaccurate or uninformative text descriptions. Given the data scale, we propose three strategies to automatically filter and enrich the text descriptions; (c) 3D backbone scaling: since most existing 3D backbones target small datasets, we find that it's important but non-trivial to scale up the 3D backbones; and (d) data resampling: since the ensembled dataset is highly unbalanced, we utilize hard negative mining to improve the model's discriminative ability. We first evaluate OpenShape on the zero-shot 3D shape classification task. As shown in Figure 1, OpenShape outperforms previous zero-shot approaches on the ModelNet40 dataset by at least 20%. Moreover, OpenShape excels at handling long-tail categories. On the challenging Objaverse-LVIS dataset, which contains 1,156 categories, OpenShape achieves a 46.8% accuracy, significantly surpassing previous methods. Notably, this performance gap remains even when ULIP is retrained on our ensembled datasets, highlighting the superiority of our text enrichment and training strategies. Besides zero-shot classification, we present demos that showcase the wide range of visual and semantic concepts learned by OpenShape. For example, in Figure 1-right, we take two 3D shapes as input and use their OpenShape embeddings to retrieve the top three shapes whose embeddings are simultaneously closest to both inputs from our ensembled dataset. The retrieved shapes exhibit an interesting combination of the semantic and geometric elements from both input shapes. Figure 1: **Left**: Zero-shot shape classification on the Objaverse-LVIS (1,156 categories) and ModelNet40 datasets. OpenShape outperforms previous methods by a large margin. We exclude shapes in Objaverse-LVIS during training, and we also retrain ULIP [75] on our ensembled training shapes for fair comparison. **Right**: Our shape representations encode a broad range of semantic and visual concepts. We input two 3D shapes and use their shape embeddings to retrieve the top three shapes whose embeddings are simultaneously closest to both inputs. See Section 4.4 for more details.
Furthermore, since we align our 3D shape embedding space with the CLIP language and image embedding space, we demonstrate that OpenShape embeddings can be easily integrated with other CLIP-based models to perform cross-modality tasks such as point cloud captioning and point cloud-conditioned image generation. ## 2 Related Work ### CLIP for 3D Learning Image-language models like CLIP have achieved remarkable performance through large-scale image-text pretraining [53; 29; 35; 80; 4; 54; 59]. As these models excel at capturing rich visual concepts and possess impressive zero-shot capabilities, they have been applied to various 3D vision tasks. For instance, numerous recent works utilize CLIP to facilitate zero-shot text-to-3D generation [22; 26; 43; 61; 33; 7; 32; 5; 28; 74; 38], typically through CLIP-guided per-scene optimization. From a recognition perspective, some works focus on scene-level representation, aiming to leverage CLIP priors for zero-shot 3D segmentation or detection in both indoor [18; 27; 14; 76; 39; 47; 79; 23; 58; 81; 31] and outdoor scenes [9; 21]. Meanwhile, another line of work focuses on shape-level understanding, targeting zero-shot shape classification [82; 84; 51; 75; 19] and part segmentation [37; 1]. There are two primary working paradigms for these methods. The first [82; 84; 24] involves using images as an intermediate representation, projecting 3D point clouds into 2D and employing 2D CLIP for inference. However, these methods typically suffer from occlusion and information loss during projection, along with unnecessary latency due to point cloud rendering and multiple 2D CLIP inferences. The second paradigm involves training a 3D-native encoder attempting to distill or fuse CLIP features into 3D representations. Our paper follows this paradigm. ### 3D Shape Representation Learning Various works have studied self-supervised pretraining for point clouds by designing pretext tasks [15; 66; 48; 2; 64] such as self-reconstruction [55; 13; 3; 69], masked auto-encoding [46; 77; 20], distortion reconstruction [62; 42; 65], normal estimation [55], and contrastive learning [83; 60; 73]. These tasks enhance models' shape representations and improve their performance on downstream applications, although they do not involve multimodal semantic alignments during pretraining. Recently, some works [51; 75; 19], exemplified by ULIP [75], have explored learning multimodal joint representations for 3D shapes. They train 3D-native shape encoders by aligning 3D shape embeddings with CLIP's language and/or image embeddings through multimodal contrastive learning. Works like ReCon [51] further combine cross-modal contrastive learning with masked auto-encoding for added enhancement. While these methods allow for zero-shot 3D classification through the computation of 3D-text similarity, the amount of distilled knowledge and the model capability are heavily limited by the small-scale training datasets used. Our work follows this paradigm but aims to learn more generalizable and scalable representations to enable open-world 3D shape understanding. ## 3 Method We propose a novel method, _OpenShape_, for learning generalizable and scalable multi-modal joint representations between language, 2D images, and 3D shapes, as shown in Figure 2. We first introduce the multi-modal contrastive learning framework we used for aligning representations of the three modalities in Section 3.1. We then elaborate how we create our training sets and enrich our text data in Sections 3.2 and 3.3.
In Section 3.4, we present how we scale up our 3D backbone models. Finally, we propose a hard negative mining strategy to enhance contrastive learning in Section 3.5. ### Multi-Modal Representation Alignment We aim to learn 3D shape representations that are aligned with the pretrained CLIP embedding spaces of language and image. As shown in Figure 2 (c), we train a 3D-native encoder \(f^{P}\) that takes a 3D point cloud as input and extracts the 3D shape feature. Following previous works [51; 75; 19], such as ULIP [75], we utilize multi-modal contrastive learning for representation alignment. Since CLIP is pretrained on much larger-scale data, we freeze both its text encoder \(f^{T}\) and its image encoder \(f^{I}\) during feature alignment to preserve CLIP's feature priors and avoid model collapse. Specifically, given a sampled batch of triplets \(\{(P_{i},T_{i},I_{i})\}\), where \(P_{i}\) denotes a point cloud of a 3D shape, and \(T_{i}\) and \(I_{i}\) denote the corresponding text and image, the contrastive loss is calculated as: \[-\frac{1}{4n}\sum_{i}\left(\log\frac{\exp(h_{i}^{P}\cdot h_{i}^{T}/\tau)}{\sum_{j}\exp(h_{i}^{P}\cdot h_{j}^{T}/\tau)}+\log\frac{\exp(h_{i}^{T}\cdot h_{i}^{P}/\tau)}{\sum_{j}\exp(h_{i}^{T}\cdot h_{j}^{P}/\tau)}+\log\frac{\exp(h_{i}^{P}\cdot h_{i}^{I}/\tau)}{\sum_{j}\exp(h_{i}^{P}\cdot h_{j}^{I}/\tau)}+\log\frac{\exp(h_{i}^{I}\cdot h_{i}^{P}/\tau)}{\sum_{j}\exp(h_{i}^{I}\cdot h_{j}^{P}/\tau)}\right) \tag{1}\] where \(n\) is the number of shapes in a batch; \(\tau\) is a learnable temperature; \(h_{i}^{P}=f^{P}(P_{i})/|f^{P}(P_{i})|\), \(h_{i}^{T}=g^{T}(f^{T}(T_{i}))/|g^{T}(f^{T}(T_{i}))|\), and \(h_{i}^{I}=g^{I}(f^{I}(I_{i}))/|g^{I}(f^{I}(I_{i}))|\) denote the normalized projected features of \(P_{i}\), \(T_{i}\), and \(I_{i}\), where \(g^{T}\) and \(g^{I}\) are two learnable linear projections. Since \(f^{T}\) and \(f^{I}\) are frozen, we extract all \(f^{T}(T_{i})\) and \(f^{I}(I_{i})\) before training and cache them for acceleration. In most of our experiments, we utilize OpenCLIP ViT-bigG-14 [25] as the pretrained CLIP model. ### Ensembling 3D Datasets Since the scale and diversity of training triplets play a crucial role in learning scalable shape representations, we ensemble four of the currently largest public 3D datasets for training, as shown in Figure 2 (a), resulting in 876k training shapes. Among these four datasets, ShapeNetCore [8], 3D-FUTURE [16] and ABO [11] are three popular datasets used by prior works. They contain human-verified high-quality 3D shapes, but only cover a limited number of shapes and dozens of categories. The Objaverse [12] dataset is a more recent dataset, containing many more 3D shapes and covering significantly more diverse categories. However, shapes in Objaverse are mainly uploaded by web users and not verified by experts, and thus have uneven quality and exhibit highly unbalanced distributions, necessitating further processing. To create triplets for training, for each shape, we sample 10,000 points from the mesh surface and interpolate the point colors according to the mesh textures. We also render 12 color images from preset camera poses that uniformly cover the whole shape. For datasets providing thumbnails, we include them as part of the image candidates, since they typically capture the shape from a better camera view. For the Objaverse dataset, we use the model name as the raw text for each shape. For other datasets, we utilize the provided metadata to create raw texts (see supplementary for details).
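As a concrete reading of Eq. (1), the following is a minimal PyTorch sketch of the tri-modal contrastive loss (the tensor names are hypothetical; h_text and h_img would come from the frozen, cached CLIP encoders followed by the learnable projections):

```python
import torch
import torch.nn.functional as F

def trimodal_contrastive_loss(h_pc, h_text, h_img, log_tau):
    """Eq. (1): symmetric InfoNCE between shape-text and shape-image pairs.
    h_pc, h_text, h_img: (n, d) L2-normalized embeddings; log_tau: learnable scalar."""
    tau = log_tau.exp()
    logits_pt = h_pc @ h_text.t() / tau   # (n, n) shape-vs-text similarities
    logits_pi = h_pc @ h_img.t() / tau    # (n, n) shape-vs-image similarities
    target = torch.arange(h_pc.size(0), device=h_pc.device)
    return 0.25 * (F.cross_entropy(logits_pt, target) +
                   F.cross_entropy(logits_pt.t(), target) +
                   F.cross_entropy(logits_pi, target) +
                   F.cross_entropy(logits_pi.t(), target))
```

Each cross-entropy term averages \(-\log\) of the softmax diagonal over the batch, reproducing the \(\frac{1}{4n}\) normalization of Eq. (1).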
During each pretraining iteration, we randomly sample one rendered image or thumbnail for each shape, and apply standard augmentation to the point clouds [75]. ### Text Filtering and Enrichment We find that only applying contrastive learning between 3D shapes and 2D images is insufficient to fuel zero-shot 3D classification, even when training on large-scale datasets. We conjecture that this is caused by the inherent domain gap in CLIP's language and image embedding spaces, which is also observed by previous studies [36; 67]. Consequently, 3D-text alignment is not guaranteed even if we obtain good 3D-image alignments via contrastive learning. Therefore, we need to explicitly align 3D shapes with text. Along this process, to facilitate better 3D-text alignment, we introduce 3 techniques to improve the text quality: filtering, captioning, and image retrieval, as shown in Figure 2 (b). Figure 2: (a) We ensemble four public 3D shape datasets, resulting in 876k shapes that encompass diverse categories and concepts. (b) We propose three strategies to automatically filter and enrich the noisy texts in the original datasets. (c) We train a 3D point cloud encoder to align the 3D shape embedding space with the CLIP’s text and image embedding spaces. We perform cross-modal contrastive learning with scaled 3D backbones and hard negative mining. (d) OpenShape embeddings can be easily integrated with other CLIP-based models, enabling various cross-modality tasks. **Filtering.** As shown in Figure 3, the 3D shapes from our main dataset, Objaverse, are dominated by noisy text descriptions ("names") uploaded by web users. Many of the problematic texts can be identified from the text itself, without seeing the corresponding 3D shape. We thus leverage a powerful large language model, GPT-4 [45], to filter out inaccurate or uninformative text descriptions. We find that GPT-4 excels at recognizing irrelevant content, such as timestamps, pure model numbers, incomprehensible descriptions, random filenames (e.g., new project), and random characters. Through GPT-4, we filter out about 30% of the raw user texts. Note that we only filter the texts, and still keep all shapes for training. More details, such as the prompts we used, are presented in the supplementary. **Captioning.** We utilize BLIP [34] and the Azure cognition services to caption the 2D thumbnails (if present, or images rendered from a fixed frontal view) of the 3D models, obtaining two texts for each shape. As shown in Figure 3, the captioning models can usually produce meaningful and descriptive captions that either enhance user-uploaded texts or replace low-quality ones. We also notice that the two caption models complement each other, leading to better performance. **Image Retrieval.** In addition to image captioning, we also perform image retrieval to obtain additional descriptions of the 3D models. We retrieve k-NN images of shape renderings from the LAION-5B dataset [63] using the CLIP ViT-L retrieval index [6]. We then take the captions of the k-NN images as the retrieved texts for our 3D models. Compared with captioning model generations, retrieved texts cover a wider range of text styles. They can also include more fine-grained semantics than both the user texts and the generated captions (e.g., “Labrador” in Figure 3). In each iteration of pretraining, for each shape, we first randomly sample a text source category among the raw text (if not filtered out), the captions, and the retrieved texts. We then select a text candidate from the selected category.
We also apply the template-based prompt engineering technique used in ULIP [75] to both training texts and test-time category names. Specifically, we extend a word or a phrase to a collection of templated simple sentences and take their average embedding. \begin{table} \begin{tabular}{c|c|c c|c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\#Param} & \multicolumn{2}{c|}{Train on ShapeNet [8]} & \multicolumn{2}{c}{Train on Ens-no-LVIS} \\ \cline{3-6} & & MNet40 & O-LVIS & MNet40 & O-LVIS \\ \hline PointNet [49] & 1.3M & 67.0 & 9.3 & 74.9 & 24.4 \\ DGCNN [70] & 2.3M & 67.8 & 9.0 & 74.2 & 24.8 \\ PointMLP [40] & 9.3M & 73.5 & 12.9 & 82.9 & 36.6 \\ PointNeXt [52] & 2.8M & 72.6 & 12.2 & 81.6 & 33.8 \\ PointBERT [78] & 5.1M & 70.3 & 10.8 & 84.5 & 37.0 \\ SparseConv [10] & 5.3M & 70.7 & 10.6 & 78.8 & 31.7 \\ \hline std. dev. & & 2.3 & 1.4 & 3.9 & 5.1 \\ \hline \end{tabular} \end{table} Table 1: Comparison of different 3D backbones **before scaling up their parameters**. Models are trained on ShapeNet [8] or our ensembled dataset excluding Objaverse-LVIS [12]. Zero-shot classification performance is evaluated on ModelNet40 [72] and Objaverse-LVIS [12]. Figure 4: Accuracy on Objaverse-LVIS [12] when _scaling up_ the parameters of different models. Figure 3: **Text Filtering & Enrichment Examples** In each example, the left section features the thumbnail, model name, and GPT-4 filtering results. The upper right section shows image captions from two captioning models, while the lower right section displays retrieved images and their corresponding texts. ### Scaling Up 3D Point Cloud Backbones Previous works on 3D point cloud learning have primarily focused on smaller-scale datasets like ShapeNet. These techniques may not be directly applicable to our larger-scale ensembled dataset and need to be scaled up accordingly. We find that different 3D backbones may exhibit distinct behavior and scalability when trained on datasets of varying sizes. Specifically, we compare six popular backbones trained on ShapeNet or our ensembled dataset by evaluating their zero-shot classification performance on the ModelNet40 [72] and Objaverse-LVIS datasets (for now, these backbones are trained with their original configurations and without scaling up model sizes). **Objaverse-LVIS** is a subset of the Objaverse dataset with human-verified category labels. With 1,156 categories, it serves as a suitable dataset for evaluating zero-shot long-tail classification, and we exclude all shapes of Objaverse-LVIS from this experiment. Results are shown in Table 1. We find that when trained on ShapeNet, all backbones share similar performances. However, when trained on our ensembled dataset, the performance gap between backbones increases significantly. This suggests that while the original versions of these backbones share a similar number of parameters, some may have been saturated when trained on small datasets, while others have not. We also explore the performance and scalability of these backbones when scaling up the model sizes and training on our ensembled dataset. Please refer to the supplementary for details on how we scale up each model. As shown in Figure 4, we observe that all 3D backbones benefit significantly from model scaling. However, traditional backbones without a shrinking hierarchical structure, such as DGCNN and PointNet, must operate entirely on dense points or model the relationships (e.g., through kNN) between dense points.
As a result, they become more time-consuming and memory-intensive when scaled up compared to more modern backbones. We therefore select PointBERT [78] (Transformer-based) and SparseConv [10] (convolution-based) as our 3D backbones for the remaining experiments, as they exhibit strong performance and scalability. ### Hard Negative Mining Our ensembled dataset exhibits a high degree of class imbalance. Certain common categories, such as building, may occupy tens of thousands of shapes, while many other categories, such as walrus and wallet, are underrepresented with only a few dozen or even fewer shapes. Consequently, when randomly constructing batches, it is unlikely that shapes from two confusing categories (e.g., apples and cherries) will be contrasted within the same batch. Inspired by some previous works [56; 30], we propose an offline hard negative mining strategy for improving the training efficiency and performance. Specifically, in the first round of training, we train our model with random batches until it is about to converge. We then compute the kNN for each shape in the learned 3D embedding space. In the second round of training, for each iteration, we randomly select \(s\) seed shapes and then obtain \(m\) neighbors from the kNN results of each seed shape, resulting in \(s\times m\) shapes per batch. In this way, confusing pairs are more likely to be selected in a single batch. However, this may also introduce false negative pairs (e.g., two apples) into contrastive learning. To mitigate this issue, we leverage image and text embeddings to filter out pairs sharing similar texts when calculating the contrastive loss. Specifically, for two shapes \(i\) and \(j\) selected from the same seed shape, if \(h_{j}^{T}\cdot h_{i}^{I}+\delta>h_{i}^{T}\cdot h_{i}^{I}\), where \(h^{T}\) and \(h^{I}\) are text and image embeddings, and \(\delta\) is a small threshold, we believe that the text embeddings of \(i\) and \(j\) are very close to each other, and we remove \(j\) from \(i\)'s negative examples when calculating the contrastive loss. By employing this strategy to construct batches, we observe faster and better model learning. ## 4 Experiments ### Zero-Shot Shape Classification We evaluate the zero-shot classification performance of our models on three benchmarks: the traditional ModelNet40 [72] and ScanObjectNN [68], as well as a new benchmark, Objaverse-LVIS [12]. ModelNet40 and ScanObjectNN consist of 40 and 15 common categories, respectively. Objaverse-LVIS is an annotated subset of Objaverse [12] and comprises 46,832 shapes among 1,156 LVIS [17] categories. With a much larger base of classes than other benchmarks, Objaverse-LVIS presents a challenging long-tailed distribution, making it a better reflection of models' performance in open-world scenarios. We compare OpenShape with existing zero-shot approaches, including PointCLIP [82], PointCLIPv2 [84], ReCon [51], CG3D [19], CLIP2Point [24], and ULIP [75]. Among them, PointCLIP [82] and PointCLIPv2 [84] project point clouds into 2D images and directly utilize 2D CLIP for inference, while the other methods leverage the CLIP embedding spaces for alignment and require 3D shapes for training. We report results on these baselines using their released checkpoints. To better analyze the source of our performance gains, we also retrain the baseline ULIP [75] on our ensembled shape dataset, but we use the original texts in the four constituent datasets along with the official codebase without backbone scaling.
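Throughout this section, zero-shot prediction follows the standard CLIP-style protocol: each point cloud is embedded by the 3D encoder, each candidate category name by the CLIP text encoder (with prompt templates averaged, as described earlier), and the category with the highest cosine similarity is returned. A minimal sketch of the protocol (variable names are ours; embeddings are assumed L2-normalized):

```python
import numpy as np

def zero_shot_classify(shape_emb, text_emb):
    """Zero-shot classification by cosine similarity.

    shape_emb: (N, d) L2-normalized 3D shape embeddings.
    text_emb:  (K, d) L2-normalized CLIP text embeddings, one per category
               (e.g., averaged over prompt templates).
    Returns the Top-1 predicted category index for each shape.
    """
    sims = shape_emb @ text_emb.T  # (N, K) cosine similarities
    return sims.argmax(axis=1)

# Hypothetical toy usage with random embeddings:
rng = np.random.default_rng(0)
s = rng.normal(size=(4, 8)); s /= np.linalg.norm(s, axis=1, keepdims=True)
t = rng.normal(size=(3, 8)); t /= np.linalg.norm(t, axis=1, keepdims=True)
print(zero_shot_classify(s, t))
```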
We train OpenShape and ULIP on three different sets of training shapes: "**Ensembled**" denotes using all shapes from the four datasets; "**Ensembled (no LVIS)**" is the same but excludes all shapes from the Objaverse-LVIS subset; "**ShapeNet**" only includes shapes from the ShapeNet [8] dataset. Note that even when LVIS shapes are included in the training shapes (i.e., the "Ensembled" dataset), their test-time category labels are not necessarily included in their training texts. Please refer to the supplementary for more training and evaluation details. Table 2 shows the results. We observe that OpenShape consistently outperforms prior approaches, even when trained only on ShapeNet. When models are trained on our larger-scale ensembled dataset, they receive a significant performance boost. In this case, OpenShape still surpasses retrained ULIP by a significant margin, demonstrating the advantages of our text enrichment, backbone scaling, and other training strategies. Specifically, OpenShape greatly improves the classification accuracy on the long-tail categories in Objaverse-LVIS from a dull \(<10\%\) to **46.8\(\%\)**, outperforming the retrained ULIP by about 20 points and reaching a decent top-\(5\) accuracy of **77.0\(\%\)**. These results demonstrate OpenShape's capability to recognize open-world objects effectively. As for ModelNet40, OpenShape achieves an **85.3\(\%\)** accuracy, surpassing previous methods by a substantial margin of at least 20 percent. OpenShape also achieves impressive top-3 and top-5 accuracies of **96.5\(\%\)** and **98.0\(\%\)**. To the best of our knowledge, this is the first time zero-shot methods have matched the performance of a fully-supervised 3D learning method on ModelNet40, where OpenShape outperforms fully-supervised 3D ShapeNets [72] and VoxNet [41]. In addition, on ScanObjectNN, which contains challenging real scans with noise and occlusion, OpenShape exhibits decent sim-to-real transfer capabilities. To contextualize, OpenShape-SparseConv achieves **56.7\(\%\)** zero-shot accuracy on ScanObjectNN without specific sim-to-real training, which surpasses the \(52.7\%\) reported by SKPConv [71], a recent method specially designed for sim-to-real transfer in point cloud classification tasks.

[Table 2: Zero-shot classification accuracies (Top1/Top3/Top5) on Objaverse-LVIS [12], ModelNet40 [72], and ScanObjectNN [68], comparing PointCLIP [82], PointCLIP v2 [84], ReCon [51], CG3D [19], CLIP2Point [24], ULIP-PointBERT [75], OpenShape-SparseConv, and OpenShape-PointBERT across training shape sources.]

### Few-Shot Linear Probing

In the literature, linear probing is a common way to assess the representation learning capabilities of a model. To perform linear probing, we gather and freeze the representation vectors from all samples in a dataset. Subsequently, we train a linear classifier using these fixed vectors and few-shot class labels. We evaluate the accuracy of the linear classifier on three benchmarks: Objaverse-LVIS [12], ModelNet40 [72], and ScanObjectNN [68].
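The probing protocol can be sketched as follows (a minimal illustration, assuming precomputed frozen embeddings; scikit-learn's logistic regression stands in for the linear classifier):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def few_shot_linear_probe(train_emb, train_y, test_emb, test_y):
    """Fit a linear classifier on frozen embeddings and report test accuracy.

    train_emb: (n_classes * n_shot, d) frozen embeddings of the few-shot samples.
    train_y:   their class labels.
    test_emb, test_y: held-out embeddings and labels from the same benchmark.
    """
    clf = LogisticRegression(max_iter=1000).fit(train_emb, train_y)
    return (clf.predict(test_emb) == test_y).mean()

# Hypothetical toy usage with random data:
rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(20, 16)), rng.integers(0, 4, 20)
Xte, yte = rng.normal(size=(50, 16)), rng.integers(0, 4, 50)
print(few_shot_linear_probe(Xtr, ytr, Xte, yte))
```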
Figure 5 summarizes the performance of OpenShape in comparison with ULIP [75] (official release and our retrained versions) and PointCLIPv2 [84]. On the most challenging Objaverse-LVIS benchmark, OpenShape outperforms all other methods by a large margin. Notably, zero-shot OpenShape beats few-shot linear probes of other methods. On ModelNet40 and ScanObjectNN, we do not see a large performance margin between OpenShape and retrained ULIP. We hypothesize that for few-shot ModelNet40, the error is dominated by in-category sample bias rather than the representation quality, while for ScanObjectNN, the domain gap plays a major role. Since both OpenShape and retrained ULIP are exposed to the same source domain of training objects, their few-shot out-of-domain generalization performances tend to be similar. ### Ablation Study We perform various ablations by training a scaled version of SparseConv [10] on the ensembled dataset and then evaluating it on the Objaverse-LVIS [12] and ModelNet40 [72] zero-shot classification benchmarks, unless otherwise specified. The results are shown in Table 3 and Figures 6 and 7. \begin{table} \begin{tabular}{c|c c} \hline \hline Variant & O-LVIS & MNet40 \\ \hline No Objaverse shapes & 13.9 & 75.5 \\ Only Objaverse shapes & 41.6 & 79.2 \\ No backbone scale up & 31.7 & 78.7 \\ \hline No caption \& retrieval & 37.0 & 82.9 \\ No text filtering & 41.4 & 82.9 \\ \hline No point rgb, only xyz & 39.6 & 83.6 \\ No text contras. learning & 23.3 & 67.4 \\ No image contras. learning & 41.0 & 81.0 \\ \hline Full & 42.0 & 83.1 \\ Full + hard mining & 43.4 & 83.4 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study. Top 1 zero-shot accuracies on ModelNet40 [72] and Objaverse-LVIS [12] are shown. **Data and Model Scaling.** We investigate the impact of training data by training (1) without Objaverse shapes or with only Objaverse shapes (Tab. 3), and (2) with different ratios of our ensembled dataset (Fig. 6). We observe that training with \(1\%\) of our ensembled dataset (about \(8.8\)k shapes) achieves similar or better zero-shot performance than training without Objaverse shapes (about \(77.1\)k shapes), indicating that the diversity of training data is sometimes more crucial than the scale. In addition, we compare the performances between scaled-up and non-scaled-up backbones. From Tab. 3, we demonstrate that model scaling plays an essential role when training on our large-scale ensembled dataset (also Fig. 4). **Text Filtering and Enrichment.** As shown in Tab. 3, both text filtering and text enrichment are beneficial for performance. We also investigate the specific text enrichment strategies to use for the SparseConv and PointBERT backbones. In Fig. 7, we observe that both image captioning and text retrieval are helpful, and including both yields the best results. Notably, PointBERT improves by more than 10 points from text enrichment, highlighting the significance of enhancing text quality. **Other Aspects.** We also conduct additional ablation studies on color information, contrastive loss components, and our hard-negative mining strategy in Tab. 3. We observe that OpenShape performs well with only \(xyz\) coordinates as input and no RGB color. While the 3D-image contrastive loss is also helpful, we observe that 3D shape-text alignment plays an essential role in the model's zero-shot generalization, which necessitates our text filtering and text enrichment strategies that significantly enhance text quality.
Lastly, by employing our hard negative mining strategy, OpenShape effectively addresses the issue of unbalanced data distribution, leading to further improvements in performance. ### Cross-Modal Applications **Multi-modal 3D Shape Retrieval.** Through OpenShape multi-modal representations, we can index and retrieve 3D shapes from images, texts, or point clouds. In this section, we retrieve 3D shapes from our ensembled dataset by calculating the cosine similarity between input embedding(s) and 3D shape embeddings and performing kNN. As shown in Figure 8, OpenShape is capable of retrieving visually or semantically similar shapes from a single image or point cloud input. OpenShape embeddings encode a wide range of visual and semantic concepts. In Figure 9, we show that OpenShape supports retrieving 3D shapes from detailed text descriptions, which include fine-grained subcategories, attributes, and their combinations. Note that these input texts are typically not present in the raw texts of the retrieved shapes, indicating that OpenShape effectively learns generalizable concepts across shapes. In Figure 1, we provide a demo which takes two 3D shapes as inputs and retrieves the shapes that are simultaneously closest to both inputs. This is achieved by finding \(\operatorname*{arg\,max}_{i}\min(h_{i}^{P}\cdot h_{a}^{P},h_{i}^{P}\cdot h_{b}^ {P})\), where \(h_{a}^{P}\) and \(h_{b}^{P}\) denote normalized shape embeddings of the two input shapes. We can see that the retrieved shapes integrate visual or semantic elements in an interesting manner, highlighting the rich concepts and priors encoded in OpenShape embeddings. **Shape-Conditioned Multimodal Generation.** As OpenShape's 3D shape representations are aligned with CLIP's image and text embedding spaces, they can serve as inputs into other CLIP-based models to facilitate various multimodal generation applications. For example, we show that by feeding our 3D shape embeddings into ClipCap [44], an off-the-shelf image captioning model, along with Stable unCLIP [54], a text-to-image diffusion model, we can perform point cloud captioning and point cloud-conditioned image generation (optional text prompt supported) without extra training or finetuning. Qualitative results are shown in Figure 10. Please refer to the supplementary for more results and details. Figure 10: **(a) Point cloud captioning. (b) Point cloud-conditioned image generation. Our learned 3D shape embeddings can be integrated with off-the-shelf pretrained CLIP-based models (e.g., captioning and image generation models) to support various cross-modal applications.** Figure 9: **Text-input 3D shape retrieval. In each row, we show input texts on the left and two retrieved shapes for each text on the right. OpenShape embedding encodes a wide range of visual and semantic concepts and enables (a) retrieval of fine-grained subcategories (first two rows), and (b) control of attributes (e.g., color, shape, style) and their combinations (last two rows).** ## 5 Discussion and Conclusion We introduce OpenShape, a novel approach for learning scalable and generalizable multi-modal joint representations for 3D shapes. OpenShape representations effectively capture a wide range of semantic and visual concepts, enabling superior capabilities for open-world 3D shape recognition. By aligning OpenShape with CLIP's embedding space, our shape embeddings can be integrated with off-the-shelf CLIP-based models for various cross-modality applications. 
Moving forward, there are several directions worth further exploration: (a) More 3D data. While we utilized 876k 3D shapes during training, this is still quite limited compared to the 2D counterparts. We hope that our work inspires future investments in more resources to build even more powerful 3D representations. (b) Part-level information. Our current shape representations mainly focus on global semantic and visual features, and it would be beneficial to add more part-level supervision during training. (c) Sim-to-real domain gap. Our model is mainly trained on synthetic data, and it is challenging but crucial to explore explicit designs for reducing the domain gap with real-world shapes. ## 6 Appendix ### More Examples of Multi-Modal 3D Shape Retrieval In Figures 11 and 12, we showcase more examples of multi-modal 3D shape retrieval. ### More Examples of Shape-Conditioned Multimodal Generation In Figure 13 and Figure 14, we showcase more examples of point cloud captioning and point cloud-conditioned image generation. Figure 14: **Point cloud-conditioned image generation**. Each row shows three examples (input point clouds and generated images). Figure 13: **Point cloud captioning**. In each row, we show the input point clouds on the left and the generated captions on the right. ### Details on Raw Text Generation and Filtering #### 6.3.1 Raw Text Generation We leverage the metadata from the four datasets to generate the raw texts. Although the original datasets may contain numerous attributes for each shape, we carefully choose the most informative ones to compose the text, ensuring its quality and relevance. **Objaverse**: We utilize the name associated with each shape to serve as the text. **ShapeNetCore**: For each shape, we generate three types of texts: (a) the name, (b) the category name (with a total of 55 categories), and (c) the concatenation of the sub-category names (with a total of 336 sub-categories), separated by commas. **3D-FUTURE**: For each shape, we generate two types of texts: (a) the category, and (b) the concatenation of category, style, theme, and material, separated by commas. **ABO**: For each shape, we generate two types of texts: (a) the item_name, and (b) the product_type. In this way, we generate one or more raw texts for each shape. #### 6.3.2 Raw Text Filtering We employ GPT-4 [45] to filter out uninformative raw texts. To accomplish this, we divide all the raw texts into batches, each containing 256 entries, and process each batch independently using GPT-4. Here is an example illustrating the prompt we used and the corresponding response generated by GPT-4. Afterwards, we combine all the responses to create the final filtering results, effectively removing approximately 30% of the raw texts. ### Details on the Backbone Scaling Experiment In Figure 4 of the main paper, we investigate the performance and scalability of various backbones when scaling up their model sizes. For this experiment, we employ a default resolution of 10,000 points for input point clouds, a batch size of 200, and conduct the experiment on a single A100 GPU. In general, if instructions are given in the original paper of a backbone, we scale up the model as instructed. Otherwise, we scale up the model by expanding its width or depth (i.e., stacking blocks or layers). Specifically, we scale up each backbone as follows: **PointBERT [78].** The scaling parameters are shown in Table 4. We scaled PointBERT to 72.1M parameters beyond the 32.3M version reported in Figure 4 of the main paper.
However, at this scale, the model dramatically overfits on the training data and performs worse on all benchmarks than the 32.3M version. **SparseConv [10].** The smallest version (5.3M parameters) of the model is adapted from the MinkowskiFCNN model by adjusting the width of the final convolution and linear layers. The remaining three models are adaptations of MinkowskiResNet, each varying in the number of basic ResNet blocks used. See Table 5 for the specific scaling parameters. **PointNeXt [52].** PointNeXt is proposed as a scalable version of PointNet++ [50], and includes S/B/L/XL variants in the original paper. We simply adopt these official configurations. **DGCNN [70] and PointNet [49].** For these two backbones without a hierarchical structure, we increase the width of each layer proportionally to scale up to 4xPointNet and 2xDGCNN before we hit the GPU memory limit. As the models operate completely on dense points, it is impractical to use the default 10k-point resolution. We thus reduce the input resolution for the two backbones, resulting in 1k points for DGCNN and 4k points for PointNet. ### Details on Training and Evaluation **Training Details.** We freeze the CLIP text and image encoders and train the 3D encoder and two projection heads on our ensembled dataset using the cross-modal contrastive loss. We train the model on a single A100 GPU with a batch size of 200. Since we precache the text and image CLIP embeddings of all shapes, the training is greatly accelerated and takes about 300 A100 hours for convergence. We utilize an exponential learning rate schedule, and employ a range test to find the initial learning rate. For the 32.3M version of PointBERT, we utilize a learning rate of \(5e-4\); for the 72.1M version of PointBERT, we utilize a learning rate of \(4e-4\); and for other models, we utilize a learning rate of \(1e-3\). For hard-negative mining, the number of seed shapes \(s\) is set to 40, the number of neighbors \(m\) is set to 5 per shape, and the threshold \(\delta\) is set to 0.1. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \# Parameters & \# Layers & Width & \# Heads & MLP Dim & \# Patches & Patch Embed Dim \\ \hline 5.1M & 6 & 256 & 4 & 1024 & 64 & 96 \\ 13.3M & 6 & 512 & 8 & 1024 & 64 & 128 \\ 32.3M & 12 & 512 & 8 & 1536 & 384 & 256 \\ 72.1M & 12 & 768 & 12 & 2304 & 512 & 256 \\ \hline \hline \end{tabular} \end{table} Table 4: Hyperparameters for scaling up PointBERT [78]. \begin{table} \begin{tabular}{c c c} \hline \hline \# Parameters & \# Convolution Layers & \# Linear Layers \\ \hline 5.3M & 7 & 4 \\ 29.0M & 18 & 3 \\ 33.7M & 26 & 3 \\ 41.3M & 42 & 3 \\ \hline \hline \end{tabular} \end{table} Table 5: Hyperparameters for scaling up SparseConv [10]. **Fine-tuning CLIP Text and Image Encoders?** After training OpenShape-PointBERT, we conducted experiments to unfreeze and finetune the CLIP text encoder for a single epoch. However, the results obtained did not demonstrate any noticeable improvement on the benchmarks. Moreover, we observed that finetuning the CLIP text encoder could potentially undermine the generalization capabilities of CLIP and hinder the integration of OpenShape embeddings into existing CLIP-based models. As a result, we choose to freeze the CLIP encoders throughout the entire training process. **Evaluation Details.** We evaluated all baselines using their publicly released pretrained checkpoints. Additionally, we retrained ULIP [75] on our ensembled training shapes using their official code base and backbone networks.
Note that the retrained ULIP model utilized the original raw texts from the four datasets during training (prompt engineering is also applied), rather than our filtered and enriched texts. For ModelNet40 [72], the evaluation is conducted on the test split with 2,468 shapes. Regarding ScanObjectNN [68], we follow ULIP [75] to evaluate on the OBJ_ONLY version, which contains 581 test shapes. For Objaverse-LVIS [12], the input is 10,000 sampled points with point colors. For ModelNet40 [72], the input is 10,000 sampled points without color. For ScanObjectNN [68], we utilize the official 2,048 points without color as input. All methods use the same input during evaluation. The forward inference time on an A100 GPU for a 10,000-point point cloud is approximately 0.9ms for OpenShape-SparseConv and 3.8ms for OpenShape-PointBERT. ### Details on Shape-Conditioned Multimodal Generation **Point Cloud Captioning.** CLIPCap [44] utilizes a 10-token prefix generated from CLIP image embeddings to enable GPT-2 for captioning. In order to align with the off-the-shelf CLIPCap model, we trained a variant of OpenShape-PointBERT that employs CLIP ViT-B/32 embeddings instead of the OpenCLIP ViT-G/14 embeddings used in other experiments. Consequently, we directly input the point cloud encoding, _without normalization_, into CLIPCap for captioning. **Point Cloud-Conditioned Image Generation.** We take the Stable Diffusion v2.1 unCLIP model [54] for image generation and replace the CLIP image condition encoder with our OpenShape encoder to perform image generation conditioned on point clouds (and optionally text prompts). The unCLIP model takes CLIP ViT-L/14 embeddings without normalization as input. To match the embedding space, we trained a variant of OpenShape-PointBERT with CLIP ViT-L/14 embeddings. Additionally, we noticed a significant mismatch of scales (\(L_{2}\)-norms of embedding vectors) between ViT-L/14 image embeddings and OpenShape embeddings. To mitigate this issue, we perform a re-normalization of OpenShape embeddings to an \(L_{2}\)-norm of \(\frac{1}{2}\sqrt{768}\), which is our observed mean \(L_{2}\)-norm of ViT-L/14 image embeddings. We use 50 diffusion steps. The guidance scale can be tuned freely.
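For concreteness, the rescaling described above amounts to the following (a minimal sketch; the function name and usage are ours):

```python
import numpy as np

# Target scale: our observed mean L2-norm of CLIP ViT-L/14 image embeddings.
TARGET_NORM = 0.5 * np.sqrt(768)

def renormalize(shape_emb):
    """Rescale an OpenShape embedding to the L2-norm expected by unCLIP."""
    return shape_emb * (TARGET_NORM / np.linalg.norm(shape_emb))

# Hypothetical usage:
emb = np.random.default_rng(0).normal(size=768)
print(np.linalg.norm(renormalize(emb)))  # ≈ 13.86 = 0.5 * sqrt(768)
```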
2301.02697
Preferences on Ranked-Choice Ballots
This paper formalizes the lattice structure of the ballot voters cast in a ranked-choice election and the preferences that this structure induces. These preferences are shown to be counter to previous assumptions about the preferences of voters, which indicate that ranked-choice elections require different considerations for voters and candidates alike. While this model assumes that voters vote sincerely, the model of ranked-choice elections this paper presents allows for considerations of strategic voting in future work.
Brian Duricy
2023-01-06T19:34:49Z
http://arxiv.org/abs/2301.02697v1
# Preferences on Ranked-Choice Ballots ###### Abstract This paper formalizes the lattice structure of the ballot voters cast in a ranked-choice election and the preferences that this structure induces. These preferences are shown to be counter to previous assumptions about the preferences of voters, which indicate that ranked-choice elections require different considerations for voters and candidates alike. While this model assumes that voters vote sincerely, the model of ranked-choice elections this paper presents allows for considerations of strategic voting in future work. **JEL Codes:** D71, D72 **Keywords:** Ranked-choice voting, preferences, lattice theory ## 1 Introduction Social choice models and results usually require strict preference relations, or those where every alternative is uniquely ranked with respect to the others. This includes the subset of social choice theory dedicated to voting, despite real-life elections that use ranked-choice voting1 either mandating non-strict preference relations or showing that voters effectively vote as if this is the case. Regarding the former, the 2021 Primary Elections for New York City Mayor allowed voters to rank up to five candidates (Board of Elections in the City of New York (2021)), while the Democratic Primary had 13 total candidates to choose from (not including write-ins). Regarding the latter, Kilgour et al. (2020) list 17 ranked-choice elections and, amongst them, the highest average percentage of candidates on a ballot who were ranked was slightly above 80%. Experimental results such as Nielson (2017) similarly show that respondents generally do not approach ranking all--or even most--of the candidates. Ranked-choice elections serve as a compelling counterexample against mandating strict preference relations in all social choice models. The preferences that do appear in ranked-choice elections are an example of the preferences studied in Kreps (1979). This paper focuses on the structure of the ballots used in these elections, referred to as _ranked-choice ballots_. A ranked-choice ballot is the result of a voter having a _top-truncated order_ (or, alternatively, such as in Fitzsimmons and Lackner (2020), a _top order_) over the set of candidates. These terms are fully defined in Section 2, but the intuition is that not all alternatives must be uniquely ranked. Whereas previous work on top-truncated preferences like Ayadi et al. (2022) and Terzopoulou and Endriss (2021) have focused on scoring rules associated with these preferences, this paper examines the more foundational order-theoretic properties2 that arise from equipping a set with a top-truncated order. As Tomlinson et al. (2022) prove that differing ballot lengths can produce different winners in the same instant-runoff election, determining which scoring rule to use is an important related line of research. Footnote 2: A complete treatment of order, and more specifically lattice, theory can be found in Caspard et al. (2012) and Grätzer (2011), respectively. Another area of research on top-truncated preferences focuses on computational questions (e.g., Menon and Larson (2017)). Top-truncated preferences also necessitate a discussion of results that do not require a lattice, as similar work like Chambers and Echenique (2009) is based upon a lattice rather than a semilattice. That a top-truncated set is a join semilattice is the paper's first result, and one that informs the rest of the paper's findings. 
The results in Section 3 follow a unique and smooth path from lattice theory to utility functions, stopping along the way to provide novel applications of results from the preference and voting theory literature. The pairing of lattice theory with preference relations is common, and this paper contributes to this literature by focusing on antitone preference relations. Ranked-choice voting motivates the need for an exploration into if--and how--results from this literature apply to a context that is suited for antitone preferences. This paper is the first to identify the connection between top-truncated preferences and ranked-choice voting, and it connects multiple strands of literature that have previously existed somewhat independently of one another. With an understanding of some mathematical properties of ranked-choice ballots and the preferences that define them, normative work regarding the value of ranked-choice voting vis-a-vis other voting methods will be enhanced. ## 2 The Model and Additional Terminology ### The Model A _ranked-choice election_ \((V,C,\succsim)\) consists of a (possibly infinite) set of voters, \(V\), a finite set of at least three candidates, \(C\), and a complete top-truncated order profile for \((V,C)\), \(\succsim\), which assigns to each voter \(v\in V\) a complete top-truncated order on \(C\). We define \((C,\succsim_{v})\) as a _ranked-choice ballot_ for voter \(v\). In general, \((C,\succsim_{v})\) is a _ballot_, with the type of order profile \(\succsim\) unspecified. Each voter can rank as many candidates (i.e., declare these candidates distinguishable to the others) as they wish, but they must rank at least one candidate. Example 1 below provides a sample ranked-choice ballot and its lattice representation. **Example 1**: Let \(C=\{a,b,c,d,x,y,z\}\) and let voter \(v\in V\)'s preferences over \(C\) be \(x\succ y\succ z\succ a\sim b\sim c\sim d\). This is alternatively represented as \(v\) ranking candidate \(x\) first, \(y\) second, \(z\) third, and candidates \(a\), \(b\), \(c\), and \(d\) unranked and tied for fourth. The lattice construction of this ballot is shown below; straight lines indicate strict preference between candidates and wavy lines indicate indifference between candidates.
[Lattice diagram for Example 1: the chain \(x\succ y\succ z\) drawn with straight lines, with the four mutually indifferent candidates \(a\), \(b\), \(c\), \(d\) below \(z\) and joined to one another by wavy lines.]

A _utility function_ for a voter is a map \(u:C\to\mathbb{R}\) satisfying, for all \(x,y\in C\),

\[u(x)>u(y)\Leftrightarrow x\succ y\] \[u(x)=u(y)\Leftrightarrow x\sim y\]

Additional terminology is needed for a full connection to the results of this paper and is defined in the next subsection.
### Additional Terminology The concepts in this subsection can be divided into two parts: one that focuses on the order- and lattice-theoretic concepts needed, and one that focuses on the concepts regarding the utility function used in this paper. A _partial order_ is a reflexive, transitive, and antisymmetric binary relation. A set with a partial order is a _partially ordered set_. A binary relation \(\succsim\) is _monotone_ if for all \(x,y\in C\), \(x\geq y\Rightarrow x\succsim y\) and _antitone_ if for all \(x,y\in C\), \(x\geq y\Rightarrow x\precsim y\). If a candidate \(x\) is _preferred_ to candidate \(y\) by a voter \(v\), we write \(x\succ_{v}y\). If candidates \(x\) and \(y\) are _indistinguishable_ to voter \(v\), then we write \(x\sim_{v}y\). In the results section of this paper, it is sometimes easier to refer to candidates as _ranked_ or _unranked_; the former refers to candidates that are not indistinguishable to any other candidate5, whereas the latter refers to candidates that are indistinguishable to at least one other candidate. A voter who ranks all candidates except for one trivially causes that last candidate to be ranked as well. Footnote 5: Except, of course, itself, by the reflexivity of the binary relation. A _weak order_ is a partial order where indistinguishability is transitive. A _top-truncated order_ is a weak order where only the minimal elements are indistinguishable to one another, and a set with a top-truncated order is a _top-truncated set_. A partial order where every pair of elements is comparable is a _complete partial order_. A partially ordered set where no pair of elements is indistinguishable is a _totally ordered set_. A _join semilattice_ is a partially ordered set where the least upper bound of any two elements in the set exists6. An element \(x\in C\) is _join-irreducible_ if there exists a unique element \(y\in C\) such that \(x\) covers \(y\)7. Conversely, an element \(x\in C\) is _meet-irreducible_ if there exists a unique element \(y\in C\) such that \(x\) is covered by \(y\). An element is an _atom_ if it covers the least element of the set and is a _co-atom_ if it is covered by the greatest element of the set. Footnote 6: I.e., for all \(x,y\in C\), there exists \(z\) such that \(z=\sup\{x,y\}=x\lor y\). Footnote 7: I.e., \(x\succ y\) and there does not exist an element \(z\in C\) such that \(x\succ z\succ y\). **Remark 2**: Some elementary lattice-theoretic properties of top-truncated sets are noted here without proof. If a top-truncated set has a join-irreducible element, that element is also an atom. A top-truncated set with a join-irreducible element is a totally ordered set. Every top-truncated set contains \(n-1\) meet-irreducible elements and has \(1\leq m\leq n-1\) co-atoms. \(\Box\) A binary relation \(\succsim\) is _modular_ (or _strongly quasisubmodular_) if for all \(x,y,z\in C\), \(x\sim(x\lor y)\Rightarrow x\lor z\sim(x\lor y)\lor z\). A _representation of \(\succsim\)_ is a function \(u:C\to\mathbb{R}\) such that for all \(x,y\in C\), \(x\succsim y\Rightarrow u(x)\geq u(y)\) and \(x\succ y\Rightarrow u(x)>u(y)\). A representation \(u:C\to\mathbb{R}\) is _submodular_ for a join semilattice if for all \(x,y\in C\) such that there exists a greatest lower bound8, \(u(x\wedge y)+u(x\lor y)\leq u(x)+u(y)\).
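To fix intuition before turning to the results, the definitions above can be checked computationally on the ballot of Example 1 (the encoding via rank numbers below is purely illustrative and not part of the formal development); the check verifies that every pair of candidates has a join and that the order is modular, anticipating Theorem 3 and Proposition 4:

```python
# Ballot from Example 1: lower rank number = more preferred;
# the unranked candidates a, b, c, d are tied at the bottom.
rank = {'x': 1, 'y': 2, 'z': 3, 'a': 4, 'b': 4, 'c': 4, 'd': 4}
C = list(rank)

def join(s, t):
    """Least upper bound of {s, t}, following the case analysis of Theorem 3."""
    if s == t:
        return s
    if rank[s] != rank[t]:                        # distinguishable: the preferred one
        return s if rank[s] < rank[t] else t
    # indifferent pair: the lowest-ranked candidate strictly preferred to both
    return max((w for w in C if rank[w] < rank[s]), key=lambda w: rank[w])

def indiff(s, t):                                 # s ~ t
    return rank[s] == rank[t]

# Every pair of candidates has a join (Theorem 3) ...
assert all(join(s, t) in C for s in C for t in C)

# ... and the order is modular (Proposition 4):
# x ~ (x v y)  implies  (x v z) ~ ((x v y) v z).
for x in C:
    for y in C:
        for z in C:
            if indiff(x, join(x, y)):
                assert indiff(join(x, z), join(join(x, y), z))
print("joins exist and modularity holds")
```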
As some methods of ranked-choice voting are used to elect multiple candidates from a single election--with "elect" here either meaning being one of the overall winners of the election or being one of the candidates who moves on to a head-to-head runoff--results relating to the representation of the preference relation are especially important for this context. Footnote 8: I.e., for all \(x,y\in C\), there exists \(z\) such that \(z=\inf\{x,y\}=x\wedge y\). Kalandrakis (2010) focuses on a similar notion, rationalizability. \(u:C\to\mathbb{R}\) is _strictly rationalizable_ if \(u(x)>u(y)\) for each pair \(x,y\in C\) such that \(x\succsim y\) and _rationalizable_ if \(u(x)\geq u(y)\) for each pair \(x,y\in C\) such that \(x\succsim y\). Clearly, if a ballot has multiple unranked candidates, this leads to \(u\) not being strictly rationalizable. \(u:C\to\mathbb{R}\) is _almost strictly rationalizable_ if it is rationalizable over all pairs \(x,y\in C\) and strictly rationalizable for each pair \(x,y\in C\) such that it is not the case that \(x\succsim y\) and \(y\succsim x\). As should be clear from the definitions already provided, this allows for some results to be applied to the context of ranked-choice voting. Finally, \(u:C\to\mathbb{R}\) is _strictly concave_ if \(u(\lambda x+(1-\lambda)y)>\lambda u(x)+(1-\lambda)u(y)\) for all \(x,y\) with \(x\neq y\) and for all \(\lambda\in(0,1)\), and _strictly quasiconcave_ if \(u(\lambda x+(1-\lambda)y)>\min\{u(x),u(y)\}\) for all \(x,y\) with \(x\neq y\) and for all \(\lambda\in(0,1)\). While strategic voting is a usual feature of research regarding ranked-choice voting, providing a utility function that reflects this is beyond the scope of this paper. Contextualizing the results of this paper with a utility function for strategic voting is an area of future research. Strategic voting also potentially complicates analyses that rely on the preference relation being monotone, such as Chambers and Echenique (2008) and Chambers et al. (2020). The structure of a ranked-choice ballot for a voter who votes sincerely reflects an antitone preference relation--the candidate (say, \(x\)) that would provide the voter with the greatest utility is ranked \(1\), descending until the candidate (or candidates) who would provide the voter with the least utility (say, without loss of generality, \(y\)) is ranked \(k\); so \(y>\ldots>x\Rightarrow u(y)<\ldots<u(x)\). Chateauneuf et al. (2017) provide a result, dualized below, that is similar to one found in Chambers and Echenique (2008), but for an antitone preference relation.

## 3 Results

Having defined a ranked-choice ballot above, we begin providing results by formally connecting it to a well-known mathematical structure. **Theorem 3**: _If a ballot is a ranked-choice ballot, then it is a join semilattice._ **Proof:** Let \((C,\succsim)\) be a ranked-choice ballot. Since it is a top-truncated set, it necessarily is a partially ordered set. So all that remains to be shown is that the join exists for each pair of candidates. Let \(x,y\in C\). If \(x\) and \(y\) are distinguishable, then, without loss of generality, say \(x\succ y\); \(x=x\lor y\) immediately follows. If \(x\) and \(y\) are not distinguishable, i.e., \(x\sim y\), there exists at least one other candidate in the election, say, \(z\), since at least one candidate must be ranked. If \(z\) is the only other candidate, then \(z\succ x\sim y\), which in turn means that \(z\succ x\) and \(z\succ y\). So \(z=x\lor y\) similarly follows.
If multiple candidates are ranked, for the previous relationships to hold, select \(z\) as the candidate ranked last amongst them; \(z=x\lor y\) again follows. Therefore, the join exists for each pair of candidates. Hence, \((C,\succsim)\) is a join semilattice. \(\Box\) With a substantive literature on preferences over semilattices, this result is the first to highlight the connection to ranked-choice voting. This, along with a couple of other features inherent in ranked-choice ballots, unlocks some important properties of the utility function associated with these ballots. These properties support the usage of ranked-choice voting as a way to increase the overall utility from an election. The next result is the second of the features needed to satisfy the conditions for the first result regarding utility functions. **Proposition 4**: _If \(\succsim\) is a top-truncated order, then it is modular._ **Proof:** Let \(\succsim\) be a top-truncated order on \(C\) and let \(x,y\in C\) such that \(x\sim(x\lor y)\). Then, since \(x\lor y\) must be a ranked candidate and \(x\sim(x\lor y)\), \(x\) must be \(x\lor y\) since a ranked candidate can only be indistinguishable to itself. So, since \(x=(x\lor y)\), \(x\lor z\sim(x\lor y)\lor z\), satisfying the definition of modularity. Therefore, top-truncated orders are modular. \(\Box\) Ranked-choice ballots are proven to be (finite) join semilattices, with top-truncated orders being modular (or strongly quasisubmodular). Additionally, the top-truncated orders in this model are complete, and thus a type of complete preorder. Finally, as the preferences in this paper are antitone, they are (weakly) decreasing. Therefore, ranked-choice ballots have all of the necessary conditions to satisfy the following proposition. **Proposition 5**: _(Dual of **Corollary 2** from Chateauneuf et al. (2017).) For a complete preorder \(\succsim\) on a finite join semilattice \((C,\succsim)\), the following are equivalent:_ 1. \(\succsim\) _is weakly decreasing and strongly quasisubmodular._ 2. \(\succsim\) _has a weakly decreasing and submodular representation._ \(\Box\)__ We can then establish the subsequent corollary. **Corollary 6**: _A ranked-choice ballot has a submodular representation. \(\Box\)_ We next show that the preferences over ranked-choice ballots allow for a result from Kalandrakis (2010) to hold that further characterizes the utility function associated with these ballots. It helps to first define the following concepts: let \(P\subseteq C\times C\) be the set of pairs of candidates, with \((x,y)\in P\) meaning that \(x\) is (weakly) preferred to \(y\). The potential weakness of preferences is necessary, as indistinguishable candidates \(x,y\) are in \(P\) as the separate pairs \((x,y)\in P\) and \((y,x)\in P\). Let \(Y(P)\) be the set of candidates that are (weakly) preferred to at least one other candidate, and \(N(P)\) be the set of candidates that are (weakly) not preferred to at least one other candidate. Finally, let \(\mathcal{E}(C)\) be the set of extreme points, or the candidates that are unable to be written as a strict convex combination of candidates in \(C\); \(\mathcal{E}(C(P))\) indicates that these candidates are part of at least one pair in \(P\). The following theorem is needed to apply the remainder of the result from Kalandrakis (2010). 
A necessary fact about the set of extreme points regarding ranked-choice ballots is that the highest-ranked candidate in a set and the lowest-ranked candidate are the extreme points; if a set has multiple unranked (i.e., lowest-ranked) candidates, each of those candidates is in the set of extreme points, unless the set contains only unranked candidates, as that set would then have no extreme points. **Theorem 7**: _For all nonempty \(P^{\prime}\subseteq P\), either_ 1. _there exists_ \(x\in\mathcal{E}(C(P^{\prime}))\) _such that_ \(x\not\in Y(P^{\prime})\)_, or_ 2. _there exists a nonempty_ \(P^{\prime\prime}\subseteq P^{\prime}\) _such that_ \(N(P^{\prime\prime})=Y(P^{\prime\prime})\subseteq\mathcal{E}(C(P^{\prime}))\) _and_ \(Y(P^{\prime\prime})\cap Y(P^{\prime}\setminus P^{\prime\prime})=\varnothing\)_._ **Proof:** The proof proceeds in three parts which correspond to the three possible combinations of candidates in \(P^{\prime}\)--all ranked, at least one ranked and at least one unranked, and no ranked candidates. First, let \(P^{\prime}\subseteq P\) such that all candidates in \(P^{\prime}\) are ranked. Then, \(\mathcal{E}(C(P^{\prime}))=\{x,y\}\) with \(x\) the highest-ranked and \(y\) the lowest-ranked candidate; \(Y(P^{\prime})=P^{\prime}\setminus\{y\}\); and \(N(P^{\prime})=P^{\prime}\setminus\{x\}\). Clearly, as \(y\in\mathcal{E}(C(P^{\prime}))\) but \(y\not\in Y(P^{\prime})\), the conditions hold. Next, let \(P^{\prime}\subseteq P\) consist of at least one ranked candidate, \(x\), and at least one unranked candidate, \(y\). Then, without loss of generality, \(\mathcal{E}(C(P^{\prime}))=\{x,y\}\); \(Y(P^{\prime})=P^{\prime}\setminus\{y\}\); and \(N(P^{\prime})=P^{\prime}\setminus\{x\}\). Again, as \(y\in\mathcal{E}(C(P^{\prime}))\) but \(y\not\in Y(P^{\prime})\), the conditions hold. Finally, let \(P^{\prime}\subseteq P\) such that \(P^{\prime}\) consists of only unranked candidates. Then, \(\mathcal{E}(C(P^{\prime}))=\varnothing\). Similarly, \(Y(P^{\prime})=N(P^{\prime})=P^{\prime}\). So any subset \(P^{\prime\prime}\subseteq P^{\prime}\) also has \(Y(P^{\prime\prime})=N(P^{\prime\prime})\). However, since all nonempty \(P^{\prime\prime}\) are such that \(N(P^{\prime\prime})=Y(P^{\prime\prime})\) and \(N(P^{\prime\prime})=Y(P^{\prime\prime})\not\subseteq\mathcal{E}(C(P^{\prime}))=\varnothing\), this satisfies the contrapositive of the second condition. \(\square\) The following result from Kalandrakis (2010) proves that ranked-choice ballots lead to voters having concave utility functions. As work on the strategic voting of candidates such as Tajika (2021) assumes that voters have convex utility functions, candidates as well as voters have an incentive to act differently in a ranked-choice election than in a traditional first-past-the-post election. **Theorem 8**: _(**Theorem 2** from Kalandrakis (2010).) Let \(C\) be the set of candidates and \(P\subseteq C\times C\) be the voting record for a given voter. Then the following conditions are equivalent:_ 1. _For all nonempty_ \(P^{\prime}\subseteq P\)_, either there exists_ \(x\in\mathcal{E}(C(P^{\prime}))\) _such that_ \(x\not\in Y(P^{\prime})\) _or there exists a nonempty_ \(P^{\prime\prime}\subseteq P^{\prime}\) _such that_ \(N(P^{\prime\prime})=Y(P^{\prime\prime})\subseteq\mathcal{E}(C(P^{\prime}))\) _and_ \(Y(P^{\prime\prime})\cap Y(P^{\prime}\setminus P^{\prime\prime})=\varnothing\)_._ 2. _There exists a strictly concave utility function that almost strictly rationalizes_ \(P\)_._ 3.
_There exists a strictly quasiconcave utility function that almost strictly rationalizes_ \(P\)_._ 4. _There exists a strictly concave utility function that rationalizes_ \(P\)_._ 5. _There exists a strictly quasiconcave utility function that rationalizes_ \(P\)_._ \(\square\)__ ## 4 Conclusion This paper was the first to formalize the preferences of ranked-choice voting and explore what structure a ballot having those preferences takes. Top-truncated preferences elicit specific types of representations and utility functions; now that these have been identified, a more substantive appraisal of ranked-choice voting's value can be done. There are also multiple areas of future research that can build upon the results from this paper. Ayadi et al. (2022) mention the need for normative work on top-truncated preferences, which is especially important because these preferences have been shown to be concave (and quasiconcave)--types of preferences not always assumed to reflect voters' actual preferences. Whether these preference types are affected if the utility function accounts for strategic voting is a valuable question to explore. Coughlin (1983) addresses utility functions for strategic voting, but in the context of candidates' utility functions rather than voters' utility functions. Ranked-choice voting provides both the opportunity for a voter to express their full set of preferences and the opportunity to vote strategically. This paper has explored theoretical properties associated with the former; the next step is to see if and where there is an intersection with the latter.
2306.09765
Additivity for the Motivic Trace and the Motivic Euler Characteristic
In this paper, we settle an open conjecture regarding the assertion that the Euler-characteristic of $\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})$ for a split reductive group scheme $\mathrm{G}$ and the normalizer of a split maximal torus $\mathrm{N}_{\mathrm{G}}(\mathrm{T})$ over a field is $1$ in the Grothendieck-Witt ring with the characteristic exponent of the field inverted, under the assumption that the base field contains a $\sqrt{-1}$. Numerous applications of this to splittings in the motivic stable homotopy category and to Algebraic K-Theory are worked out in several related papers by Gunnar Carlsson and the authors.
Roy Joshua, Pablo Pelaez
2023-06-16T10:52:57Z
http://arxiv.org/abs/2306.09765v1
# Additivity of the motivic trace and the motivic Euler-characteristic ###### Abstract. In this paper, we settle an open conjecture regarding the assertion that the Euler-characteristic of \(\mathrm{G/N_{G}(T)}\) for a split reductive group scheme \(\mathrm{G}\) and the normalizer of a split maximal torus \(\mathrm{N_{G}(T)}\) over a field is \(1\) in the Grothendieck-Witt ring with the characteristic exponent of the field inverted, under the assumption that the base field contains a \(\sqrt{-1}\). Numerous applications of this to splittings in the motivic stable homotopy category and to Algebraic K-Theory are worked out in several related papers by Gunnar Carlsson and the authors. 2010 AMS Subject classification: 14F20, 14F42, 14L30 Both authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme _K-Theory, Algebraic Cycles and Motivic Homotopy Theory_ where part of the work on this paper was undertaken. This work was supported by EPSRC grant no EP/R014604/1. ## 1. Introduction This paper is a continuation of the earlier work, [10], where Carlsson and one of the present authors set up motivic and etale variants of the classical Becker-Gottlieb transfer. If one recalls, the power and utility of the classical Becker-Gottlieb transfer stem from the fact that it provides a convenient mechanism to obtain splittings of certain maps in the stable homotopy category. In view of the fact that the transfer is a map of spectra, it induces a map of the Atiyah-Hirzebruch spectral sequences associated to generalized cohomology theories, and reduces the question of the existence of stable splittings to the calculation of certain Euler-characteristics in singular cohomology. The most notable example of this is the calculation of the Euler-characteristic of \(\mathrm{G/N_{G}(T)}\), where \(\mathrm{G}\) is a compact Lie group and \(\mathrm{N_{G}(T)}\) is the normalizer of a maximal torus in \(\mathrm{G}\). Using the transfer, it then becomes possible to show that the generalized cohomology of the Borel construction with respect to \(\mathrm{G}\) for a space \(\mathrm{X}\) acted on by \(\mathrm{G}\) is a split summand of the generalized cohomology of the corresponding Borel construction for \(\mathrm{X}\) with respect to \(\mathrm{N_{G}(T)}\). This then provided numerous applications, such as double coset formulae for actions of compact groups, generalizing the well-known double coset formulae for the action of finite groups: see [1], [1], [1]. The motivic analogue of the statement that the Euler characteristic of \(\mathrm{G/N_{G}(T)}\) is \(1\) in singular cohomology for compact Lie groups is a conjecture due to Morel (see the next page for more details), that a suitable motivic Euler characteristic in the Grothendieck-Witt group is \(1\), for \(\mathrm{G/N_{G}(T)}\), where \(\mathrm{G}\) is a split connected reductive group and \(\mathrm{N_{G}(T)}\) is the normalizer of a maximal torus in \(\mathrm{G}\). We provide an affirmative solution to this conjecture in this paper assuming that the base field \(k\) contains a \(\sqrt{-1}\), the precise details of which are discussed below. Let \(k\) denote a _perfect field_ of arbitrary characteristic: we will restrict to the category of smooth quasi-projective schemes over \(k\) and adopt the framework of [13]. Throughout, \(\mathbf{T}\) will denote \(\mathbb{P}^{1}\) pointed by \(\infty\) and \(\mathbf{T}^{n}\) will denote \(\mathbf{T}^{\wedge n}\) for any integer \(n\geq 0\).
\(\mathbb{S}_{k}\) will denote the corresponding motivic sphere spectrum. Let \(\mathbf{Spt}(k_{mot})\) denote the category of motivic spectra over \(k\). The corresponding stable homotopy category will be denoted \(\mathcal{SH}(k)\). In positive characteristic \(\mathrm{p}\), we consider \(\mathbf{Spt}(k_{mot})[\mathrm{p}^{-1}]\): we will identify this with the category of motivic spectra that are module spectra over the localized sphere spectrum \(\mathbb{S}_{k}[\mathrm{p}^{-1}]\). Then assuming \(char(k)=0\), given a smooth scheme \(\mathrm{X}\) of finite type over \(k\), \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\) denotes the \(\mathbf{T}\)-suspension spectrum of \(\mathrm{X}\) and \(\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+})=\mathcal{RH}\mathrm{om }(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+},\mathbb{S}_{k})\), where \(\mathcal{RH}\mathrm{om}\) denotes the derived internal hom in the category \(\mathbf{Spt}(k_{mot})\). When \(char(k)=\mathrm{p}>0\), \(\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+})=\mathcal{RH}\mathrm{om }(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}])\). \(\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+})\) is the _Spanier-Whitehead dual_ of \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\). It is known (see [11] and also [11]) that after inverting the characteristic exponent, \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\) is _dualizable_ in the sense that the natural map \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\to\mathrm{D}(\mathrm{D}(\Sigma^{ \infty}_{\mathbf{T}}\mathrm{X}_{+}))\) is an isomorphism in \(\mathcal{SH}(k)\). In this context, we have the _co-evaluation map_ \[\mathbb{S}_{k}{\stackrel{{ c}}{{\to}}}\Sigma^{\infty}_{\mathbf{T}} \mathrm{X}_{+}\wedge\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+})\] and the _evaluation map_ \[\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+})\wedge\Sigma^{\infty}_{ \mathbf{T}}\mathrm{X}_{+}\to\mathbb{S}_{k}\] in characteristic \(0\). In positive characteristic \(p\), we also have the co-evaluation map \[\mathbb{S}_{k}[\mathrm{p}^{-1}]{\stackrel{{ c}}{{\to}}}\Sigma^{ \infty}_{\mathbf{T}}\mathrm{X}_{+}[\mathrm{p}^{-1}]\wedge\mathrm{D}(\Sigma^{ \infty}_{\mathbf{T}}\mathrm{X}_{+}[\mathrm{p}^{-1}])\] and the _evaluation map_ \[\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}[\mathrm{p}^{-1}])\wedge \Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}[\mathrm{p}^{-1}]\to\mathbb{S}_{k}[ \mathrm{p}^{-1}].\] See [12, p. 87]. Let \(\mathrm{f}:\mathrm{X}\to\mathrm{X}\) denote a self-map. **Definition 1.1**.: Assume the above setting. Then, in characteristic \(0\), the following composition in \(\mathcal{SH}(k)\) defines the _trace_\(\tau_{\mathrm{X}}(\mathrm{f}_{+})\): \[\mathbb{S}_{k}{\stackrel{{ c}}{{\to}}}\Sigma^{\infty}_{\mathbf{T}} \mathrm{X}_{+}\wedge\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}){ \stackrel{{\tau}}{{\to}}}\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}} \mathrm{X}_{+})\wedge\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}{\stackrel{{ \mathrm{id}\wedge\mathrm{f}}}{{\to}}}\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}} \mathrm{X}_{+})\wedge\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}{\stackrel{{ \mathrm{e}}}{{\to}}}\mathbb{S}_{k}. 
\tag{1.0.1}\] In positive characteristic \({\rm p}\), the following composition in \({\mathcal{SH}}(k)[{\rm p}^{-1}]\) defines the corresponding trace, which will be denoted \(\tau_{{\rm X},{\mathbb{S}}_{k}[{\rm p}^{-1}]}({\rm f}_{+})\): \[\begin{array}{c}{\mathbb{S}}_{k}[{\rm p}^{-1}]{\stackrel{{\rm c}}{{\rightarrow}}}\Sigma_{\bf T}^{\infty}{\rm X}_{+}[{\rm p}^{-1}]\wedge{\rm D}(\Sigma_{\bf T}^{\infty}{\rm X}_{+}[{\rm p}^{-1}]){\stackrel{{\tau}}{{\rightarrow}}}{\rm D}(\Sigma_{\bf T}^{\infty}{\rm X}_{+}[{\rm p}^{-1}])\wedge\Sigma_{\bf T}^{\infty}{\rm X}_{+}[{\rm p}^{-1}]\\ \stackrel{{ id\wedge{\rm f}}}{{\rightarrow}}{\rm D}(\Sigma_{\bf T}^{\infty}{\rm X}_{+}[{\rm p}^{-1}])\wedge\Sigma_{\bf T}^{\infty}{\rm X}_{+}[{\rm p}^{-1}]{\stackrel{{\rm e}}{{\rightarrow}}}{\mathbb{S}}_{k}[{\rm p}^{-1}].\end{array} \tag{1.0.2}\] Here \(\tau\) is the map interchanging the two factors. When \({\rm f}={\rm id}_{\rm X}\), the corresponding trace \(\tau_{{\rm X}_{+}}=\tau_{{\rm X}}(id_{{\rm X}_{+}})\) (\(\tau_{{\rm X}_{+},{\mathbb{S}}_{k}[{\rm p}^{-1}]}=\tau_{{\rm X},{\mathbb{S}}_{k}[{\rm p}^{-1}]}(id_{{\rm X}_{+}})\), respectively) will be denoted \(\chi_{mot}({\rm X})\) and _called the motivic Euler-characteristic_ of \({\rm X}\). By [Mo4] (see also [Mo12]), \(\pi_{0,0}({\mathbb{S}}_{k})\) identifies with the Grothendieck-Witt ring of the field \(k\), \({\rm GW}(k)\), and therefore \(\chi_{mot}({\rm X})\) is a class in \({\rm GW}(k)\) in characteristic \(0\) and in \({\rm GW}(k)[{\rm p}^{-1}]\) in positive characteristic. (In [Mo4] the isomorphism of \(\pi_{0,0}({\mathbb{S}}_{k})\) with the Grothendieck-Witt ring of the field \(k\) was proven under the assumption \(char(k)\neq 2\): the above restriction on the characteristic of the field \(k\) is removed in [BH, Theorem 10.12].) Then an open conjecture in this setting, due to Morel (see [Lev18]), was the following: let \({\rm G}\) denote a split reductive group over \(k\), with \({\rm T}\) a split maximal torus and \({\rm N}_{\rm G}({\rm T})\) its normalizer in \({\rm G}\). Then the conjecture states that \(\chi_{mot}({\rm G}/{\rm N}_{\rm G}({\rm T}))=1\) in \({\rm GW}(k)\) with the characteristic exponent of the field inverted. In fact, this is the _strong form_ of the conjecture. The _weak form_ of the conjecture is simply the statement that \(\chi_{mot}({\rm G}/{\rm N}_{\rm G}({\rm T}))\) is a _unit_ in \({\rm GW}(k)\) with the characteristic exponent of the field inverted. The main result of the current paper is an affirmative solution of the above conjecture, as stated in the following theorem. **Theorem 1.2**.: _Let \({\rm G}\) denote a split linear algebraic group over the perfect field \(k\), with \({\rm T}\) a split maximal torus and \({\rm N}({\rm T})\) its normalizer in \({\rm G}\). Then the following are true:_ 1. \(\chi_{mot}({\rm G}/{\rm N}_{\rm G}({\rm T}))=1\) _in_ \({\rm GW}(k)\) _if_ \(char(k)=0\) _and_ \(k\) _contains a_ \(\sqrt{-1}\)_._ 2. \(\chi_{mot}({\rm G}/{\rm N}_{\rm G}({\rm T}))=1\) _in_ \({\rm GW}(k)[{\rm p}^{-1}]\) _if_ \(char(k)={\rm p}>0\) _and_ \(k\) _contains a_ \(\sqrt{-1}\)_._ The statements (i) and (ii) were already used in the preprint [JP20] in the context of proving the additivity also for the transfer and proving various applications of these in the motivic stable homotopy category. Ananyevskiy then independently proved the _weak form_ of the conjecture in [An].
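Before proceeding, it may help to fix ideas about the target of this invariant; the following facts about \({\rm GW}(k)\) are standard and are recorded here purely for orientation (the two sample Euler-characteristics also follow from the additivity results established in Section 2 below). Writing \(\langle u\rangle\) for the class of the rank-one form \(x\mapsto ux^{2}\), \(u\in k^{\times}\), one has \[\langle u\rangle\cdot\langle v\rangle=\langle uv\rangle,\qquad\langle uv^{2}\rangle=\langle u\rangle,\qquad\mbox{so that}\qquad\langle-1\rangle^{2}=\langle 1\rangle=1\ \mbox{in}\ {\rm GW}(k),\] so \(\langle-1\rangle\) (written \(<-1>\) below) is always a unit, and it equals \(1\) whenever \(k\) contains a \(\sqrt{-1}\). For instance, \[\chi_{mot}(\mathbb{A}^{n})=1\quad\mbox{and}\quad\chi_{mot}(\mathbb{P}^{1})=1+\langle-1\rangle,\] the latter being the hyperbolic form, of rank \(2\), matching the topological Euler-characteristic of \(\mathbb{P}^{1}(\mathbb{C})\).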
In this paper, we also show how to simplify the proof discussed in [An], by making use of our proof of the strong form of the conjecture for fields that contain \(\sqrt{-1}\). This is discussed in our proof of the following Corollary. **Corollary 1.3**.: _Assume as in Theorem 1.2 that \({\rm G}\) denotes a split linear algebraic group over \(k\), with \({\rm T}\) a split maximal torus and \({\rm N}({\rm T})\) its normalizer in \({\rm G}\). Then the following hold:_ 1. \(\chi_{mot}({\rm G}/{\rm N}_{\rm G}({\rm T}))\) _in_ \({\rm GW}(k)\) _is a unit if_ \(char(k)=0\)_, and_ 2. \(\chi_{mot}({\rm G}/{\rm N}_{\rm G}({\rm T}))\) _in_ \({\rm GW}(k)[{\rm p}^{-1}]\) _is a unit if_ \(char(k)=p>0\)_._ In view of the interest in these results, and because a proof of additivity for the trace is relatively straightforward, 1 we have decided to write this short paper entirely devoted to a self-contained proof of these results. On feeding this result into the motivic variant of the transfer constructed in [CJ20] and [CJP22], we obtain a number of splitting results. The following result should serve as a prototypical example of such applications. Footnote 1: i.e., unlike additivity for the transfer, which is much more involved and needs the notion of rigidity in an essential manner Let \({\rm E}\to{\rm B}\) denote a \({\rm G}\)-torsor for the action of _any_ linear algebraic group \({\rm G}\) with both \({\rm E}\) and \({\rm B}\) smooth quasi-projective schemes over \(k\), with \({\rm B}\) _connected_. Let \({\rm Y}\) denote a \({\rm G}\)-scheme or an unpointed simplicial presheaf provided with a G-action. Let \(q:E\times_{G}(G\underset{\mathrm{N}_{G}(T)}{\times}Y)\to E\times_{G}Y\) denote the map induced by the map \(G\underset{\mathrm{N}_{G}(T)}{\times}Y\to Y\) sending \((g,y)\mapsto gy\). (In case the group \(G\) is _special_ in the sense of Grothendieck, the quotient construction above can be carried out on the Nisnevich site, but in general this needs to be carried out on the etale site and then pushed forward to the Nisnevich site, by means of a derived push-forward: the details are in [10, 3.4.2].) Then the induced map \[q^{*}:h^{\bullet,*}(E\times_{G}Y,M)\to h^{\bullet,*}(E\times_{G}(G\underset{\mathrm{N}_{G}(T)}{\times}Y),M)\] is a split injection, where \(h^{\bullet,*}\) denotes any generalized motivic cohomology theory with respect to the motivic spectrum \(M\). In order to show that the map \(q^{*}\) is a split monomorphism, one needs _the transfer_: \[\mathit{tr}(Y):\Sigma_{\mathbf{T}}^{\infty}(E\times_{G}Y)_{+}\to\Sigma_{\mathbf{T}}^{\infty}(E\times_{G}(G\underset{\mathrm{N}_{G}(T)}{\times}Y))_{+},\] which is a map in \(\mathcal{SH}(k)\) (\(\mathcal{SH}(k)[p^{-1}]\), respectively) so that the composition \(\mathit{tr}(Y)^{*}\circ q^{*}\) is multiplication by \(\chi_{mot}(G/N(T))\). Therefore, knowing that \(\chi_{mot}(G/N(T))\) is a unit in \(\mathrm{GW}(k)\) shows that \(q^{*}\) is a split injection. See also [11], where such splittings obtained from the motivic transfer prove a variant of the classical Segal-Becker theorem for Algebraic K-Theory. _Remark 1.4_.: Certain special cases of the above splitting results, for groups that are _special_, appear to have been worked out in [14] as well, _under the assumption that the above conjecture is true_. Observe that a linear algebraic group \(G\) is _special_ in the sense of Grothendieck (see [12]), if any torsor for \(G\) is locally trivial on the Zariski site.
_Special_ groups include \(\{\mathrm{GL}_{n},\mathrm{SL}_{n}|n\}\), but _exclude all orthogonal groups as well as finite groups_. For groups \(G\) that are _not special_, \(G\)-torsors are locally trivial only on the etale site. The construction of the transfer, worked out in [10] and [10, Chapter 3], applies to all such groups. Here is an overview of the paper. One of the key techniques that is used in the proof of Theorem 1.2 is to show that the trace and the motivic Euler-characteristic are _additive up to multiplication by a sign_ in general, and additive when the base field \(k\) has a \(\sqrt{-1}\). We devote section 2 of the paper to establishing this additivity. Section 3 then completes the proof of the above theorem, closely following the ideas for a proof of the corresponding result as in [11, Lemma 3.5] in the etale setting. **Acknowledgments**. The first author would like to thank Gunnar Carlsson for getting him interested in the problem of constructing a Becker-Gottlieb transfer in the motivic framework and for numerous helpful discussions. Both authors would like to thank Michel Brion for helpful discussions on fixed point schemes as well as on aspects of Theorem 3.2. We are also happy to acknowledge [11, Lemma 3.5 and its proof] as one of the inspirations for this paper. We also thank Alexey Ananyevskiy for helpful comments on our preprint [11], which have enabled us to sharpen our results, and also for bringing his results to our attention. Finally, it is a pleasure to acknowledge our intellectual debt to Fabien Morel and Vladimir Voevodsky for their foundational work in motivic homotopy theory. In addition, the authors are also grateful to the referee for providing us very valuable feedback, which has surely helped us sharpen some results and improve the overall organization. ## 2. **Additivity of the Motivic Trace** The main goal of this section is to establish additivity properties for the pre-transfer and trace. But we begin by establishing certain properties of a general nature for the pre-transfer and the trace. ### Basic properties of the pre-transfer and trace It is convenient to reformulate the trace in terms of the pre-transfer, which we proceed to discuss next. At the same time, we extend the framework as follows. The following discussion is a variant of what appears in [LMS, Chapter III]. See also [May01], [GPS] and [HSS] for related discussions. **Definition 2.1**.: (Co-module structures) Assume that \(\mathrm{C}\) is an unpointed simplicial presheaf, i.e., \(\mathrm{C}\) is a contravariant functor from a given site to the category of unpointed simplicial sets. Let \(\mathrm{C}_{+}\) denote the corresponding pointed simplicial presheaf. Then the diagonal map \(\Delta:\mathrm{C}_{+}\to\mathrm{C}_{+}\wedge\mathrm{C}_{+}\) together with the augmentation \(\epsilon:\mathrm{C}_{+}\to\mathrm{S}^{0}\) defines the structure of an associative co-algebra of simplicial presheaves on \(\mathrm{C}_{+}\). A pointed simplicial presheaf \(\mathrm{P}\) will be called a right \(\mathrm{C}_{+}\)-co-module if it comes equipped with a map \(\Delta:\mathrm{P}\to\mathrm{P}\wedge\mathrm{C}_{+}\) so that the co-associativity and co-unit diagrams commute: \[(\Delta\wedge id_{\mathrm{C}_{+}})\circ\Delta=(id_{\mathrm{P}}\wedge\Delta)\circ\Delta\quad\mbox{and}\quad(id_{\mathrm{P}}\wedge\epsilon)\circ\Delta=id_{\mathrm{P}}. \tag{2.1.1}\] _The most common choice of \(\mathrm{P}\) is with \(\mathrm{P}=\mathrm{C}_{+}\)_ and with the obvious diagonal map \(\Delta:\mathrm{C}_{+}\to\mathrm{C}_{+}\wedge\mathrm{C}_{+}\) as providing the co-module structure.
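As a quick check of Definition 2.1 in this most common case (the verification is elementary and included here only for the reader's convenience), for \(\mathrm{P}=\mathrm{C}_{+}\) with \(\Delta\) the diagonal one has, on sections, \[(id_{\mathrm{C}_{+}}\wedge\epsilon)\circ\Delta:c\mapsto(c,c)\mapsto c,\qquad(\Delta\wedge id_{\mathrm{C}_{+}})\circ\Delta:c\mapsto((c,c),c),\qquad(id_{\mathrm{C}_{+}}\wedge\Delta)\circ\Delta:c\mapsto(c,(c,c)),\] so that the co-unit constraint holds on the nose and co-associativity holds after the canonical identification of the two iterated smash products.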
However, the reason we are constructing the pre-transfer in this generality (see the definition below) is so that we are able to obtain strong additivity results as in Theorem 2.5. **Definition 2.2**.: (_The pre-transfer_) Assume that the pointed simplicial presheaf \(\mathrm{P}\) is such that \(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P}\) is dualizable in \(\mathbf{Spt}(k_{\mathit{mot}})\) and is provided with a map \(\mathrm{f}:\mathrm{P}\to\mathrm{P}\). Assume further that \(\mathrm{C}\) is an unpointed simplicial presheaf so that \(\mathrm{P}\) is a right \(\mathrm{C}_{+}\)-co-module. Then the _pre-transfer with respect to \(\mathrm{C}_{+}\)_ is defined to be a map \(tr^{\prime}(\mathrm{f}):\mathrm{S}_{k}\to\Sigma_{\mathbf{T}}^{\infty}\mathrm{C }_{+}\), which is the composition of the following maps. Let \(e:\mathrm{D}(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\wedge\Sigma_{\mathbf{T}}^ {\infty}\mathrm{P}\to\mathrm{S}_{k}\) denote the evaluation map. We take the dual of this map to obtain: \[c=\mathrm{D}(\mathrm{e}):\mathrm{S}_{k}\simeq\mathrm{D}(\mathrm{S}_{k})\to \mathrm{D}(\mathrm{D}(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\wedge(\Sigma_{ \mathbf{T}}^{\infty}\mathrm{P}))\widetilde{\leftarrow}\mathrm{D}(\Sigma_{ \mathbf{T}}^{\infty}\mathrm{P})\wedge(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P}) \overset{\tau}{\to}(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\wedge\mathrm{D}( \Sigma_{\mathbf{T}}^{\infty}\mathrm{P}). \tag{2.1.1}\] Here \(\tau\) denotes the obvious flip map interchanging the two factors and \(c\) denotes the co-evaluation. The reason that taking the double dual yields the same object up to weak-equivalence is because we are in fact taking the dual in the setting discussed above. Observe that all the maps that _go in the left-direction are weak-equivalences_. All the maps involved in the definition of the co-evaluation map are _natural maps_. To complete the definition of the pre-transfer, one simply composes the co-evaluation map with the following composite map: \[\begin{array}{l}(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\wedge\mathrm{D}( \Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\overset{\tau}{\to}\mathrm{D}(\Sigma_{ \mathbf{T}}^{\infty}\mathrm{P})\wedge(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P}) \overset{\mathrm{id}\wedge\mathrm{f}}{\rightarrow}\mathrm{D}(\Sigma_{ \mathbf{T}}^{\infty}\mathrm{P})\wedge(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\\ \overset{id\wedge\Delta}{\rightarrow}\mathrm{D}(\Sigma_{\mathbf{T}}^{\infty} \mathrm{P})\wedge(\Sigma_{\mathbf{T}}^{\infty}\mathrm{P})\wedge(\Sigma_{ \mathbf{T}}^{\infty}\mathrm{C}_{+})\overset{e\wedge\mathrm{id}}{\rightarrow} \mathrm{S}_{k}\wedge(\Sigma_{\mathbf{T}}^{\infty}\mathrm{C}_{+})\simeq\Sigma_{ \mathbf{T}}^{\infty}\mathrm{C}_{+}.\end{array} \tag{2.1.1}\] The corresponding _trace_\(\tau(\mathrm{f})\), is defined as the composition of the above pre-transfer \(tr^{\prime}(\mathrm{f})\) with the projection \(\pi\) sending \(\mathrm{C}_{+}\) to \(\mathrm{S}_{+}^{0}\). When \(\mathrm{f}=\mathrm{id}_{\mathrm{P}}\), the pre-transfer (trace) will be denoted \(tr^{\prime}_{\mathrm{P}}\) (\(\tau_{\mathrm{P}}\), respectively), and when \(\mathrm{P}=\mathrm{C}_{+}\) and \(\mathrm{f}=\mathrm{id}_{\mathrm{P}}\), the pre-transfer (trace) will be denoted \(tr^{\prime}_{\mathrm{C}_{+}}\) (\(\tau_{\mathrm{C}_{+}}\), respectively). 
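The linear-algebra shadow of Definition 2.2 may be worth keeping in mind; this analogy is our addition here and is standard in discussions of duality. For a finite-dimensional vector space \(V\) over a field \(F\) with basis \(\{e_{i}\}\) and dual basis \(\{e_{i}^{\vee}\}\), the co-evaluation and evaluation are \(c(1)=\sum_{i}e_{i}\otimes e_{i}^{\vee}\) and \(ev(\phi\otimes v)=\phi(v)\), and for a linear map \(f:V\to V\) the analogue of the composition defining the trace is \[F\stackrel{c}{\to}V\otimes V^{\vee}\stackrel{\tau}{\to}V^{\vee}\otimes V\stackrel{id\otimes f}{\to}V^{\vee}\otimes V\stackrel{ev}{\to}F,\qquad 1\mapsto\sum_{i}e_{i}^{\vee}(f(e_{i}))=\mathrm{tr}(f),\] which returns the usual trace, and \(\dim V\) when \(f=id\); the motivic Euler-characteristic of Definition 1.1 is the exact analogue with \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\) in place of \(V\).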
_Remark 2.3_.: Observe that now the trace map \(\tau_{\mathrm{C}_{+}}\) identifies with the following composite map: \[\tau_{\mathrm{C}_{+}}:\mathbb{S}_{k}{\stackrel{{ c}}{{\rightarrow}}}\Sigma_{\mathbf{T}}^{\infty}\mathrm{C}_{+}\wedge\mathrm{D}(\Sigma_{\mathbf{T}}^{\infty}\mathrm{C}_{+}){\stackrel{{\tau}}{{\rightarrow}}}\mathrm{D}(\Sigma_{\mathbf{T}}^{\infty}\mathrm{C}_{+})\wedge\Sigma_{\mathbf{T}}^{\infty}\mathrm{C}_{+}{\stackrel{{\mathrm{e}}}{{\rightarrow}}}\mathbb{S}_{k}.\] **Definition 2.4**.: If \(\mathcal{E}\) denotes _any commutative ring spectrum_ in \(\mathbf{Spt}(k_{mot})\), for example, \(\mathbb{S}_{k}[\mathrm{p}^{-1}]\), we will let \(\mathbf{Spt}(k_{mot},\mathcal{E})\) denote the category of module spectra over \(\mathcal{E}\). Then one may replace the sphere spectrum \(\mathbb{S}_{k}\) everywhere by \(\mathcal{E}\) and define the pre-transfer and trace similarly, provided the unpointed simplicial presheaf \(\mathrm{C}\) is such that \(\mathcal{E}\wedge\mathrm{C}_{+}\) is dualizable in \(\mathbf{Spt}(k_{mot},\mathcal{E})\) and comes equipped with a map \(\mathrm{f}:\mathrm{C}\to\mathrm{C}\). When \(\mathrm{P}=\mathrm{C}_{+}\), these will be denoted \(\mathit{tr}(\mathrm{f}_{+})^{\prime}_{\mathcal{E}}\), \(\mathit{tr}^{\prime}_{\mathrm{C}_{+},\mathcal{E}}\), \(\tau_{\mathrm{C}_{+},\mathcal{E}}\), etc. Let \[\mathrm{U}_{+}{\stackrel{{\mathrm{j}_{+}}}{{\rightarrow}}}\mathrm{X}_{+}{\stackrel{{\mathrm{k}_{+}}}{{\rightarrow}}}\mathrm{X}/\mathrm{U}=\mathrm{Cone}(\mathrm{j}_{+})\to\mathrm{S}^{1}\wedge\mathrm{U}_{+} \tag{2.1.2}\] denote a cofiber sequence where both \(\mathrm{U}\) and \(\mathrm{X}\) are unpointed simplicial presheaves, with \(j_{+}\) a cofibration. Now a key point to observe is that all of \(\mathrm{U}_{+}\), \(\mathrm{X}_{+}\) and \(\mathrm{X}/\mathrm{U}\) have the structure of right \(\mathrm{X}_{+}\)-co-modules. The right \(\mathrm{X}_{+}\)-co-module structure on \(\mathrm{X}_{+}\) is given by the diagonal map \(\Delta:\mathrm{X}_{+}\to\mathrm{X}_{+}\wedge\mathrm{X}_{+}\), while the right \(\mathrm{X}_{+}\)-co-module structure on \(\mathrm{U}_{+}\) is given by the map \(\Delta:\mathrm{U}_{+}{\stackrel{{\Delta}}{{\rightarrow}}}\mathrm{U}_{+}\wedge\mathrm{U}_{+}\stackrel{{\mathrm{id}\wedge\mathrm{j}_{+}}}{{\rightarrow}}\mathrm{U}_{+}\wedge\mathrm{X}_{+}\), where \(j:\mathrm{U}\to\mathrm{X}\) is the given map. The right \(\mathrm{X}_{+}\)-co-module structure on \(\mathrm{X}/\mathrm{U}\) is obtained in view of the commutative square \[\begin{CD}\mathrm{U}@>(id,\mathrm{j})>>\mathrm{U}\times\mathrm{X}\\ @V\mathrm{j}VV@VV\mathrm{j}\times id V\\ \mathrm{X}@>\Delta>>\mathrm{X}\times\mathrm{X}\end{CD}\tag{2.1.3}\] which provides the map \[\mathrm{X}/\mathrm{U}\to(\mathrm{X}\times\mathrm{X})/(\mathrm{U}\times\mathrm{X})\cong(\mathrm{X}/\mathrm{U})\wedge\mathrm{X}_{+}. \tag{2.1.4}\] We begin with the following results, which are variants of [LMS, Theorem 7.10, Chapter III and Theorem 2.9, Chapter IV] adapted to our contexts. **Theorem 2.5**.: _Let \(\mathrm{U}_{+}{\stackrel{{\mathrm{j}_{+}}}{{\rightarrow}}}\mathrm{X}_{+}{\stackrel{{\mathrm{k}_{+}}}{{\rightarrow}}}\mathrm{X}/\mathrm{U}=\mathrm{Cone}(\mathrm{j})\to\mathrm{S}^{1}\wedge\mathrm{U}_{+}\) denote a cofiber sequence as in (2.1.2). Let \(f:\mathrm{U}_{+}\to\mathrm{U}_{+}\), \(g:\mathrm{X}_{+}\to\mathrm{X}_{+}\) denote two pointed maps so that the diagram_ \[\begin{CD}\mathrm{U}_{+}@>\mathrm{j}_{+}>>\mathrm{X}_{+}\\ @VfVV@VVgV\\ \mathrm{U}_{+}@>\mathrm{j}_{+}>>\mathrm{X}_{+}\end{CD}\] _commutes. Let \(h:\mathrm{X}/\mathrm{U}\to\mathrm{X}/\mathrm{U}\) denote the corresponding induced map._
Then, with the right \(\mathrm{X}_{+}\)-co-module structures discussed above, one obtains the following commutative diagram:_ \[\begin{CD}\mathrm{U}_{+}@>\mathrm{j}_{+}>>\mathrm{X}_{+}@>\mathrm{k}_{+}>>\mathrm{X}/\mathrm{U}\\ @V\Delta VV@V\Delta VV@VV\Delta V\\ \mathrm{U}_{+}\wedge\mathrm{X}_{+}@>\mathrm{j}_{+}\wedge id>>\mathrm{X}_{+}\wedge\mathrm{X}_{+}@>\mathrm{k}_{+}\wedge id>>(\mathrm{X}/\mathrm{U})\wedge\mathrm{X}_{+}\end{CD}\tag{2.1.5}\] _Assume further that the \(\mathbf{T}\)-suspension spectra of all the above simplicial presheaves are dualizable in \(\mathbf{Spt}(k_{\text{mot}})\). Then, one obtains in \(\mathcal{SH}(k)\):_ \[tr^{\prime}(g)=tr^{\prime}(f)+tr^{\prime}(h),\quad\text{ and }\tau(g)=\tau(f)+\tau(h).\] _Let \(\mathcal{E}\) denote a commutative ring spectrum in \(\mathbf{Spt}(k_{\text{mot}})\). Then the corresponding results also hold if the smash products of the above simplicial presheaves with the ring spectrum \(\mathcal{E}\) are dualizable in \(\mathbf{Spt}(k_{\text{mot}},\mathcal{E})\)._ **Theorem 2.6**.: _Let \(\mathrm{F}=\mathrm{F}_{1}\sqcup_{\mathrm{F}_{3}}\mathrm{F}_{2}\) denote a pushout of unpointed simplicial presheaves on the big Nisnevich site of the base scheme, with the corresponding maps \(\mathrm{F}_{3}\to\mathrm{F}_{2}\), \(\mathrm{F}_{3}\to\mathrm{F}_{1}\) and \(\mathrm{F}_{j}\to\mathrm{F}\), for \(j=1,2,3\), assumed to be cofibrations (that is, injective maps of presheaves). Assume further the following: the \(\mathbf{T}\)-suspension spectra of all the above simplicial presheaves are dualizable in \(\mathbf{Spt}(k_{\text{mot}})\). Let \(i_{j}:\mathrm{F}_{j}\to\mathrm{F}\) denote the inclusion \(\mathrm{F}_{j}\to\mathrm{F}\), \(j=1,2,3\). Then, one obtains in \(\mathcal{SH}(k)\):_ 1. \(tr^{\prime}_{\mathrm{F}_{+}}=i_{1}\circ tr^{\prime}_{\mathrm{F}_{1+}}+i_{2}\circ tr^{\prime}_{\mathrm{F}_{2+}}-i_{3}\circ tr^{\prime}_{\mathrm{F}_{3+}}\) _and_ \(\tau_{\mathrm{F}_{+}}=\tau_{\mathrm{F}_{1+}}+\tau_{\mathrm{F}_{2+}}-\tau_{\mathrm{F}_{3+}}\)_,_ _where_ \(tr^{\prime}_{\mathrm{F}_{+}}\) _and_ \(tr^{\prime}_{\mathrm{F}_{j+}}\)_,_ \(j=1,2,3\) _(_\(\tau_{\mathrm{F}_{+}}\)_,_ \(\tau_{\mathrm{F}_{j+}}\)_,_ \(j=1,2,3\)_) denote the pre-transfer maps (trace maps, respectively)._ 2. _In particular, taking_ \(\mathrm{F}_{2}=*\)_, and_ \(\mathrm{F}=\mathrm{Cone}(\mathrm{F}_{3}\to\mathrm{F}_{1})\)_, we obtain in_ \(\mathcal{SH}(k)\)_:_ \(tr^{\prime}_{\mathrm{F}}=i_{1}\circ tr^{\prime}_{\mathrm{F}_{1+}}-i_{3}\circ tr^{\prime}_{\mathrm{F}_{3+}}\) _and_ \(\tau_{\mathrm{F}}=\tau_{\mathrm{F}_{1+}}-\tau_{\mathrm{F}_{3+}}\)_._ _Let \(\mathcal{E}\) denote a commutative ring spectrum in \(\mathbf{Spt}(k_{\text{mot}})\). Then the corresponding results also hold if the smash products of the above simplicial presheaves with the ring spectrum \(\mathcal{E}\) are dualizable in \(\mathbf{Spt}(k_{\text{mot}},\mathcal{E})\)._ _Our next goal is to provide proofs of these two theorems._ We will discuss the proofs explicitly only for the case of spectra in \(\mathbf{Spt}(k_{\text{mot}})\), as the corresponding results readily extend to spectra in \(\mathbf{Spt}(k_{\text{mot}},\mathcal{E})\) for a commutative ring spectrum \(\mathcal{E}\) in \(\mathbf{Spt}(k_{\text{mot}})\). The additivity of the trace follows readily from the additivity of the pre-transfer, as the trace is obtained from the pre-transfer by composing with the projection \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\to\mathbb{S}_{\mathrm{k}}\). Since this is discussed in the topological framework in [LMS, Theorem 7.10, Chapter III and Theorem 2.9, Chapter IV], our proof amounts to verifying carefully, and in a detailed manner, that the same arguments there carry over to our framework.
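Before turning to the proofs, a toy instance of Theorem 2.6(i) may help fix ideas; this example is our illustration and is not needed in the sequel. Take \(\mathrm{F}_{1}=\mathrm{F}_{2}=\Delta^{1}\) (the simplicial interval, viewed as a constant presheaf) and \(\mathrm{F}_{3}=\partial\Delta^{1}\), with the boundary inclusions as the (injective) structure maps, so that \(\mathrm{F}=\Delta^{1}\sqcup_{\partial\Delta^{1}}\Delta^{1}\) is the simplicial circle. Then \[\tau_{\mathrm{F}_{+}}=\tau_{\mathrm{F}_{1+}}+\tau_{\mathrm{F}_{2+}}-\tau_{\mathrm{F}_{3+}}=1+1-2=0,\] consistent with the splitting \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{+}\simeq\mathbb{S}_{k}\vee\Sigma_{\mathrm{S}^{1}}\mathbb{S}_{k}\), whose two summands contribute \(1\) and \(-1\) to the trace.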
Such a transfer of arguments is possible largely because the arguments in the proof of [LMS, Theorem 7.10, Chapter III and Theorem 2.9, Chapter IV] depend only on a theory of Spanier-Whitehead duality in a symmetric monoidal triangulated category framework, and [DP84] shows that the entire theory of Spanier-Whitehead duality works in such general frameworks. Nevertheless, it seems prudent to show explicitly that at least the key arguments in [LMS, Theorem 7.10, Chapter III and Theorem 2.9, Chapter IV] carry over to our framework. It may be important to point out that the discussion in [LMS, Chapters III and IV] is carried out in the equivariant framework: as all our discussion is taking place with no group actions, one may take the group to be trivial in the discussion in _op. cit_. The very first observation is that the hypotheses of Theorem 2.5 readily imply the commutativity of the corresponding diagram of \(\Sigma_{\mathbf{T}}^{\infty}\)-suspension spectra. Next we proceed to verify the commutativity of the diagram (2.1.5). Since the first square clearly commutes, it suffices to verify the commutativity of the second square. This follows readily in view of the commutative square of pairs \((\mathrm{X},\mathrm{U})\to(\mathrm{X}\times\mathrm{X},\mathrm{U}\times\mathrm{X})\) induced by the diagonal, as in (2.1.3). Observe, as a consequence, that we have verified that the hypotheses of [LMS, Theorem 7.10, Chapter III] are satisfied by the \(\Sigma_{\mathbf{T}}^{\infty}\)-suspension spectra of all the simplicial presheaves appearing in (2.1.5). The next step is to observe that the \(\mathrm{F}_{\mathrm{i}}\), \(i=1,2,3\) (F) in our Theorem 2.6 correspond to the \(\mathrm{F}_{\mathrm{i}}\) (F, respectively) in [LMS, Theorem 2.9, Chapter IV]. Now observe that \[\mathrm{F}_{3+}\to(\mathrm{F}_{1}\sqcup\mathrm{F}_{2})_{+}\to\mathrm{F}_{+}\to\mathrm{S}^{1}\wedge\mathrm{F}_{3+} \tag{2.1.6}\] is a distinguished triangle. Moreover, as \(\mathrm{F}_{1}\sqcup\mathrm{F}_{2}\) has a natural map (which we will call \(k\)) into \(\mathrm{F}\), this map is compatible with the right \(\mathrm{F}_{+}\)-co-module structures, and the distinguished triangle (2.1.6) then provides a commutative diagram showing that the hypotheses of [LMS, Theorem 7.10, Chapter III] are satisfied with X, Y and Z there equal to the \(\Sigma_{\mathbf{T}}^{\infty}\)-suspension spectra of \((\mathrm{F}_{1}\sqcup\mathrm{F}_{2})_{+}\), \(\mathrm{F}_{+}\) and \(\mathrm{S}^{1}\wedge\mathrm{F}_{3+}\). These arguments therefore reduce the proof of Theorem 2.6 to that of Theorem 2.5. What we proceed to verify, therefore, is that the proof of [LMS, Theorem 7.10, Chapter III] carries over to our framework; this will then complete the proof of Theorem 2.5. A key step of this amounts to verifying that the big commutative diagram given on [LMS, p. 166] carries over to our framework. One may observe that this big diagram is broken up into various sub-diagrams, labeled (I) through (VII), and that it suffices to verify that each of these sub-diagrams commutes up to homotopy. This will prove that additivity holds for the trace. For this, it seems best to follow the terminology adopted in [LMS, Theorem 7.10, Chapter III]: therefore we will let \(\mathrm{U}_{+}\) (\(\mathrm{X}_{+}\) and \(\mathrm{X}/\mathrm{U}\)) in Theorem 2.5 be denoted X (Y and Z, respectively) for the remaining part of the proof of Theorem 2.5. Let \(k:\mathrm{X}\to\mathrm{Y}\) (\(i:\mathrm{Y}\to\mathrm{Z}\) and \(\pi:\mathrm{Z}\to\mathrm{S}^{1}\wedge\mathrm{X}\)) denote the corresponding maps \(j_{+}:\mathrm{U}_{+}\to\mathrm{X}_{+}\) (\(k_{+}:\mathrm{X}_{+}\to\mathrm{X}/\mathrm{U}\), and the map \(l:\mathrm{X}/\mathrm{U}\to\mathrm{S}^{1}\wedge\mathrm{U}_{+}\)) as in Theorem 2.5.
Then the very first step in this direction is to verify that the three squares \[\begin{CD}\mathrm{DY}\wedge\mathrm{X}@>id\wedge k>>\mathrm{DY}\wedge\mathrm{Y}\\ @V\mathrm{D}k\wedge id VV@VVeV\\ \mathrm{DX}\wedge\mathrm{X}@>e>>\mathbb{S}_{k}\end{CD}\qquad\begin{CD}\mathrm{DZ}\wedge\mathrm{Y}@>id\wedge i>>\mathrm{DZ}\wedge\mathrm{Z}\\ @V\mathrm{D}i\wedge id VV@VVeV\\ \mathrm{DY}\wedge\mathrm{Y}@>e>>\mathbb{S}_{k}\end{CD}\] and \[\begin{CD}\mathrm{D}(\mathrm{S}^{1}\wedge\mathrm{X})\wedge\mathrm{Z}@>id\wedge\pi>>\mathrm{D}(\mathrm{S}^{1}\wedge\mathrm{X})\wedge(\mathrm{S}^{1}\wedge\mathrm{X})\\ @V\mathrm{D}\pi\wedge id VV@VVeV\\ \mathrm{DZ}\wedge\mathrm{Z}@>e>>\mathbb{S}_{k}\end{CD}\tag{2.1.7}\] commute up to homotopy. (The homotopy commutativity of these squares is a formal consequence of Spanier-Whitehead duality: see [10, pp. 324-325] for proofs in the classical setting.) As argued on [LMS, page 167, Chapter III], the composite \(e\circ(\mathrm{D}\pi\wedge\mathrm{i}):\mathrm{D}(\mathrm{S}^{1}\wedge\mathrm{X})\wedge\mathrm{Y}\to\mathbb{S}_{k}\) is equal to \(e\circ(id\wedge\pi)\circ(id\wedge i)\) and is therefore the trivial map. Therefore, if \(j\) denotes the inclusion of \(\mathrm{D}\mathrm{Z}\wedge\mathrm{Z}\) in the cofiber of \(\mathrm{D}\pi\wedge\mathrm{i}\), one obtains the induced map \(\bar{e}:(\mathrm{D}\mathrm{Z}\wedge\mathrm{Z})/(\mathrm{D}(\mathrm{S}^{1}\wedge\mathrm{X})\wedge\mathrm{Y})\to\mathbb{S}_{k}\) so that the triangle (2.1.8) homotopy commutes. This provides the commutative triangle denoted (I) on [LMS, p. 166], and the commutative triangle denoted (II) there commutes by the second and third commutative squares in (2.1.7). The duals of (I) and (II) are the triangles denoted (I*) and (II*) (on [LMS, p. 166]) and therefore, they also commute. Next we briefly consider the homotopy commutativity of the remaining diagram, beginning with the squares labeled (III), (IV) and (V) on [LMS, p. 166]. Since the maps denoted \(\delta\) are weak-equivalences, it suffices to show that these squares homotopy commute when the maps denoted \(\delta^{-1}\) are replaced by the corresponding maps \(\delta\) going in the opposite direction. Such maps \(\delta\) appearing there are all special instances of the following natural map: \(\delta:\mathrm{DB}\wedge\mathrm{A}\to\mathrm{D}(\mathrm{DA}\wedge\mathrm{B})\), for two spectra \(\mathrm{A}\) and \(\mathrm{B}\) in \(\mathbf{Spt}(k_{mot})\). The homotopy commutativity of the squares (III), (IV) and (V) is therefore reduced to the naturality of the above map in the arguments \(\mathrm{A}\) and \(\mathrm{B}\): see the discussion in [LMS, pp. 167-168]. The commutativity of the triangle labeled (VI) follows essentially from the definition of the maps there. Finally, the homotopy commutativity of the square (VII) is reduced to the following lemma, which is simply a restatement of [LMS, Lemma 7.11, Chapter III]. These complete the proof of the additivity property for the trace and hence the proofs of Theorems 2.5 and 2.6. **Lemma 2.7**.: _Let \(\mathrm{f}:\mathrm{A}\to\mathrm{X}\) and \(\mathrm{g}:\mathrm{B}\to\mathrm{Y}\) be maps in \(\mathbf{Spt}(k_{mot})\) and let \(i:\mathrm{X}\to\mathrm{Cone}(\mathrm{f})\) and \(j:\mathrm{Y}\to\mathrm{Cone}(\mathrm{g})\) be the inclusions into their cofibers.
Then the boundary map \(\delta:\Sigma_{\mathrm{S}^{1}}^{-1}Cone(i\wedge j)\to Cone(\mathrm{f}\wedge\mathrm{g})\) in the cofiber sequence \(Cone(\mathrm{f}\wedge\mathrm{g})\to\mathrm{Cone}((\mathrm{i}\circ\mathrm{f})\wedge(\mathrm{j}\circ\mathrm{g}))\to\mathrm{Cone}(\mathrm{i}\wedge\mathrm{j})\) is the sum of two composites, the first factoring through \(\Sigma_{\mathrm{S}^{1}}^{-1}Cone(i\wedge id)\) and the second through \(\Sigma_{\mathrm{S}^{1}}^{-1}Cone(id\wedge j)\); see [LMS, Lemma 7.11, Chapter III] for the precise form of the two composites._ **Lemma 2.8**.: _(Multiplicativity) Let \(\mathrm{F}_{1}\) and \(\mathrm{F}_{2}\) denote unpointed simplicial presheaves so that \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{1+}\) and \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{2+}\) are dualizable in \(\mathbf{Spt}(k_{mot})\), and let \(\mathrm{F}_{+}=\mathrm{F}_{1+}\wedge\mathrm{F}_{2+}\). Then \(tr^{\prime}_{\mathrm{F}_{+}}=tr^{\prime}_{\mathrm{F}_{1+}}\wedge tr^{\prime}_{\mathrm{F}_{2+}}\) and \(\tau_{\mathrm{F}_{+}}=\tau_{\mathrm{F}_{1+}}\wedge\tau_{\mathrm{F}_{2+}}\)._ Proof.: One first observes that \(\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{+})\simeq\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{1+})\wedge\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{2+})\). Next, the smash product of the co-evaluation maps for \(\mathrm{F}_{1}\) and \(\mathrm{F}_{2}\), followed by the flip interchanging the factors \(\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{1+})\) and \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{2+}\) in \(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{1+}\wedge\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{1+})\wedge\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{2+}\wedge\mathrm{D}(\Sigma^{\infty}_{\mathbf{T}}\mathrm{F}_{2+})\), provides the co-evaluation map for \(\mathrm{F}\). The multiplicative property of the pre-transfer follows readily from the above two observations as well as from the definition of the pre-transfer as in Definition 2.2. In view of the definition of the trace as in Definition 2.2, the multiplicative property of the trace follows from the multiplicative property of the pre-transfer. These prove the statements when \(\mathrm{F}_{+}=\mathrm{F}_{1+}\wedge\mathrm{F}_{2+}\). The corresponding statements when \(\mathrm{F}_{2}\) is already a pointed simplicial presheaf may be proven along entirely similar lines. ### Additivity of the Motivic Trace The goal of this section is to prove the following theorem. **Theorem 2.9**.: _(Mayer-Vietoris and Additivity for the Trace)_ 1. _Let_ \(\mathrm{X}\) _denote a smooth quasi-projective scheme and let_ \(i_{j}:\mathrm{X}_{\mathrm{j}}\to\mathrm{X}\)_,_ \(j=1,2\) _denote the open immersions of two Zariski open subschemes of_ \(\mathrm{X}\)_, with_ \(\mathrm{X}=\mathrm{X}_{1}\cup\mathrm{X}_{2}\)_. Let_ \(\mathrm{U}\to\mathrm{X}\) _denote the open immersion of a Zariski open subscheme of_ \(\mathrm{X}\)_, with_ \(\mathrm{U}_{\mathrm{i}}=\mathrm{U}\cap\mathrm{X}_{\mathrm{i}}\)_. Then adopting the terminology above (that is, where_ \(\tau_{\mathrm{P}}\) _denotes the trace associated to the pointed simplicial presheaf_ \(\mathrm{P}\)_), and when_ \(char(k)=0\)_,_ (2.2.1) \[\tau_{\mathrm{X}/\mathrm{U}}=\tau_{\mathrm{X}_{1}/\mathrm{U}_{1}}+\tau_{\mathrm{X}_{2}/\mathrm{U}_{2}}-\tau_{(\mathrm{X}_{1}\cap\mathrm{X}_{2})/(\mathrm{U}_{1}\cap\mathrm{U}_{2})}\,\text{in}\,\mathcal{SH}(k).\] _In case_ \(char(k)=\mathrm{p}>0\)_,_ (2.2.2) \[\tau_{\mathrm{X}/\mathrm{U},\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\tau_{\mathrm{X}_{1}/\mathrm{U}_{1},\mathbb{S}_{k}[\mathrm{p}^{-1}]}+\tau_{\mathrm{X}_{2}/\mathrm{U}_{2},\mathbb{S}_{k}[\mathrm{p}^{-1}]}-\tau_{(\mathrm{X}_{1}\cap\mathrm{X}_{2})/(\mathrm{U}_{1}\cap\mathrm{U}_{2}),\mathbb{S}_{k}[\mathrm{p}^{-1}]}\,\text{in}\,\mathcal{SH}(k)[\mathrm{p}^{-1}].\] _Throughout the following discussion, let_ \(<-1>\) _denote the class in the Grothendieck-Witt ring associated to_ \(-1\in k\) _as in_ _[_Mo4, p. 252]__._ 2. _Let_ \(i:\mathrm{Z}\to\mathrm{X}\) _denote a closed immersion of smooth schemes with_ \(j:\mathrm{U}\to\mathrm{X}\) _denoting the corresponding open complement. Let_ \(\mathcal{N}\) _denote the normal bundle associated to the closed immersion_ \(i\) _and let_ \(\mathrm{Th}(\mathcal{N})\) _denote its Thom-space. Let_ \(c\) _denote the codimension of_ \(\mathrm{Z}\) _in_ \(\mathrm{X}\)_.
Then adopting the terminology above, we obtain in_ \(\mathcal{SH}(k)\) _when_ \(char(k)=0\)_:_ (2.2.3) \[\tau_{\mathrm{X}_{+}}=\tau_{\mathrm{U}_{+}}+\tau_{\mathrm{X}/\mathrm{U}},\ \text{and}\ \tau_{\mathrm{X}/\mathrm{U}}=\tau_{\mathrm{Th}(\mathcal{N})}=<-1>^{c}\tau_{\mathrm{Z}_{+}}.\] _In case_ \(\sqrt{-1}\in k\)_, it follows that_ \[\tau_{\mathrm{X}/\mathrm{U}}=\tau_{\mathrm{Th}(\mathcal{N})}=\tau_{\mathrm{Z}_{+}}.\] _In case_ \(char(k)=\mathrm{p}>0\)_, we obtain in_ \(\mathcal{SH}(k)[\mathrm{p}^{-1}]\)_:_ (2.2.4) \[\tau_{\mathrm{X}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\tau_{\mathrm{U}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}+\tau_{\mathrm{X}/\mathrm{U},\mathbb{S}_{k}[\mathrm{p}^{-1}]},\tau_{\mathrm{X}/\mathrm{U},\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\tau_{\mathrm{Th}(\mathcal{N}),\mathbb{S}_{k}[\mathrm{p}^{-1}]}=<-1>^{c}\tau_{\mathrm{Z}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]},\] _and assuming_ \(\sqrt{-1}\in k\)__ \[\tau_{\mathrm{X}/\mathrm{U},\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\tau_{\mathrm{Th}(\mathcal{N}),\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\tau_{\mathrm{Z}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}.\] 3. _Let_ \(\{\mathrm{S}_{\alpha}|\alpha\}\) _denote a stratification of the smooth scheme_ \(\mathrm{X}\) _into finitely many locally closed and smooth subschemes_ \(\mathrm{S}_{\alpha}\)_. Let_ \(c_{\alpha}\) _denote the codimension of_ \(\mathrm{S}_{\alpha}\) _in_ \(\mathrm{X}\)_. Then we obtain in_ \(\mathcal{SH}(k)\) _when_ \(char(k)=0\)_:_ (2.2.5) \[\tau_{\mathrm{X}_{+}}=\Sigma_{\alpha}<-1>^{c_{\alpha}}\tau_{\mathrm{S}_{\alpha+}}\ \text{and assuming}\ \sqrt{-1}\in k,\] \[\tau_{\mathrm{X}_{+}}=\Sigma_{\alpha}\tau_{\mathrm{S}_{\alpha+}}.\] _In case_ \(char(k)=\mathrm{p}>0\)_, we obtain in_ \(\mathcal{SH}(k)[\mathrm{p}^{-1}]\)_:_ (2.2.6) \[\tau_{\mathrm{X}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\Sigma_{\alpha}<-1>^{c_{\alpha}}\tau_{\mathrm{S}_{\alpha+},\mathbb{S}_{k}[\mathrm{p}^{-1}]},\ \text{and again assuming}\ \sqrt{-1}\in k,\] \[\tau_{\mathrm{X}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}=\Sigma_{\alpha}\tau_{\mathrm{S}_{\alpha+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}.\] Proof.: We will explicitly discuss only the case in characteristic \(0\), as proofs in positive characteristics will follow along the same lines. First one recalls the stable homotopy cofiber sequence (see [10, p. 115, Theorem 2.23]) \[\Sigma^{\infty}_{\mathbf{T}}\mathrm{U}_{+}\to\Sigma^{\infty}_{\mathbf{T}}\mathrm{X}_{+}\to\Sigma^{\infty}_{\mathbf{T}}(\mathrm{X}/\mathrm{U})\simeq\Sigma^{\infty}_{\mathbf{T}}\mathrm{Th}(\mathcal{N}) \tag{2.2.7}\] in the stable motivic homotopy category over the base scheme. The first statement in (2.2.3) follows by applying Theorem 2.5 to the stable homotopy cofiber sequence in (2.2.7). Next we will consider (i), namely the Mayer-Vietoris sequence.
For this, one begins with the stable cofiber sequences \[\Sigma^{\infty}_{\mathbf{T}}(\mathrm{U}_{1}\cap\mathrm{U}_{2})_{+}\to\Sigma^{\infty}_{\mathbf{T}}(\mathrm{U}_{1}\sqcup\mathrm{U}_{2})_{+}\to\Sigma^{\infty}_{\mathbf{T}}(\mathrm{U})_{+},\quad\Sigma^{\infty}_{\mathbf{T}}(\mathrm{X}_{1}\cap\mathrm{X}_{2})_{+}\to\Sigma^{\infty}_{\mathbf{T}}(\mathrm{X}_{1}\sqcup\mathrm{X}_{2})_{+}\to\Sigma^{\infty}_{\mathbf{T}}(\mathrm{X})_{+}.\] Then one applies Theorem 2.6(i) to both of them, which will prove: \[\tau_{\mathrm{U}_{+}}=\tau_{(\mathrm{U}_{1}\cup\mathrm{U}_{2})_{+}}=\tau_{\mathrm{U}_{1+}}+\tau_{\mathrm{U}_{2+}}-\tau_{(\mathrm{U}_{1}\cap\mathrm{U}_{2})_{+}}\text{ and }\tau_{\mathrm{X}_{+}}=\tau_{(\mathrm{X}_{1}\cup\mathrm{X}_{2})_{+}}=\tau_{\mathrm{X}_{1+}}+\tau_{\mathrm{X}_{2+}}-\tau_{(\mathrm{X}_{1}\cap\mathrm{X}_{2})_{+}}. \tag{2.2.8}\] On applying the first statement in (ii) to \(\mathrm{U}_{i}\subseteq\mathrm{X}_{i}\), \(i=1,2\) and \(\mathrm{U}_{1}\cap\mathrm{U}_{2}\subseteq\mathrm{X}_{1}\cap\mathrm{X}_{2}\) we obtain: \[\tau_{\mathrm{X}_{i}/\mathrm{U}_{i}}=\tau_{\mathrm{X}_{i+}}-\tau_{\mathrm{U}_{i+}},i=1,2\text{ and }\] \[\tau_{\mathrm{X}/\mathrm{U}}=\tau_{(\mathrm{X}_{1}\cup\mathrm{X}_{2})/(\mathrm{U}_{1}\cup\mathrm{U}_{2})}=\tau_{(\mathrm{X}_{1}\cup\mathrm{X}_{2})_{+}}-\tau_{(\mathrm{U}_{1}\cup\mathrm{U}_{2})_{+}}.\] The required statement in (2.2.1) now follows on substituting from (2.2.8). This completes the proof of (i). We proceed to establish the remaining statement in (2.2.3). First we will consider the case where the normal bundle \(\mathcal{N}\) is trivial, mainly because this is an important special case to consider. When the normal bundle is trivial, we observe that \(\mathrm{X}/\mathrm{U}\simeq\mathrm{Th}(\mathcal{N})\simeq\mathbf{T}^{c}\wedge\mathrm{Z}_{+}\). Next, the multiplicative property of the trace as in Lemma 2.8 shows that \[\tau_{\Sigma^{\infty}_{\mathbf{T}}(\mathbf{T}^{c}\wedge\mathrm{Z}_{+})}=(\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathbf{T}})^{\wedge c}\wedge\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathrm{Z}_{+}} \tag{2.2.9}\] as classes in \(\pi_{0,0}(\mathbb{S}_{k})\). In general, it is known that the class of \(\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathbf{T}}\) is \(<-1>\) in the Grothendieck-Witt group \(\mathrm{GW}(k)\): recall that \(\mathrm{GW}(k)\) identifies with \(\pi_{0,0}(\mathbb{S}_{k})\), in view of [11, Theorem 6.2.2]. (Here it may be important to recall that \(\mathbf{T}\) is the pointed simplicial presheaf \(\mathbb{P}^{1}\) pointed by \(\infty\).) This implies that \(\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathbf{T}}=<-1>\) in \(\pi_{0,0}(\mathbb{S}_{k})\) and proves the second statement in (2.2.3) when \(\mathcal{N}\) is trivial. Next we assume that \(\sqrt{-1}\in k\). Then the quadratic form \(<-1>\) gets identified with \(<1>\) in the Grothendieck-Witt group \(\mathrm{GW}(k)\): see, for example, [10, p. 44]. Therefore, \(\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathbf{T}}=<1>\), hence \(\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathbf{T}^{c}\wedge\mathrm{Z}_{+}}=\tau_{\Sigma^{\infty}_{\mathbf{T}}\mathrm{Z}_{+}}\). This completes the proof of (ii), when the normal bundle to \(\mathrm{Z}\) in \(\mathrm{X}\) is trivial. To consider the general case when the normal bundle \(\mathcal{N}\) is not necessarily trivial, one takes a finite Zariski open cover \(\{\mathrm{U}_{i}|\mathrm{i}=1,\cdots,\mathrm{n}\}\) of \(\mathrm{Z}\) so that \(\mathcal{N}_{|\mathrm{U}_{i}}\) is trivial for each \(i\). Then the Mayer-Vietoris property considered in (i) and ascending induction on \(n\), together with the case where the normal bundle is trivial considered above, complete the proof in this case.
(Observe that any scheme \(\mathrm{Z}\) over \(k\) of finite type is always quasi-compact, so that such a finite open cover always exists.) These complete the proof of all the statements in (ii). Next we consider the statement in (iii). This will follow from the second statement in (ii) using ascending induction on the number of strata. However, as this induction needs to be handled carefully, we proceed to provide an outline of the relevant argument. We will assume that the stratification of \(\mathrm{X}\) defines the following increasing filtrations: (a) \(\emptyset=\mathrm{X}_{-1}\subseteq\mathrm{X}_{0}\subseteq\cdots\subseteq\mathrm{X}_{\mathrm{n}}=\mathrm{X}\), where each \(\mathrm{X}_{i}\) is closed and the strata \(\mathrm{X}_{i}-\mathrm{X}_{i-1}\), \(i=0,\cdots,n\) are smooth. (b) \(\mathrm{U}_{0}\subseteq\mathrm{U}_{1}\subseteq\cdots\subseteq\mathrm{U}_{\mathrm{n-1}}\subseteq\mathrm{U}_{\mathrm{n}}=\mathrm{X}\), where each \(\mathrm{U}_{\mathrm{i}}\) is open in \(\mathrm{X}\) (and therefore smooth), with \(\mathrm{U}_{\mathrm{i}}-\mathrm{U}_{\mathrm{i-1}}=\mathrm{X}_{\mathrm{n-i}}-\mathrm{X}_{\mathrm{n-i-1}}\), for all \(i=0,\cdots,n\). Now observe that each \(\mathrm{U}_{\mathrm{k}}\to\mathrm{X}\) is an open immersion, while each \(\mathrm{X}_{\mathrm{k}}-\mathrm{X}_{\mathrm{k-1}}\to\mathrm{X}-\mathrm{X}_{\mathrm{k-1}}\) is a closed immersion. Let \(c_{k}\) denote the corresponding codimension. We now apply Theorem 2.9(ii) with \(\mathrm{U}=\mathrm{U}_{\mathrm{n-1}}\), and \(\mathrm{Z}=\mathrm{U}_{\mathrm{n}}-\mathrm{U}_{\mathrm{n-1}}=\mathrm{X}_{0}-\mathrm{X}_{-1}=\mathrm{X}_{0}\), the closed stratum. Since \(\mathrm{X}\) is now smooth and so is \(\mathrm{Z}\), the hypotheses of Theorem 2.9(ii) are satisfied. This provides us \[\tau_{\mathrm{X}_{+}}=\tau_{\mathrm{U}_{\mathrm{n-1}+}}+\tau_{\mathrm{X}/\mathrm{U}_{\mathrm{n-1}}}\text{ and }\tau_{\mathrm{X}/\mathrm{U}_{\mathrm{n-1}}}=<-1>^{c_{0}}\tau_{\mathrm{X}_{0+}} \tag{2.2.10}\] Next we replace \(\mathrm{X}\) by \(\mathrm{U}_{\mathrm{n-1}}\), \(\mathrm{U}\) by \(\mathrm{U}_{\mathrm{n-2}}\) and \(\mathrm{Z}\) by \(\mathrm{X}_{1}-\mathrm{X}_{0}\). Since \(\mathrm{X}_{1}-\mathrm{X}_{0}\) is smooth, Theorem 2.9(ii) now provides us \[\tau_{\mathrm{U}_{\mathrm{n-1}+}}=\tau_{\mathrm{U}_{\mathrm{n-2}+}}+<-1>^{c_{1}}\tau_{(\mathrm{X}_{1}-\mathrm{X}_{0})_{+}}. \tag{2.2.11}\] Substituting these in (2.2.10), we obtain \[\tau_{\mathrm{X}_{+}}=\tau_{\mathrm{U}_{\mathrm{n-2}+}}+<-1>^{c_{1}}\tau_{(\mathrm{X}_{1}-\mathrm{X}_{0})_{+}}+<-1>^{c_{0}}\tau_{\mathrm{X}_{0+}}.\] Clearly this may be continued inductively to deduce statement (iii) in Theorem 2.9 from Theorem 2.9(ii). ## 3. **Proofs of the main Theorems** We begin by discussing the following Proposition, which seems to be rather well-known. (See for example, [10, Proposition 4.10] or [BP, (3.6)].) **Proposition 3.1**.: _Let \(\mathrm{T}\) denote a split torus acting on a separated scheme \(\mathrm{X}\) all defined over the given perfect base field \(k\)._ _Then the following hold._ \(\mathrm{X}\) _admits a decomposition into a disjoint union of finitely many locally closed, \(\mathrm{T}\)-stable subschemes \(\mathrm{X}_{\mathrm{j}}\) so that_ \[\mathrm{X}_{\mathrm{j}}\cong(\mathrm{T}/\Gamma_{\mathrm{j}})\times\mathrm{Y}_{\mathrm{j}}.
\tag{3.0.1}\] _Here each \(\Gamma_{j}\) is a subgroup scheme of \(\mathrm{T}\), each \(\mathrm{Y}_{\mathrm{j}}\) is a scheme of finite type over \(k\) which is also regular and on which \(\mathrm{T}\) acts trivially, with the isomorphism in (3.0.1) being \(\mathrm{T}\)-equivariant._ Proof.: One may derive this from the generic torus slice theorem proved in [10, Proposition 4.10], which says that if a split torus acts on a reduced separated scheme of finite type over a perfect field, then the following are satisfied: 1. there is an open subscheme \(\mathrm{U}\) which is regular and stable under the \(\mathrm{T}\)-action 2. a geometric quotient \(\mathrm{U}/\mathrm{T}\) exists, which is a regular scheme of finite type over \(k\) 3. \(\mathrm{U}\) is isomorphic as a \(\mathrm{T}\)-scheme to \(\mathrm{T}/\Gamma\times\mathrm{U}/\mathrm{T}\) where \(\Gamma\) is a diagonalizable subgroup scheme of \(\mathrm{T}\) and \(\mathrm{T}\) acts trivially on \(\mathrm{U}/\mathrm{T}\). (See also [BP, (3.6)] for a similar decomposition.) Next we consider the following theorem. **Theorem 3.2**.: _Under the assumption that the base field \(k\) is of characteristic \(0\), the following hold, where \(\tau_{\mathrm{X}_{+}}\) denotes the trace associated to the pointed scheme \(\mathrm{X}_{+}\):_ 1. \(\tau_{\mathrm{G}_{m+}}=1-<-1>\) _in_ \(\mathrm{GW}(k)\)_, and if_ \(\mathrm{T}\) _is a split torus of rank_ \(n\)_,_ \(\tau_{\mathrm{T}_{+}}=(1-<-1>)^{n}\) _in_ \(\mathrm{GW}(k)\)_. Therefore, it follows that when_ \(k\) _contains a_ \(\sqrt{-1}\)_,_ \(\tau_{\mathrm{G}_{m+}}=0\) _and_ \(\tau_{\mathrm{T}_{+}}=0\) _in_ \(\mathrm{GW}(k)\)_._ 2. _Let_ \(\mathrm{T}\) _denote a split torus acting on a smooth scheme_ \(\mathrm{X}\)_. Then_ \(\mathrm{X}^{\mathrm{T}}\) _is also smooth, and_ \(\tau_{\mathrm{X}_{+}}-\tau_{\mathrm{X}_{+}^{\mathrm{T}}}\) _belongs to the ideal generated by_ \((1-<-1>)\) _in_ \(\mathrm{GW}(k)\)_. In particular, when_ \(k\) _contains a_ \(\sqrt{-1}\)_,_ \(\tau_{\mathrm{X}_{+}}=\tau_{\mathrm{X}_{+}^{\mathrm{T}}}\) _in_ \(\mathrm{GW}(k)\)_._ _If the base field is of positive characteristic \(\mathrm{p}\), the corresponding assertions hold with the trace of a pointed smooth scheme \(\mathrm{Y}_{+}\) replaced by \(\tau_{\mathrm{Y}_{+},\mathbb{S}_{k}[\mathrm{p}^{-1}]}\) and the Grothendieck-Witt ring replaced by the Grothendieck-Witt ring with the prime \(\mathrm{p}\) inverted._ Proof.: We will only consider the proofs when the base field is of characteristic \(0\), since the proofs in the positive characteristic case are entirely similar. However, it is important to point out that in positive characteristic \(\mathrm{p}\), it is important to invert \(\mathrm{p}\): for otherwise, one no longer has a theory of Spanier-Whitehead duality. Next observe from Definition 2.2 that the trace \(\tau_{\mathrm{X}_{+}}\) associated to any smooth scheme \(\mathrm{X}\) is a map \(\mathbb{S}_{k}\to\mathbb{S}_{k}\): as such, we will identify \(\tau_{\mathrm{X}_{+}}\) with the corresponding class \(\tau_{\mathrm{X}_{+}}^{*}(1)\) in the Grothendieck-Witt ring of the base field. Next we consider (i). We observe that the scheme \(\mathbb{A}^{1}\) is the disjoint union of the closed point \(\{0\}\) and \(\mathbb{G}_{m}\).
If \(i_{1}:\{0\}\to\mathbb{A}^{1}\) and \(j_{1}:\mathbb{G}_{m}\to\mathbb{A}^{1}\) are the corresponding immersions, Theorem 2.9 (ii) and (iii) show that \[\tau_{\mathbb{A}_{+}^{1}}=\tau_{\mathbb{G}_{m+}}+\tau_{\mathbb{A}^{1}/\mathbb{G}_{m}}=\tau_{\mathbb{G}_{m+}}+\tau_{\mathbf{T}}=\tau_{\mathbb{G}_{m+}}+<-1>. \tag{3.0.2}\] Therefore, it follows that \[\tau_{\mathbb{G}_{m+}}=\tau_{\mathbb{A}_{+}^{1}}-<-1>=1-<-1>, \tag{3.0.3}\] where \(\tau_{\mathbb{A}_{+}^{1}}=\tau_{\{0\}_{+}}=1\) by \(\mathbb{A}^{1}\)-contractibility. One may readily see this from the definition of the pre-transfer as in Definition 2.2, which shows that the pre-transfer \(tr^{\prime}_{\mathrm{C}_{+}}=tr^{\prime}_{\mathrm{C}_{+}}(id)\), and hence the corresponding trace \(\tau_{\mathrm{C}_{+}}=\pi\circ tr^{\prime}_{\mathrm{C}_{+}}\), depend on \(\mathrm{C}_{+}\) only up to its class in the motivic stable homotopy category. Since \(\mathrm{T}\) is a split torus, we may assume \(\mathrm{T}=\mathbb{G}_{m}^{n}\) for some positive integer \(n\). Then the multiplicative property of the trace and pre-transfer (see Lemma 2.8) proves that \(\tau_{\mathrm{T}_{+}}=(1-<-1>)^{n}\). In particular, when \(k\) contains a \(\sqrt{-1}\), it follows that \(\tau_{\mathbb{G}_{m+}}=0\) and \(\tau_{\mathrm{T}_{+}}=0\) in \(\mathrm{GW}(k)\). These complete the proof of statement (i). We therefore proceed to prove the statement in (ii). First, we invoke Proposition 3.1 to conclude that \(\mathrm{X}^{\mathrm{T}}\) is the disjoint union of the subschemes \(\mathrm{X}_{\mathrm{j}}\) for which \(\Gamma_{j}=\mathrm{T}\). Let \(i_{j}:\mathrm{X}_{\mathrm{j}}\cong(\mathrm{T}/\Gamma_{\mathrm{j}})\times\mathrm{Y}_{\mathrm{j}}\to\mathrm{X}\) denote the locally closed immersion. Next observe that the additivity of the trace proven in Theorem 2.9, and the multiplicativity of the pre-transfer and trace proven in Lemma 2.8, along with the decomposition in (3.0.1), show that \[\tau_{\mathrm{X}_{+}}=\Sigma_{j}\tau_{\mathrm{X}_{\mathrm{j}+}}=\Sigma_{j}(\tau_{\mathrm{T}/\Gamma_{\mathrm{j}}+})\wedge\tau_{\mathrm{Y}_{\mathrm{j}+}}. \tag{3.0.4}\] Now statement (i) in the theorem shows that the term \(\tau_{\mathrm{T}/\Gamma_{\mathrm{j}}+}=(1-<-1>)^{n_{j}}\), if \(\mathrm{T}/\Gamma_{\mathrm{j}}\) is a split torus of rank \(n_{j}\). Since \(\mathrm{X}^{\mathrm{T}}\) is the disjoint union of the subschemes \(\mathrm{X}_{\mathrm{j}}=\mathrm{T}/\Gamma_{\mathrm{j}}\times\mathrm{Y}_{\mathrm{j}}\) with \(\Gamma_{\mathrm{j}}=\mathrm{T}\), the additivity of the trace proven in Theorem 2.9 and applied to \(\mathrm{X}^{\mathrm{T}}\) proves that the sum of such terms on the right-hand-side of (3.0.4) is \(\tau_{\mathrm{X}_{+}^{\mathrm{T}}}\). Therefore, it follows that \(\tau_{\mathrm{X}_{+}}-\tau_{\mathrm{X}_{+}^{\mathrm{T}}}\) belongs to the ideal in \(\mathrm{GW}(k)\) generated by \(1-<-1>\). In particular, when \(k\) contains a \(\sqrt{-1}\), it follows that \(\tau_{\mathrm{X}_{+}}=\tau_{\mathrm{X}_{+}^{\mathrm{T}}}\). These complete the proof of the statements in (ii). Proof of Theorem 1.2.: We point out that it is important to assume the base field \(k\) is perfect in the following arguments: this will ensure that all the schemes considered here are defined over the same base field. First we will show that we can reduce to the case \(\mathrm{G}\) is _connected_. Let \(\mathrm{G}^{\mathrm{o}}\) denote the connected component of \(\mathrm{G}\) containing the identity element and let \(\mathrm{T}\) denote a split maximal torus in \(\mathrm{G}\).
Then, one first obtains the isomorphisms \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\cong\{g\mathrm{T}g^{-1}|g\in\mathrm{G}\}\) and \(\mathrm{G}^{\mathrm{o}}/\mathrm{N}_{\mathrm{G}^{\mathrm{o}}}(\mathrm{T})\cong\{g_{o}\mathrm{T}g_{o}^{-1}|g_{o}\in\mathrm{G}^{\mathrm{o}}\}\). Next observe that \(g\mathrm{T}g^{-1}\), being a maximal torus and hence a connected subgroup of \(\mathrm{G}\), is in fact a maximal torus in \(\mathrm{G}^{\mathrm{o}}\) for each \(g\in\mathrm{G}\). These show that \[\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})=\{g\mathrm{T}g^{-1}|g\in\mathrm{G}\}\cong\{g_{o}\mathrm{T}g_{o}^{-1}|g_{o}\in\mathrm{G}^{\mathrm{o}}\}=\mathrm{G}^{\mathrm{o}}/\mathrm{N}_{\mathrm{G}^{\mathrm{o}}}(\mathrm{T}).\] Therefore, we may assume the group \(\mathrm{G}\) is connected. Moreover, we may take the quotient by the unipotent radical \(\mathrm{R}_{\mathrm{u}}(\mathrm{G})\), which is a normal subgroup (and is isomorphic to an affine space), with the quotient \(\mathrm{G}_{\mathrm{red}}=\mathrm{G}/\mathrm{R}_{\mathrm{u}}(\mathrm{G})\) reductive. Now \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\cong\mathrm{G}_{\mathrm{red}}/\mathrm{N}_{\mathrm{G}_{\mathrm{red}}}(\mathrm{T})\) (since the intersection of a maximal torus in \(\mathrm{G}\) with the unipotent radical \(\mathrm{R}_{\mathrm{u}}(\mathrm{G})\) is trivial), so that we may assume \(\mathrm{G}\) is a connected split reductive group. Then we observe that since \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\) is the variety of all split maximal tori in \(\mathrm{G}\), \(\mathrm{T}\) has an action on \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\) (induced by the left translation action of \(\mathrm{T}\) on \(\mathrm{G}\)) so that there is exactly one fixed point, namely the coset \(e\mathrm{N}_{\mathrm{G}}(\mathrm{T})\), that is, \((\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))^{\mathrm{T}}=\{\mathrm{e}\mathrm{N}_{\mathrm{G}}(\mathrm{T})\}\cong\mathrm{Spec}\,k\). (To prove this assertion, one may reduce to the case where the base field is algebraically closed, since the formation of fixed point schemes respects change of base fields as shown in [Fog, p. 33, Remark (3)]. See also [BP, Lemma 3.5]. In fact, one may see this directly as follows. Making use of the identification of \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\) with \(\{g\mathrm{T}g^{-1}|g\in\mathrm{G}\}\), one sees that if \(g_{0}\mathrm{T}g_{0}^{-1}\) is fixed by the conjugation action of \(\mathrm{T}\), then \(g_{0}^{-1}\mathrm{T}g_{0}\subseteq\mathrm{N}_{\mathrm{G}}(\mathrm{T})^{\mathrm{o}}=\mathrm{T}\), so that \(g_{0}\in\mathrm{N}_{\mathrm{G}}(\mathrm{T})\). Thus the coset \(g_{0}\mathrm{N}_{\mathrm{G}}(\mathrm{T})=\mathrm{e}\mathrm{N}_{\mathrm{G}}(\mathrm{T})\).) We first consider the case where the base field is of characteristic \(0\). Then, by Theorem 3.2(ii), \[\tau_{\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})_{+}}=\tau_{(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))_{+}^{\mathrm{T}}}=\tau_{\mathrm{Spec}\,k_{+}}=id_{\mathbb{S}_{k}},\] which is the identity map of the motivic sphere spectrum. Therefore, \[\chi_{mot}(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))=\tau_{\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})_{+}}^{\ast}(1)=1.\] The motivic stable homotopy group \(\pi_{0,0}(\mathbb{S}_{k})\) identifies with the Grothendieck-Witt ring by [Mo4].
This completes the proof of the statement on \(\tau_{\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})_{+}}\) in Theorem 1.2 in this case. In case the base field is of positive characteristic \(\mathrm{p}\), one observes that \(\Sigma_{\mathbf{T}}^{\infty}\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})_{+}\) will be dualizable only in \(\mathbf{Spt}(k_{mot})[\mathrm{p}^{-1}]\). But once the prime \(\mathrm{p}\) is inverted, the same arguments as before carry over, proving the corresponding statement. These complete the proof of the theorem. **Proof of Corollary 1.3**. Observe first that if \(\bar{k}\) is the algebraic closure of the given field, then it contains a \(\sqrt{-1}\), and therefore the conclusions of the theorem hold in this case. In positive characteristic \(\mathrm{p}\), we proceed to show that this already implies that \(\chi_{mot}(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))\) is a _unit_ in the group \(\mathrm{GW}(k)[\mathrm{p}^{-1}]\), without the assumption on the existence of a square root of \(-1\) in \(k\). For this, one may first observe the commutative diagram, where \(\bar{k}\) is an algebraic closure of \(k\) (with \(\mathrm{p}\) inverted everywhere in positive characteristic \(\mathrm{p}\)): \[\begin{CD}\mathrm{GW}(k)@>rk>>\mathbb{Z}\\ @VVV@|\\ \mathrm{GW}(\bar{k})@>rk>>\mathbb{Z}\end{CD}\tag{3.0.5}\] Here the left vertical map is induced by the change of base fields from \(k\) to \(\bar{k}\), and \(rk\) denotes the _rank_ map. Since the motivic Euler-characteristic of \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\) over \(Spec\,k\) maps to the motivic Euler-characteristic of the corresponding \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\) over \(Spec\,\bar{k}\), it follows that the rank of \(\chi_{mot}(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))\) over \(Spec\,k\) is in fact \(1\). By [An, Lemma 2.9(2)], this shows that \(\chi_{mot}(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))\) over \(Spec\,k\) is in fact a unit in \(\mathrm{GW}(k)[\mathrm{p}^{-1}]\), that is, when \(k\) has positive characteristic. (For the convenience of the reader, we will summarize a few key facts discussed in [An, Proof of Lemma 2.9(2)]. It is observed there that when the base field \(k\) is _not_ formally real, then \(\mathrm{I}(k)=kernel(\mathrm{GW}(k)\overset{rk}{\rightarrow}\mathbb{Z})\) is the nil radical of \(\mathrm{GW}(k)\): see [Bae78, Theorem V.8.9, Lemma V.7.7 and Theorem V.7.8]. Therefore, if \(char(k)=\mathrm{p}>0\), and the rank of \(\chi_{mot}(\mathrm{G/N_{G}(T)})\) is \(1\) in \(\mathbb{Z}[\mathrm{p}^{-1}]\), then \(\chi_{mot}(\mathrm{G/N_{G}(T)})\) is \(1+q\) for some nilpotent element \(q\) in \(\mathrm{I}(k)[\mathrm{p}^{-1}]\) and the conclusion follows.) An alternative shorter proof is the following: observe that \(\chi_{mot}(\mathrm{G/N_{G}(T)})-\chi_{mot}((\mathrm{G/N_{G}(T)})^{\mathrm{T}})=\chi_{mot}(\mathrm{G/N_{G}(T)})-1\) belongs to the ideal generated by \(1-<-1>\). The class \(1-<-1>\) clearly belongs to \(\mathrm{I}(k)=kernel(\mathrm{GW}(k)\overset{rk}{\rightarrow}\mathbb{Z})\). When \(k\) is not formally real, this ideal consists of nilpotent elements as observed above, and therefore \(\chi_{mot}(\mathrm{G/N_{G}(T)})\) is a unit when \(k\) is not formally real. In characteristic \(0\), the commutative diagram (3.0.5) shows that once again the rank of \(\chi_{mot}(\mathrm{G/N_{G}(T)})\) is \(1\). Therefore, to show that the class \(\chi_{mot}(\mathrm{G/N_{G}(T)})\) is a unit in \(\mathrm{GW}(k)\), it suffices to show its signature is a unit: this is proven in [An, Theorem 5.1(1)]. (Again, for the convenience of the reader, we summarize some details from the proof of [An, Theorem 5.1(1)].
When the field \(k\) is not formally real, the discussion in the last paragraph applies, so that by [An, Lemma 2.12] one reduces to considering only the case when \(k\) is a real closed field. In this case, one lets \(\mathbb{R}^{alg}\) denote the real closure of \(\mathbb{Q}\) in \(\mathbb{R}\). Then, one knows the given real closed field \(k\) contains a copy of \(\mathbb{R}^{alg}\) and that there exists a reductive group scheme \(\widetilde{\mathrm{G}}\) over \(\mathrm{Spec}\,\mathbb{R}^{alg}\) so that \(\mathrm{G}=\widetilde{\mathrm{G}}\times_{\mathrm{Spec}\,\mathbb{R}^{alg}}\mathrm{Spec}\,k\). Let \(\mathrm{G}_{\mathbb{R}}=\widetilde{\mathrm{G}}\times_{\mathrm{Spec}\,\mathbb{R}^{alg}}\mathrm{Spec}\,\mathbb{R}\). Then one also observes that the Grothendieck-Witt groups of the three fields \(k\), \(\mathbb{R}^{alg}\) and \(\mathbb{R}\) are isomorphic, and the motivic Euler-characteristics \(\chi_{mot}(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))\), \(\chi_{mot}(\widetilde{\mathrm{G}}/\mathrm{N}_{\widetilde{\mathrm{G}}}(\mathrm{T}))\) and \(\chi_{mot}(\mathrm{G}_{\mathbb{R}}/\mathrm{N}_{\mathrm{G}_{\mathbb{R}}}(\mathrm{T}))\) over the above three fields identify under the above isomorphisms, so that one may assume the base field \(k\) is \(\mathbb{R}\). Then it is shown in [An, Proof of Theorem 5.1(1)] that, in this case, knowing that the rank and signature of the motivic Euler characteristic \(\chi_{mot}(\mathrm{G/N_{G}(T)})\) are \(1\) suffices to prove it is a unit in the Grothendieck-Witt group.) These complete the proof of the corollary.
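To illustrate Theorem 1.2 in the smallest nontrivial case, it may be instructive to verify the conjecture for \(\mathrm{G}=\mathrm{SL}_{2}\) directly from Theorem 2.9; this worked example is our addition, and we assume \(char(k)=0\) for simplicity. The variety of maximal tori \(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T})\) identifies with the space of regular semisimple lines in \(\mathfrak{sl}_{2}\), that is, with \(\mathbb{P}^{2}\setminus C\), where \(C\) is the conic of nilpotent directions \(a^{2}+bc=0\), so that \(C\cong\mathbb{P}^{1}\). Theorem 2.9(ii), applied to the codimension-one closed immersion \(C\to\mathbb{P}^{2}\), then gives \[\chi_{mot}(\mathrm{G}/\mathrm{N}_{\mathrm{G}}(\mathrm{T}))=\chi_{mot}(\mathbb{P}^{2})-<-1>\chi_{mot}(C)=(1+<-1>+<-1>^{2})-<-1>(1+<-1>)=1,\] using \(<-1>^{2}=1\); notably, no assumption on \(\sqrt{-1}\) is needed in this particular case.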
2307.11203
The Structure of Turbulence in Pulsatile Flow over Urban Canopies
The transport of energy, mass, and momentum in the atmospheric boundary layer (ABL) is regulated by coherent structures. Although past studies have primarily focused on stationary ABL flows, the majority of real-world ABL flows are non-stationary, and a thorough examination of coherent structures under such conditions is lacking. To fill this gap, this study examines the topological changes in ABL turbulence induced by non-stationarity and their effects on momentum transport. Results from a large-eddy simulation of pulsatile open channel flow over an array of surface-mounted cuboids are examined with a focus on the inertial sublayer, and contrasted to those from a corresponding constant pressure gradient case. The analysis reveals that flow pulsation primarily affects the ejection-sweep pattern. Inspection of the instantaneous turbulence structures, two-point autocorrelations, and conditionally-averaged flow fields shows that such a pattern is primarily influenced by the phase-dependent shear rate. From a turbulence structure perspective, this influence is attributed to the changes in the geometry of hairpin vortices. An increase (decrease) in the shear rate intensifies (relaxes) the hairpin vortices, leading to an increase (decrease) in the frequency of ejections and an amplification (reduction) of their percentage contribution to the total momentum flux. Moreover, the size of the hairpin vortex packets changes according to the hairpin vortices comprising them, while the packet inclination remains unaltered during the pulsatile cycle. Findings underscore the important impact of non-stationarity on the structure of ABL turbulence and associated mechanisms supporting momentum transport.
Weiyi Li, Marco G. Giometto
2023-07-20T19:46:30Z
http://arxiv.org/abs/2307.11203v2
# The Structure of Turbulence in Pulsatile Flow over Urban Canopies ###### Abstract The transport of energy, mass, and momentum in the atmospheric boundary layer (ABL) is regulated by coherent structures. Although past studies have primarily focused on stationary ABL flows, the majority of real-world ABL flows are non-stationary, and a thorough examination of coherent structures under such conditions is lacking. To fill this gap, this study examines the topological changes in ABL turbulence induced by non-stationarity and their effects on momentum transport. Results from a large-eddy simulation of pulsatile open channel flow over an array of surface-mounted cuboids are examined with a focus on the inertial sublayer, and contrasted to those from a corresponding constant pressure gradient case. The analysis reveals that flow pulsation primarily affects the ejection-sweep pattern. Inspection of the instantaneous turbulence structures, two-point autocorrelations, and conditionally-averaged flow fields shows that such a pattern is primarily influenced by the phase-dependent shear rate. From a turbulence structure perspective, this influence is attributed to the changes in the geometry of hairpin vortices. An increase (decrease) in the shear rate intensifies (relaxes) the hairpin vortices, leading to an increase (decrease) in the frequency of ejections and an amplification (reduction) of their percentage contribution to the total momentum flux. Moreover, the size of the hairpin vortex packets changes according to the hairpin vortices comprising them, while the packet inclination remains unaltered during the pulsatile cycle. Findings underscore the important impact of non-stationarity on the structure of ABL turbulence and associated mechanisms supporting momentum transport. ## 1 Introduction Coherent turbulent structures, also known as organized structures, play a crucial role in governing the exchange of energy, mass, and momentum between the earth's surface and the atmosphere, as well as in many engineering systems. In wall-bounded flows, these structures have been shown to carry a substantial fraction of the mean shear stress (Lohou _et al._, 2000; Katul _et al._, 2006), kinetic energy (Carper & Porte-Agel, 2004; Huang _et al._, 2009; Dong _et al._, 2020), and scalar fluxes (Li & Bou-Zeid, 2011; Wang _et al._, 2014; Li & Bou-Zeid, 2019). It hence comes as no surprise that substantial efforts have been devoted to their characterization across many fields. These structures are of practical relevance in applications relating to agriculture (Raupach _et al._, 1986; Pan _et al._, 2014), air quality control (Michioka _et al._, 2014), urban climate (Christen _et al._, 2007), and energy harvesting (Ali _et al._, 2017), to name but a few. Previous studies on coherent structures in atmospheric boundary layer (ABL) flows have mainly focused on the roughness sublayer (RSL) and the inertial sublayer (ISL)--the lower portions of the ABL. These layers host physical flow phenomena regulating land-atmosphere exchanges at scales relevant to weather models and human activities (Stull, 1988; Oke _et al._, 2017). The RSL, which extends from the surface up to 2 to 5 times the average height of roughness elements, is characterized by flow heterogeneity due to the presence of these elements (Fernando, 2010). In the RSL, the geometry of turbulent structures is mainly determined by the underlying surface morphology. 
Through field measurements and wind tunnel data of ABL flow over vegetation canopies, Raupach _et al._ (1996) demonstrated that coherent structures near the top of a vegetation canopy are connected to inflection-point instabilities, akin to those found in mixing layers. Specifically, RSL turbulence features a characteristic length scale that is determined by the mean shear and is more efficient in transporting momentum when compared to its boundary-layer counterpart. Ever since, the so-called mixing-layer analogy has become a cornerstone in our understanding of vegetation canopy flows and provides an explanation for many of the observed distinctive features of turbulence in such a region. Its validity has been confirmed by experimental approaches (Novak _et al._, 2000; Dupont & Patton, 2012; Bohm _et al._, 2013) and numerical simulations (Dupont & Brunet, 2008; Huang _et al._, 2009; Gavrilov _et al._, 2013). Building on the mixing-layer analogy, Finnigan _et al._ (2009) employed conditional averaging techniques to show that the prevalent eddy structure in the RSL is a head-down hairpin vortex followed by a head-up one. This pattern is characterized by a local pressure peak and a strong scalar front located between the hairpin pair. More recently, Bailey & Stoll (2016) challenged this observation by proposing an alternative two-dimensional roller structure with streamwise spacing that scales with the characteristic length suggested by Raupach _et al._ (1996). Extending the mixing-layer analogy to flow over urban canopies has proven challenging. In a numerical simulation study, Coceal _et al._ (2007) discovered the absence of Kelvin-Helmholtz waves, which are a characteristic of the mixing-layer analogy, near the top of the urban canopy. This finding, corroborated by observations from Huq _et al._ (2007), suggests that the mixing-layer analogy is not applicable to urban canopy flows. Instead, the RSL of urban canopy flows is influenced by two length scales: the first is dictated by the size of individual roughness elements such as buildings and trees, and the second by the imprint of large-scale motions above the RSL. The coexistence of these two length scales can be observed through two-point correlation maps (Castro _et al._, 2006; Reynolds & Castro, 2008) and velocity spectra (Basley _et al._, 2019). However, when the urban canopy has a significant aspect ratio between the building height \(h\) and width \(w\), such as \(h/w>4\), the momentum transport in the RSL is dominated by mixing-layer-type eddies, as shown by Zhang _et al._ (2022).
Findings therein support Townsend's similarity hypothesis (Townsend, 1976), which states that turbulence dynamics beyond the RSL do not depend on surface morphological features, except via their role in setting the length and velocity scales for the outer flow region. The said structural similarity between TBL flows over different surfaces was later confirmed by Wu & Christensen (2007) and Coceal _et al._ (2007), where a highly irregular rough surface and an urban-like roughness were considered, respectively. However, Volino _et al._ (2011) later reported pronounced signatures of surface roughness on flow structures beyond the RSL in a TBL flow over two-dimensional bars. Similar observations were also made in a TBL flow over a surface characterized by cross-stream heterogeneity (Anderson _et al._, 2015_a_), thus questioning the validity of Townsend's similarity hypothesis. To reconcile these contrasting observations, Squire _et al._ (2017) argued that structural similarity in the ISL is contingent on the surface roughness features not producing flow patterns significantly larger than their own size. If the surface-induced flow patterns are larger than the roughness features themselves, they may control flow coherence in the ISL. For example, cross-stream heterogeneous rough surfaces can induce secondary circulations as large as the boundary-layer thickness, which profoundly modify momentum transport and flow coherence in the ISL (Barros & Christensen, 2014; Anderson _et al._, 2015_a_). Although coherent structures in cases with significant surface-induced flow patterns necessitate case-specific analyses, researchers have extensively worked towards characterizing the topology of turbulence in cases that exhibit ISL structural similarity. These analyses have inspired scaling laws (Meneveau & Marusic, 2013; Yang _et al._, 2016; Hu _et al._, 2023) and the construction of statistical models (Perry & Chong, 1982) for TBL turbulence. In this context, the hairpin vortex packet paradigm has emerged as the predominant geometrical model (Christensen & Adrian, 2001; Tomkins & Adrian, 2003; Adrian, 2007). The origins of this model can be traced back to the pioneering work of Theodorsen (1952), who hypothesized that inclined hairpin or horseshoe-shaped vortices were the fundamental elements of TBL turbulence. This idea was later supported by flow visualizations from laboratory experiments (Bandyopadhyay, 1980; Head & Bandyopadhyay, 1981; Smith _et al._, 1991) and high-fidelity numerical simulations (Moin & Kim, 1982, 1985; Kim & Moin, 1986). In addition to providing evidence for the existence of hairpin vortices, Head & Bandyopadhyay (1981) also proposed that these vortices occur in groups, with their heads describing an envelope inclined at 15\({}^{\circ}\)-20\({}^{\circ}\) with respect to the wall. Adrian _et al._ (2000) expanded on this idea, and introduced the hairpin vortex packet paradigm, which posits that hairpin vortices are closely aligned in a quasi-streamwise direction, forming hairpin vortex packets with a characteristic inclination angle of 15\({}^{\circ}\)-20\({}^{\circ}\). Nested between the legs of these hairpins are low-momentum regions, which extend approximately 2-3 times the boundary layer thickness in the streamwise direction. These low-momentum regions are typically referred to as large-scale motions (Smits _et al._, 2011). Flow visualization studies by Hommema & Adrian (2003) and Hutchins _et al._ (2012) further revealed that ABL structures in the ISL are also organized in a similar manner.
Of relevance for this work is that previous studies on coherent structures have predominantly focused on (quasi-)stationary flow conditions. However, stationarity is of rare occurrence in both ABL and engineering flow systems (Mahrt & Bou-Zeid, 2020; Lozano-Durán _et al._, 2020). As discussed in the recent review paper by Mahrt & Bou-Zeid (2020), there are two major drivers of non-stationarity in the ABL. The first involves temporal variations of surface heat flux, typically associated with evening transitions or the passage of individual clouds (Grimsdell & Angevine, 2002). The second kind corresponds to time variations of the horizontal pressure gradient driving the flow, which can be induced by modes associated with propagating submeso-scale motions, mesoscale disturbances, and synoptic fronts (Monti _et al._, 2002; Mahrt, 2014; Cava _et al._, 2017). Previous studies have demonstrated that non-stationarity significantly affects flow statistics in the ABL, and can result in deviations from equilibrium turbulence. Hicks _et al._ (2018) reported that during morning and late afternoon transitions, the rapid change in surface heat flux disrupts the equilibrium turbulence relations. Additionally, several observational studies by Mahrt and coworkers (Mahrt, 2007, 2008; Mahrt _et al._, 2013) demonstrated that time variations in the driving pressure gradient can enhance momentum transport under stable atmospheric stratifications. Non-stationarity is also expected to impact the geometry of turbulence in the ABL, but this problem has not received much attention thus far. This study contributes to addressing this knowledge gap by investigating the impact of non-stationarity of the second kind on the topology of coherent structures in ABL turbulence and how it affects the mechanisms controlling momentum transport. The study focuses on flow over urban-like roughness subjected to a time-varying pressure gradient. To represent flow unsteadiness, a pulsatile pressure gradient with a constant average and a sinusoidal oscillating component is selected as a prototype. Besides being relevant from a practical perspective--wave-current boundary layers, internal-wave induced flows, blood flows in arteries--this flow regime is also particularly suited for identifying the temporal characteristics of coherent structures owing to the periodic nature of flow statistics. Pulsatile flows share some similarities with oscillatory flows, i.e., flows driven by a time-periodic pressure gradient with zero mean. Interestingly, in the context of oscillatory flows, several studies have been devoted to the characterization of coherent structures. For instance, Costamagna _et al._ (2003) and Salon _et al._ (2007) carried out numerical studies on transitional and fully turbulent oscillatory flow over smooth surfaces, and observed that streaky structures form at the end of the acceleration phases, then distort, intertwine, and eventually break into small vortices. Carstensen _et al._ (2010) performed a series of laboratory experiments on transitional oscillatory flow, and identified two other major coherent structures, namely, cross-stream vortex tubes, which are the direct consequences of inflectional-point shear layer instability, and turbulent spots, which result from the destruction of near-wall streaky structures akin to those in stationary flows.
Carstensen _et al._ (2012) observed turbulent spots in oscillatory flows over sand-grain roughness, suggesting that the presence of such flow structures is independent of surface types, and it was later highlighted by Mazzuoli & Vittori (2019) that the mechanism responsible for the turbulent spot generation is similar over both smooth and rough surfaces. Although the primary modes of variability in oscillatory flows are relatively well understood, the same cannot be said for pulsatile flows. A notable study by Zhang & Simons (2019) on wave-current boundary layers, a form of pulsatile flow, revealed phase variations in the spacing of streaks during the wave cycle. However, a detailed analysis of this particular issue is still missing. To investigate the structure of turbulence in current-dominated pulsatile flow over surfaces in fully-rough aerodynamic flow regimes, we conducted a wall-modeled large-eddy simulation (LES) of flow over an array of surface-mounted cuboids. This study builds on the findings of a companion study currently under review that primarily focuses on characterizing the time evolution of flow statistics (Li & Giometto, 2022). By contrasting findings against a corresponding stationary flow simulation, this study addresses these specific questions: (i) Does flow unsteadiness alter the topology of coherent structures in a time-averaged sense? (ii) How does the geometry of coherent structures evolve throughout the pulsation cycle? (iii) What is the effect of such modifications on the mechanisms governing momentum transfer? Answering these questions will achieve a twofold research objective: first, it will contribute to a better understanding of coherent patterns in pulsatile flow over complex geometries; and second, it will shed light on how these patterns regulate momentum transfer. This paper is organized as follows. Section 2 outlines the numerical procedure and the simulation setups. First- and second-order statistics are presented and discussed in §3.1. Section 3.2 is dedicated to the quadrant analysis. Two-point velocity correlations and visualizations of instantaneous flow structures provide the spatial information of dominant coherent structures in §3.3 and §3.4, respectively. The technique of conditional averaging is employed to extract the characteristic eddy structures during the pulsatile cycle in §3.5. Concluding remarks are given in §4. ## 2 Methodology ### Numerical procedure Simulations are carried out via an in-house LES algorithm (Albertson & Parlange, 1999\(a\),\(b\); Giometto _et al._, 2016). The LES algorithm solves the spatially-filtered momentum and mass conservation equations, namely, \[\frac{\partial u_{i}}{\partial t}+u_{j}\left(\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}}\right)=-\frac{1}{\rho}\frac{\partial P}{\partial x_{i}}-\frac{\partial\tau_{ij}}{\partial x_{j}}-\frac{1}{\rho}\frac{\partial P_{\infty}}{\partial x_{1}}\delta_{i1}+F_{i} \tag{1}\] \[\frac{\partial u_{i}}{\partial x_{i}}=0 \tag{2}\] where \((u_{1},u_{2},u_{3})\) represent the filtered velocities along the streamwise \(x_{1}\), cross-stream \(x_{2}\), and wall-normal \(x_{3}\) directions, respectively. The rotational form of the convective term is used to ensure kinetic energy conservation in the discrete sense in the inviscid limit (Orszag & Pao, 1975).
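The rotational form trades the advective term \(u_{j}\,\partial u_{i}/\partial x_{j}\) for \(u_{j}(\partial u_{i}/\partial x_{j}-\partial u_{j}/\partial x_{i})\) plus a kinetic-energy gradient, which is why the resolved kinetic energy is absorbed into the modified pressure introduced below. A quick symbolic check of this identity (our own illustration in Python/sympy, not part of the authors' solver):

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
X = (x1, x2, x3)
u = [sp.Function(f"u{i + 1}")(*X) for i in range(3)]  # arbitrary velocity field

i = 0  # check the x1-component; the identity holds for each i
advective = sum(u[j] * sp.diff(u[i], X[j]) for j in range(3))
rotational = sum(u[j] * (sp.diff(u[i], X[j]) - sp.diff(u[j], X[i])) for j in range(3))
grad_ke = sp.diff(sum(uj**2 for uj in u) / 2, X[i])  # d/dx_i (u_j u_j / 2)

# advective = rotational + gradient of kinetic energy, identically
assert sp.simplify(advective - rotational - grad_ke) == 0
```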
\(\tau_{ij}\) is the deviatoric part of the kinematic subgrid-scale (SGS) stress tensor, which is parameterized via the Lagrangian scale-dependent dynamic (LASD) Smagorinsky model (Bou-Zeid _et al._, 2005). The flow is assumed to be in the fully rough aerodynamic regime, and viscous stresses are not considered. \(P=p+\rho\frac{1}{3}\tau_{ii}+\rho\frac{1}{2}u_{i}u_{i}\) is the modified pressure, which accounts for the trace of the SGS stress and the resolved kinetic energy, and \(\rho\) is a constant fluid density. The flow is driven by a spatially uniform, pulsatile pressure gradient in the \(x_{1}\) direction, namely \(\partial P_{\infty}/\partial x_{1}=-\rho f_{m}\left[1+\alpha_{p}\sin(\omega t)\right]\), where the \(f_{m}\) parameter controls the magnitude of the temporally averaged pressure gradient, \(\alpha_{p}\) controls the forcing amplitude, and \(\omega\) the forcing frequency. \(\delta_{i1}\) in (1) denotes the Kronecker delta function. Periodic boundary conditions apply in the wall-parallel directions, and a free-slip boundary condition is imposed at the top of the computational box. The lower surface is represented by an array of uniformly distributed cuboids, which are explicitly resolved via a discrete forcing immersed boundary method (IBM) (Mittal & Iaccarino, 2005; Tseng _et al._, 2006; Chester _et al._, 2007; Giometto _et al._, 2016). The IBM approach makes use of an artificial force \(F_{i}\) to impose the no-slip boundary condition at the solid-fluid interface, and of an algebraic equilibrium wall-layer model to evaluate surface stresses (Piomelli, 2008; Bose & Park, 2018). Spatial derivatives in the wall-parallel directions are computed via a pseudo-spectral collocation method based on truncated Fourier expansions (Orszag, 1970), whereas a second-order staggered finite-difference scheme is employed in the wall-normal direction. Since dealiasing errors are known to be detrimental for pseudo-spectral discretizations (Margairaz _et al._, 2018), non-linear convective terms are de-aliased exactly via the 3/2 rule (Canuto _et al._, 2007). The time integration is performed via a second-order Adams-Bashforth scheme, and the incompressibility condition is enforced via a fractional step method (Kim & Moin, 1985). ### Simulation setup Two LESs of flow over an array of surface-mounted cubes are carried out. The two simulations only differ by the pressure forcing term: one is characterized by a pressure gradient that is constant in space and time (CP hereafter), and the other by a pressure gradient that is constant in space and pulsatile in time (PP). The computational domain for both simulations is sketched in figure 1. The size of the box is \([0,L_{1}]\times[0,L_{2}]\times[0,L_{3}]\) with \(L_{1}=72h\), \(L_{2}=24h\), and \(L_{3}=8h\), where \(h\) denotes the height of the cubes. Cubes are organized in an in-line arrangement with planar and frontal area fractions set to \(\lambda_{p}=\lambda_{f}=0.\overline{1}\). The relatively high packing density and the relatively large scale separation \(L_{3}/h\) support the existence of an inertial sublayer in the considered flow system (Coceal _et al._, 2007; Castro, 2007; Zhang _et al._, 2022).
In terms of horizontal extent, \(L_{1}/L_{3}\) and \(L_{2}/L_{3}\) are larger than those from previous works focusing on coherent structures above aerodynamically rough surfaces (Coceal _et al._, 2007; Xie _et al._, 2008; Leonardi & Castro, 2010; Anderson _et al._, 2015_b_) and are sufficient to accommodate large-scale motions (Balakumar & Adrian, 2007). An aerodynamic roughness length \(z_{0}=10^{-4}h\) is prescribed at the cube surfaces and the ground via the algebraic wall-layer model, resulting in negligible SGS drag contributions to the total surface drag (Yang & Meneveau, 2016). The computational domain is discretized using a uniform Cartesian grid of \(N_{1}\times N_{2}\times N_{3}=576\times 192\times 128\), so each cube is resolved by \(8\times 8\times 16\) grid points. Such a grid resolution yields flow statistics that are poorly sensitive to grid resolution in both statistically stationary and pulsatile flows at the considered oscillation frequency (Tseng _et al._, 2006; Li & Giometto, 2022). For the PP case, the chosen forcing amplitude and frequency are \(\alpha_{p}=12\) and \(\omega T_{\rm h}=\pi/8\), respectively, where \(T_{\rm h}=h/u_{\tau}\) is the averaged turnover time of characteristic eddies of the urban canopy layer (UCL) and \(u_{\tau}=\sqrt{f_{m}L_{3}}\) is the friction velocity. In dimensional terms, considering realistic values of the friction velocity and UCL height in the ABL, i.e., \(0.1\leq u_{\tau}\leq 0.5\) m/s and \(3\leq h\leq 30\) m (Stull, 1988), the considered frequency corresponds to a time period \(24\leq T\leq 4800\) s. This range of time scales pertains to sub-mesoscale motions (Mahrt, 2009; Hoover _et al._, 2015), which, as outlined in §1, are a major driver of atmospheric pressure gradient variability.

Figure 1: Side and planar view of the computational domain (a and b, respectively). The red dashed line denotes the repeating unit.

In addition, as demonstrated in Li & Giometto (2022), the current flow system exhibits a Stokes layer, where turbulence generation and momentum transport are considerably modified during the pulsation cycle. The selected frequency produces a Stokes layer thickness of \(\delta_{s}=5h\), indicating that the roughness sublayer (RSL) and inertial sublayer (ISL) are both affected by the non-stationarity. To focus on the current-dominated flow regime, we choose a value of \(\alpha_{p}=12\), which is large enough to induce significant changes in the coherent structures with the varying pressure gradient while avoiding mean flow reversals. Both simulations are initialized with velocity fields from a stationary flow case and integrated over \(400T_{atm}\), corresponding to 200 pulsatile cycles for the PP case. Here \(T_{atm}=L_{3}/u_{\tau}\) refers to the turnover time of the largest eddies in the domain. The size of the time step is set to \(\delta t=5\cdot 10^{-5}T_{atm}\), which satisfies the Courant-Friedrichs-Lewy stability condition, i.e., \(\max\left(CFL\right)=u_{max}\delta t/\delta\approx 0.05\), where \(u_{max}\) is the maximum velocity magnitude at any point in the domain during the simulation and \(\delta\) is the size of the grid stencil. The initial \(20T_{atm}\) are discarded for both the CP and PP cases (transient period for the PP case), which correspond to about 10 oscillation periods, after which instantaneous snapshots of velocities and pressure are collected and saved every \(0.025T_{atm}\) (1/80 of the pulsatile cycle).
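For concreteness, the sketch below evaluates the pulsatile forcing and the derived scales for the PP case; it is a minimal Python/NumPy illustration with our own variable names, mirroring (but not taken from) the setup described above:

```python
import numpy as np

def pressure_gradient_forcing(t, f_m, alpha_p, omega, rho=1.0):
    """-dP_inf/dx1 = rho * f_m * [1 + alpha_p * sin(omega * t)]."""
    return rho * f_m * (1.0 + alpha_p * np.sin(omega * t))

# Nondimensional parameters mirroring the PP case.
h, L3, f_m, alpha_p = 1.0, 8.0, 1.0, 12.0
u_tau = np.sqrt(f_m * L3)    # friction velocity
T_h = h / u_tau              # turnover time of UCL eddies
omega = (np.pi / 8.0) / T_h  # omega * T_h = pi / 8
T = 2.0 * np.pi / omega      # pulsatile period
t = np.linspace(0.0, T, 81)  # 80 phase bins per cycle, as in the text
forcing = pressure_gradient_forcing(t, f_m, alpha_p, omega)
```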
### Notation and terminology For the PP case, \(\overline{(\cdot)}\) denotes an ensemble-averaging operation, performed over the phase dimension and over repeating surface units (see figure 1), i.e., for a given flow quantity \(\theta\), \[\overline{\theta}(x_{1},x_{2},x_{3},t)=\frac{1}{N_{p}n_{1}n_{2}}\sum_{n=1}^{N_{p}}\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}\theta(x_{1}+il_{1},x_{2}+jl_{2},x_{3},t+nT),\\ 0\leqslant x_{1}\leqslant l_{1},\quad 0\leqslant x_{2}\leqslant l_{2},\quad 0\leqslant t\leqslant T. \tag{3}\] Using the usual Reynolds decomposition, one can write \[\theta(x_{1},x_{2},x_{3},t)=\overline{\theta}(x_{1},x_{2},x_{3},t)+\theta^{\prime}(x_{1},x_{2},x_{3},t) \tag{4}\] where \((\cdot)^{\prime}\) denotes a fluctuation from the ensemble average. For the CP case, \(\overline{(\cdot)}\) denotes a quantity averaged over repeating units only. An ensemble-averaged quantity can be further decomposed into an intrinsic spatial average and a deviation from the intrinsic average (Schmid _et al._, 2019), i.e., \[\overline{\theta}(x_{1},x_{2},x_{3},t)=\langle\overline{\theta}\rangle(x_{3},t)+\overline{\theta}^{\prime\prime}(x_{1},x_{2},x_{3},t). \tag{5}\] Note that, for each \(x_{3}\), the intrinsic averaging operation is taken over a thin horizontal "slab" \(V_{f}\) of fluid, characterized by a thickness \(\delta_{3}\) in the wall-normal (\(x_{3}\)) direction, namely, \[\langle\overline{\theta}\rangle(x_{3},t)=\frac{1}{V_{f}}\int_{x_{3}-\delta_{3}/2}^{x_{3}+\delta_{3}/2}\int_{0}^{L_{2}}\int_{0}^{L_{1}}\overline{\theta}(x_{1},x_{2},x_{3},t)dx_{1}dx_{2}dx_{3}. \tag{6}\] Further, any phase-averaged quantity from the PP case consists of a longtime-averaged component and an oscillatory component with a zero mean, which will be hereafter denoted via the subscripts \(l\) and \(o\), respectively, i.e., \[\overline{\theta}(x_{1},x_{2},x_{3},t)=\overline{\theta}_{l}(x_{1},x_{2},x_{3})+\overline{\theta}_{o}(x_{1},x_{2},x_{3},t) \tag{7}\] and \[\langle\overline{\theta}\rangle(x_{3},t)=\langle\overline{\theta}\rangle_{l}(x_{3})+\langle\overline{\theta}\rangle_{o}(x_{3},t). \tag{8}\] As for the CP case, the longtime and ensemble averages are used interchangeably due to the lack of an oscillatory component. In the following, the longtime-averaged quantities from the PP case are contrasted against their counterparts from the CP case to highlight the impact of flow unsteadiness on flow characteristics in a longtime-average sense. Oscillatory and phase-averaged quantities are analyzed to shed light on the phase-dependent features of the PP case.
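These averaging operators map directly onto array reductions. A minimal NumPy sketch (array shapes and names are our own, for illustration only):

```python
import numpy as np

def phase_average(theta):
    """Eq. (3): average over pulsatile cycles (axis 0) and over repeating
    surface units (axes 2 and 4). theta has shape
    (N_p, N_phase, n1, m1, n2, m2, N3); returns (N_phase, m1, m2, N3)."""
    return theta.mean(axis=(0, 2, 4))

def longtime_oscillatory_split(theta_bar):
    """Eqs. (7)-(8): split a phase-averaged field (phase on axis 0) into a
    longtime mean and a zero-mean oscillatory component."""
    theta_l = theta_bar.mean(axis=0)
    return theta_l, theta_bar - theta_l

def intrinsic_average(theta_bar, fluid_mask):
    """Eq. (6): horizontal average over fluid points only.
    theta_bar: (N_phase, N1, N2, N3); fluid_mask: (N1, N2, N3) boolean."""
    masked = np.where(fluid_mask[None, ...], theta_bar, np.nan)
    return np.nanmean(masked, axis=(1, 2))  # -> (N_phase, N3)
```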
## 3 Results ### Overview of flow statistics Li & Giometto (2022) presented a detailed analysis of pulsatile flow over an array of surface-mounted cuboids, discussing the impact of varying forcing amplitude and frequency on selected flow statistics. Here, we summarize and expand upon some of the findings that are relevant to this work. Figure 2(a) presents the wall-normal distributions of the longtime-averaged resolved Reynolds shear stress \(\langle\overline{u^{\prime}_{1}u^{\prime}_{3}}\rangle_{l}\) and dispersive shear stress \(\langle\overline{u^{\prime\prime}_{1}}\overline{u^{\prime\prime}_{3}}\rangle_{l}\). Note that SGS components contribute \(<1\%\) to the total Reynolds stress, and are hence not discussed. From the figure, it is apparent that flow unsteadiness does not noticeably affect the \(\langle\overline{u^{\prime}_{1}u^{\prime}_{3}}\rangle_{l}\) profile, which is characterized by \(<3\%\) departures from the statistically stationary case. On the contrary, flow pulsation within the UCL leads to pronounced increases in \(\langle\overline{u^{\prime\prime}_{1}}\overline{u^{\prime\prime}_{3}}\rangle_{l}\) (up to 500% locally). However, despite this increase, the dispersive flux remains a modest contributor to the total momentum flux in the UCL. Figure 2(b) displays the longtime-averaged resolved turbulent kinetic energy \(k_{l}=\langle\overline{u^{\prime}_{i}u^{\prime}_{i}}\rangle_{l}/2\) and wake kinetic energy \(k_{w,l}=\langle\overline{u^{\prime\prime}_{i}}\overline{u^{\prime\prime}_{i}}\rangle_{l}/2\). Both \(k_{l}\) and \(k_{w,l}\) from the PP case feature modest departures from their CP counterparts (discrepancies are \(<5\%\)), highlighting a weak dependence of both longtime-averaged turbulent and wake kinetic energy on flow unsteadiness. Also, the RSL thicknesses \(\delta_{RSL}\) for the CP and PP cases are depicted in figure 2. Following the approach by Pokrajac _et al._ (2007), \(\delta_{RSL}\) is estimated by thresholding the spatial standard deviation of the longtime-averaged streamwise velocity normalized by its intrinsic average, namely, \[\sigma=\frac{\sqrt{\langle(\overline{u}_{1,l}-\langle\overline{u}_{1}\rangle_{l})^{2}\rangle}}{\langle\overline{u}_{1}\rangle_{l}}\, \tag{1}\] where the threshold is taken as 1%. An alternative method to evaluate \(\delta_{RSL}\) involves using phase-averaged statistics instead of longtime-averaged ones in (1). Although not shown, such a method yields similar predictions (with a discrepancy of less than 5%). Both \(\langle\overline{u}_{1}^{\prime\prime}\overline{u}_{3}^{\prime\prime}\rangle_{l}\) and \(k_{w,l}\), which result from spatial variations of time-averaged flow quantities, reduce to \(<1\%\) of their peak value above \(\delta_{RSL}\). From figure 2, one can readily observe that flow unsteadiness increases the extent of the RSL, with estimated \(\delta_{RSL}\)s not exceeding \(1.5h\) in both cases. Hereafter, we will assume \(\delta_{RSL}=1.5h\).
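The RSL-thickness criterion above reduces to a one-dimensional threshold search. A short sketch, under the assumption that the longtime-averaged velocity and a solid/fluid mask are available on the LES grid (names are ours):

```python
import numpy as np

def rsl_thickness(u1_l, fluid_mask, x3, threshold=0.01):
    """delta_RSL: first height where sigma, the normalized spatial standard
    deviation of the longtime-averaged velocity, drops below 1%."""
    u = np.where(fluid_mask, u1_l, np.nan)  # exclude points inside the cubes
    sigma = np.nanstd(u, axis=(0, 1)) / np.nanmean(u, axis=(0, 1))
    below = np.flatnonzero(sigma < threshold)
    return x3[below[0]] if below.size else x3[-1]
```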
As discussed in §1, the RSL and ISL feature distinct coherent structures. Specifically, the structures in the RSL are expected to show strong imprints of the roughness elements, whereas those in the ISL should, in principle, be independent of surface morphology (Coceal _et al._, 2007). The response of first- and second-order flow statistics to flow unsteadiness is depicted in figure 3. Figure 3(a) highlights the presence of an oscillating wave in the oscillatory shear rate \(\langle\partial\overline{u}_{1}/\partial x_{3}\rangle_{o}\) generated at the canopy top in response to the flow unsteadiness, with a \(\pi/2\) phase lag relative to the pulsatile pressure forcing. Such a wave propagates in the positive vertical direction while being attenuated and diffused by turbulent friction and mixing. It is noteworthy that the propagation speed of the oscillating shear rate is constant, as suggested by the constant tilting angle along the \(x_{3}\) direction in the \(\langle\partial\overline{u}/\partial x_{3}\rangle_{o}\) contour. As apparent from figure 3(b,c), the space-time diagrams of the oscillatory resolved Reynolds shear stress \(\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}\) and oscillatory resolved turbulent kinetic energy \(k_{o}=\langle\overline{u_{i}^{\prime}u_{i}^{\prime}}\rangle_{o}/2\) are also characterized by decaying waves traveling away from the RSL at constant speeds. The speeds of these waves are similar to that of the corresponding oscillating shear rate, which can be inferred from the identical tilting angles in the contours. There is clearly a causal relation for this behavior: above the UCL, the major contributors to the shear production terms in the budget equations of \(\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}\) and \(k_{o}\) are \[\langle\overline{\mathcal{P}}\rangle_{13,o}=-\langle\overline{u_{3}^{\prime}u_{3}^{\prime}}\rangle_{l}\langle\frac{\partial\overline{u}}{\partial x_{3}}\rangle_{o}-\langle\overline{u_{3}^{\prime}u_{3}^{\prime}}\rangle_{o}\langle\frac{\partial\overline{u}}{\partial x_{3}}\rangle_{l}\, \tag{2}\] and \[\langle\overline{\mathcal{P}}\rangle_{k,o}=-2\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{l}\langle\frac{\partial\overline{u}}{\partial x_{3}}\rangle_{o}-2\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}\langle\frac{\partial\overline{u}}{\partial x_{3}}\rangle_{l}\, \tag{10}\] respectively. As the oscillating shear rate travels upwards away from the UCL, it interacts with the local turbulence by modulating \(\langle\overline{\mathcal{P}}\rangle_{13,o}\) and \(\langle\overline{\mathcal{P}}\rangle_{k,o}\), ultimately yielding the observed oscillations in the resolved Reynolds stresses. On the other hand, no pulsatile-forcing-related terms appear in the budget equations of the resolved Reynolds stresses. This indicates that it is the oscillating shear rate induced by the pulsatile forcing, rather than the pressure forcing itself, that modifies the turbulence production above the UCL. A similar point about pulsatile flows was made in Scotti & Piomelli (2001), where it was stated that "[...] in the former [pulsatile flow] it is the shear generated at the wall that affects the flow". That study, however, considered pulsatile flow over smooth surfaces at a relatively low Reynolds number. In addition, a visual comparison of the contours of \(\langle\partial\overline{u}/\partial x_{3}\rangle_{o}\) and \(-\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}\) highlights the presence of a phase lag between \(\langle\partial\overline{u}/\partial x_{3}\rangle_{o}\) and \(-\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle_{o}\). That is to say, the turbulence is not in equilibrium with the mean flow during the pulsatile cycle, even though the pulsatile forcing or the induced oscillating shear wave does not substantially modify the longtime-averaged turbulence intensity (see figure 2). To gain further insight into this behavior, the next section examines the structure of turbulence under the considered non-equilibrium condition. ### Quadrant analysis The discussion will first focus on the impact of flow pulsation on \(u_{1}^{\prime}u_{3}^{\prime}\) quadrants. This statistical analysis will allow us to quantify the contributions of different coherent motions to the turbulent momentum transport. The quadrant analysis technique was first introduced by Wallace _et al._ (1972), and has thereafter been routinely employed to characterize the structure of turbulence across a range of flow systems (Wallace, 2016).
The approach maps velocity fluctuations to one of four types of coherent motions (quadrants) in the \(u_{1}^{\prime}-u_{3}^{\prime}\) phase space, namely, \[\begin{cases}Q1:&u_{1}^{\prime}>0,u_{3}^{\prime}>0\\ Q2:&u_{1}^{\prime}<0,u_{3}^{\prime}>0\\ Q3:&u_{1}^{\prime}<0,u_{3}^{\prime}<0\\ Q4:&u_{1}^{\prime}>0,u_{3}^{\prime}<0.\end{cases} \tag{11}\] Q2 and Q4 are typically referred to as ejections and sweeps, respectively. They are the main contributors to the Reynolds shear stress, and compose the majority of the events in boundary layer flows. Ejections are associated with the lift-up of low-momentum fluid by vortex induction between the legs of hairpin structures, whereas sweeps correspond to the down-draft of high-momentum fluid (Adrian _et al._, 2000). Q1 and Q3 denote outward and inward interactions, and play a less important role in transporting momentum compared to Q2 and Q4. Coceal _et al._ (2007) and Finnigan (2000) showed that the RSL of stationary flows is dominated by ejections in terms of events, but that the overall Reynolds stress contribution from sweeps exceeds that from ejections. Away from the RSL, the trend is the opposite. This behavior is indeed apparent from figure 4, where ejection and sweep profiles are shown for the CP case (red lines). We first examine the overall frequency of events in each quadrant and the contribution of each quadrant to the resolved Reynolds shear stress. For the considered cases, the contribution to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) and the number of events of each quadrant are summed over different wall-parallel planes and over the whole sampling time period (i.e., these are longtime-averaged quantities). Results from this operation are shown in figure 4. What emerges from this figure is that flow pulsation does not significantly alter the relative contribution and frequency of each quadrant. Some discrepancies between CP and PP profiles can be observed immediately above the UCL, but do not sum to more than 4% at any given height. Next, we shed light on the phase-dependent features of quadrant distributions with a focus on sweeps and ejections, as it has been shown that they are the major contributors to the momentum flux (see figure 4).

Figure 4: (a) Relative contribution to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) by events in each quadrant summed over the wall-parallel planes and the whole sampling time period and (b) relative number of events in each quadrant from the PP case (black) and the CP case (red) as a function of \(x_{3}\). Cross: outward interaction; triangles: ejection; diamonds: inward interaction; circles: sweep.

Figure 5: (a) Ratio of the number of ejections to sweeps (\(\gamma_{\#}\)) from the PP case at a streamwise/wall-normal plane. (b) Location of the selected streamwise/wall-normal plane (red dashed line) within a repeating unit. (c) \(\gamma_{\#}\) from the CP case at the same plane. Black dashed lines denote \(x_{3}/h=1.5\), which is the upper limit of the RSL.

Hereafter, the ratio between the numbers of ejections and sweeps is denoted by \(\gamma_{\#}\), and the ratio of the corresponding contributions to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) is represented by \(\gamma_{c}\).
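The quadrant decomposition above reduces to a few boolean masks; the sketch below (our own helper, not the authors' code) returns, for each quadrant, the fraction of events and the fractional contribution to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\), from which \(\gamma_{\#}\) and \(\gamma_{c}\) follow as Q2/Q4 ratios:

```python
import numpy as np

def quadrant_stats(u1p, u3p):
    """Per-quadrant (event fraction, fractional contribution to <u1'u3'>)
    for flattened fluctuation samples u1p, u3p."""
    quads = {
        "Q1 (outward)":  (u1p > 0) & (u3p > 0),
        "Q2 (ejection)": (u1p < 0) & (u3p > 0),
        "Q3 (inward)":   (u1p < 0) & (u3p < 0),
        "Q4 (sweep)":    (u1p > 0) & (u3p < 0),
    }
    flux, n = u1p * u3p, u1p.size
    return {name: (mask.sum() / n, flux[mask].sum() / flux.sum())
            for name, mask in quads.items()}
```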
Note that, as mentioned in the previous section, a turbulence fluctuation is defined as a deviation from the local ensemble average, so the number of occurrences and the contribution to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) of each quadrant are functions of both the location relative to the cube within the repeating unit and the phase for the PP case, and so are \(\gamma_{\#}\) and \(\gamma_{c}\). Conversely, in the CP case, \(\gamma_{\#}\) and \(\gamma_{c}\) are only functions of the spatial location relative to the cube. Figures 5(a) and (c) present \(\gamma_{\#}\) up to \(x_{3}/h=2\) at a selected streamwise/wall-normal plane for the PP and CP cases, respectively. The chosen plane cuts through the center of a cube in the repeating unit, as shown in figure 5(b). In the cavity, the ejection-sweep pattern from the PP case is found to be qualitatively similar to its CP counterpart throughout the pulsatile cycle (compare subplots (a) and (c) in figure 5). Specifically, a preponderance of sweeps characterizes a narrow region in the leeward side of the cube (the streamwise extent of this region is \(\lessapprox 0.3h\)), whereas ejections dominate in the remainder of the cavity. As also apparent from figure 5(a), the streamwise extent of the sweep-dominated region increases (decreases) during the acceleration (deceleration) time period. During the acceleration phase, the \(h<x_{3}<2h\) region (i.e., immediately above the UCL) transitions from an ejection-dominated flow regime to a sweep-dominated one, and vice versa as the flow decelerates. Such a transition first occurs immediately above the cavity, where a larger number of sweeps (ejections) is generated during the acceleration (deceleration) period, until they populate the whole RSL. Although not presented here, \(\gamma_{c}\) features an exactly opposite trend. Shifting the attention to the ejection-sweep pattern in the ISL, figure 6 shows the intrinsic average of \(\gamma_{c}\) and \(\gamma_{\#}\) at the \(x_{3}/h=\{2,3,4\}\) planes. These quantities are hereafter denoted as \(\langle\gamma_{c}\rangle\) and \(\langle\gamma_{\#}\rangle\), respectively. The use of \(\langle\gamma_{c}\rangle\) and \(\langle\gamma_{\#}\rangle\) instead of \(\gamma_{c}\) and \(\gamma_{\#}\) to characterize the ejection-sweep pattern in the ISL can be justified by the fact that the spatial variations of \(\gamma_{\#}\) and \(\gamma_{c}\) in the wall-parallel directions vanish rapidly above the RSL, as apparent from figure 5. This is in line with the observations of Kanda _et al._ (2004) and Castro _et al._ (2006) that the spatial variations in \(\gamma_{\#}\) and \(\gamma_{c}\) are concentrated in the RSL for stationary flow over urban canopies.

Figure 6: (a)-(c): Intrinsic-averaged ratio of contributions to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) from ejections and sweeps (\(\langle\gamma_{c}\rangle\)); (d)-(f): intrinsic-averaged ratio of ejections to sweeps (\(\langle\gamma_{\#}\rangle\)); (g)-(i): phase-averaged shear rate \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) from the PP case at three wall-normal locations within the ISL, (a,d,g) \(x_{3}/h=2\), (b,e,h) \(x_{3}/h=3\), and (c,f,i) \(x_{3}/h=4\), as a function of phase. Black dashed lines denote longtime-averaged values, whereas solid red lines represent corresponding quantities from the CP case.

The ejection-sweep pattern varies substantially during the pulsatile cycle.
For example, at \(x_{3}/h=2\), even though the contribution from the ejections to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\) dominates in a longtime-average sense, i.e., \(\langle\gamma_{c}\rangle_{l}>1\), the flow features \(\langle\gamma_{c}\rangle<1\) for \(\omega t\in[0,\approx\pi/2]\) (see figure 6(a)). More interestingly, this ejection-sweep pattern at a given wall-normal location appears to be directly controlled by the phase-averaged shear rate \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), as elaborated in the following. As \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases at a given \(x_{3}\), the corresponding \(\langle\gamma_{c}\rangle\) increases whereas \(\langle\gamma_{\#}\rangle\) decreases, highlighting the presence of fewer but stronger ejection events. The absolute maximum (minimum) of \(\langle\gamma_{c}\rangle\) (\(\langle\gamma_{\#}\rangle\)) approximately coincides with the maximum (minimum) of \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). This observation is consistent across the considered planes. As discussed in the next sections, such behavior can be attributed to time variations in the geometry of ISL structures. ### Spatial and temporal flow coherence To gain a better understanding of the extent and organization of coherent structures in the ISL, this section analyzes two-point velocity autocorrelation maps. This flow statistic provides information on the linear correlation of the flow field in space, making it an effective tool for describing spatial flow coherence (Dennis & Nickels, 2011; Guala _et al._, 2012). For the PP case, the phase-dependent two-point correlation coefficient tensor \(\overline{R}_{ij}\) can be defined as \[\overline{R}_{ij}(\Delta_{1},\Delta_{2},x_{3},x_{3}^{*},t)=\frac{\langle\overline{u_{i}^{\prime}(x_{1},x_{2},x_{3}^{*},t)u_{j}^{\prime}(x_{1}+\Delta_{1},x_{2}+\Delta_{2},x_{3},t)}\rangle}{\sqrt{\langle\overline{u_{i}^{\prime}u_{i}^{\prime}}\rangle(x_{3}^{*},t)\langle\overline{u_{j}^{\prime}u_{j}^{\prime}}\rangle(x_{3},t)}}\, \tag{10}\] where \(\Delta_{i}\) is the separation in the wall-parallel directions, \(x_{3}^{*}\) represents a reference wall-normal location, and \(t\) denotes the phase. In the CP case, the flow is statistically stationary, and therefore \(\overline{R}_{ij}\) is not a function of \(t\), i.e., \(\overline{R}_{ij}=\overline{R}_{ij,l}\). Figure 7 compares \(\overline{R}_{11,l}\) for the PP and CP cases over the \(x_{3}^{*}/h=\{1.5,2,3,4\}\) planes. In both cases, \(\overline{R}_{11,l}\) features an alternating sign in the cross-stream direction, signaling the presence of low- and high-momentum streaks flanking each other in the cross-stream direction. The cross-stream extent of longtime-averaged streaks can be identified as the first zero-crossing of the \(\overline{R}_{11,l}\) contour in the \(\Delta_{2}\) direction. Based on this definition, figure 7 shows that flow unsteadiness has a modest impact on such a quantity. This finding agrees with observations from Zhang & Simons (2019) for pulsatile flow over smooth surfaces. Further, although not shown, the streamwise and cross-stream extents of streaks increase linearly in \(x_{3}\), suggesting that Townsend's attached-eddy hypothesis is valid in a longtime-average sense (Marusic & Monty, 2019).
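Because the computational domain is horizontally periodic, the two-point correlation defined above can be evaluated efficiently with FFTs. A sketch for \(\overline{R}_{11}\) at one phase (our own implementation, shown for illustration):

```python
import numpy as np

def two_point_R11(a, b):
    """R11(Delta_1, Delta_2) between u1' at the reference level (a) and at
    level x3 (b); arrays of shape (N_samples, N1, N2) holding zero-mean
    fluctuations. Separations wrap around owing to periodicity."""
    A = np.fft.fft2(a, axes=(1, 2))
    B = np.fft.fft2(b, axes=(1, 2))
    corr = np.fft.ifft2(A.conj() * B, axes=(1, 2)).real.mean(axis=0)
    corr /= a.shape[1] * a.shape[2]                       # average over points
    return corr / np.sqrt((a**2).mean() * (b**2).mean())  # normalization
```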
Turning the attention to the phase-averaged flow field, figure 8 shows the time variation of the cross-stream streak extent, identified as the first zero crossing of \(\overline{R}_{11}\) in the cross-stream direction. The linear \(x_{3}\)-scaling of the streak width breaks down in a phase-averaged sense. Such a quantity indeed varies substantially during the pulsatile cycle, diminishing in magnitude as \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases throughout the boundary layer. Interestingly, when \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) reaches its maximum at \(\omega t\approx\pi\) and \(x_{3}/h\approx 1.5\), the cross-stream extent of streaks approaches zero, suggesting that streaks may not be a persistent feature of pulsatile boundary layer flows.
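This streak-width diagnostic amounts to locating the first sign change of \(\overline{R}_{11}\) along \(\Delta_{2}\); a short helper (ours, with linear interpolation between grid points):

```python
import numpy as np

def first_zero_crossing(R11_line, delta2):
    """First zero crossing of R11 along positive Delta_2 separations."""
    idx = np.flatnonzero(np.diff(np.sign(R11_line)) < 0)
    if idx.size == 0:
        return np.nan                      # no crossing within the domain
    i = idx[0]
    r0, r1 = R11_line[i], R11_line[i + 1]  # straddle the zero: r0 > 0 > r1
    return delta2[i] + (delta2[i + 1] - delta2[i]) * r0 / (r0 - r1)
```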
More insight into the mechanisms underpinning the observed behavior can be gained by examining the time evolution of such structures for the PP case in figure 10. When taken together with figure 8(b), it becomes clear that both the streamwise and the wall-normal extents of the coherent structures tend to reduce with increasing local \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). Compared to the streamwise extent, the wall-normal extent of the coherent structure is more sensitive to changes in \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). For example, at \(x_{3}^{*}/h=4\), we observe an overall \(15\%\) variation in the wall-normal extent of the coherent structure during a pulsation cycle, whereas the corresponding variation in streamwise extent is \(8\%\). Further, the phase-averaged \(\overline{u}_{1}\) field at the considered heights appears to be more correlated with the flow in the UCL for small \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), thus highlighting a stronger coupling between flow regions. Interestingly, the tilting angle of the coherence structure remains constant during the pulsatile cycle, as shown in figure 11. Next, we will show that the hairpin vortex packet paradigm (Adrian, 2007) can be used to provide an interpretation for these findings. The validity of such a paradigm is supported by a vast body of evidence, including laboratory experiments of canonical TBL (Adrian _et al._, 2000; Christensen & Adrian, 2001; Dennis & Nickels, 2011) to ABL field measurements (Hommema & Adrian, 2003; Morris _et al._, 2007) and numerical simulations Figure 9: \(\overline{R}_{11,l}\) in the streamwise/wall-normal plane of the PP (black) and CP (red) cases. Results correspond to four reference wall-normal locations: (a) \(x_{3}^{*}/h=1.5\), (b) \(x_{3}^{*}/h=2\), (c) \(x_{3}^{*}/h=3\), and (d) \(x_{3}^{*}/h=4\). Contour levels (solid lines) range from \(0.2\) to \(0.5\) with increments of \(0.1\). Dashed lines denote the locus of the maximum correlation at each streamwise location. The slopes of the dashed lines represent the tilting angles of the structures. (Lee _et al._, 2011; Eitel-Amor _et al._, 2015). This formulation assumes that the dominant ISL structures are hairpin vortex packets, consisting of a sequence of hairpin vortices organized in a quasi-streamwise direction with a characteristic inclination angle relative to the wall. These structures encapsulate the low-momentum regions, also known as "streaks". The observed changes in \(\overline{R}_{11,l}\) between the CP and PP cases and of \(\overline{R}_{11}\) contours during the pulsatile cycle reflect corresponding changes in the geometry of vortex packets in a longtime- and phase-averaged sense. Specifically, as \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases, the phase-averaged size of vortex packets is expected to shrink, and, in the longtime-averaged sense, the vortex packets are smaller than their counterparts in the CP case. However, upon inspection of \(R_{11}\), it is unclear whether the observed change in packet size is attributable to variations in the composing hairpin vortices or the tendency for packets to break into smaller ones under high \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) and merge into larger ones under low \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). 
On the other hand, longtime-averaged coherent structures in the PP case are relatively smaller than in the CP case in both the streamwise and wall-normal coordinate directions. Discrepancies become more apparent with increasing \(x_{3}^{*}\). The difference in the streamwise (wall-normal) extent of the longtime-averaged structure between the two cases increases from \(2\%\) (\(2\%\)) at \(x_{3}^{*}/h=1.5\) to \(15\%\) (\(4\%\)) at \(x_{3}^{*}/h=4\). From the above analysis, it is hence apparent that flow pulsation reduces the streamwise and wall-normal extents of the longtime-averaged coherent structures while preserving their inclination angle. More insight into the mechanisms underpinning the observed behavior can be gained by examining the time evolution of such structures for the PP case in figure 10. When taken together with figure 8(b), it becomes clear that both the streamwise and the wall-normal extents of the coherent structures tend to reduce with increasing local \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). Compared to the streamwise extent, the wall-normal extent of the coherent structure is more sensitive to changes in \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). For example, at \(x_{3}^{*}/h=4\), we observe an overall \(15\%\) variation in the wall-normal extent of the coherent structure during a pulsation cycle, whereas the corresponding variation in streamwise extent is \(8\%\). Further, the phase-averaged \(\overline{u}_{1}\) field at the considered heights appears to be more correlated with the flow in the UCL for small \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), thus highlighting a stronger coupling between flow regions. Interestingly, the tilting angle of the coherent structure remains constant during the pulsatile cycle, as shown in figure 11. Next, we will show that the hairpin vortex packet paradigm (Adrian, 2007) can be used to provide an interpretation for these findings. The validity of such a paradigm is supported by a vast body of evidence, ranging from laboratory experiments of canonical TBL flows (Adrian _et al._, 2000; Christensen & Adrian, 2001; Dennis & Nickels, 2011) to ABL field measurements (Hommema & Adrian, 2003; Morris _et al._, 2007) and numerical simulations (Lee _et al._, 2011; Eitel-Amor _et al._, 2015). This formulation assumes that the dominant ISL structures are hairpin vortex packets, consisting of a sequence of hairpin vortices organized in a quasi-streamwise direction with a characteristic inclination angle relative to the wall. These structures encapsulate the low-momentum regions, also known as "streaks". The observed changes in \(\overline{R}_{11,l}\) between the CP and PP cases and of \(\overline{R}_{11}\) contours during the pulsatile cycle reflect corresponding changes in the geometry of vortex packets in a longtime- and phase-averaged sense. Specifically, as \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases, the phase-averaged size of vortex packets is expected to shrink, and, in the longtime-averaged sense, the vortex packets are smaller than their counterparts in the CP case. However, upon inspection of \(\overline{R}_{11}\), it is unclear whether the observed change in packet size is attributable to variations in the composing hairpin vortices or to the tendency for packets to break into smaller ones under high \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) and merge into larger ones under low \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). To answer this question, we will examine the instantaneous turbulence structures and extract characteristic hairpin vortices through conditional averaging in the following sections. Also, the constant tilting angle of the structure during the pulsatile cycle indicates that, no matter how vortex packets break and reorganize and how individual hairpin vortices deform in response to the time-varying shear rate, the hairpin vortices within the same packet remain aligned with a constant tilting angle. ### Instantaneous flow structure Figure 12(a) and (b) show the instantaneous fluctuating streamwise velocity \(u_{1}^{\prime}\) at \(x_{3}/h=1.5\) from the PP case. The chosen phases, \(\omega t=\pi/2\) and \(\omega t=\pi\), correspond to the local minimum and maximum of \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), respectively (see figure 6(g)). Streak patterns can be observed during both phases. As shown in figure 12(a), at low \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) values, instantaneous \(u_{1}^{\prime}\) structures intertwine with neighboring ones, and form large streaks with a cross-stream extent of about \(5h\). Conversely, when \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) is large, streaks are scrambled into smaller structures, characterized by a cross-stream extent of about \(h\). This behavior is consistent with the observations we made based on figure 8.

Figure 12: (a, b): Instantaneous fluctuating streamwise velocity \(u_{1}^{\prime}\) normalized by \(u_{\tau}\); (c, d): wall-normal swirl strength \(\lambda_{s,3}\) of the PP case at \(x_{3}=2h\). (a, c): \(\omega t=\pi/2\); (b, d): \(\omega t=\pi\). Shaded regions in (c, d) highlight the low-momentum (\(u_{1}^{\prime}<0\)) regions. The instantaneous flow fields are obtained from the same pulsatile cycle.

Figure 13: Instantaneous fluctuating streamwise velocity \(u_{1}^{\prime}\) in a streamwise/wall-normal plane during a pulsatile cycle. Black dashed lines denote the \(12^{\circ}\) structural tilting angle of the coherent structure.

Figure 12(c) and (d) depict the corresponding low-pass filtered wall-normal swirl strength \(\lambda_{s,3}\). The definition of the signed planar swirl strength \(\lambda_{s,i}\) is based on the studies of Stanislas _et al._ (2008) and Elsinga _et al._ (2012). The magnitude of \(\lambda_{s,i}\) is the absolute value of the imaginary part of the eigenvalues of the reduced velocity gradient tensor \(J_{jk}\), which is \[J_{jk}=\begin{bmatrix}\partial u_{j}/\partial x_{j}&\partial u_{j}/\partial x_{k}\\ \partial u_{k}/\partial x_{j}&\partial u_{k}/\partial x_{k}\end{bmatrix},\quad i\neq j\neq k\, \tag{10}\] with no summation over repeated indices. The sign of \(\lambda_{s,i}\) is determined by the vorticity component \(\omega_{i}\). Positive and negative \(\lambda_{s,i}\) highlight regions with counterclockwise and clockwise swirling motions, respectively. To eliminate noise from the small-scale vortices, we follow Tomkins & Adrian (2003) and low-pass filter the \(\lambda_{s,i}\) field with a compact top-hat filter of support \(h\) to better identify instantaneous hairpin features. As apparent from this figure, low-momentum bulges are bordered by pairs of oppositely signed \(\lambda_{s,3}\) regions at both the considered phases; these counter-rotating rolls are a signature of hairpin legs. Based on these signatures, it is also apparent that hairpin vortices tend to align in the streamwise direction.
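The signed planar swirl strength follows directly from the 2x2 tensor above: the eigenvalues are complex where the discriminant is negative, and the magnitude of their imaginary part is then signed by the normal vorticity. A sketch for \(\lambda_{s,3}\) (ours; scipy's uniform filter stands in for the compact top-hat filter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wall_normal_swirl(du1dx1, du1dx2, du2dx1, du2dx2):
    """lambda_{s,3}: |Im(eigenvalue)| of the reduced velocity gradient
    tensor where complex, signed by omega_3 = du2/dx1 - du1/dx2."""
    disc = (du1dx1 - du2dx2)**2 + 4.0 * du1dx2 * du2dx1
    mag = 0.5 * np.sqrt(np.maximum(-disc, 0.0))  # zero where eigenvalues are real
    return mag * np.sign(du2dx1 - du1dx2)

def top_hat_filter(field, n_h):
    """Low-pass top-hat filter of support n_h grid points, periodic domain."""
    return uniform_filter(field, size=n_h, mode="wrap")
```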
Comparing subplots (c) and (d) in figure 12, it is clear that, as \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases, the swirling strength of the hairpin legs is intensified, which in turn increases the momentum deficits in the low-momentum regions between the hairpin legs. This behavior leads to a narrowing of the low-momentum regions to satisfy continuity constraints. Also, it is apparent that a larger number of hairpin structures populates the flow field at higher \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), which can be attributed to hairpin vortices spawning offspring in both the upstream and downstream directions as they intensify (Zhou _et al._, 1999). Figure 13 displays a \(u_{1}^{\prime}\) contour for the PP case in a streamwise/wall-normal plane. Black dashed lines feature a tilting angle \(\theta=12^{\circ}\). It is evident that the interfaces of the low- and high-momentum regions, which are representative instantaneous manifestations of hairpin packets (Hutchins _et al._, 2012), feature a constant tilting angle during the pulsatile cycle. This behavior is in agreement with findings from the earlier \(\overline{R}_{11}\) analysis, which identified the typical tilting angle of coherent structures as lying between \(10^{\circ}\) and \(15^{\circ}\), depending on the reference wall-normal location. We close this section by noting that while the instantaneous flow field provides solid qualitative insight into the structure of turbulence in the considered flow system, a more statistically-representative picture can be gained by conditionally averaging the flow field on selected instantaneous events. This will be the focus of the next section. ### Temporal variability of the composite hairpin vortex This section aims at providing a deeper and more quantitative insight into the temporal variability of the individual hairpin structures, and at elucidating how variations in their geometry influence the ejection-sweep pattern (§3.2) and the spatio-temporal coherence of the flow field (§3.3). To study the phase-dependent structural characteristics of the hairpin vortex, we utilize the conditional averaging technique (Blackwelder, 1977). This technique involves selecting a flow event at a specific spatial location to condition the averaging process in time and/or space. The conditionally-averaged flow field is then analyzed using standard flow visualization techniques to identify the key features of the eddies involved. By applying this technique to the hairpin vortex, we can gain valuable insights into its structural attributes and how they vary over time. In the past few decades, various events have been employed as triggers for the conditional averaging operation. For example, in the context of channel flow over aerodynamically smooth surfaces, Zhou _et al._ (1999) relied on an ejection event as the trigger, which generally coincides with the passage of a hairpin head through that point. More recently, Dennis & Nickels (2011) considered both positive cross-stream and streamwise swirl as triggers, which are indicative of passages of hairpin heads and legs, respectively. In flow over homogeneous vegetation canopies, Watanabe (2004) used a scalar microfront associated with a sweep event. Shortly after, Finnigan _et al._ (2009) noted that this choice might introduce a bias towards sweep events in the resulting structure and instead used transient peaks in the static pressure, which are associated with both ejection and sweep events.
Here, we adopt the approach first suggested by Coceal _et al._ (2007), where the local minimum streamwise velocity over a given plane was used as the trigger. It can be shown that this approach yields similar results to the one proposed in Dennis & Nickels (2011) and that it is suitable for the identification of hairpin vortices in the ISL. The conditional averaging procedure used in this study is based on the following operations (a code sketch of this procedure is given below):

1. Firstly, at a chosen wall-parallel location \(x_{3,e}\), we identify the set of \((x_{1,e},x_{2,e})\) locations where the instantaneous streamwise velocity is 75% below its phase-averaged value. This is our "triggering event". Such an operation is repeated for each available velocity snapshot.
2. Next, for each identified event, the fluctuating velocity field at the selected \(x_{3,e}\) plane is shifted by \((-x_{1,e},-x_{2,e})\). After this operation, all identified events are located at \((x_{1}^{\prime},x_{2}^{\prime})=(0,0)\), where \((x_{1}^{\prime},x_{2}^{\prime})\) is the new (translated) coordinate system.
3. Lastly, the shifted instantaneous velocity fields are averaged over the identified events and snapshots, for each phase. The end result is a phase-dependent, conditionally-averaged velocity field that can be used for further analysis.

Figure 14 shows a wall-parallel slice at \(x_{3}/h=2\) of the conditionally averaged fluctuating velocity field in the same plane as the triggering event. Counter-rotating vortices associated with a low-momentum region in between appear to be persistent features of the ISL throughout the pulsation cycle. Vortex cores move downstream and towards each other as \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases, and the vortices intensify. This behavior occurs in the normalized time interval \(\omega t\in[\pi/2,\pi]\). Instead, when \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) decreases, the cores move upstream and further apart. Such behavior provides statistical evidence of the behavior depicted in figure 12(c,d) for the instantaneous flow field.

Figure 14: Vector plot of the conditionally averaged fluctuating velocity (PP case) over the \(x_{3}/h=2\) wall-parallel plane. The flow has been conditioned on a local minimum streamwise velocity event in the same plane, i.e., \(x_{3}=x_{3,e}\). Color contours represent the wall-normal swirling strength \(\lambda_{s,3}\). Green dots identify the cores of the counter-rotating vortices.

Note that the composite counter-rotating vortex pair in the conditionally averaged flow field is in fact an ensemble average of vortex pairs in the instantaneous flow field. Thus, the spacing between the composite vortex pair cores (\(d_{\omega}\)) represents a suitable metric to quantify the phase-averaged widths of vortex packets in the considered flow system. Figure 15 presents \(d_{\omega}\) evaluated with the triggering event at \(x_{3,e}/h=\{1.5,2,3,4\}\). The trend in \(d_{\omega}\) is similar to that observed in figure 8(a) for the first zero crossing of \(\overline{R}_{11}\), which is an indicator of the streak width. The explanation for this behavior is that low-momentum regions are generated between the legs of the hairpins, justifying the observed linear scaling of the streak width with the cross-stream spacing of hairpin legs.
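As anticipated above, the three-step procedure lends itself to a compact implementation. The following minimal numpy sketch assumes a doubly periodic wall-parallel plane (so that the recentring in step 2 reduces to a circular shift) and adopts one possible reading of the 75% trigger criterion; the array names and helper signature are hypothetical.

```python
import numpy as np

def conditional_average(u1_snaps, u1f_snaps, u1_phase_avg, level=0.25):
    """Phase-wise conditional average on low-momentum triggering events.

    u1_snaps     : (n_snap, n1, n2) instantaneous u1 on the x3_e plane, one phase
    u1f_snaps    : (n_snap, n1, n2) fluctuating field to be averaged
    u1_phase_avg : (n1, n2) phase-averaged u1 at the same phase
    level        : trigger threshold; events are points where the instantaneous
                   velocity drops 75% below the phase average, read here as
                   u1 < 0.25 * <u1> (one possible interpretation).
    """
    acc = np.zeros_like(u1_phase_avg)
    n_events = 0
    for u1, u1f in zip(u1_snaps, u1f_snaps):
        # Step 1: identify triggering events on this snapshot.
        i1, i2 = np.nonzero(u1 < level * u1_phase_avg)
        # Step 2: recentre each event at (x1', x2') = (0, 0); on a doubly
        # periodic plane the shift is a circular roll of the field.
        for e1, e2 in zip(i1, i2):
            acc += np.roll(u1f, shift=(-e1, -e2), axis=(0, 1))
            n_events += 1
    # Step 3: average over all events and snapshots at this phase.
    return acc / max(n_events, 1)
```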
Figures 16 and 17 depict a conditionally averaged fluctuating velocity field, which is obtained with a triggering event at \(x_{3,e}/h=2\), in the \(x_{2}^{\prime}=0\) plane and the \(x_{1}^{\prime}=-h\) plane, respectively. Note that the \(x_{2}^{\prime}=0\) plane corresponds to the center plane, and the \(x_{1}^{\prime}=-h\) cross-section is located \(h\) upstream of the triggering event. From figure 16, a region of positive \(\lambda_{s,2}\) can be identified immediately above and downstream of the location of the triggering event, i.e., \((x_{1}^{\prime},x_{2}^{\prime},x_{3,e})=(0,0,2h)\). This \(\lambda_{s,2}>0\) region can be interpreted as the head of the composite hairpin vortex (Adrian _et al._, 2000; Ganapathisubramani _et al._, 2003). As \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases, the vortex structure is deflected downstream and \(\lambda_{s,2}\) increases, leading to enhanced upstream ejection events. This behavior is also apparent from figure 6, where the overall contribution from ejection events to \(\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle\) increases while the number of ejection events reduces, highlighting enhanced individual ejection events. The deflection of the hairpin head in the downstream direction is caused by two competing factors. The first is the increase in \(\langle\overline{u_{1}^{\prime}u_{3}^{\prime}}\rangle\), which leads to the downstream deflection. The second factor is the enhancement of the sweep events, which induce an upstream deflection. The first factor outweighs the second, thus yielding the observed variations in the hairpin topology.

Figure 15: Spacing between the composite vortex pair cores \(d_{\omega}\), corresponding to local minimum streamwise velocity events at \(x_{3,e}/h=1.5\) (black lines), \(x_{3,e}/h=2\) (blue lines), \(x_{3,e}/h=3\) (green lines) and \(x_{3,e}/h=4\) (magenta lines).

Figure 17 shows the response of the hairpin legs to changing \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) in a cross-stream plane at \(x_{1}^{\prime}=-h\). A pair of counter-rotating streamwise rollers is readily observed, which, as explained before, identifies the legs of the composite hairpin vortex. It also further corroborates our analysis, highlighting that the spacing between the legs reduces from \(\approx 5h\) at \(\omega t=\pi/2\) to \(\approx 2h\) at \(\omega t=\pi\). This also provides a justification for findings in §3.3 and §3.4. Further, the swirling of the hairpin legs, which is quantified with \(\lambda_{s,1}\) and \(\lambda_{s,3}\) in the wall-normal/cross-stream and wall-parallel planes, respectively, intensifies with increasing \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). Interestingly, when \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) approaches its peak value at \(\omega t=\pi\), a modest albeit visible secondary streamwise roller pair is induced by the hairpin legs at \(x_{2}^{\prime}=\pm 3\). This suggests that the hairpin vortex not only generates new offspring upstream and downstream, as documented in previous studies (Zhou _et al._, 1999; Adrian, 2007), but also in the cross-stream direction when it intensifies. The intensification of the hairpin legs creates counter-rotating quasi-streamwise roller pairs between hairpin vortices that are adjacent in the cross-stream direction.
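The leg spacing quoted above, like the core spacing \(d_{\omega}\) of figure 15, can be extracted from the conditionally averaged field by locating the two counter-rotating cores. A minimal sketch follows, assuming the averaged field is smooth enough that the global extrema of the signed swirl coincide with the two cores (the green dots in figures 14 and 17); in noisier fields a local peak search restricted to each half-plane would be needed.

```python
import numpy as np

def core_spacing(swirl, x2):
    """Spacing d_omega between the two counter-rotating composite vortex cores.

    swirl : (n1, n2) conditionally averaged signed swirl strength on the plane
    x2    : (n2,) cross-stream coordinate, with the triggering event at x2 = 0

    The counterclockwise core is taken as the global maximum of the signed
    swirl and the clockwise core as the global minimum.
    """
    i_pos = np.unravel_index(np.argmax(swirl), swirl.shape)
    i_neg = np.unravel_index(np.argmin(swirl), swirl.shape)
    return abs(x2[i_pos[1]] - x2[i_neg[1]])
```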
These roller pairs are lifted up due to the effect of the induced velocity of one roller on the other, as per the Biot-Savart law, and the downstream ends of the rollers then connect, forming new hairpin structures.

A more comprehensive picture is provided by isocontours of the conditionally averaged swirling magnitude \(\lambda_{s}=0.1\) shown in figure 18. \(\lambda_{s}\) is the imaginary part of the complex eigenvalue of the velocity gradient tensor (Zhou _et al._, 1999). In this case, the conditionally averaged swirling field corresponds to a triggering event at \(x_{3,e}/h=2\). Zhou _et al._ (1999) pointed out that different thresholds of the iso-surface result in vortex structures of similar shapes but different sizes. \(\lambda_{s}=0.1\), in this case, strikes the best compromise between descriptive capabilities and surface smoothness. Note that other vortex identification criteria, such as the Q criterion (Hunt _et al._, 1988) and the \(\lambda_{2}\) criterion (Jeong & Hussain, 1995), are expected to result in qualitatively similar vortex structures (Chakraborty _et al._, 2005).

Figure 16: Time evolution of the conditionally averaged fluctuating velocity field of the PP case in the streamwise/wall-normal plane \(x_{2}^{\prime}/h=0\) given a local minimum streamwise velocity event at \(x_{3,e}/h=2\). Color contours represent the cross-stream swirling strength \(\lambda_{s,2}\). Red and blue lines mark the \(\lambda_{s,2}=0.1\) and \(\lambda_{s,2}=-0.1\) contours, respectively.

The extents of the conditional eddy in figure 18 vary substantially, from roughly \(10h\times 8h\times 5h\) at relatively low \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) (\(\omega t=\pi/2\)) to \(6h\times 6h\times 3h\) at high \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) (\(\omega t=\pi\)). During the period of decreasing \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), i.e., \(0<\omega t<3\pi/4\) and \(\pi<\omega t<2\pi\), the conditional eddy resembles the classic hairpin structure observed in the stationary case, where the two hairpin legs and the hairpin head connecting them can be vividly observed. The sizes of the hairpin legs increase with decreasing \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), and so does their spacing, which is in line with our prior observations based on figure 17. One possible physical interpretation for the change in the size of the hairpin legs is that the reduction in swirling strength of the hairpin head resulting from a decrease in \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) weakens the ejection between the hairpin legs, as shown in figure 16. As a result, the swirling strength of the legs decreases, causing an increase in their size due to the conservation of angular momentum. Conversely, during the period of increasing \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) (\(3\pi/4<\omega t<\pi\)), the hairpin structure is less pronounced. The conditional eddy features a strengthened hairpin head, and the intensified counter-rotating hairpin legs move closer to each other and ultimately merge into a single region of non-zero swirling strength, as apparent from figure 18. Moreover, downstream of the conditional eddy, a pair of streamwise protrusions, known as "tongues" (Zhou _et al._, 1999), persists throughout the pulsatile cycle. According to Adrian (2007), these protrusions reflect the early stage of the generation process of the downstream hairpin vortex. These protrusions would eventually grow into a quasi-streamwise vortex pair and later develop a child hairpin vortex downstream of the original one.
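For reference, the three-dimensional swirling strength underlying figure 18 can be computed pointwise as sketched below, assuming a uniform grid and second-order finite differences; the iso-surface extraction itself (e.g., via marching cubes) is left to standard visualization tools.

```python
import numpy as np

def swirl_magnitude(u, dx):
    """Three-dimensional swirling strength lambda_s (Zhou et al. 1999).

    u  : (3, n1, n2, n3) velocity components on a uniform grid
    dx : (dx1, dx2, dx3) grid spacings

    Returns the imaginary part of the complex-conjugate eigenvalue pair of
    the velocity gradient tensor at every grid point (zero where all three
    eigenvalues are real, i.e., where there is no local swirling motion).
    """
    # grad[..., i, j] = du_i / dx_j
    grad = np.stack(
        [np.stack(np.gradient(u[i], *dx), axis=-1) for i in range(3)],
        axis=-2,
    )
    eig = np.linalg.eigvals(grad)            # three eigenvalues per point
    return np.max(np.abs(eig.imag), axis=-1)
```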
In summary, the proposed conditional analysis complements and extends findings from §3.4 and elucidates fundamental physical mechanisms underpinning the observed variability in momentum transport (§3.2) and flow coherence (§3.3) in the considered pulsatile flow. More specifically, the analysis reveals that the time-varying shear rate resulting from the pulsatile forcing affects the topology and swirling intensity of hairpin vortices. As the shear rate increases (decreases), hairpin vortices tend to shrink (grow) with a corresponding enhancement (relaxation) of the swirling strength.

Figure 17: Time evolution of the conditionally averaged fluctuating velocity field in figure 16 in a cross-stream/wall-normal plane \(x^{\prime}_{1}=-h\). Color contours represent the streamwise swirling strength \(\lambda_{s,1}\). Red and blue lines mark \(\lambda_{s,1}=0.1\) and \(\lambda_{s,1}=-0.1\), respectively. Green dots identify the cores of the counter-rotating vortices.

Figure 18: Time evolution of the conditionally averaged swirling field \(\lambda_{s}\) of the PP case given a local minimum streamwise velocity event at \(x_{3,e}=2h\). The shown iso-surfaces are for \(\lambda_{s}=0.1\).

These variations in hairpin geometry are responsible for the observed time-varying ejection-sweep pattern depicted in figure 6. Ejection events primarily occur between the hairpin legs, which become more widely spaced as the vortices grow and less spaced as they shrink. Therefore, a decrease in hairpin vortex size due to an increasing shear rate reduces the number of ejection events, while an increase in vortex size due to a decreasing shear rate leads to an increased number of ejections. Moreover, the intensification (relaxation) of hairpin vortices at high (low) shear rates results in enhanced (attenuated) ejection events between the hairpin legs, as evidenced by figures 16 and 17. This enhancement (attenuation) of ejection events is also corroborated by results from figure 6, which indicated that high (low) shear rates decrease (increase) the number of ejection events but increase (decrease) their contribution to \(\overline{u_{1}^{\prime}u_{3}^{\prime}}\). From a flow coherence perspective, this physical process also explains the observed time evolution of \(\overline{R}_{11}\) (see figures 8 and 10), which is a statistical signature of hairpin packets. Changes in the size of individual hairpin vortices in response to the shear rate directly influence the dimensions of hairpin packets, as the latter are composed of multiple individual hairpin structures.

## 4 Conclusions

In this study, the structure of turbulence in pulsatile flow over an array of surface-mounted cuboids has been characterized and contrasted with its counterpart in a stationary flow regime. The goal is to shed light on the impact of non-stationarity on turbulence topology and its implications for momentum transfer.

The flow unsteadiness does not substantially modify the profiles of turbulent kinetic energy and resolved Reynolds shear stress in a longtime average sense, and marginally increases the height of the RSL. In terms of quadrant analysis, we have found that the flow unsteadiness does not noticeably alter the overall distribution of each quadrant. However, the ejection-sweep pattern exhibits an apparent variation during the pulsation cycle.
Flow acceleration yields a large number of ejection events within the RSL, whereas flow deceleration favors sweeps. In the ISL, it is shown that the ejection-sweep pattern is mainly controlled by the intrinsic- and phase-averaged shear rate \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) rather than by the driving pressure gradient. Specifically, the relative contribution from ejections increases, but their frequency of occurrence decreases, with increasing \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\).

The aforementioned time variation in the ejection-sweep pattern was later found to stem from topological variations in the structure of ISL turbulence, as deduced from inspection of the two-point streamwise velocity correlation function and the conditionally-averaged flow field. Specifically, the geometry of hairpin vortex packets, which are the dominant coherent structures in the ISL, has been examined through the analysis of two-point velocity correlations to explore its longtime-averaged and phase-dependent characteristics. Flow unsteadiness was found to yield relatively shorter vortex packets in a longtime average sense (up to a 15% discrepancy). From a phase-averaged perspective, the three-dimensional extent of hairpin packets was found to vary during the pulsation cycle and to be primarily controlled by \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), while their tilting angle remained constant throughout. A visual examination of instantaneous structures also confirmed such behavior: the size of low-momentum regions and the spacing of the hairpin legs encapsulating them were found to change with \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\), while the hairpin vortices remained aligned at a constant angle during the pulsation cycle.

Further insight into phase variations of instantaneous hairpin structures was later gained using conditional averaging operations, which provided compelling quantitative evidence for the behaviors previously observed. Specifically, the conditionally averaged flow field revealed that the size and swirling intensity of the composite hairpin vortex vary considerably with \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\). When \(\partial\langle\overline{u}_{1}\rangle/\partial x_{3}\) increases to its peak value, the swirling strength of the hairpin head is intensified, yielding strengthened ejections upstream of the hairpin head and a downstream deflection of the hairpin head. Following the intensification of the hairpin head, an intensification of the hairpin legs is observed, along with a narrowing of the spacing between the legs. This justifies the observed reduction in the extent of the ejection-dominated region. In other words, individual ejections become stronger and are generated at a reduced frequency as the shear rate increases, which provides a kinematic interpretation and justification for the observed time-variability of the quadrant distribution. Such a process is reversed when the shear rate decreases.

The findings of this study emphasize the significant influence that departures from statistically stationary flow conditions can have on the structure of ABL turbulence and associated processes. Such departures are typical in realistic ABL flows and have garnered growing attention in recent times (Mahrt & Bou-Zeid, 2020).
While the study focuses on a particular type of non-stationarity, its results underscore the importance of accounting for this flow phenomenon in both geophysical and engineering applications. Flow unsteadiness-induced modifications in turbulence structure can significantly impact land- and ocean-atmosphere exchanges and the aerodynamic drag of vehicles, thus calling for dedicated efforts toward their comprehensive characterization. For example, this understanding can facilitate the development of improved non-equilibrium wall-layer models, such as those proposed in the works of Marusic _et al._ (2001, 2010), which utilize information on turbulence structure to enhance predictive capabilities for wall-bounded turbulence.

## Declaration of Interests

The authors report no conflict of interest.

### Acknowledgements

The authors acknowledge support from the Department of Civil Engineering and Engineering Mechanics at Columbia University. This material is based upon work supported by, or in part by, the Army Research Laboratory and the Army Research Office under contract/grant number W911NF-22-1-0178. This work used the Stampede2 cluster at the Texas Advanced Computing Center through allocation ATM180022 from the Extreme Science and Engineering Discovery Environment (XSEDE), which was supported by National Science Foundation grant number 1548562.